ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information, and do various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and it must be
enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.
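
Some of these requirements can be verified up front. The following is a
minimal sketch, assuming a systemd-based node; the second command is only
meaningful once `corosync` is already running:

[source,bash]
----
# confirm that the system clock is synchronized
timedatectl | grep -i synchronized

# confirm that corosync is listening on its UDP ports (5404/5405)
ss -ulpn | grep -E ':540[45]'
----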


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status


Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER`, use the IP address of an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts caused by identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up each guest and restore it under a
different VMID after adding the node to the cluster, as sketched below.
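
For example, such a backup/restore cycle could look like this (a sketch with
a hypothetical VMID `100`, new VMID `200`, and a hypothetical dump directory;
adapt these to your setup):

[source,bash]
----
# on the node that is about to join: back up the guest
vzdump 100 --dumpdir /mnt/backup

# after joining the cluster: restore it under a free VMID
# (the archive name varies with timestamp and compression settings)
qmrestore /mnt/backup/vzdump-qemu-100-*.vma 200
----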

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want a list of all nodes, use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network, you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol, you will also want to pass the
'ring1_addr' parameter, as sketched below.
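
A minimal sketch of such a join, reusing the placeholders from above plus a
hypothetical `IP-ADDRESS-RING1` for the second ring:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----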


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example, we will remove the node hp4 from the cluster.
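
Running guests can be moved with the migration commands; a sketch with a
hypothetical VMID `100` (see the xref:qm_migration[QEMU/KVM Migration Chapter]
for details):

[source,bash]
----
# run on hp4: live-migrate a running VM to another node
qm migrate 100 hp1 --online
----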

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----


At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As mentioned above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to:

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
previously mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across cluster
boundaries. Further, it may also lead to VMID conflicts.

It is suggested that you create a new storage to which only the node you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you
will run into conflicts and problems.

First, stop the corosync and pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster file system again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the file system again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
file system, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_keys' file; this means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file, as sketched below.
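
A hypothetical cleanup, assuming the key comments contain the separated
node's name (verify the matching lines before deleting them):

[source,bash]
----
# show the key lines that belong to the separated node
grep oldnode /etc/pve/priv/authorized_keys

# then remove them
sed -i '/oldnode/d' /etc/pve/priv/authorized_keys
----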

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes is online. The cluster switches to read-only mode
if it loses quorum.
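
For example, in a five-node cluster (five votes) at least three votes are
required for quorum; a partition holding only two nodes loses quorum and
becomes read-only, while the three-node partition can keep writing.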

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a
multicast-capable network. The network should not be used heavily by other
members; ideally corosync runs on its own network. *Never* share it with a
network where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work, as sketched below.
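
Unicast is selected through corosync's `transport` property. A minimal sketch
of the relevant 'corosync.conf' snippet, assuming you follow the procedure in
<<edit-corosync-conf,edit the corosync.conf file>> (see the corosync.conf man
page for details):

----
totem {
  ...
  transport: udpu
}
----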

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters, the cluster network is
generally shared with the web UI and the VMs and their traffic. Depending on
your setup, even storage traffic may get sent over the same network. It is
recommended to change that, as corosync is a time-critical, real-time
application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First, you have to set up a new network interface. It should be on a
physically separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.
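
A hypothetical static interface definition in '/etc/network/interfaces',
matching the 10.10.10.1/25 address used in the next section:

----
auto eth1
iface eth1 inet static
        address  10.10.10.1
        netmask  255.255.255.128
----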

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible via the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command, used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly, execute:
[source,bash]
----
systemctl status corosync
----

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node
entries if you do not see them already. Those *must* match the node name.

Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' entries respectively. I also set the
'bindnetaddr' in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section
to learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure, you should implement countermeasures.
This can be done at the hardware and operating system level through network
bonding.

Corosync itself also offers a possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second
totem ring on another network; this network should be physically separated
from the other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.0/24 and the other on the
10.10.20.0/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The only difference is that you
will add `ring1` entries instead of changing the existing `ring0` ones.

First, add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.0/24 and the other on the
10.10.20.0/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured, then stop the corosync service on all nodes. After corosync is
stopped on all nodes, start it again one after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two copies on each cluster node, one in `/etc/pve/corosync.conf` and the other
in `/etc/corosync/corosync.conf`. Editing the one in our cluster file system
will propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes through an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny`
are preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You can check with the commands

[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors, check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

it means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.
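
You can check name resolution directly; a quick sketch, with `NODENAME`
standing in for the hostname used in 'ringX_addr':

[source,bash]
----
# should print the address corosync will bind to
getent hosts NODENAME
----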


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best
to edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf' so that corosync can start again. Ensure that on
all nodes this configuration has the same content to avoid split-brain
situations. If you are not sure what went wrong, it's best to ask the Proxmox
Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines the interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active
or none. Note that the use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.

Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, service `pve-manager` is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
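
The flag can be set per guest; a minimal sketch with a hypothetical VMID `100`:

[source,bash]
----
# start this VM automatically once the node boots and the cluster is quorate
qm set 100 --onboot 1
----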

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to `insecure` means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
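
For example, to make `insecure` the cluster-wide default, the `migration`
property in `/etc/pve/datacenter.cfg` could be set like this (a sketch; the
same property also takes the migration network, as shown further below):

----
# trade encryption for migration speed
migration: insecure
----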


Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has
exactly one IP in the respective network.


Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication, and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eth0 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address  192.X.Y.57
    netmask  255.255.255.0
    gateway  192.X.Y.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# cluster network
auto eth1
iface eth1 inet static
    address  10.1.1.1
    netmask  255.255.255.0

# fast network
auto eth2
iface eth2 inet static
    address  10.1.2.1
    netmask  255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]