ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]
25
26 The {PVE} cluster manager `pvecm` is a tool to create a group of
27 physical servers. Such a group is called a *cluster*. We use the
28 http://www.corosync.org[Corosync Cluster Engine] for reliable group
29 communication, and such clusters can consist of up to 32 physical nodes
30 (probably more, dependent on network latency).
31
32 `pvecm` can be used to create a new cluster, join nodes to a cluster,
33 leave the cluster, get status information and do various other cluster
34 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
35 is used to transparently distribute the cluster configuration to all cluster
36 nodes.
37
38 Grouping nodes into a cluster has the following advantages:
39
40 * Centralized, web based management
41
42 * Multi-master clusters: each node can do all management task
43
44 * `pmxcfs`: database-driven file system for storing configuration files,
45 replicated in real-time on all nodes using `corosync`.
46
47 * Easy migration of virtual machines and containers between physical
48 hosts
49
50 * Fast deployment
51
52 * Cluster-wide services like firewall and HA
53
54
Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP multicast
to communicate between nodes (also see
http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and it must be
enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
least three nodes for reliable quorum. All nodes should have the
same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network, it may be beneficial to set
up an IGMP querier and enable IGMP snooping in said network. This may
significantly reduce the network load, because multicast packets are only
delivered to endpoints of the respective member nodes.
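
One way to provide a querier, if no switch in the network offers one, is the
Linux bridge querier on a {pve} host itself. A minimal sketch, assuming a
bridge named `vmbr0` carries the cluster network (the bridge name is a
placeholder; make the setting persistent via `post-up` lines in
'/etc/network/interfaces' if it works for you):

[source,bash]
----
# enable IGMP snooping and the kernel's IGMP querier on the (hypothetical) bridge vmbr0
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----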


Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER`, use the IP of an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore to a different VMID after
adding the node to the cluster.
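
A minimal sketch of that workaround, assuming the joining node holds VMID 100,
VMID 120 is still free in the cluster, and the backup ends up on the `local`
storage (all IDs and paths are placeholders; use `pct restore` instead of
`qmrestore` for containers):

[source,bash]
----
# before joining: back up the conflicting guest on the new node
vzdump 100 --storage local --mode stop

# after joining: restore it under a free VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma 120 --storage local
----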

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want a list of all nodes, use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network, you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol, you will also want to pass the
'ring1_addr' parameter.
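
For example, when joining a cluster that uses RRP with two separated networks,
the command could look like this (the placeholders stand for the joining
node's addresses on the respective ring networks):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----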


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not be
what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----


At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will end up in an
inconsistent state, and it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
method described above if you are unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to the shared storage! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across cluster
boundaries. Further, it may also lead to VMID conflicts.

It is suggested that you create a new storage, to which only the node that you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

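As a sketch, such a storage could be added and restricted to the node which is
going to be separated, here with a hypothetical NFS export and the node name
hp4 (storage type, addresses and names are placeholders for your environment):

[source,bash]
----
# hypothetical NFS storage only visible to the node that will be separated
pvesm add nfs separate-nfs --server 10.0.0.10 --export /exports/separate \
    --content images,rootdir --nodes hp4
----
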
WARNING: Ensure that all shared resources are cleanly separated! Otherwise you
will run into conflicts and problems.

First, stop the corosync and pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

Then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster there. This ensures that the node can be added to
another cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_keys' file. This
means the nodes can still connect to each other with public key authentication.
You should fix this by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

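As a small worked example: a partition stays quorate only if it holds a strict
majority of all votes, so with the default of one vote per node a cluster of N
nodes needs floor(N/2) + 1 votes. A quick sketch:

[source,bash]
----
# votes needed for quorum with one vote per node
for N in 2 3 4 5; do echo "$N nodes: $(( N / 2 + 1 )) votes needed"; done
----
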
Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network
where storage communicates too.

Before setting up a cluster, it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
This uncovers problems where IGMP snooping is activated on the network but
no multicast querier is active. This test has a duration of around 10
minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled without an active
IGMP querier.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work.

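A sketch of what that looks like in the `totem` section of corosync.conf,
assuming corosync 2.x, which supports unicast UDP (`udpu`) as transport; see
<<edit-corosync-conf,edit the corosync.conf file>> for how to apply such a
change safely:

----
totem {
  # keep the existing totem options and add:
  transport: udpu
}
----
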
Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters, the cluster network is generally
shared with the web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

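For example, a dedicated corosync interface could be configured in
'/etc/network/interfaces' like this (the interface name and address are
placeholders, matching the 10.10.10.1/25 address used in the example below):

----
# dedicated cluster (corosync) network
auto eno2
iface eno2 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----
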
Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly, execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties to the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In this example, we want to switch the cluster communication to the
10.10.10.1/25 network, so we replace all 'ring0_addr' entries respectively. We
also set the bindnetaddr in the totem section of the config to an address of
the new network. It can be any address from the subnet configured on the new
network interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, we save
it and refer again to the <<edit-corosync-conf,edit corosync.conf file>>
section to learn how to bring it into effect.

As corosync cannot apply this change while it is running, we have to restart it.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure, you should implement countermeasures.
This can be done on the hardware and operating system level through network
bonding, for example as sketched below.

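A minimal bonding sketch for '/etc/network/interfaces', assuming two NICs
`eno1` and `eno2` and an active-backup bond (names, mode and address are
placeholders for your own setup):

----
auto bond0
iface bond0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup
----
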
Corosync itself also offers a possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second
totem ring on another network. This network should be physically separated
from the other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The only difference is that you
will add `ring1` and use it instead of `ring0`.

First, add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set `rrp_mode` to `passive`; this is the only stable mode.

Then add a new `ring1_addr` property with the node's additional ring address
to each node entry in the `nodelist` section.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure that no High Availability
services are configured and then stop the corosync service on all nodes. After
corosync has stopped on all nodes, start it again one node after the other.

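A sketch of such a rolling restart, run manually on each node (first the stop
on every node, then the start, one node after the other):

[source,bash]
----
# step 1, on every node: stop corosync
systemctl stop corosync

# step 2, once corosync is stopped everywhere: start it again, node by node
systemctl start corosync
systemctl status corosync
----
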
Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are two
copies on each cluster node, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means that changes which can be integrated into a running corosync take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes with an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

With the following commands you can check whether the change was applied
automatically:
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

If not, you may have to restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors, check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for 'ringX_addr' in the corosync
configuration could not be resolved.


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in '/etc/corosync/corosync.conf'
so that corosync can start again. Ensure that this configuration has the same
content on all nodes to avoid split-brain situations. If you are not sure what
went wrong, it's best to ask the Proxmox community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines the interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that the use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

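The `onboot` flag is set per guest, for example (the VM and container IDs used
here are placeholders):

[source,bash]
----
# start this VM automatically when the node boots (and the cluster is quorate)
qm set 100 --onboot 1

# the same for a container
pct set 101 --onboot 1
----
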
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.

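For example, to switch the default for the whole cluster to the unencrypted
channel, the `migration` property in `/etc/pve/datacenter.cfg` could be set as
follows (a sketch; only do this on a fully trusted network, and see the
migration network example below for combining it with a dedicated network):

----
# /etc/pve/datacenter.cfg
migration: insecure
----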

Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network using CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in CIDR form. To
enable this, the network must be specified so that each node has
exactly one IP in the respective network.


Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address 192.X.Y.57
    netmask 255.255.250.0
    gateway 192.X.Y.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
    address 10.1.1.1
    netmask 255.255.255.0

# fast network
auto eno3
iface eno3 inet static
    address 10.1.2.1
    netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
is set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]