1 ifdef::manvolnum[]
2 pvecm(1)
3 ========
4 :pve-toplevel:
5
6 NAME
7 ----
8
9 pvecm - Proxmox VE Cluster Manager
10
11 SYNOPSIS
12 --------
13
14 include::pvecm.1-synopsis.adoc[]
15
16 DESCRIPTION
17 -----------
18 endif::manvolnum[]
19
20 ifndef::manvolnum[]
21 Cluster Manager
22 ===============
23 :pve-toplevel:
24 endif::manvolnum[]
25
26 The {PVE} cluster manager `pvecm` is a tool to create a group of
27 physical servers. Such a group is called a *cluster*. We use the
28 http://www.corosync.org[Corosync Cluster Engine] for reliable group
29 communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).
31
32 `pvecm` can be used to create a new cluster, join nodes to a cluster,
33 leave the cluster, get status information and do various other cluster
34 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
35 is used to transparently distribute the cluster configuration to all cluster
36 nodes.
37
38 Grouping nodes into a cluster has the following advantages:
39
40 * Centralized, web based management
41
* Multi-master clusters: each node can do all management tasks
43
44 * `pmxcfs`: database-driven file system for storing configuration files,
45 replicated in real-time on all nodes using `corosync`.
46
47 * Easy migration of virtual machines and containers between physical
48 hosts
49
50 * Fast deployment
51
52 * Cluster-wide services like firewall and HA
53
54
55 Requirements
56 ------------
57
* All nodes must be in the same network, as `corosync` uses IP Multicast
59 to communicate between nodes (also see
60 http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
61 ports 5404 and 5405 for cluster communication.
62 +
63 NOTE: Some switches do not support IP multicast by default and must be
64 manually enabled first.
65
* Date and time have to be synchronized (a quick check is sketched at the end of this section).
67
* An SSH tunnel on TCP port 22 between nodes is used.
69
70 * If you are interested in High Availability, you need to have at
71 least three nodes for reliable quorum. All nodes should have the
72 same version.
73
74 * We recommend a dedicated NIC for the cluster traffic, especially if
75 you use shared storage.
76
77 NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
78 Proxmox VE 4.0 cluster nodes.
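For the time synchronization requirement mentioned above, a quick check on each
node could look like the following. This is only a sketch using the standard
`timedatectl` tool from systemd, not a {pve}-specific command:

[source,bash]
----
# "NTP synchronized: yes" (or "System clock synchronized: yes",
# depending on the systemd version) indicates working time sync
timedatectl status
----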
79
80
81 Preparing Nodes
82 ---------------
83
84 First, install {PVE} on all nodes. Make sure that each node is
85 installed with the final hostname and IP configuration. Changing the
86 hostname and IP is not possible after cluster creation.
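Before creating or joining a cluster, it can be worth verifying that each
node's hostname resolves to the address you intend to keep. This is only a
sanity check with standard tools, not a `pvecm` feature:

[source,bash]
----
# both commands should report the IP address this node is supposed to keep
hostname --ip-address
getent hosts $(hostname)
----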
87
Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.
90
91 Create the Cluster
92 ------------------
93
Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
95 This name cannot be changed later.
96
97 hp1# pvecm create YOUR-CLUSTER-NAME
98
99 CAUTION: The cluster name is used to compute the default multicast
100 address. Please use unique cluster names if you run more than one
101 cluster inside your network.
102
103 To check the state of your cluster use:
104
105 hp1# pvecm status
106
107 Multiple Clusters In Same Network
108 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
109
110 It is possible to create multiple clusters in the same physical or logical
111 network. Each cluster must have a unique name, which is used to generate the
112 cluster's multicast group address. As long as no duplicate cluster names are
113 configured in one network segment, the different clusters won't interfere with
114 each other.
115
If multiple clusters operate in a single network, it may be beneficial to set up
117 an IGMP querier and enable IGMP Snooping in said network. This may reduce the
118 load of the network significantly because multicast packets are only delivered
119 to endpoints of the respective member nodes.
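If you want to see which multicast group a running cluster node has actually
joined (for example, to rule out a collision), one way is to list the multicast
group memberships on the cluster interface. This is a sketch; `eth0` is a
placeholder for your cluster network interface:

[source,bash]
----
# the corosync group derived from the cluster name should appear in this list
ip maddr show dev eth0
----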
120
121
122 Adding Nodes to the Cluster
123 ---------------------------
124
Log in via `ssh` to the node you want to add.
126
127 hp2# pvecm add IP-ADDRESS-CLUSTER
128
129 For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
130
CAUTION: A new node cannot hold any VMs, because you would get
conflicts due to identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. To
work around this, use `vzdump` to back up the guests and restore them
under different VMIDs after adding the node to the cluster.
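A rough sketch of that workaround, assuming a guest with VMID 100 on the
joining node which should become VMID 120 afterwards (the IDs and the dump
directory are placeholders):

[source,bash]
----
# on the node that is going to join: back up the guest before joining
vzdump 100 --dumpdir /mnt/backup --mode stop

# after joining the cluster: restore the backup under a free VMID
# (the exact archive name depends on the backup timestamp)
qmrestore /mnt/backup/vzdump-qemu-100-*.vma 120
----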
136
To check the state of the cluster use:
138
139 # pvecm status
140
141 .Cluster status after adding 4 nodes
142 ----
143 hp2# pvecm status
144 Quorum information
145 ~~~~~~~~~~~~~~~~~~
146 Date: Mon Apr 20 12:30:13 2015
147 Quorum provider: corosync_votequorum
148 Nodes: 4
149 Node ID: 0x00000001
150 Ring ID: 1928
151 Quorate: Yes
152
153 Votequorum information
154 ~~~~~~~~~~~~~~~~~~~~~~
155 Expected votes: 4
156 Highest expected: 4
157 Total votes: 4
158 Quorum: 2
159 Flags: Quorate
160
161 Membership information
162 ~~~~~~~~~~~~~~~~~~~~~~
163 Nodeid Votes Name
164 0x00000001 1 192.168.15.91
165 0x00000002 1 192.168.15.92 (local)
166 0x00000003 1 192.168.15.93
167 0x00000004 1 192.168.15.94
168 ----
169
If you only want a list of all nodes, use:
171
172 # pvecm nodes
173
174 .List nodes in a cluster
175 ----
176 hp2# pvecm nodes
177
178 Membership information
179 ~~~~~~~~~~~~~~~~~~~~~~
180 Nodeid Votes Name
181 1 1 hp1
182 2 1 hp2 (local)
183 3 1 hp3
184 4 1 hp4
185 ----
186
187 Adding Nodes With Separated Cluster Network
188 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
189
When adding a node to a cluster with a separated cluster network, you need to
use the 'ringX_addr' parameters to set the node's address on those networks:
192
193 [source,bash]
194 ----
195 pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
196 ----
197
198 If you want to use the Redundant Ring Protocol you will also want to pass the
199 'ring1_addr' parameter.
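For example, with placeholder addresses:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----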
200
201
202 Remove a Cluster Node
203 ---------------------
204
CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.
207
208 Move all virtual machines from the node. Make sure you have no local
209 data or backups you want to keep, or save them accordingly.
210 In the following example we will remove the node hp4 from the cluster.
211
212 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
213 command to identify the node ID to remove:
214
215 ----
216 hp1# pvecm nodes
217
218 Membership information
219 ~~~~~~~~~~~~~~~~~~~~~~
220 Nodeid Votes Name
221 1 1 hp1 (local)
222 2 1 hp2
223 3 1 hp3
224 4 1 hp4
225 ----
226
227
228 At this point you must power off hp4 and
229 make sure that it will not power on again (in the network) as it
230 is.
231
232 IMPORTANT: As said above, it is critical to power off the node
233 *before* removal, and make sure that it will *never* power on again
234 (in the existing cluster network) as it is.
235 If you power on the node as it is, your cluster will be screwed up and
236 it could be difficult to restore a clean cluster state.
237
238 After powering off the node hp4, we can safely remove it from the cluster.
239
240 hp1# pvecm delnode hp4
241
If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:
245
246 ----
247 hp1# pvecm status
248
249 Quorum information
250 ~~~~~~~~~~~~~~~~~~
251 Date: Mon Apr 20 12:44:28 2015
252 Quorum provider: corosync_votequorum
253 Nodes: 3
254 Node ID: 0x00000001
255 Ring ID: 1992
256 Quorate: Yes
257
258 Votequorum information
259 ~~~~~~~~~~~~~~~~~~~~~~
260 Expected votes: 3
261 Highest expected: 3
262 Total votes: 3
263 Quorum: 3
264 Flags: Quorate
265
266 Membership information
267 ~~~~~~~~~~~~~~~~~~~~~~
268 Nodeid Votes Name
269 0x00000001 1 192.168.15.90 (local)
270 0x00000002 1 192.168.15.91
271 0x00000003 1 192.168.15.92
272 ----
273
If, for whatever reason, you want this server to join the same
cluster again, you have to
276
277 * reinstall {pve} on it from scratch
278
279 * then join it, as explained in the previous section.
280
281 [[pvecm_separate_node_without_reinstall]]
282 Separate A Node Without Reinstalling
283 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
284
CAUTION: This is *not* the recommended method, proceed with caution. Use the
above-mentioned method if you're unsure.
287
You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.
294
It is suggested that you create a new storage, to which only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
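As a minimal sketch of moving a disk, assuming a VM with ID 100 whose disk
'scsi0' should go to a (hypothetical) storage named 'separate-nfs' that only
this node can access:

[source,bash]
----
# move the disk of VM 100 to the storage only this node will use
qm move_disk 100 scsi0 separate-nfs
----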
301
WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.
304
305 First stop the corosync and the pve-cluster services on the node:
306 [source,bash]
307 ----
308 systemctl stop pve-cluster
309 systemctl stop corosync
310 ----
311
312 Start the cluster filesystem again in local mode:
313 [source,bash]
314 ----
315 pmxcfs -l
316 ----
317
318 Delete the corosync configuration files:
319 [source,bash]
320 ----
321 rm /etc/pve/corosync.conf
322 rm /etc/corosync/*
323 ----
324
You can now start the filesystem again as a normal service:
326 [source,bash]
327 ----
328 killall pmxcfs
329 systemctl start pve-cluster
330 ----
331
The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
334 [source,bash]
335 ----
336 pvecm delnode oldnode
337 ----
338
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
341 [source,bash]
342 ----
343 pvecm expected 1
344 ----
345
Then repeat the 'pvecm delnode' command.
347
Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.
351
352 [source,bash]
353 ----
354 rm /var/lib/corosync/*
355 ----
356
As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory recursively under '/etc/pve/nodes/NODENAME', but check three times
that you are using the correct one before deleting it.
361
CAUTION: The node's SSH keys are still in the 'authorized_keys' file. This
means the nodes can still connect to each other with public key authentication.
This should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
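As a minimal sketch, assuming the removed node was called `oldnode` and that
its key entries carry that name in their comment field (check the file first,
this is not guaranteed):

[source,bash]
----
# keep a backup of the shared authorized_keys file
cp /etc/pve/priv/authorized_keys /root/authorized_keys.bak
# drop all key lines that mention the removed node
sed -i '/oldnode/d' /etc/pve/priv/authorized_keys
----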
366
367 Quorum
368 ------
369
{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
372
373 [quote, from Wikipedia, Quorum (distributed computing)]
374 ____
375 A quorum is the minimum number of votes that a distributed transaction
376 has to obtain in order to be allowed to perform an operation in a
377 distributed system.
378 ____
379
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.
383
384 NOTE: {pve} assigns a single vote to each node by default.
385
386 Cluster Network
387 ---------------
388
The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).
394
395 [[cluster-network-requirements]]
396 Network Requirements
397 ~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network
where storage communicates too.
404
405 Before setting up a cluster it is good practice to check if the network is fit
406 for that purpose.
407
408 * Ensure that all nodes are in the same subnet. This must only be true for the
409 network interfaces used for cluster communication (corosync).
410
411 * Ensure all nodes can reach each other over those interfaces, using `ping` is
412 enough for a basic test.
413
* Ensure that multicast works in general and at a high packet rate. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
416 +
417 [source,bash]
418 ----
419 omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
420 ----
421
422 * Ensure that multicast communication works over an extended period of time.
423 This uncovers problems where IGMP snooping is activated on the network but
424 no multicast querier is active. This test has a duration of around 10
425 minutes.
426 +
427 [source,bash]
428 ----
429 omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
430 ----
431
Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.
436
In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work.
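Unicast is selected through the `transport` option in the `totem` section of
corosync.conf. The following is only a sketch of that section, based on the
example configuration shown later in this chapter; edit the file as described
in the <<edit-corosync-conf,edit the corosync.conf file>> section and remember
to increase 'config_version':

----
totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  transport: udpu   # use UDP unicast instead of multicast
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }
}
----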
439
440 Separate Cluster Network
441 ~~~~~~~~~~~~~~~~~~~~~~~~
442
When creating a cluster without any parameters, the cluster network is generally
shared with the web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.
447
448 Setting Up A New Network
449 ^^^^^^^^^^^^^^^^^^^^^^^^
450
First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
453 <<cluster-network-requirements,cluster network requirements>>.
454
455 Separate On Cluster Creation
456 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
457
This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.
460
If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:
464
465 [source,bash]
466 ----
467 pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
468 ----
469
470 To check if everything is working properly execute:
471 [source,bash]
472 ----
473 systemctl status corosync
474 ----
475
476 [[separate-cluster-net-after-creation]]
477 Separate After Cluster Creation
478 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
479
You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.
484
Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:
487
488 ----
489 logging {
490 debug: off
491 to_syslog: yes
492 }
493
494 nodelist {
495
496 node {
497 name: due
498 nodeid: 2
499 quorum_votes: 1
500 ring0_addr: due
501 }
502
503 node {
504 name: tre
505 nodeid: 3
506 quorum_votes: 1
507 ring0_addr: tre
508 }
509
510 node {
511 name: uno
512 nodeid: 1
513 quorum_votes: 1
514 ring0_addr: uno
515 }
516
517 }
518
519 quorum {
520 provider: corosync_votequorum
521 }
522
523 totem {
524 cluster_name: thomas-testcluster
525 config_version: 3
526 ip_version: ipv4
527 secauth: on
528 version: 2
529 interface {
530 bindnetaddr: 192.168.30.50
531 ringnumber: 0
532 }
533
534 }
535 ----
536
The first thing you want to do is add the 'name' properties to the node entries
if you do not see them already. Those *must* match the node name.
539
Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.
543
In this example, we want to switch the cluster communication to the
10.10.10.1/25 network, so we replace all 'ring0_addr' properties accordingly.
We also set the 'bindnetaddr' in the totem section of the config to an address
of the new network. It can be any address from the subnet configured on the new
network interface.
548
After you have increased the 'config_version' property, the new configuration
file should look like:
551
552 ----
553
554 logging {
555 debug: off
556 to_syslog: yes
557 }
558
559 nodelist {
560
561 node {
562 name: due
563 nodeid: 2
564 quorum_votes: 1
565 ring0_addr: 10.10.10.2
566 }
567
568 node {
569 name: tre
570 nodeid: 3
571 quorum_votes: 1
572 ring0_addr: 10.10.10.3
573 }
574
575 node {
576 name: uno
577 nodeid: 1
578 quorum_votes: 1
579 ring0_addr: 10.10.10.1
580 }
581
582 }
583
584 quorum {
585 provider: corosync_votequorum
586 }
587
588 totem {
589 cluster_name: thomas-testcluster
590 config_version: 4
591 ip_version: ipv4
592 secauth: on
593 version: 2
594 interface {
595 bindnetaddr: 10.10.10.1
596 ringnumber: 0
597 }
598
599 }
600 ----
601
Now, after a final check that all the changed information is correct, we save
it and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section
to learn how to bring it into effect.
605
As our change cannot be applied live by corosync, we have to do a restart.
607
608 On a single node execute:
609 [source,bash]
610 ----
611 systemctl restart corosync
612 ----
613
614 Now check if everything is fine:
615
616 [source,bash]
617 ----
618 systemctl status corosync
619 ----
620
If corosync runs correctly again, restart it on all other nodes as well.
They will then join the cluster membership one by one on the new network.
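If the remaining nodes are reachable over SSH, this can be scripted; a sketch
using the example nodes 'due' and 'tre' from above:

[source,bash]
----
# restart corosync on the remaining nodes, one after the other
for node in due tre; do
    ssh root@$node 'systemctl restart corosync'
done
----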
623
624 Redundant Ring Protocol
625 ~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure, you should implement countermeasures.
This can be done on the hardware and operating system level through network bonding.
628
Corosync itself also offers the possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second
totem ring on another network; this network should be physically separated
from the other ring's network to actually increase availability.
633
634 RRP On Cluster Creation
635 ~~~~~~~~~~~~~~~~~~~~~~~
636
The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.
639
640 NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.
641
642 So if you have two networks, one on the 10.10.10.1/24 and the other on the
643 10.10.20.1/24 subnet you would execute:
644
645 [source,bash]
646 ----
647 pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
648 -bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
649 ----
650
651 RRP On Existing Clusters
652 ~~~~~~~~~~~~~~~~~~~~~~~~
653
You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The only difference is that you
will add `ring1` and use it instead of `ring0`.
658
First, add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.
663
Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.
666
667 So if you have two networks, one on the 10.10.10.1/24 and the other on the
668 10.10.20.1/24 subnet, the final configuration file should look like:
669
670 ----
671 totem {
672 cluster_name: tweak
673 config_version: 9
674 ip_version: ipv4
675 rrp_mode: passive
676 secauth: on
677 version: 2
678 interface {
679 bindnetaddr: 10.10.10.1
680 ringnumber: 0
681 }
682 interface {
683 bindnetaddr: 10.10.20.1
684 ringnumber: 1
685 }
686 }
687
688 nodelist {
689 node {
690 name: pvecm1
691 nodeid: 1
692 quorum_votes: 1
693 ring0_addr: 10.10.10.1
694 ring1_addr: 10.10.20.1
695 }
696
697 node {
698 name: pvecm2
699 nodeid: 2
700 quorum_votes: 1
701 ring0_addr: 10.10.10.2
702 ring1_addr: 10.10.20.2
703 }
704
705 [...] # other cluster nodes here
706 }
707
708 [...] # other remaining config sections here
709
710 ----
711
Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.
717
If you cannot reboot the whole cluster, ensure that no High Availability
services are configured and then stop the corosync service on all nodes. After
corosync is stopped on all nodes, start it again one after the other.
721
722 Corosync Configuration
723 ----------------------
724
The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
728 [source,bash]
729 ----
730 man corosync.conf
731 ----
732
733 For node membership you should always use the `pvecm` tool provided by {pve}.
734 You may have to edit the configuration file manually for other changes.
735 Here are a few best practice tips for doing this.
736
737 [[edit-corosync-conf]]
738 Edit corosync.conf
739 ~~~~~~~~~~~~~~~~~~
740
Editing the corosync.conf file is not always straightforward. There are
two copies on each cluster node, one in `/etc/pve/corosync.conf` and the other
in `/etc/corosync/corosync.conf`. Editing the one in our cluster file system
will propagate the changes to the local one, but not vice versa.
745
The configuration will get updated automatically as soon as the file changes.
This means that changes which can be integrated into a running corosync will
take effect instantly. So you should always make a copy and edit that instead,
to avoid triggering unwanted changes with an intermediate save.
750
751 [source,bash]
752 ----
753 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
754 ----
755
Then open the config file with your favorite editor; `nano` and `vim.tiny`,
for example, are preinstalled on {pve}.
758
759 NOTE: Always increment the 'config_version' number on configuration changes,
760 omitting this can lead to problems.
761
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.
765
766 [source,bash]
767 ----
768 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
769 ----
770
771 Then move the new configuration file over the old one:
772 [source,bash]
773 ----
774 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
775 ----
776
With the following commands you can check whether the change was applied
automatically:
778 [source,bash]
779 ----
780 systemctl status corosync
781 journalctl -b -u corosync
782 ----
783
If it was not applied automatically, you may have to restart the
corosync service via:
786 [source,bash]
787 ----
788 systemctl restart corosync
789 ----
790
791 On errors check the troubleshooting section below.
792
793 Troubleshooting
794 ~~~~~~~~~~~~~~~
795
796 Issue: 'quorum.expected_votes must be configured'
797 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
798
799 When corosync starts to fail and you get the following message in the system log:
800
801 ----
802 [...]
803 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
804 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
805 'configuration error: nodelist or quorum.expected_votes must be configured!'
806 [...]
807 ----
808
It means that the hostname you set for the corosync 'ringX_addr' in the
configuration could not be resolved.
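A quick way to verify this is to check name resolution on the affected node.
A sketch, using the hostname `due` from the earlier example configuration:

[source,bash]
----
# prints the address corosync will use; no output means the name does not resolve
getent hosts due
----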
811
812
813 Write Configuration When Not Quorate
814 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
815
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you know what you are doing, use:
818 [source,bash]
819 ----
820 pvecm expected 1
821 ----
822
823 This sets the expected vote count to 1 and makes the cluster quorate. You can
824 now fix your configuration, or revert it back to the last working backup.
825
This is not enough if corosync cannot start anymore. In that case, it is best
to edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf', so that corosync can start again. Ensure that
this configuration has the same content on all nodes to avoid split-brain
situations. If you are not sure what went wrong, it's best to ask the Proxmox
community to help you.
831
832
833 [[corosync-conf-glossary]]
834 Corosync Configuration Glossary
835 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
836
837 ringX_addr::
838 This names the different ring addresses for the corosync totem rings used for
839 the cluster communication.
840
bindnetaddr::
Defines which interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address the node uses on this interface.
845
rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that the use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.
851
852
853 Cluster Cold Start
854 ------------------
855
856 It is obvious that a cluster is not quorate when all nodes are
857 offline. This is a common case after a power failure.
858
859 NOTE: It is always a good idea to use an uninterruptible power supply
860 (``UPS'', also called ``battery backup'') to avoid this state, especially if
861 you want HA.
862
863 On node startup, the `pve-guests` service is started and waits for
864 quorum. Once quorate, it starts all guests which have the `onboot`
865 flag set.
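The `onboot` flag is a normal guest option; for example, to mark VM 100 (a
placeholder ID) for automatic start once the node is quorate:

[source,bash]
----
# start this VM automatically at boot, as soon as the cluster is quorate
qm set 100 --onboot 1
----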
866
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
870
871
872 Guest Migration
873 ---------------
874
875 Migrating virtual guests to other nodes is a useful feature in a
876 cluster. There are settings to control the behavior of such
877 migrations. This can be done via the configuration file
878 `datacenter.cfg` or for a specific migration via API or command line
879 parameters.
880
881 It makes a difference if a Guest is online or offline, or if it has
882 local resources (like a local disk).
883
For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].
889
890 Migration Type
891 ~~~~~~~~~~~~~~
892
893 The migration type defines if the migration data should be sent over an
894 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
899
Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.
903
904 NOTE: Storage migration does not follow this setting. Currently, it
905 always sends the storage content over a secure channel.
906
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
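The type can also be overridden for a single migration on the command line; a
sketch using the `migration_type` parameter of `qm migrate` (VMID 106 and the
target node `tre` are taken from the example further below):

[source,bash]
----
# send this one migration over the unencrypted channel
qm migrate 106 tre --online --migration_type insecure
----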
912
913
914 Migration Network
915 ~~~~~~~~~~~~~~~~~
916
917 By default, {pve} uses the network in which cluster communication
918 takes place to send the migration traffic. This is not optimal because
919 sensitive cluster traffic can be disrupted and this network may not
920 have the best bandwidth available on the node.
921
922 Setting the migration network parameter allows the use of a dedicated
923 network for the entire migration traffic. In addition to the memory,
924 this also affects the storage traffic for offline migrations.
925
The migration network is set as a network using CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in CIDR form. To
enable this, the network must be specified so that each node has
exactly one IP in the respective network.
932
933
934 Example
935 ^^^^^^^
936
937 We assume that we have a three-node setup with three separate
938 networks. One for public communication with the Internet, one for
939 cluster communication and a very fast one, which we want to use as a
940 dedicated network for migration.
941
942 A network configuration for such a setup might look as follows:
943
944 ----
945 iface eno1 inet manual
946
947 # public network
948 auto vmbr0
949 iface vmbr0 inet static
950 address 192.X.Y.57
        netmask 255.255.255.0
952 gateway 192.X.Y.1
953 bridge_ports eno1
954 bridge_stp off
955 bridge_fd 0
956
957 # cluster network
958 auto eno2
959 iface eno2 inet static
960 address 10.1.1.1
961 netmask 255.255.255.0
962
963 # fast network
964 auto eno3
965 iface eno3 inet static
966 address 10.1.2.1
967 netmask 255.255.255.0
968 ----
969
970 Here, we will use the network 10.1.2.0/24 as a migration network. For
971 a single migration, you can do this using the `migration_network`
972 parameter of the command line tool:
973
974 ----
975 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
976 ----
977
978 To configure this as the default network for all migrations in the
979 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
980 file:
981
982 ----
983 # use dedicated migration network
984 migration: secure,network=10.1.2.0/24
985 ----
986
987 NOTE: The migration type must always be set when the migration network
988 gets set in `/etc/pve/datacenter.cfg`.
989
990
991 ifdef::manvolnum[]
992 include::pve-copyright.adoc[]
993 endif::manvolnum[]