1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {PVE} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).
32
33 `pvecm` can be used to create a new cluster, join nodes to a cluster,
34 leave the cluster, get status information and do various other cluster
35 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
36 is used to transparently distribute the cluster configuration to all cluster
37 nodes.
38
39 Grouping nodes into a cluster has the following advantages:
40
41 * Centralized, web based management
42
* Multi-master clusters: each node can do all management tasks
44
45 * `pmxcfs`: database-driven file system for storing configuration files,
46 replicated in real-time on all nodes using `corosync`.
47
48 * Easy migration of virtual machines and containers between physical
49 hosts
50
51 * Fast deployment
52
53 * Cluster-wide services like firewall and HA
54
55
56 Requirements
57 ------------
58
59 * All nodes must be in the same network as `corosync` uses IP Multicast
60 to communicate between nodes (also see
61 http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
62 ports 5404 and 5405 for cluster communication.
63 +
NOTE: Some switches do not have IP multicast enabled by default and it must be
enabled manually first.
66
67 * Date and time have to be synchronized.
68
* An SSH tunnel on TCP port 22 between nodes is used.
70
71 * If you are interested in High Availability, you need to have at
72 least three nodes for reliable quorum. All nodes should have the
73 same version.
74
75 * We recommend a dedicated NIC for the cluster traffic, especially if
76 you use shared storage.
77
78 NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
79 Proxmox VE 4.0 cluster nodes.
80
81
82 Preparing Nodes
83 ---------------
84
85 First, install {PVE} on all nodes. Make sure that each node is
86 installed with the final hostname and IP configuration. Changing the
87 hostname and IP is not possible after cluster creation.
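
A quick sanity check on each node, for example, is to confirm that the hostname
resolves to the address you intend to keep (the node name shown is just a
placeholder):

[source,bash]
----
hostname                # should print the final node name, e.g. hp1
hostname --ip-address   # should print the node's final IP, not 127.0.1.1
----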
88
Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.
91
92 [[pvecm_create_cluster]]
93 Create the Cluster
94 ------------------
95
Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
97 This name cannot be changed later.
98
99 hp1# pvecm create YOUR-CLUSTER-NAME
100
101 CAUTION: The cluster name is used to compute the default multicast
102 address. Please use unique cluster names if you run more than one
103 cluster inside your network.
104
105 To check the state of your cluster use:
106
107 hp1# pvecm status
108
109 Multiple Clusters In Same Network
110 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
111
112 It is possible to create multiple clusters in the same physical or logical
113 network. Each cluster must have a unique name, which is used to generate the
114 cluster's multicast group address. As long as no duplicate cluster names are
115 configured in one network segment, the different clusters won't interfere with
116 each other.
117
If multiple clusters operate in a single network it may be beneficial to set up
an IGMP querier and enable IGMP snooping in said network. This may reduce the
120 load of the network significantly because multicast packets are only delivered
121 to endpoints of the respective member nodes.
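
If the {pve} nodes themselves sit behind a Linux bridge, snooping and a querier
can, as a sketch, be toggled through sysfs; the bridge name `vmbr0` is a
placeholder and physical switches need the equivalent settings in their own
management interface:

[source,bash]
----
# enable IGMP snooping on the bridge (0 would disable it)
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
# let the bridge act as IGMP querier on this network segment
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----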
122
123
124 [[pvecm_join_node_to_cluster]]
125 Adding Nodes to the Cluster
126 ---------------------------
127
Log in via `ssh` to the node you want to add.
129
130 hp2# pvecm add IP-ADDRESS-CLUSTER
131
132 For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
133
CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore each guest to a different VMID
after adding the node to the cluster (see the sketch below).
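
A rough sketch of that workaround, assuming a guest with VMID 100 on the
joining node that should end up as VMID 200 afterwards (IDs, storage and paths
are just examples):

[source,bash]
----
# on the node that is about to join: back up the guest
vzdump 100 --storage local --compress lzo --mode stop

# after joining the cluster: restore the backup under a free VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.lzo 200
----

For containers, `pct restore` is the counterpart to `qmrestore`.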
139
To check the state of the cluster:
141
142 # pvecm status
143
144 .Cluster status after adding 4 nodes
145 ----
146 hp2# pvecm status
147 Quorum information
148 ~~~~~~~~~~~~~~~~~~
149 Date: Mon Apr 20 12:30:13 2015
150 Quorum provider: corosync_votequorum
151 Nodes: 4
152 Node ID: 0x00000001
153 Ring ID: 1928
154 Quorate: Yes
155
156 Votequorum information
157 ~~~~~~~~~~~~~~~~~~~~~~
158 Expected votes: 4
159 Highest expected: 4
160 Total votes: 4
161 Quorum: 2
162 Flags: Quorate
163
164 Membership information
165 ~~~~~~~~~~~~~~~~~~~~~~
166 Nodeid Votes Name
167 0x00000001 1 192.168.15.91
168 0x00000002 1 192.168.15.92 (local)
169 0x00000003 1 192.168.15.93
170 0x00000004 1 192.168.15.94
171 ----
172
173 If you only want the list of all nodes use:
174
175 # pvecm nodes
176
177 .List nodes in a cluster
178 ----
179 hp2# pvecm nodes
180
181 Membership information
182 ~~~~~~~~~~~~~~~~~~~~~~
183 Nodeid Votes Name
184 1 1 hp1
185 2 1 hp2 (local)
186 3 1 hp3
187 4 1 hp4
188 ----
189
190 [[adding-nodes-with-separated-cluster-network]]
191 Adding Nodes With Separated Cluster Network
192 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
193
When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:
196
197 [source,bash]
198 ----
199 pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
200 ----
201
If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter, as in the example below.
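
For example, a join over two separated rings could look like this (all
addresses are placeholders for your actual ring networks):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----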
204
205
206 Remove a Cluster Node
207 ---------------------
208
CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.
211
Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.
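
For example, while still logged in on hp4, running guests could be moved away
like this (the VMIDs and the target node are placeholders):

[source,bash]
----
# migrate a running virtual machine to hp1
qm migrate 100 hp1 --online

# migrate a (stopped) container to hp1
pct migrate 101 hp1
----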
215
216 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
217 command to identify the node ID to remove:
218
219 ----
220 hp1# pvecm nodes
221
222 Membership information
223 ~~~~~~~~~~~~~~~~~~~~~~
224 Nodeid Votes Name
225 1 1 hp1 (local)
226 2 1 hp2
227 3 1 hp3
228 4 1 hp4
229 ----
230
231
232 At this point you must power off hp4 and
233 make sure that it will not power on again (in the network) as it
234 is.
235
236 IMPORTANT: As said above, it is critical to power off the node
237 *before* removal, and make sure that it will *never* power on again
238 (in the existing cluster network) as it is.
If you power on the node as it is, the cluster can get corrupted and it
could be difficult to restore a clean cluster state.
241
242 After powering off the node hp4, we can safely remove it from the cluster.
243
244 hp1# pvecm delnode hp4
245
If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:
249
250 ----
251 hp1# pvecm status
252
253 Quorum information
254 ~~~~~~~~~~~~~~~~~~
255 Date: Mon Apr 20 12:44:28 2015
256 Quorum provider: corosync_votequorum
257 Nodes: 3
258 Node ID: 0x00000001
259 Ring ID: 1992
260 Quorate: Yes
261
262 Votequorum information
263 ~~~~~~~~~~~~~~~~~~~~~~
264 Expected votes: 3
265 Highest expected: 3
266 Total votes: 3
267 Quorum: 3
268 Flags: Quorate
269
270 Membership information
271 ~~~~~~~~~~~~~~~~~~~~~~
272 Nodeid Votes Name
273 0x00000001 1 192.168.15.90 (local)
274 0x00000002 1 192.168.15.91
275 0x00000003 1 192.168.15.92
276 ----
277
If, for whatever reason, you want this server to join the same
cluster again, you have to:
280
281 * reinstall {pve} on it from scratch
282
283 * then join it, as explained in the previous section.
284
285 [[pvecm_separate_node_without_reinstall]]
286 Separate A Node Without Reinstalling
287 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
288
CAUTION: This is *not* the recommended method, proceed with caution. Use the
method mentioned above if you're unsure.
291
292 You can also separate a node from a cluster without reinstalling it from
293 scratch. But after removing the node from the cluster it will still have
294 access to the shared storages! This must be resolved before you start removing
295 the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.
298
It is suggested that you create a new storage, to which only the node that you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
305
WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.
308
309 First stop the corosync and the pve-cluster services on the node:
310 [source,bash]
311 ----
312 systemctl stop pve-cluster
313 systemctl stop corosync
314 ----
315
316 Start the cluster filesystem again in local mode:
317 [source,bash]
318 ----
319 pmxcfs -l
320 ----
321
322 Delete the corosync configuration files:
323 [source,bash]
324 ----
325 rm /etc/pve/corosync.conf
326 rm /etc/corosync/*
327 ----
328
You can now start the file system again as a normal service:
330 [source,bash]
331 ----
332 killall pmxcfs
333 systemctl start pve-cluster
334 ----
335
The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
338 [source,bash]
339 ----
340 pvecm delnode oldnode
341 ----
342
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
345 [source,bash]
346 ----
347 pvecm expected 1
348 ----
349
Then repeat the 'pvecm delnode' command.
351
Now switch back to the separated node and delete all remaining files left over
from the old cluster. This ensures that the node can be added to another
cluster again without problems.
355
356 [source,bash]
357 ----
358 rm /var/lib/corosync/*
359 ----
360
As the configuration files from the other nodes are still in the cluster
file system, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you picked the correct one before deleting it (see the example below).
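
For example, if the separated node was named 'oldnode' as in the `delnode`
command above:

[source,bash]
----
rm -r /etc/pve/nodes/oldnode
----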
365
CAUTION: The node's SSH keys are still in the 'authorized_keys' file; this means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
370
371 Quorum
372 ------
373
{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
376
377 [quote, from Wikipedia, Quorum (distributed computing)]
378 ____
379 A quorum is the minimum number of votes that a distributed transaction
380 has to obtain in order to be allowed to perform an operation in a
381 distributed system.
382 ____
383
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.
387
388 NOTE: {pve} assigns a single vote to each node by default.
389
390 Cluster Network
391 ---------------
392
The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).
398
399 [[cluster-network-requirements]]
400 Network Requirements
401 ~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network.
*Never* share it with the network over which storage communicates.
408
409 Before setting up a cluster it is good practice to check if the network is fit
410 for that purpose.
411
412 * Ensure that all nodes are in the same subnet. This must only be true for the
413 network interfaces used for cluster communication (corosync).
414
415 * Ensure all nodes can reach each other over those interfaces, using `ping` is
416 enough for a basic test.
417
* Ensure that multicast works in general and at high packet rates. This can be
done with the `omping` tool. The final "%loss" number should be < 1%.
420 +
421 [source,bash]
422 ----
423 omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
424 ----
425
426 * Ensure that multicast communication works over an extended period of time.
427 This uncovers problems where IGMP snooping is activated on the network but
428 no multicast querier is active. This test has a duration of around 10
429 minutes.
430 +
431 [source,bash]
432 ----
433 omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
434 ----
435
Your network is not ready for clustering if any of these tests fail. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.
440
In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work (see the sketch below).
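
Corosync supports unicast through its UDPU transport. A minimal sketch of what
this looks like in the `totem` section of corosync.conf; as with any change,
increment the 'config_version' and follow the
<<edit-corosync-conf,edit the corosync.conf file>> section:

----
totem {
  [...] # existing totem options stay in place
  transport: udpu
}
----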
443
444 Separate Cluster Network
445 ~~~~~~~~~~~~~~~~~~~~~~~~
446
When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical, real-time application.
451
452 Setting Up A New Network
453 ^^^^^^^^^^^^^^^^^^^^^^^^
454
First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
457 <<cluster-network-requirements,cluster network requirements>>.
458
459 Separate On Cluster Creation
460 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
461
This is possible via the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.
464
If you have set up an additional NIC with a static address on 10.10.10.1/25,
and want to send and receive all cluster communication over this interface,
you would execute:
468
469 [source,bash]
470 ----
471 pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
472 ----
473
474 To check if everything is working properly execute:
475 [source,bash]
476 ----
477 systemctl status corosync
478 ----
479
Afterwards, proceed as described in the section to
481 <<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.
482
483 [[separate-cluster-net-after-creation]]
484 Separate After Cluster Creation
485 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
486
You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
489 This change may lead to short durations of quorum loss in the cluster, as nodes
490 have to restart corosync and come up one after the other on the new network.
491
492 Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:
494
495 ----
496 logging {
497 debug: off
498 to_syslog: yes
499 }
500
501 nodelist {
502
503 node {
504 name: due
505 nodeid: 2
506 quorum_votes: 1
507 ring0_addr: due
508 }
509
510 node {
511 name: tre
512 nodeid: 3
513 quorum_votes: 1
514 ring0_addr: tre
515 }
516
517 node {
518 name: uno
519 nodeid: 1
520 quorum_votes: 1
521 ring0_addr: uno
522 }
523
524 }
525
526 quorum {
527 provider: corosync_votequorum
528 }
529
530 totem {
531 cluster_name: thomas-testcluster
532 config_version: 3
533 ip_version: ipv4
534 secauth: on
535 version: 2
536 interface {
537 bindnetaddr: 192.168.30.50
538 ringnumber: 0
539 }
540
541 }
542 ----
543
The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.
546
Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.
550
In this example, we want to switch the cluster communication to the
10.10.10.1/25 network, so we replace all 'ring0_addr' properties accordingly.
We also set the 'bindnetaddr' in the totem section of the config to an address
of the new network. It can be any address from the subnet configured on the
new network interface.
555
After you have increased the 'config_version' property, the new configuration
file should look like:
558
559 ----
560
561 logging {
562 debug: off
563 to_syslog: yes
564 }
565
566 nodelist {
567
568 node {
569 name: due
570 nodeid: 2
571 quorum_votes: 1
572 ring0_addr: 10.10.10.2
573 }
574
575 node {
576 name: tre
577 nodeid: 3
578 quorum_votes: 1
579 ring0_addr: 10.10.10.3
580 }
581
582 node {
583 name: uno
584 nodeid: 1
585 quorum_votes: 1
586 ring0_addr: 10.10.10.1
587 }
588
589 }
590
591 quorum {
592 provider: corosync_votequorum
593 }
594
595 totem {
596 cluster_name: thomas-testcluster
597 config_version: 4
598 ip_version: ipv4
599 secauth: on
600 version: 2
601 interface {
602 bindnetaddr: 10.10.10.1
603 ringnumber: 0
604 }
605
606 }
607 ----
608
Now, after a final check that all the changed information is correct, we save
it and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section
to learn how to bring it into effect.
612
As this change cannot be applied live by corosync, a restart is required.
614
615 On a single node execute:
616 [source,bash]
617 ----
618 systemctl restart corosync
619 ----
620
621 Now check if everything is fine:
622
623 [source,bash]
624 ----
625 systemctl status corosync
626 ----
627
If corosync runs correctly again, restart it on all other nodes as well.
They will then join the cluster membership one by one on the new network.
630
631 [[pvecm_rrp]]
632 Redundant Ring Protocol
633 ~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network bonding.
636
Corosync itself also offers the possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network; this network should be physically separated from the
other ring's network to actually increase availability.
641
642 RRP On Cluster Creation
643 ~~~~~~~~~~~~~~~~~~~~~~~
644
The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.
647
648 NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.
649
650 So if you have two networks, one on the 10.10.10.1/24 and the other on the
651 10.10.20.1/24 subnet you would execute:
652
653 [source,bash]
654 ----
655 pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
656 -bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
657 ----
658
659 RRP On Existing Clusters
660 ~~~~~~~~~~~~~~~~~~~~~~~~
661
662 You will take similar steps as described in
663 <<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The only difference is that you
will add `ring1` and use it instead of `ring0`.
666
First, add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.
671
Then add a new `ring1_addr` property, with the node's additional ring address,
to each node entry in the `nodelist` section.
674
675 So if you have two networks, one on the 10.10.10.1/24 and the other on the
676 10.10.20.1/24 subnet, the final configuration file should look like:
677
678 ----
679 totem {
680 cluster_name: tweak
681 config_version: 9
682 ip_version: ipv4
683 rrp_mode: passive
684 secauth: on
685 version: 2
686 interface {
687 bindnetaddr: 10.10.10.1
688 ringnumber: 0
689 }
690 interface {
691 bindnetaddr: 10.10.20.1
692 ringnumber: 1
693 }
694 }
695
696 nodelist {
697 node {
698 name: pvecm1
699 nodeid: 1
700 quorum_votes: 1
701 ring0_addr: 10.10.10.1
702 ring1_addr: 10.10.20.1
703 }
704
705 node {
706 name: pvecm2
707 nodeid: 2
708 quorum_votes: 1
709 ring0_addr: 10.10.10.2
710 ring1_addr: 10.10.20.2
711 }
712
713 [...] # other cluster nodes here
714 }
715
716 [...] # other remaining config sections here
717
718 ----
719
Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.
722
This is a change which cannot be applied live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.
725
If you cannot reboot the whole cluster, ensure that no High Availability
services are configured and then stop the corosync service on all nodes. After
corosync is stopped on all nodes, start it again one node after the other.
729
730 Corosync Configuration
731 ----------------------
732
The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
736 [source,bash]
737 ----
738 man corosync.conf
739 ----
740
741 For node membership you should always use the `pvecm` tool provided by {pve}.
742 You may have to edit the configuration file manually for other changes.
743 Here are a few best practice tips for doing this.
744
745 [[edit-corosync-conf]]
746 Edit corosync.conf
747 ~~~~~~~~~~~~~~~~~~
748
Editing the corosync.conf file is not always straightforward. There are
two copies on each cluster node, one in `/etc/pve/corosync.conf` and the other
in `/etc/corosync/corosync.conf`. Editing the one in our cluster file system
will propagate the changes to the local one, but not vice versa.
753
The configuration will get updated automatically as soon as the file changes.
This means that changes which can be integrated into a running corosync will
take effect instantly. So you should always make a copy and edit that instead,
to avoid triggering unwanted changes through an intermediate save.
758
759 [source,bash]
760 ----
761 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
762 ----
763
Then open the config file with your favorite editor; `nano` and `vim.tiny`,
for example, are preinstalled on {pve}.
766
NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.
769
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes other problems.
773
774 [source,bash]
775 ----
776 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
777 ----
778
779 Then move the new configuration file over the old one:
780 [source,bash]
781 ----
782 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
783 ----
784
You can check whether the change could be applied automatically with the
following commands:
786 [source,bash]
787 ----
788 systemctl status corosync
789 journalctl -b -u corosync
790 ----
791
If the change could not be applied automatically, you may have to restart the
corosync service via:
794 [source,bash]
795 ----
796 systemctl restart corosync
797 ----
798
799 On errors check the troubleshooting section below.
800
801 Troubleshooting
802 ~~~~~~~~~~~~~~~
803
804 Issue: 'quorum.expected_votes must be configured'
805 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
806
807 When corosync starts to fail and you get the following message in the system log:
808
809 ----
810 [...]
811 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
812 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
813 'configuration error: nodelist or quorum.expected_votes must be configured!'
814 [...]
815 ----
816
This means that the hostname you set for the corosync 'ringX_addr' in the
configuration could not be resolved (see the check below).
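
A quick check is to try to resolve each configured name on every node, for
example (the node name is a placeholder):

[source,bash]
----
getent hosts NODENAME    # should return exactly one address reachable by all nodes
----

If nothing sensible is returned, add the name to '/etc/hosts' on all nodes or
use plain IP addresses in the 'ringX_addr' properties instead.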
819
820
821 Write Configuration When Not Quorate
822 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
823
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you know what you are doing, use:
826 [source,bash]
827 ----
828 pvecm expected 1
829 ----
830
831 This sets the expected vote count to 1 and makes the cluster quorate. You can
832 now fix your configuration, or revert it back to the last working backup.
833
This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf', so that corosync can start again. Ensure that
this configuration has the same content on all nodes to avoid split-brain
situations. If you are not sure what went wrong, it's best to ask the Proxmox
community to help you.
839
840
841 [[corosync-conf-glossary]]
842 Corosync Configuration Glossary
843 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
844
845 ringX_addr::
846 This names the different ring addresses for the corosync totem rings used for
847 the cluster communication.
848
849 bindnetaddr::
Defines the interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address a node uses on this interface.
853
854 rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.
859
860
861 Cluster Cold Start
862 ------------------
863
864 It is obvious that a cluster is not quorate when all nodes are
865 offline. This is a common case after a power failure.
866
867 NOTE: It is always a good idea to use an uninterruptible power supply
868 (``UPS'', also called ``battery backup'') to avoid this state, especially if
869 you want HA.
870
871 On node startup, the `pve-guests` service is started and waits for
872 quorum. Once quorate, it starts all guests which have the `onboot`
873 flag set.
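
The `onboot` flag can be set per guest, for example (the VMIDs are
placeholders):

[source,bash]
----
# start this virtual machine automatically once the node is up and quorate
qm set 106 --onboot 1

# the same for a container
pct set 107 --onboot 1
----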
874
When you turn on nodes, or when power comes back after a power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
878
879
880 Guest Migration
881 ---------------
882
883 Migrating virtual guests to other nodes is a useful feature in a
884 cluster. There are settings to control the behavior of such
885 migrations. This can be done via the configuration file
886 `datacenter.cfg` or for a specific migration via API or command line
887 parameters.
888
889 It makes a difference if a Guest is online or offline, or if it has
890 local resources (like a local disk).
891
For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].
894
For details about container migration, see the
xref:pct_migration[Container Migration Chapter].
897
898 Migration Type
899 ~~~~~~~~~~~~~~
900
901 The migration type defines if the migration data should be sent over an
902 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
905 information disclosure of critical data from inside the guest (for
906 example passwords or encryption keys).
907
908 Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.
911
912 NOTE: Storage migration does not follow this setting. Currently, it
913 always sends the storage content over a secure channel.
914
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
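
As a sketch, the type can be set cluster-wide in `/etc/pve/datacenter.cfg` (the
same property that carries the migration network in the next section), or per
migration, assuming the `migration_type` option of `qm migrate`:

----
# /etc/pve/datacenter.cfg -- always use the encrypted channel by default
migration: secure
----

[source,bash]
----
# one-off unencrypted migration on a fully trusted network (IDs are examples)
qm migrate 106 tre --online --migration_type insecure
----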
920
921
922 Migration Network
923 ~~~~~~~~~~~~~~~~~
924
925 By default, {pve} uses the network in which cluster communication
926 takes place to send the migration traffic. This is not optimal because
927 sensitive cluster traffic can be disrupted and this network may not
928 have the best bandwidth available on the node.
929
930 Setting the migration network parameter allows the use of a dedicated
931 network for the entire migration traffic. In addition to the memory,
932 this also affects the storage traffic for offline migrations.
933
934 The migration network is set as a network in the CIDR notation. This
935 has the advantage that you do not have to set individual IP addresses
936 for each node. {pve} can determine the real address on the
937 destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has exactly
one IP in the respective network.
940
941
942 Example
943 ^^^^^^^
944
945 We assume that we have a three-node setup with three separate
946 networks. One for public communication with the Internet, one for
947 cluster communication and a very fast one, which we want to use as a
948 dedicated network for migration.
949
950 A network configuration for such a setup might look as follows:
951
952 ----
953 iface eno1 inet manual
954
955 # public network
956 auto vmbr0
957 iface vmbr0 inet static
958 address 192.X.Y.57
        netmask  255.255.240.0
960 gateway 192.X.Y.1
961 bridge_ports eno1
962 bridge_stp off
963 bridge_fd 0
964
965 # cluster network
966 auto eno2
967 iface eno2 inet static
968 address 10.1.1.1
969 netmask 255.255.255.0
970
971 # fast network
972 auto eno3
973 iface eno3 inet static
974 address 10.1.2.1
975 netmask 255.255.255.0
976 ----
977
978 Here, we will use the network 10.1.2.0/24 as a migration network. For
979 a single migration, you can do this using the `migration_network`
980 parameter of the command line tool:
981
982 ----
983 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
984 ----
985
986 To configure this as the default network for all migrations in the
987 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
988 file:
989
990 ----
991 # use dedicated migration network
992 migration: secure,network=10.1.2.0/24
993 ----
994
995 NOTE: The migration type must always be set when the migration network
996 gets set in `/etc/pve/datacenter.cfg`.
997
998
999 ifdef::manvolnum[]
1000 include::pve-copyright.adoc[]
1001 endif::manvolnum[]