1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {PVE} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication, and such clusters can consist of up to 32 physical nodes
31 (probably more, dependent on network latency).
32
33 `pvecm` can be used to create a new cluster, join nodes to a cluster,
34 leave the cluster, get status information and do various other cluster
35 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
36 is used to transparently distribute the cluster configuration to all cluster
37 nodes.
38
39 Grouping nodes into a cluster has the following advantages:
40
41 * Centralized, web based management
42
* Multi-master clusters: each node can do all management tasks
44
45 * `pmxcfs`: database-driven file system for storing configuration files,
46 replicated in real-time on all nodes using `corosync`.
47
48 * Easy migration of virtual machines and containers between physical
49 hosts
50
51 * Fast deployment
52
53 * Cluster-wide services like firewall and HA
54
55
56 Requirements
57 ------------
58
* All nodes must be in the same network, as `corosync` uses IP Multicast
to communicate between nodes (also see
http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
ports 5404 and 5405 for cluster communication.
63 +
NOTE: Some switches do not support IP multicast by default and it must be
enabled manually first.
66
67 * Date and time have to be synchronized.
68
* An SSH tunnel on TCP port 22 between nodes is used (see the quick checks sketched below).
70
71 * If you are interested in High Availability, you need to have at
72 least three nodes for reliable quorum. All nodes should have the
73 same version.
74
75 * We recommend a dedicated NIC for the cluster traffic, especially if
76 you use shared storage.
77
* The root password of a cluster node is required for adding nodes.
79
80 NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
81 Proxmox VE 4.0 cluster nodes.
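
Some of these requirements can be verified quickly from the shell before you
create the cluster. The following is only a sketch; the peer address
192.168.15.92 is a made-up example, adapt it to one of your own nodes.

[source,bash]
----
# is the clock synchronized (NTP active)?
timedatectl status

# is the other node reachable, and does it accept SSH on TCP port 22?
ping -c 3 192.168.15.92
ssh root@192.168.15.92 pveversion
----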
82
83
84 Preparing Nodes
85 ---------------
86
87 First, install {PVE} on all nodes. Make sure that each node is
88 installed with the final hostname and IP configuration. Changing the
89 hostname and IP is not possible after cluster creation.
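
To double-check hostname and address resolution before creating or joining a
cluster, the following commands can help. This is just a sketch, using only
standard tools available on a default {pve} installation.

[source,bash]
----
# the hostname this node was installed with
hostname --fqdn

# the address that hostname resolves to (should be the final, static IP)
getent hosts $(hostname)

# the addresses currently configured on the interfaces
ip addr show
----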
90
Currently, cluster creation can be done either on the console (login via `ssh`) or through the GUI.
92
93 [[pvecm_create_cluster]]
94 Create the Cluster
95 ------------------
96
97 Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
98 This name cannot be changed later. The cluster name follows the same rules as node names.
99
100 hp1# pvecm create YOUR-CLUSTER-NAME
101
102 CAUTION: The cluster name is used to compute the default multicast
103 address. Please use unique cluster names if you run more than one
104 cluster inside your network.
105
106 To check the state of your cluster use:
107
108 hp1# pvecm status
109
110 Multiple Clusters In Same Network
111 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
112
113 It is possible to create multiple clusters in the same physical or logical
114 network. Each cluster must have a unique name, which is used to generate the
115 cluster's multicast group address. As long as no duplicate cluster names are
116 configured in one network segment, the different clusters won't interfere with
117 each other.
118
If multiple clusters operate in a single network, it may be beneficial to set up
an IGMP querier and enable IGMP snooping in said network. This may reduce the
load on the network significantly, because multicast packets are only delivered
to the endpoints of the respective member nodes.
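
How the querier is enabled depends on your switch. If the cluster network runs
over a Linux bridge on a {pve} host, a querier can, as a sketch, be enabled on
the bridge itself (assuming the bridge is called vmbr0; the settings below are
runtime-only and would need to be persisted in your network configuration):

[source,bash]
----
# enable IGMP snooping and an IGMP querier on the bridge vmbr0
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----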
123
124
125 [[pvecm_join_node_to_cluster]]
126 Adding Nodes to the Cluster
127 ---------------------------
128
129 Login via `ssh` to the node you want to add.
130
131 hp2# pvecm add IP-ADDRESS-CLUSTER
132
For `IP-ADDRESS-CLUSTER`, use the IP address of an existing cluster node.
134
CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore to a different VMID after
adding the node to the cluster.
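
As an illustration of that workaround, a conflicting guest could be backed up
on the joining node beforehand and restored under a free VMID afterwards. The
VMIDs, directory and archive name below are made up for this example.

[source,bash]
----
# before joining: back up VM 100 on the node that will join the cluster
vzdump 100 --dumpdir /mnt/backup --mode stop

# after joining: restore the backup under the free VMID 200
qmrestore /mnt/backup/vzdump-qemu-100-2015_04_20-12_00_00.vma 200
----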
140
To check the state of the cluster:
142
143 # pvecm status
144
145 .Cluster status after adding 4 nodes
146 ----
147 hp2# pvecm status
148 Quorum information
149 ~~~~~~~~~~~~~~~~~~
150 Date: Mon Apr 20 12:30:13 2015
151 Quorum provider: corosync_votequorum
152 Nodes: 4
153 Node ID: 0x00000001
154 Ring ID: 1928
155 Quorate: Yes
156
157 Votequorum information
158 ~~~~~~~~~~~~~~~~~~~~~~
159 Expected votes: 4
160 Highest expected: 4
161 Total votes: 4
162 Quorum: 2
163 Flags: Quorate
164
165 Membership information
166 ~~~~~~~~~~~~~~~~~~~~~~
167 Nodeid Votes Name
168 0x00000001 1 192.168.15.91
169 0x00000002 1 192.168.15.92 (local)
170 0x00000003 1 192.168.15.93
171 0x00000004 1 192.168.15.94
172 ----
173
If you only want a list of all nodes, use:
175
176 # pvecm nodes
177
178 .List nodes in a cluster
179 ----
180 hp2# pvecm nodes
181
182 Membership information
183 ~~~~~~~~~~~~~~~~~~~~~~
184 Nodeid Votes Name
185 1 1 hp1
186 2 1 hp2 (local)
187 3 1 hp3
188 4 1 hp4
189 ----
190
191 [[adding-nodes-with-separated-cluster-network]]
192 Adding Nodes With Separated Cluster Network
193 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
194
When adding a node to a cluster with a separated cluster network, you need to
use the 'ringX_addr' parameters to set the node's address on those networks:
197
198 [source,bash]
199 ----
200 pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
201 ----
202
If you want to use the Redundant Ring Protocol, you will also want to pass the
'ring1_addr' parameter.
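
For example, a join with both ring addresses given explicitly could look like
this (all addresses are placeholders):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----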
205
206
207 Remove a Cluster Node
208 ---------------------
209
CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.
212
Move all virtual machines off the node. Make sure you have no local
data or backups that you want to keep, or save them accordingly.
In the following example, we will remove the node hp4 from the cluster.
216
217 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
218 command to identify the node ID to remove:
219
220 ----
221 hp1# pvecm nodes
222
223 Membership information
224 ~~~~~~~~~~~~~~~~~~~~~~
225 Nodeid Votes Name
226 1 1 hp1 (local)
227 2 1 hp2
228 3 1 hp3
229 4 1 hp4
230 ----
231
232
At this point, you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.
236
IMPORTANT: As mentioned above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.
242
243 After powering off the node hp4, we can safely remove it from the cluster.
244
245 hp1# pvecm delnode hp4
246
If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:
250
251 ----
252 hp1# pvecm status
253
254 Quorum information
255 ~~~~~~~~~~~~~~~~~~
256 Date: Mon Apr 20 12:44:28 2015
257 Quorum provider: corosync_votequorum
258 Nodes: 3
259 Node ID: 0x00000001
260 Ring ID: 1992
261 Quorate: Yes
262
263 Votequorum information
264 ~~~~~~~~~~~~~~~~~~~~~~
265 Expected votes: 3
266 Highest expected: 3
267 Total votes: 3
268 Quorum: 3
269 Flags: Quorate
270
271 Membership information
272 ~~~~~~~~~~~~~~~~~~~~~~
273 Nodeid Votes Name
274 0x00000001 1 192.168.15.90 (local)
275 0x00000002 1 192.168.15.91
276 0x00000003 1 192.168.15.92
277 ----
278
If, for whatever reason, you want this server to join the same
cluster again, you have to
281
282 * reinstall {pve} on it from scratch
283
284 * then join it, as explained in the previous section.
285
286 [[pvecm_separate_node_without_reinstall]]
287 Separate A Node Without Reinstalling
288 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
289
CAUTION: This is *not* the recommended method, proceed with caution. Use the
method mentioned above if you're unsure.
292
You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Furthermore, it may also lead to VMID conflicts.
299
It is suggested that you create a new storage, to which only the node you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
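
As a sketch, such a storage could be added via `pvesm` and restricted to the
node that will be separated. The storage name, server, export path and node
name are examples only.

[source,bash]
----
# NFS storage that only the node to be separated (here: hp4) may use
pvesm add nfs separate-nfs --server 192.168.15.200 --export /export/separate \
    --content images,rootdir --nodes hp4
----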
306
WARNING: Ensure all shared resources are cleanly separated! Otherwise you
will run into conflicts and problems.
309
First, stop the corosync and pve-cluster services on the node:
311 [source,bash]
312 ----
313 systemctl stop pve-cluster
314 systemctl stop corosync
315 ----
316
317 Start the cluster filesystem again in local mode:
318 [source,bash]
319 ----
320 pmxcfs -l
321 ----
322
323 Delete the corosync configuration files:
324 [source,bash]
325 ----
326 rm /etc/pve/corosync.conf
327 rm /etc/corosync/*
328 ----
329
You can now start the filesystem again as a normal service:
331 [source,bash]
332 ----
333 killall pmxcfs
334 systemctl start pve-cluster
335 ----
336
The node is now separated from the cluster. You can delete it from any remaining
node of the cluster with:
339 [source,bash]
340 ----
341 pvecm delnode oldnode
342 ----
343
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
346 [source,bash]
347 ----
348 pvecm expected 1
349 ----
350
Then repeat the 'pvecm delnode' command.
352
Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.
356
357 [source,bash]
358 ----
359 rm /var/lib/corosync/*
360 ----
361
As the configuration files from the other nodes are still in the cluster
file system, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you have the correct one before deleting it.
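
A sketch of that cleanup, assuming the old cluster contained a node named hp1
(an example name, double-check before running):

[source,bash]
----
# on the separated node: remove the leftover directory of an old cluster member
rm -r /etc/pve/nodes/hp1
----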
366
CAUTION: The node's SSH keys are still in the 'authorized_keys' file; this means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
371
372 Quorum
373 ------
374
{pve} uses a quorum-based technique to provide a consistent state among
376 all cluster nodes.
377
378 [quote, from Wikipedia, Quorum (distributed computing)]
379 ____
380 A quorum is the minimum number of votes that a distributed transaction
381 has to obtain in order to be allowed to perform an operation in a
382 distributed system.
383 ____
384
In case of network partitioning, state changes require that a
386 majority of nodes are online. The cluster switches to read-only mode
387 if it loses quorum.
388
389 NOTE: {pve} assigns a single vote to each node by default.
390
391 Cluster Network
392 ---------------
393
The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).
399
400 [[cluster-network-requirements]]
401 Network Requirements
402 ~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it's **highly recommended** to have a
multicast-capable network. The network should not be used heavily by other
members; ideally corosync runs on its own network. *Never* share it with a
network where storage communicates too.
409
410 Before setting up a cluster it is good practice to check if the network is fit
411 for that purpose.
412
413 * Ensure that all nodes are in the same subnet. This must only be true for the
414 network interfaces used for cluster communication (corosync).
415
* Ensure all nodes can reach each other over those interfaces; using `ping` is
enough for a basic test.
418
* Ensure that multicast works in general and at high packet rates. This can be
done with the `omping` tool. The final "%loss" number should be < 1%.
421 +
422 [source,bash]
423 ----
424 omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
425 ----
426
427 * Ensure that multicast communication works over an extended period of time.
428 This uncovers problems where IGMP snooping is activated on the network but
429 no multicast querier is active. This test has a duration of around 10
430 minutes.
431 +
432 [source,bash]
433 ----
434 omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
435 ----
436
Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.
441
In smaller clusters, it's also an option to use unicast if you really cannot
get multicast to work.
444
445 Separate Cluster Network
446 ~~~~~~~~~~~~~~~~~~~~~~~~
447
When creating a cluster without any parameters, the cluster network is generally
shared with the web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It's recommended to
change that, as corosync is a time-critical real-time application.
452
453 Setting Up A New Network
454 ^^^^^^^^^^^^^^^^^^^^^^^^
455
First, you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.
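
A minimal sketch of such an interface in '/etc/network/interfaces', assuming the
dedicated NIC is called eno2 and the cluster network will be 10.10.10.0/25 as in
the examples below:

----
# dedicated corosync network
auto eno2
iface eno2 inet static
        address  10.10.10.1
        netmask  255.255.255.128
----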
459
460 Separate On Cluster Creation
461 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
462
This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.
465
If you have set up an additional NIC with a static address on 10.10.10.1/25,
and want to send and receive all cluster communication over this interface,
you would execute:
469
470 [source,bash]
471 ----
472 pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
473 ----
474
475 To check if everything is working properly execute:
476 [source,bash]
477 ----
478 systemctl status corosync
479 ----
480
Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.
483
484 [[separate-cluster-net-after-creation]]
485 Separate After Cluster Creation
486 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
487
You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.
492
Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:
495
496 ----
497 logging {
498 debug: off
499 to_syslog: yes
500 }
501
502 nodelist {
503
504 node {
505 name: due
506 nodeid: 2
507 quorum_votes: 1
508 ring0_addr: due
509 }
510
511 node {
512 name: tre
513 nodeid: 3
514 quorum_votes: 1
515 ring0_addr: tre
516 }
517
518 node {
519 name: uno
520 nodeid: 1
521 quorum_votes: 1
522 ring0_addr: uno
523 }
524
525 }
526
527 quorum {
528 provider: corosync_votequorum
529 }
530
531 totem {
532 cluster_name: thomas-testcluster
533 config_version: 3
534 ip_version: ipv4
535 secauth: on
536 version: 2
537 interface {
538 bindnetaddr: 192.168.30.50
539 ringnumber: 0
540 }
541
542 }
543 ----
544
The first thing you want to do is add the 'name' properties to the node entries,
if you do not see them already. Those *must* match the node name.
547
Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.
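
Whether the names resolve correctly can be checked on every node, for example
with `getent`; the node names below are those from the example configuration:

[source,bash]
----
getent hosts uno due tre
----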
551
In this example, I want to switch the cluster communication to the 10.10.10.1/25
network, so I replace all 'ring0_addr' properties accordingly. I also set the
'bindnetaddr' in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.
556
After you have increased the 'config_version' property, the new configuration
file should look like:
559
560 ----
561
562 logging {
563 debug: off
564 to_syslog: yes
565 }
566
567 nodelist {
568
569 node {
570 name: due
571 nodeid: 2
572 quorum_votes: 1
573 ring0_addr: 10.10.10.2
574 }
575
576 node {
577 name: tre
578 nodeid: 3
579 quorum_votes: 1
580 ring0_addr: 10.10.10.3
581 }
582
583 node {
584 name: uno
585 nodeid: 1
586 quorum_votes: 1
587 ring0_addr: 10.10.10.1
588 }
589
590 }
591
592 quorum {
593 provider: corosync_votequorum
594 }
595
596 totem {
597 cluster_name: thomas-testcluster
598 config_version: 4
599 ip_version: ipv4
600 secauth: on
601 version: 2
602 interface {
603 bindnetaddr: 10.10.10.1
604 ringnumber: 0
605 }
606
607 }
608 ----
609
Now, after a final check that all the changed information is correct, we save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.
613
As our change cannot be applied live by corosync, we have to do a restart.
615
616 On a single node execute:
617 [source,bash]
618 ----
619 systemctl restart corosync
620 ----
621
622 Now check if everything is fine:
623
624 [source,bash]
625 ----
626 systemctl status corosync
627 ----
628
If corosync runs correctly again, restart corosync on all other nodes too.
They will then join the cluster membership one by one on the new network.
631
632 [[pvecm_rrp]]
633 Redundant Ring Protocol
634 ~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure, you should implement countermeasures.
This can be done on the hardware and operating system level through network bonding.
637
Corosync itself also offers the possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network; this network should be physically separated from the
other ring's network to actually increase availability.
642
643 RRP On Cluster Creation
644 ~~~~~~~~~~~~~~~~~~~~~~~
645
The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.
648
649 NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.
650
So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:
653
654 [source,bash]
655 ----
656 pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
657 -bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
658 ----
659
660 RRP On Existing Clusters
661 ~~~~~~~~~~~~~~~~~~~~~~~~
662
You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The only difference is that you
will add `ring1` and use it instead of `ring0`.
667
First, add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set `rrp_mode` to `passive`; this is the only stable mode.
672
Then, add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.
675
So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:
678
679 ----
680 totem {
681 cluster_name: tweak
682 config_version: 9
683 ip_version: ipv4
684 rrp_mode: passive
685 secauth: on
686 version: 2
687 interface {
688 bindnetaddr: 10.10.10.1
689 ringnumber: 0
690 }
691 interface {
692 bindnetaddr: 10.10.20.1
693 ringnumber: 1
694 }
695 }
696
697 nodelist {
698 node {
699 name: pvecm1
700 nodeid: 1
701 quorum_votes: 1
702 ring0_addr: 10.10.10.1
703 ring1_addr: 10.10.20.1
704 }
705
706 node {
707 name: pvecm2
708 nodeid: 2
709 quorum_votes: 1
710 ring0_addr: 10.10.10.2
711 ring1_addr: 10.10.20.2
712 }
713
714 [...] # other cluster nodes here
715 }
716
717 [...] # other remaining config sections here
718
719 ----
720
Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.
723
This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.
726
If you cannot reboot the whole cluster, ensure that no High Availability services
are configured, and then stop the corosync service on all nodes. After corosync is
stopped on all nodes, start it again one node after the other.
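
In other words, a sketch of that rolling restart:

[source,bash]
----
# first, on every node
systemctl stop corosync

# then, on one node after the other
systemctl start corosync
----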
730
731 Corosync Configuration
732 ----------------------
733
The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
737 [source,bash]
738 ----
739 man corosync.conf
740 ----
741
742 For node membership you should always use the `pvecm` tool provided by {pve}.
743 You may have to edit the configuration file manually for other changes.
744 Here are a few best practice tips for doing this.
745
746 [[edit-corosync-conf]]
747 Edit corosync.conf
748 ~~~~~~~~~~~~~~~~~~
749
Editing the corosync.conf file is not always straightforward. There are
two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.
754
The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated into a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.
759
760 [source,bash]
761 ----
762 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
763 ----
764
765 Then open the Config file with your favorite editor, `nano` and `vim.tiny` are
766 preinstalled on {pve} for example.
767
768 NOTE: Always increment the 'config_version' number on configuration changes,
769 omitting this can lead to problems.
770
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.
774
775 [source,bash]
776 ----
777 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
778 ----
779
780 Then move the new configuration file over the old one:
781 [source,bash]
782 ----
783 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
784 ----
785
786 You may check with the commands
787 [source,bash]
788 ----
789 systemctl status corosync
790 journalctl -b -u corosync
791 ----
792
whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
795 [source,bash]
796 ----
797 systemctl restart corosync
798 ----
799
800 On errors check the troubleshooting section below.
801
802 Troubleshooting
803 ~~~~~~~~~~~~~~~
804
805 Issue: 'quorum.expected_votes must be configured'
806 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
807
808 When corosync starts to fail and you get the following message in the system log:
809
810 ----
811 [...]
812 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
813 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
814 'configuration error: nodelist or quorum.expected_votes must be configured!'
815 [...]
816 ----
817
This means that the hostname you set for the corosync 'ringX_addr' in the
configuration could not be resolved.
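
You can verify name resolution on the affected node and, if necessary, add an
entry to '/etc/hosts'. The node name and address below are examples only.

[source,bash]
----
# does the name used as ringX_addr resolve on this node?
getent hosts uno

# if not, an /etc/hosts entry is one possible fix (example address)
echo '10.10.10.1 uno' >> /etc/hosts
----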
820
821
822 Write Configuration When Not Quorate
823 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
824
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
827 [source,bash]
828 ----
829 pvecm expected 1
830 ----
831
832 This sets the expected vote count to 1 and makes the cluster quorate. You can
833 now fix your configuration, or revert it back to the last working backup.
834
This is not enough if corosync cannot start anymore. In that case, it is best
to edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf', so that corosync can start again. Ensure that
this configuration has the same content on all nodes to avoid split-brain
situations. If you are not sure what went wrong, it's best to ask the Proxmox
Community to help you.
840
841
842 [[corosync-conf-glossary]]
843 Corosync Configuration Glossary
844 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
845
846 ringX_addr::
847 This names the different ring addresses for the corosync totem rings used for
848 the cluster communication.
849
850 bindnetaddr::
Defines the interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address a node uses on this interface.
854
855 rrp_mode::
856 Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that the use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.
860
861
862 Cluster Cold Start
863 ------------------
864
865 It is obvious that a cluster is not quorate when all nodes are
866 offline. This is a common case after a power failure.
867
868 NOTE: It is always a good idea to use an uninterruptible power supply
869 (``UPS'', also called ``battery backup'') to avoid this state, especially if
870 you want HA.
871
872 On node startup, the `pve-guests` service is started and waits for
873 quorum. Once quorate, it starts all guests which have the `onboot`
874 flag set.
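
The `onboot` flag is set per guest, for example (the VMID and CTID are examples):

[source,bash]
----
# start VM 100 and container 101 automatically once the node has quorum
qm set 100 --onboot 1
pct set 101 --onboot 1
----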
875
When you turn on nodes, or when power comes back after a power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
879
880
881 Guest Migration
882 ---------------
883
884 Migrating virtual guests to other nodes is a useful feature in a
885 cluster. There are settings to control the behavior of such
886 migrations. This can be done via the configuration file
887 `datacenter.cfg` or for a specific migration via API or command line
888 parameters.
889
It makes a difference whether a guest is online or offline, or if it has
local resources (like a local disk).
892
For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].
898
899 Migration Type
900 ~~~~~~~~~~~~~~
901
The migration type defines whether the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
908
Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.
912
913 NOTE: Storage migration does not follow this setting. Currently, it
914 always sends the storage content over a secure channel.
915
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
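
If you decide that the trade-off is acceptable on a fully trusted network, the
cluster-wide default could, as a sketch, be set in `/etc/pve/datacenter.cfg`
(the same property that is shown with a migration network in the example
further below):

----
# /etc/pve/datacenter.cfg -- only on a fully trusted network
migration: insecure
----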
921
922
923 Migration Network
924 ~~~~~~~~~~~~~~~~~
925
926 By default, {pve} uses the network in which cluster communication
927 takes place to send the migration traffic. This is not optimal because
928 sensitive cluster traffic can be disrupted and this network may not
929 have the best bandwidth available on the node.
930
931 Setting the migration network parameter allows the use of a dedicated
932 network for the entire migration traffic. In addition to the memory,
933 this also affects the storage traffic for offline migrations.
934
935 The migration network is set as a network in the CIDR notation. This
936 has the advantage that you do not have to set individual IP addresses
937 for each node. {pve} can determine the real address on the
938 destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has exactly one
IP in the respective network.
941
942
943 Example
944 ^^^^^^^
945
946 We assume that we have a three-node setup with three separate
947 networks. One for public communication with the Internet, one for
948 cluster communication and a very fast one, which we want to use as a
949 dedicated network for migration.
950
951 A network configuration for such a setup might look as follows:
952
953 ----
954 iface eno1 inet manual
955
956 # public network
957 auto vmbr0
958 iface vmbr0 inet static
959 address 192.X.Y.57
netmask 255.255.255.0
961 gateway 192.X.Y.1
962 bridge_ports eno1
963 bridge_stp off
964 bridge_fd 0
965
966 # cluster network
967 auto eno2
968 iface eno2 inet static
969 address 10.1.1.1
970 netmask 255.255.255.0
971
972 # fast network
973 auto eno3
974 iface eno3 inet static
975 address 10.1.2.1
976 netmask 255.255.255.0
977 ----
978
979 Here, we will use the network 10.1.2.0/24 as a migration network. For
980 a single migration, you can do this using the `migration_network`
981 parameter of the command line tool:
982
983 ----
984 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
985 ----
986
987 To configure this as the default network for all migrations in the
988 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
989 file:
990
991 ----
992 # use dedicated migration network
993 migration: secure,network=10.1.2.0/24
994 ----
995
996 NOTE: The migration type must always be set when the migration network
997 gets set in `/etc/pve/datacenter.cfg`.
998
999
1000 ifdef::manvolnum[]
1001 include::pve-copyright.adoc[]
1002 endif::manvolnum[]