[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
to communicate between nodes (also see
http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and must be
manually enabled first.

* Date and time have to be synchronized (see the example check below).

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
least three nodes for reliable quorum. All nodes should have the
same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
you use shared storage.

* The root password of a cluster node is required for adding nodes.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.
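
To verify the date and time requirement from the list above, you can, for
example, check the NTP synchronization state on every node with
`timedatectl` (a minimal check, assuming a systemd-based setup):

[source,bash]
----
# shows local time, universal time and whether NTP is synchronized
timedatectl status
----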


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation can either be done on the console (login via
`ssh`) or through the API, which we have a GUI implementation for (__Datacenter ->
Cluster__).

[[pvecm_create_cluster]]
Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later. The cluster name follows the same rules as node names.

----
hp1# pvecm create CLUSTERNAME
----

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

----
hp1# pvecm status
----

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network it may be beneficial to set up
an IGMP querier and enable IGMP Snooping in said network. This may reduce the
load of the network significantly because multicast packets are only delivered
to the endpoints of the respective member nodes.
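
For example, if the nodes are connected through a Linux bridge, the bridge
itself can act as an IGMP querier. A minimal sketch for
'/etc/network/interfaces', assuming a bridge named `vmbr0` (the bridge name
and addresses are placeholders):

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.15.90
        netmask 255.255.255.0
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        # enable IGMP snooping and act as querier on this bridge
        post-up echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
        post-up echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----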


[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

----
hp2# pvecm add IP-ADDRESS-CLUSTER
----

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up the guests and restore them under
different VMIDs after adding the node to the cluster.
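
A sketch of that workaround (the VMIDs, directory and archive name are
hypothetical):

[source,bash]
----
# on the node to be added, before joining: back up guest 100
vzdump 100 --dumpdir /mnt/backup --mode stop
# after joining the cluster: restore it under the unused VMID 200
qmrestore /mnt/backup/vzdump-qemu-100-2015_04_20-12_30_00.vma 200
----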

To check the state of the cluster:

----
# pvecm status
----

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

----
# pvecm nodes
----

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter.
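
For instance, with both rings configured the command could look like this
(placeholder addresses, analogous to the ones above):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----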


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----


At this point you must power off hp4 and make sure that it will not
power on again (in the network) as it is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will end up in a broken
state, and it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

----
hp1# pvecm delnode hp4
----

If the operation succeeds no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same cluster
again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.

It's suggested that you create a new storage to which only the node you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage,
move all data from the node and its VMs to it. Then you are ready to separate
the node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

Then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem you may want to clean those up too. Simply remove the whole
'/etc/pve/nodes/NODENAME' directory recursively, but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_keys' file, which
means the nodes can still connect to each other with public key
authentication. This should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.
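
Besides `pvecm status`, you can also query corosync directly for the quorum
state of the local node (just an alternative view on the same information):

[source,bash]
----
# show quorum status as seen by corosync's votequorum service
corosync-quorumtool -s
----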

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes it's **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with the network
where your storage communicates.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This only needs to be true for
the network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
This uncovers problems where IGMP snooping is activated on the network but
no multicast querier is active. This test has a duration of around 10
minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fail. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it's also an option to use unicast if you really cannot
get multicast to work.
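
A sketch of what switching corosync to unicast involves: set the `transport`
property in the `totem` section of 'corosync.conf' to `udpu` (UDP unicast) and
apply the change as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section. The surrounding
properties here are reused from the example configuration shown later in this
chapter:

----
totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  transport: udpu
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }
}
----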

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It's recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.
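
A sketch of such a dedicated interface in '/etc/network/interfaces', using the
10.10.10.1/25 subnet from the following example (the interface name is an
assumption):

----
auto eno2
iface eno2 inet static
        address 10.10.10.1
        netmask 255.255.255.128
----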

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node
entries if you do not see them already. Those *must* match the node name.

Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' entries respectively. I also set the
bindnetaddr in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you increased the 'config_version' property the new configuration file
should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all changed information is correct, we save it;
see the <<edit-corosync-conf,edit corosync.conf file>> section again to learn
how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

[[pvecm_rrp]]
Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network
bonding.

Corosync itself also offers a way to add redundancy, through the so-called
'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network; this network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The only difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it again one after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are two
versions of it on each cluster node, one in `/etc/pve/corosync.conf` and the
other in `/etc/corosync/corosync.conf`. Editing the one in our cluster file
system will propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes through an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.
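
You can quickly verify name resolution on each node, for example with
`getent` (using the hostname `due` from the example configuration above):

[source,bash]
----
# should print the address corosync will use for this ring
getent hosts due
----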


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best
to edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf', so that corosync can start again. Ensure that
on all nodes this configuration has the same content to avoid split-brain
situations. If you are not sure what went wrong it's best to ask the Proxmox
Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines the interface to which the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active
or none. Note that the use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
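
For example, to set the `onboot` flag on a virtual machine (VMID 106 is just
a placeholder):

[source,bash]
----
qm set 106 --onboot 1
----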


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about Virtual Machine Migration see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about Container Migration see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest also gets transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and can not guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
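
For a single migration over a trusted network you could, for instance, select
the insecure channel on the command line (VMID and node name are borrowed from
the example in the next section):

----
# qm migrate 106 tre --online --migration_type insecure
----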


Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has
exactly one IP in the respective network.


Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
        address 192.X.Y.57
        netmask 255.255.255.0
        gateway 192.X.Y.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
        address 10.1.1.1
        netmask 255.255.255.0

# fast network
auto eno3
iface eno3 inet static
        address 10.1.2.1
        netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]