[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and it must
be manually enabled first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

* The root password of a cluster node is required for adding nodes.
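
For a quick sanity check of node reachability, SSH access (TCP port 22) and
clock synchronization, something like the following can be run from one node.
This is only a sketch; the node addresses are placeholders for your own nodes:

[source,bash]
----
# hypothetical node addresses -- replace with your own
for node in 192.168.15.92 192.168.15.93 192.168.15.94; do
    ping -c 1 "$node"        # basic reachability
    ssh root@"$node" date    # TCP port 22 reachable; compare the clocks by eye
done
----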

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.
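
Before creating the cluster it is worth double-checking that each node's
hostname resolves to the address you intend to use for cluster communication.
A minimal check, run on each node, could look like this:

[source,bash]
----
hostname                    # should print the final node name, e.g. hp1
getent hosts $(hostname)    # should resolve to the node's cluster IP
----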

Currently the cluster creation can either be done on the console (login via
`ssh`) or through the API, for which we have a GUI implementation
(__Datacenter -> Cluster__).

[[pvecm_create_cluster]]
Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later. The cluster name follows the same rules as node names.

----
 hp1# pvecm create CLUSTERNAME
----

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

----
 hp1# pvecm status
----

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network it may be beneficial to set up
an IGMP querier and enable IGMP Snooping in said network. This may reduce the
load of the network significantly because multicast packets are only delivered
to the endpoints of the respective member nodes.
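
If the nodes are connected through a Linux bridge (for example `vmbr0`), one
possible place for the querier is the bridge itself. This is only a sketch and
assumes the bridge name `vmbr0`; a querier on the physical switch is usually
the better choice:

[source,bash]
----
# enable IGMP snooping and an IGMP querier on the bridge (until reboot)
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----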


[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

----
 hp2# pvecm add IP-ADDRESS-CLUSTER
----

For `IP-ADDRESS-CLUSTER` use the IP of an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore to a different VMID after
adding the node to the cluster.

To check the state of the cluster:

----
 # pvecm status
----

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

----
 # pvecm nodes
----

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter, as shown in the example below.
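
For example, if the separated cluster networks are 10.10.10.0/24 (ring 0) and
10.10.20.0/24 (ring 1), and the joining node owns the addresses 10.10.10.2 and
10.10.20.2 on them (all values are placeholders), the command could look like:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr 10.10.10.2 -ring1_addr 10.10.20.2
----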


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.
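
Migrating the guests can be done via the web interface or on the command line.
A sketch for one VM and one container, with hypothetical IDs 100 and 200 and
hp1 as the migration target:

[source,bash]
----
# on hp4: move guests to another cluster member before removing the node
qm migrate 100 hp1 --online
pct migrate 200 hp1
----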

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----


At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

----
 hp1# pvecm delnode hp4
----

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above-mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across cluster
boundaries. Further, it may also lead to VMID conflicts.

It is suggested that you create a new storage to which only the node you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
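
For example, moving a VM disk to such a dedicated storage could look like
this; the VM ID `100`, the disk name `scsi0` and the storage ID `separate-nfs`
are placeholders:

[source,bash]
----
# move the disk to the new storage and drop the old copy
qm move_disk 100 scsi0 separate-nfs --delete 1
----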

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem you may want to clean those up too. Simply remove the whole
directory recursively from '/etc/pve/nodes/NODENAME', but check three times
that you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_keys' file; this means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead,
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network
where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fail. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work.

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the address from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or also hostnames here. If you use
hostnames ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' properties accordingly. I also set the
bindnetaddr in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you increased the 'config_version' property the new configuration file
should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all changed information is correct, we save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section
to learn how to bring it into effect.

As our change cannot be applied live by corosync we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes as well.
They will then join the cluster membership one by one on the new network.

[[pvecm_rrp]]
Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be on the hardware and operating system level through network bonding.

Corosync itself also offers a possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The single difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. Recommended is a restart of the whole cluster.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it again one after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
For more information about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor, `nano` and `vim.tiny` are
preinstalled on {pve} for example.

NOTE: Always increment the 'config_version' number on configuration changes,
omitting this can lead to problems.

After making the necessary changes create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

if the change could be applied automatically. If not, you may have to restart
the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.
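
A quick way to verify this is to check name resolution on the affected node;
the node name `due` is just taken from the configuration example above:

[source,bash]
----
getent hosts due    # should print the IP address used for ring0_addr
----

If the name does not resolve, add a matching entry to `/etc/hosts` on every
node or use plain IP addresses in 'ringX_addr'.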


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best
to edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf' so that corosync can start again. Ensure that on
all nodes this configuration has the same content to avoid split-brain
situations. If you are not sure what went wrong it's best to ask the Proxmox
Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines to which interface the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

When you turn on nodes, or when power comes back after a power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
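
The `onboot` flag can be set per guest, for example (the VM ID 100 and the
container ID 200 are placeholders):

[source,bash]
----
qm set 100 --onboot 1     # start this VM automatically once the node is quorate
pct set 200 --onboot 1    # same for a container
----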


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest also gets transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and can not guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.
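
For a single migration, the type can also be set on the command line. A sketch
with a hypothetical VM 106 and target node tre:

----
# qm migrate 106 tre --online --migration_type insecure
----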


Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has one,
but only one IP in the respective network.


Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
        address 192.X.Y.57
        netmask 255.255.250.0
        gateway 192.X.Y.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
        address 10.1.1.1
        netmask 255.255.255.0

# fast network
auto eno3
iface eno3 inet static
        address 10.1.2.1
        netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]