[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and must be
manually enabled first.

* Date and time have to be synchronized (see the check sketched after
  this list).

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

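A quick way to check the time synchronization requirement on a node could look
like this; it assumes a systemd-based installation where `timedatectl` is
available:

[source,bash]
----
# show whether an NTP service is enabled and the clock is synchronized
timedatectl status | grep -i ntp
----
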
NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

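It is also worth verifying that each node's hostname resolves to the IP address
you intend to keep; a minimal sketch using standard tools:

[source,bash]
----
# the hostname should resolve to the node's own IP address,
# typically via an entry in /etc/hosts
hostname
getent hosts $(hostname)
----
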
Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

[[pvecm_create_cluster]]
Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network, it may be beneficial to set up
an IGMP querier and enable IGMP snooping in said network. This may reduce the
network load significantly, because multicast packets are only delivered to the
endpoints of the respective member nodes.

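As an illustration only, IGMP snooping and a querier could be enabled on a
Linux bridge of a node like this; the bridge name `vmbr0` is an assumption, and
physical switches have their own vendor-specific settings for this:

[source,bash]
----
# enable IGMP snooping and act as IGMP querier on the bridge vmbr0
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----
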

[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore each guest to a different VMID
after adding the node to the cluster, as sketched below.

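A minimal sketch of that workaround for a single VM; the source VMID `100`, the
new VMID `123` and the backup directory are placeholders, and the resulting
archive name will differ on your system (containers are restored with
`pct restore` instead):

[source,bash]
----
# on the node that still holds the guest: back it up before joining
vzdump 100 --dumpdir /mnt/backup --mode stop

# after the node has joined the cluster: restore under a new, free VMID
qmrestore /mnt/backup/vzdump-qemu-100-*.vma 123
----
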
To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network, you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol, you will also want to pass the
'ring1_addr' parameter, as in the sketch below.

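For example, joining a node to a cluster that uses RRP over two separate
networks could look like this (the addresses are placeholders):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----
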

Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may
not be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be messed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across cluster
boundaries. Further, it may also lead to VMID conflicts.

It is suggested that you create a new storage which only the node you want
to separate has access to. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory recursively under '/etc/pve/nodes/NODENAME', but check three times
that you used the correct one before deleting it, as sketched below.

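A minimal sketch of that cleanup; `oldnode` is a placeholder for the name of
the separated node:

[source,bash]
----
# double-check the directory name, then remove the stale node directory
rm -rf /etc/pve/nodes/oldnode
----
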
CAUTION: The node's SSH keys are still in the 'authorized_keys' file. This means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
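
One way to do this is to inspect the file and remove the offending lines with
an editor; `oldnode` is again a placeholder, and the grep pattern assumes the
key comments contain the node name:

[source,bash]
----
# find the entries belonging to the separated node, then delete those lines
grep oldnode /etc/pve/priv/authorized_keys
nano /etc/pve/priv/authorized_keys
----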

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes is online. The cluster switches to read-only mode
if it loses quorum. For example, in a five-node cluster with one vote
per node, at least three nodes must be online to keep the cluster quorate.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead,
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network where
storage communicates too.

Before setting up a cluster, it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters, it is also an option to use unicast if you really cannot
get multicast to work, for example via corosync's UDP unicast transport as
sketched below.

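A minimal sketch of what this could look like in the `totem` section of the
corosync configuration; the surrounding values are placeholders, and whether
unicast is acceptable depends on your cluster size and network:

----
totem {
  cluster_name: mycluster
  config_version: 4
  version: 2
  # use UDP unicast instead of multicast
  transport: udpu
  [...]
}
----
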
Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters, the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time critical real time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses of the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' values respectively. I also set the
bindnetaddr in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now after a final check that all changed information is correct, we save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart corosync on all other nodes too.
They will then join the cluster membership one by one on the new network.

[[pvecm_rrp]]
Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be on the hardware and operating system level through network bonding.

Corosync itself also offers a possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The single difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further set the `rrp_mode` to `passive`, as this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure that no High Availability
services are configured, and then stop the corosync service on all nodes. After
corosync is stopped on all nodes, start it again one node after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes through an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; for example, `nano` and
`vim.tiny` are preinstalled on {pve}.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

if the change could be applied automatically. If not, you may have to restart
the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.

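A minimal check could look like this; `due` stands in for whichever hostname
you used as a 'ringX_addr' value:

[source,bash]
----
# the hostname used as ringX_addr must resolve on every node,
# typically via an entry in /etc/hosts
getent hosts due
----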

Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on
all nodes this configuration has the same content to avoid split-brain
situations. If you are not sure what went wrong, it's best to ask the Proxmox
Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines which interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.



Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

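For illustration, the `onboot` flag can be set per guest on the command line;
the VMID `100` is a placeholder:

[source,bash]
----
# start this VM automatically once the node is up and quorate
qm set 100 --onboot 1

# the equivalent for a container
pct set 100 --onboot 1
----
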
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).

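As a sketch, the type can also be selected for a single migration on the
command line; the VMID `106` and target node `tre` are reused from the example
further below, and the `--migration_type` parameter is assumed to be available
in your {pve} version:

[source,bash]
----
# explicitly use the unencrypted channel for this one online migration
qm migrate 106 tre --online --migration_type insecure
----
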
Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to "insecure" to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.


Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has
exactly one IP in the respective network.

Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address 192.X.Y.57
    netmask 255.255.255.0
    gateway 192.X.Y.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
    address 10.1.1.1
    netmask 255.255.255.0

# fast network
auto eno3
iface eno3 inet static
    address 10.1.2.1
    netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as the migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
is set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]