[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and must be
manually enabled first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network it may be beneficial to set up
an IGMP querier and enable IGMP Snooping in said network. This may reduce the
load on the network significantly because multicast packets are only delivered
to the endpoints of the respective member nodes.


Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore to a different VMID after
adding the node to the cluster, as sketched below.

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes, use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol, you will also want to pass the
'ring1_addr' parameter, for example as sketched below.
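
Assuming the joining node's address on the second ring is IP-ADDRESS-RING1, the
call could look like this:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----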


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not be
what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.

It's suggested that you create a new storage where only the node which you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First, stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining nodes in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory recursively from '/etc/pve/nodes/NODENAME', but check three times
that you used the correct one before deleting it.
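
A minimal sketch of that cleanup, where NODENAME is a placeholder for the name
of the node you just separated:

[source,bash]
----
rm -r /etc/pve/nodes/NODENAME
----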

CAUTION: The node's SSH keys are still in the 'authorized_key' file; this means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead,
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it's **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network where
storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it's also an option to use unicast if you really cannot get
multicast to work, as sketched below.
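
One way to do this, following the <<edit-corosync-conf,corosync.conf editing
workflow>> described later in this chapter, is to set corosync's `transport`
option in the `totem` section to UDP unicast (`udpu`). The cluster name and
version number shown here are placeholders; only the `transport` line matters:

----
totem {
  cluster_name: mycluster
  config_version: 4
  transport: udpu
  [...] # other totem options stay unchanged
}
----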

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It's recommended to
change that, as corosync is a time-critical, real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or also hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' respectively. I also set the bindnetaddr
in the totem section of the config to an address of the new network. It can be
any address from the subnet configured on the new network interface.

After you increased the 'config_version' property, the new configuration file
should look like:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all changed information is correct, we save it
and see again the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart corosync on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be on the hardware and operating system level through network bonding.

Corosync itself also offers a possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network; this network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The only difference is that you
will add `ring1` and use it instead of `ring0`.

First, add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services are
configured and then stop the corosync service on all nodes. After corosync is
stopped on all nodes, start it again one node after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
For reading more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes through an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor, `nano` and `vim.tiny` are
preinstalled on {pve} for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

if the change could be applied automatically. If not, you may have to restart the
corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

it means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.
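
To verify the name resolution, you could, for example, check each 'ringX_addr'
hostname on every node (the node name `due` is a placeholder taken from the
examples above) and, if needed, add a matching entry to '/etc/hosts':

[source,bash]
----
getent hosts due
----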


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. Here it's best to edit the
local copy of the corosync configuration in '/etc/corosync/corosync.conf' so
that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split-brain situations. If you are not sure what went
wrong it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines which interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general it's
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increase availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
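
The `onboot` flag can be set through the web interface or on the command line;
for example, for a hypothetical VM 100 and container 101:

[source,bash]
----
qm set 100 --onboot 1
pct set 101 --onboot 1
----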

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest gets also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.
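
For example, to default all migrations in the cluster to the unencrypted
channel, you could set the following in `/etc/pve/datacenter.cfg` (a sketch;
see also the dedicated migration network example below):

----
# use the insecure channel for migration traffic
migration: insecure
----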

Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has one,
but only one IP in the respective network.


Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address  192.X.Y.57
    netmask  255.255.255.0
    gateway  192.X.Y.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
    address  10.1.1.1
    netmask  255.255.255.0

# fast network
auto eno3
iface eno3 inet static
    address  10.1.2.1
    netmask  255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]