ifdef::manvolnum[]
pvecm(1)
========
include::attributes.txt[]
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
include::attributes.txt[]
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information, and do various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and it must
be enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status


Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore each guest to a different
VMID after adding the node to the cluster.

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter.
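
For example (a sketch only, the addresses are placeholders for the node's own
addresses on the two cluster networks):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----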


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not be
what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.

Log in to one remaining node via `ssh`. Issue a `pvecm nodes` command to
identify the node ID:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Log in to one remaining node via `ssh`. Issue the delete command (here
deleting node `hp4`):

 hp1# pvecm delnode hp4

If the operation succeeds no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As mentioned above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be damaged and
it could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as it leads to VMID conflicts.

It is suggested that you create a new storage to which only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
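
A sketch of adding such a storage, here an NFS export restricted to the node
being separated (storage ID, server address, export path and node name are
placeholders):

[source,bash]
----
# make the new storage visible only on the node that will be separated
pvesm add nfs separate-storage --server 192.168.15.200 \
    --export /export/separate --content images,rootdir --nodes nodetoseparate
----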

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

Then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory recursively under '/etc/pve/nodes/NODENAME', but check three times
that you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_keys' file. This
means the nodes can still connect to each other with public key
authentication. This should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
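
A hedged sketch of that cleanup; it assumes the separated node's key carries
the usual `root@NODENAME` comment, so verify the matching line first and remove
it with an editor if you prefer:

[source,bash]
----
# locate the key of the separated node, then delete exactly that line
grep -n 'root@oldnode' /etc/pve/priv/authorized_keys
sed -i '/root@oldnode$/d' /etc/pve/priv/authorized_keys
----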

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead,
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes it is **highly recommended** to have a
multicast-capable network. The network should not be used heavily by other
members; ideally corosync runs on its own network. *Never* share it with a
network where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work.
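
A sketch of what that would look like in the `totem` section of
'corosync.conf', using corosync's UDP unicast transport (check the
corosync.conf man page and test carefully before applying this to a
production cluster):

----
totem {
  [...]
  transport: udpu
}
----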

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.
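
A minimal sketch of such an interface in '/etc/network/interfaces'; the NIC
name `eth1` is a placeholder and the address matches the 10.10.10.1/25 example
used below:

----
auto eth1
iface eth1 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----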

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short durations of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or also hostnames here. If you use
hostnames ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' respectively. I also set the
'bindnetaddr' in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you increased the 'config_version' property the new configuration file
should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, we save
it and refer again to the <<edit-corosync-conf,edit the corosync.conf file>>
section to learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be on the hardware and operating system level through network bonding.

Corosync itself also offers the possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network; this network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On A Created Cluster
~~~~~~~~~~~~~~~~~~~~~~~~

When enabling an already running cluster to use RRP you will take similar steps
as described in <<separate-cluster-net-after-creation,separating the cluster
network>>. You just do it on another ring.

First add a new `interface` subsection in the `totem` section, set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the <<edit-corosync-conf,edit the
corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it again one after the other.
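
The commands themselves are the same as used earlier in this chapter, just run
in sequence across all nodes (sketch):

[source,bash]
----
# run on every node, one after the other
systemctl stop corosync

# once corosync is stopped everywhere, start it again node by node
systemctl start corosync
----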

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor, `nano` and `vim.tiny` are
preinstalled on {pve} for example.

NOTE: Always increment the 'config_version' number on configuration changes,
omitting this can lead to problems.

After making the necessary changes create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You can check whether the change was applied automatically with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

If the change could not be applied automatically, you may have to restart the
corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.
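
A quick way to verify this is to check name resolution of every 'ringX_addr'
value on every node (the host name below is just an example):

[source,bash]
----
getent hosts pvecm1
----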


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.
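
For example, assuming you created the `corosync.conf.bak` copy described in the
<<edit-corosync-conf,edit corosync.conf>> section, reverting could look like:

[source,bash]
----
cp /etc/pve/corosync.conf.bak /etc/pve/corosync.conf
----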

This is not enough if corosync cannot start anymore. Here it is best to edit the
local copy of the corosync configuration in '/etc/corosync/corosync.conf' so
that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split brains. If you are not sure what went wrong
it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines to which interface the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, service `pve-manager` is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
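
The `onboot` flag can be set per guest, for example (the VMID `100` and the
container ID `101` are placeholders):

[source,bash]
----
qm set 100 --onboot 1
pct set 101 --onboot 1
----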

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]