ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network as `corosync` uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and must be
manually enabled first.

* Date and time have to be synchronized (see the quick check after this
  list).

* SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

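
A quick way to check time synchronization on each node is to query systemd's
time status; this is only a basic sanity check and does not replace a proper
NTP setup:

[source,bash]
----
timedatectl status
----
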
NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to login via `ssh`.

Create the Cluster
------------------

Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status


Adding Nodes to the Cluster
---------------------------

Login via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to backup and restore to a different VMID after
adding the node to the cluster, as sketched below.

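
A possible sequence for one conflicting guest could look like this; the VMID
`100`, the target VMID `2100` and the `local` storage with its default dump
path are only placeholders for your own values:

[source,bash]
----
# on the joining node, before the join: back up the guest and remove it
vzdump 100 -storage local -compress lzo
qm destroy 100

# after the node has joined the cluster: restore it under a free VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.lzo 2100
----
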
To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter, as in the example below.

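
For instance, joining a cluster that already uses two rings could look like
this, where both addresses are placeholders for the new node's own addresses
on the respective cluster networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 \
-ring1_addr IP-ADDRESS-RING1
----
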

Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.

Log in to one remaining node via ssh. Issue a `pvecm nodes` command to
identify the node ID:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it
is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Log in to one remaining node via ssh. Issue the delete command (here
deleting node `hp4`):

 hp1# pvecm delnode hp4

If the operation succeeds no output is returned, just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As mentioned above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as it leads to VMID conflicts.

It's suggested that you create a new storage where only the node which you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting this storage up, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
from the old cluster there. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem you may want to clean those up too. Simply remove the whole
directory recursively under '/etc/pve/nodes/NODENAME', but check three times
that you used the correct one before deleting it.

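
For example, where the node name is a placeholder, so double check it before
running the command:

[source,bash]
----
rm -r /etc/pve/nodes/NODENAME
----
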
CAUTION: The node's SSH keys are still in the 'authorized_key' file, this means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

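
To locate the entries of the old node, assuming the key comments contain its
name (the placeholder `oldnode` here), you can for example run the following
and then remove the listed lines with an editor:

[source,bash]
----
grep -n 'oldnode' /etc/pve/priv/authorized_keys
----
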
Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead,
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members,
ideally corosync runs on its own network. *Never* share it with the network
where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fail. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work, as sketched below.

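
With corosync 2.x, unicast is selected by setting the transport to `udpu` in
the `totem` section of `corosync.conf`; the fragment below only illustrates
that single setting and is not a complete configuration:

----
totem {
  transport: udpu
}
----
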
Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time critical real time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

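
On {pve} this is plain Debian network configuration. A minimal sketch for an
assumed spare NIC `eth1`, using the 10.10.10.1/25 address from the following
examples, could be added to '/etc/network/interfaces':

----
auto eth1
iface eth1 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----
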
Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible via the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' entries respectively. I also set the
'bindnetaddr' in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you increased the 'config_version' property the new configuration file
should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now after a final check that all the changed information is correct we save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.

As our change cannot be applied live by corosync we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network
bonding.

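
A bond is configured through the plain Debian network configuration in
'/etc/network/interfaces'; the snippet below is only an illustrative sketch,
assuming two otherwise unused NICs `eth1` and `eth2` and that the `ifenslave`
package is installed:

----
auto bond0
iface bond0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bond-slaves eth1 eth2
    bond-mode active-backup
    bond-miimon 100
----
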
Corosync itself also offers the possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On A Created Cluster
~~~~~~~~~~~~~~~~~~~~~~~~

When enabling an already running cluster to use RRP you will take similar steps
as described in
<<separate-cluster-net-after-creation,separating the cluster network>>. You
just do it on another ring.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further set the `rrp_mode` to `passive`, this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. Recommended is a restart of the whole cluster.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it one after the other again.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
For more information about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two copies on each cluster node, one in `/etc/pve/corosync.conf` and the other
in `/etc/corosync/corosync.conf`. Editing the one in our cluster file system
will propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes through an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor, `nano` and `vim.tiny` are
preinstalled on {pve} for example.

NOTE: Always increment the 'config_version' number on configuration changes,
omitting this can lead to problems.

After making the necessary changes create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

it means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.

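
A quick way to verify this is to check on every node whether the configured
name resolves to the intended cluster address, for example with `getent`
(`NODENAME` stands for the name used in 'ringX_addr'):

[source,bash]
----
getent hosts NODENAME
----
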
Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf' so that corosync can start again. Ensure that on
all nodes this configuration has the same content to avoid split brains. If you
are not sure what went wrong it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines to which interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode, it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, service `pve-manager` is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

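
The flag can be set per guest, for example as follows, where the VMID `100`
and CT ID `101` are placeholders; `qm` manages virtual machines and `pct`
manages containers:

[source,bash]
----
qm set 100 -onboot 1
pct set 101 -onboot 1
----
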
When you turn on nodes, or when power comes back after a power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]