ifdef::manvolnum[]
pvecm(1)
========
include::attributes.txt[]
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
include::attributes.txt[]
endif::manvolnum[]
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information, and do various other cluster-related
tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can perform all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and must be
manually enabled first.

* Date and time have to be synchronized (a quick check is sketched below).

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

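The following is a minimal sketch of such a pre-flight check. It assumes the
default NTP-based time synchronization shipped with {pve} and uses `hp2` as a
placeholder for another node's name:

[source,bash]
----
# check that the clock on this node is synchronized
timedatectl status | grep -i synchronized

# check that the other nodes are reachable via SSH on TCP port 22
ssh root@hp2 true && echo "ssh to hp2 OK"
----
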
NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

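For example, it might be worth verifying on each node that the hostname
resolves to the address you intend to keep (a minimal sketch; the exact result
depends on your '/etc/hosts' entries):

[source,bash]
----
hostname --ip-address   # should print the node's IP, not a 127.x.x.x loopback address
----
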
Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status


Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP of an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up each guest and restore it under a different
VMID after adding the node to the cluster, as sketched below.

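A minimal sketch of that workaround, assuming a single VM with ID 100 on the
joining node, a backup directory '/root/backup', and a free VMID 120 in the
cluster (containers would use `vzdump`/`pct restore` analogously):

[source,bash]
----
# on the node that is going to join, before running 'pvecm add':
vzdump 100 --dumpdir /root/backup

# after joining, restore the guest under the new, unused VMID:
qmrestore /root/backup/vzdump-qemu-100-<timestamp>.vma 120
----
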
To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes, use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter, as shown below.

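For example, to join over both rings at once, with `IP-ADDRESS-RING1` being the
node's address on the second ring's network:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----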

Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.

Log in to one remaining node via ssh. Issue a `pvecm nodes` command to
identify the node ID:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Log in to one remaining node via ssh. Issue the delete command (here
deleting node `hp4`):

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As said above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
method described above if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as it leads to VMID conflicts.

It is suggested that you create a new storage to which only the node that you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory recursively from '/etc/pve/nodes/NODENAME', but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_keys' file, which means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

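One way to locate (and, once verified, remove) those entries is sketched below.
It assumes the removed node was called `oldnode` and that its key entries carry
the usual `root@oldnode` comment; if in doubt, just open the file in an editor
and delete the offending lines by hand:

[source,bash]
----
# show the key entries belonging to the old node
grep 'root@oldnode' /etc/pve/priv/authorized_keys

# remove them after verifying the output above
sed -i '/root@oldnode$/d' /etc/pve/priv/authorized_keys
----
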
Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network where
storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work, as sketched below.

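A minimal sketch of what that could look like, assuming corosync 2.x: add the
'transport' property to the `totem` section of the configuration (the cluster
name and 'bindnetaddr' here are taken from the example configuration shown
later; remember to increment 'config_version' and follow the
<<edit-corosync-conf,edit the corosync.conf file>> procedure):

----
totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  transport: udpu
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }
}
----
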
Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

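On {pve} this is a Debian-style '/etc/network/interfaces' entry (or the
corresponding settings in the web UI). A minimal sketch, assuming a hypothetical
second NIC `eth1` and the 10.10.10.1/25 network used in the examples below:

----
# /etc/network/interfaces (excerpt)
auto eth1
iface eth1 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----
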
Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' respectively. I also set the 'bindnetaddr'
in the totem section of the config to an address of the new network. It can be
any address from the subnet configured on the new network interface.

After you have increased the 'config_version' property, the new configuration file
should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, we save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement counter measures.
This can be done on the hardware and operating system level through network bonding.

Corosync itself also offers a possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On A Created Cluster
~~~~~~~~~~~~~~~~~~~~~~~~

When enabling an already running cluster to use RRP you will take similar steps
as described in <<separate-cluster-net-after-creation,separating the cluster
network>>. You just do it on another ring.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further set the `rrp_mode` to `passive`, this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the <<edit-corosync-conf,edit the
corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure that no High Availability services
are configured and then stop the corosync service on all nodes. After corosync is
stopped on all nodes, start it again one after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor, for example `nano` or
`vim.tiny`, which come preinstalled on {pve}.

NOTE: Always increment the 'config_version' number on configuration changes,
omitting this can lead to problems.

After making the necessary changes create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You can check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.

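To verify, you can check name resolution for each 'ringX_addr' value on every
node, for example with `due` being one of the hostnames from the sample
configuration above:

[source,bash]
----
getent hosts due
----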

Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. Here it is best to edit the
local copy of the corosync configuration in '/etc/corosync/corosync.conf' so
that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split brains. If you are not sure what went wrong
it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines the interface to which the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, service `pve-manager` is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

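For example, to mark a guest for automatic start once the node is quorate (VMID
100 is just a placeholder; containers would use `pct set` instead):

[source,bash]
----
qm set 100 -onboot 1
----
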
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]