ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
include::attributes.txt[]
endif::manvolnum[]

ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can perform all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: On some switches IP multicast is disabled by default and must be
enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 is used between nodes.

* If you are interested in High Availability, you need at least three
  nodes for reliable quorum. All nodes should run the same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.
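
A quick way to verify the time synchronization and SSH requirements is
sketched below (`OTHER-NODE-IP` is a placeholder for another node's address);
multicast connectivity checks are covered in more detail in the
<<cluster-network-requirements,cluster network requirements>> section.

[source,bash]
----
# the clock should be synchronized, e.g. via NTP
timedatectl status
# every node should reach every other node over SSH on TCP port 22
ssh root@OTHER-NODE-IP true
----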


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status


Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP of an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up each guest and restore it under a
different VMID after adding the node to the cluster.
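
A minimal sketch of that workaround for a single VM (the VMIDs `100` and
`200`, the dump directory and the archive name are placeholders; the exact
archive name depends on the chosen compression, and containers are restored
with `pct restore` instead):

[source,bash]
----
# on the joining node, before joining the cluster
vzdump 100 --dumpdir /var/lib/vz/dump

# after joining, restore under a new, cluster-wide unique VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma 200
----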

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter.
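
For example (a sketch; `IP-ADDRESS-RING0` and `IP-ADDRESS-RING1` stand for the
joining node's addresses on the first and second cluster network):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----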


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.

Log in to one remaining node via ssh. Issue a `pvecm nodes` or `pvecm status`
command to identify the ID of the node you want to remove:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Log in to one remaining node via ssh. Issue the delete command (here
deleting node `hp4`):

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As said above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.


Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above-mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as it leads to VMID conflicts.

It is suggested that you create a new storage to which only the node you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting this storage up, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First, stop the corosync and pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory recursively from '/etc/pve/nodes/NODENAME', but check three times
that you used the correct one before deleting it.
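
For example (`NODENAME` stands for the node you just separated; double-check
the name before running it):

[source,bash]
----
rm -rf /etc/pve/nodes/NODENAME
----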

CAUTION: The node's SSH keys are still in the 'authorized_keys' file; this means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
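
A possible cleanup sketch, assuming the removed node's name appears in the key
comments (the default `root@NODENAME` comment); review the file manually if in
doubt:

[source,bash]
----
# keep a backup, then drop all key lines mentioning the removed node
cp /etc/pve/priv/authorized_keys /root/authorized_keys.backup
grep -v 'NODENAME' /root/authorized_keys.backup > /etc/pve/priv/authorized_keys
----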

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.


Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a
multicast-capable network. The network should not be used heavily by other
members; ideally corosync runs on its own network. *Never* share it with a
network where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work.
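
If you go that route, corosync's unicast (UDP) transport is selected with the
`transport` option in the `totem` section of `corosync.conf`. The following
excerpt is only a sketch; edit the file as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section and remember to
increment 'config_version':

----
totem {
  [...] # keep the existing totem options
  transport: udpu
}
----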

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First, you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.
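
For example, a dedicated interface with a static address could be configured in
'/etc/network/interfaces' like this (a sketch; the interface name `eth1` is a
placeholder, the 10.10.10.1/25 address matches the examples below):

----
auto eth1
iface eth1 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----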

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties to the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses of the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch the cluster communication to the 10.10.10.1/25
network, so I replace all 'ring0_addr' entries respectively. I also set the
'bindnetaddr' in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, we save
it and see the <<edit-corosync-conf,edit the corosync.conf file>> section again
to learn how to bring it into effect.

As this change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be on the hardware and operating system level through network bonding.

Corosync itself also offers the possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second
totem ring on another network; this network should be physically separated
from the other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On A Created Cluster
~~~~~~~~~~~~~~~~~~~~~~~~

When enabling an already running cluster to use RRP you will take similar steps
as described in <<separate-cluster-net-after-creation,separating the cluster
network>>. You just do it on another ring.

First, add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the <<edit-corosync-conf,edit the
corosync.conf file>> section.

This is a change which cannot be applied live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure that no High Availability
services are configured and then stop the corosync service on all nodes. After
corosync is stopped on all nodes, start it again one node after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated into a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You can check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
 'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.
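
You can quickly verify name resolution on each node (a sketch; `NODENAME`
stands for the hostname used as 'ringX_addr'). The command should print the
address you expect on the cluster network; otherwise fix the entry in
`/etc/hosts` or your DNS:

[source,bash]
----
getent hosts NODENAME
----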


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. Here it is best to edit the
local copy of the corosync configuration in '/etc/corosync/corosync.conf' so
that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split-brain situations. If you are not sure what went
wrong it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines to which interface the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-manager` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
DM
881flag set.
882
883When you turn on nodes, or when power comes back after power failure,
884it is likely that some nodes boots faster than others. Please keep in
885mind that guest startup is delayed until you reach quorum.
806ef12d
DM
886
887
d8742b0c
DM
888ifdef::manvolnum[]
889include::pve-copyright.adoc[]
890endif::manvolnum[]