ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
include::attributes.txt[]
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.
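
For a quick overview of the available subcommands and their options you can,
for example, consult the built-in help:

[source,bash]
pvecm help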

Grouping nodes into a cluster has the following advantages:

* Centralized, web based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
 replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
 hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network as `corosync` uses IP Multicast
 to communicate between nodes (also see
 http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
 ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and must be
manually enabled first.

* Date and time have to be synchronized (see the quick check below).

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
 least three nodes for reliable quorum. All nodes should have the
 same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
 you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.
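
A minimal sketch of how the time synchronization and SSH requirements could be
verified, assuming systemd's `timedatectl` is available and `hp2` stands in
for another cluster node:

[source,bash]
----
# check that the clock is NTP-synchronized on this node
timedatectl status | grep -i synchronized
# check that another node is reachable over SSH on TCP port 22
ssh root@hp2 /bin/true && echo "SSH to hp2 works"
----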


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.
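
Since hostname and IP cannot be changed later, it is worth double-checking on
each node that the hostname resolves to the intended address before you create
the cluster. A minimal check (a suggestion, not a required step) could look
like this:

[source,bash]
----
hostname --ip-address      # should print the node's final IP address
getent hosts $(hostname)   # should resolve to the same address
----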

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status


Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
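
For example, if one of the existing cluster nodes is reachable at
192.168.15.91 (the address used in the example output below), the call on the
joining node would be:

 hp2# pvecm add 192.168.15.91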

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to backup and restore to a different VMID after
adding the node to the cluster.

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter.
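
A sketch of such a call, using placeholder addresses for both rings (replace
them with the node's actual addresses on the two cluster networks):

[source,bash]
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1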


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.

Log in to one remaining node via ssh. Issue a `pvecm nodes` command to
identify the node ID:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Log in to one remaining node via ssh. Issue the delete command (here
deleting node `hp4`):

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As said above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be damaged and
it could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above-mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as it leads to VMID conflicts.

Move the guests which you want to keep on this node now; after the removal you
can do this only via backup and restore. It is suggested that you create a new
storage where only the node which you want to separate has access. This can be
a new export on your NFS or a new Ceph pool, to name a few examples. It is just
important that the exact same storage does not get accessed by multiple
clusters. After setting this storage up, move all data from the node and its
VMs to it. Then you are ready to separate the node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
systemctl stop pve-cluster
systemctl stop corosync

Start the cluster filesystem again in local mode:
[source,bash]
pmxcfs -l

Delete the corosync configuration files:
[source,bash]
rm /etc/pve/corosync.conf
rm /etc/corosync/*

You can now start the filesystem again as a normal service:
[source,bash]
killall pmxcfs
systemctl start pve-cluster

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
pvecm delnode oldnode

If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
pvecm expected 1

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
rm /var/lib/corosync/*

As the configuration files from the other nodes are still in the cluster
filesystem you may want to clean those up too. Simply remove the whole
directory recursively from '/etc/pve/nodes/NODENAME', but check three times
that you used the correct one before deleting it.
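
A sketch of that removal, with `NODENAME` standing in for the separated node's
name; double-check the path before running it:

[source,bash]
rm -rf /etc/pve/nodes/NODENAME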

CAUTION: The node's SSH keys are still in the 'authorized_keys' file, which
means the nodes can still connect to each other with public key
authentication. Fix this by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.
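
To quickly see whether the cluster currently has quorum, you can, for example,
filter the `pvecm status` output shown earlier:

[source,bash]
pvecm status | grep -i quorate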

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with the network
used for storage communication.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
 network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
 enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
 done with the `omping` tool. The final "%loss" number should be < 1%.
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
 This uncovers problems where IGMP snooping is activated on the network but
 no multicast querier is active. This test has a duration of around 10
 minutes.
[source,bash]
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work.

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface
you would execute:

[source,bash]
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0

To check if everything is working properly execute:
[source,bash]
systemctl status corosync

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to brief periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses of the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' entries accordingly. I also set the
'bindnetaddr' in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, save it
and see the <<edit-corosync-conf,edit corosync.conf file>> section again to
learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
systemctl restart corosync

Now check if everything is fine:

[source,bash]
systemctl status corosync

If corosync runs correctly again, restart it on all other nodes as well.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network
bonding.

Corosync itself also offers a possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network; this network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1

RRP On A Created Cluster
~~~~~~~~~~~~~~~~~~~~~~~~

When enabling an already running cluster to use RRP you will take similar steps
as described in <<separate-cluster-net-after-creation,separating the cluster
network>>. You just do it on another ring.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the <<edit-corosync-conf,edit the
corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it again one after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
man corosync.conf

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are two
copies on each cluster node, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.
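
A quick way to confirm that the two copies are currently in sync on a node (a
suggestion, not a required step) is to diff them; no output means they match:

[source,bash]
diff /etc/pve/corosync.conf /etc/corosync/corosync.conf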

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated into a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes from an intermediate save.

[source,bash]
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.
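
To see which version number is currently active before bumping it, you can,
for example, run:

[source,bash]
grep config_version /etc/pve/corosync.conf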

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak

Then move the new configuration file over the old one:
[source,bash]
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf

You can check whether the change was applied automatically with the commands
[source,bash]
systemctl status corosync
journalctl -b -u corosync

If not, you may have to restart the corosync service via:
[source,bash]
systemctl restart corosync

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.
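
To verify that a hostname used as 'ringX_addr' actually resolves on every
node, you can, for example, query the resolver directly (replace `due` with a
hostname from your configuration):

[source,bash]
getent hosts due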


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you know what you are doing, use:
[source,bash]
pvecm expected 1

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. Here it is best to edit
the local copy of the corosync configuration in '/etc/corosync/corosync.conf'
so that corosync can start again. Ensure that on all nodes this configuration
has the same content to avoid split-brain situations. If you are not sure what
went wrong, it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines which interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address the node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increase availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-manager` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

When you turn on nodes, or when power comes back after a power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
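
For example, to mark a VM with the (hypothetical) ID 100 so that it is started
automatically once the node has quorum, you can set the flag on the command
line:

[source,bash]
qm set 100 -onboot 1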


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]