ifdef::manvolnum[]
pvecm(1)
========
include::attributes.txt[]
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
include::attributes.txt[]
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
 replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
 hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
 to communicate between nodes (also see
 http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
 ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not have IP multicast enabled by default, so it must be
enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
 least three nodes for reliable quorum. All nodes should have the
 same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
 you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.
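
To get a rough idea whether a node meets these prerequisites, you can run a few
quick checks before creating the cluster. This is only a sketch; the address
below is a placeholder for another prospective cluster node:

[source,bash]
----
# check that the system clock is synchronized via NTP
timedatectl status

# verify SSH connectivity on TCP port 22 to another node
ssh root@192.168.15.92 hostname

# check that date and time roughly match between the nodes
date; ssh root@192.168.15.92 date
----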


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster, use:

 hp1# pvecm status


Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up the VMs and restore them under different
VMIDs after adding the node to the cluster.
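
As a rough sketch of such a workaround (the VMID `100`, the new VMID `200` and
the backup path are only placeholders; for containers use `pct restore` instead
of `qmrestore`):

[source,bash]
----
# on the joining node, before joining: back up the conflicting VM
vzdump 100 --dumpdir /var/lib/vz/dump

# after joining the cluster: restore it under an unused VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma 200
----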

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want a list of all nodes, use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network, you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol, you will also want to pass the
'ring1_addr' parameter.
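
For example, adding a node that has one address on each ring network could look
like this (the addresses are placeholders):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----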


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.

Log in to one remaining node via ssh. Issue a `pvecm nodes` command to
identify the node ID:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it
is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Log in to one remaining node via ssh. Issue the delete command (here
deleting node `hp4`):

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As said above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be disrupted and
it could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as it leads to VMID conflicts.

It is suggested that you create a new storage to which only the node you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.
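
A minimal sketch of that cleanup, with `NODENAME` as a placeholder for the name
of the separated node:

[source,bash]
----
# triple-check the node name before running this!
rm -rf /etc/pve/nodes/NODENAME
----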

CAUTION: The node's SSH keys are still in the 'authorized_keys' file, which means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead,
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network.
*Never* share it with the network where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
 network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
 enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
 done with the `omping` tool. The final "%loss" number should be < 1%.
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
 This uncovers problems where IGMP snooping is activated on the network but
 no multicast querier is active. This test has a duration of around 10
 minutes.
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot get
multicast to work.
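
As an illustration only (not a recommendation), unicast UDP can be enabled by
setting the `transport` option in the `totem` section of 'corosync.conf',
edited as described in the <<edit-corosync-conf,edit the corosync.conf file>>
section:

----
totem {
  [...]
  transport: udpu
}
----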

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters, the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical, real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.
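
A minimal sketch of such an interface definition in '/etc/network/interfaces'
(the interface name is a placeholder; the addressing matches the 10.10.10.1/25
example used below):

----
auto ens19
iface ens19 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----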

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or also hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch the cluster communication to the 10.10.10.1/25
network, so I replace all 'ring0_addr' respectively. I also set the 'bindnetaddr'
in the totem section of the config to an address of the new network. It can be
any address from the subnet configured on the new network interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, save the
file and see the <<edit-corosync-conf,edit corosync.conf file>> section again to
learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network bonding.
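
For instance, a simple active-backup bond in '/etc/network/interfaces' could
look roughly like this (interface names and addresses are placeholders):

----
auto bond0
iface bond0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bond-slaves ens19 ens20
    bond-mode active-backup
    bond-miimon 100
----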

Corosync itself also offers the possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network; this network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On A Created Cluster
~~~~~~~~~~~~~~~~~~~~~~~~

When enabling an already running cluster to use RRP, you will take similar steps
as described in
<<separate-cluster-net-after-creation,separating the cluster network>>. You
just do it on another ring.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services are
configured and then stop the corosync service on all nodes. After corosync is
stopped on all nodes, start it again one node after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
For more information about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unintended changes through an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor, `nano` and `vim.tiny` are
preinstalled on {pve} for example.

NOTE: Always increment the 'config_version' number on configuration changes,
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You can check whether the change was applied automatically with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

If the change was not applied automatically, you may have to restart the
corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors, check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.
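
You can check the resolution on each node and, if needed, add a static entry to
'/etc/hosts' (the name `due` and the address are placeholders taken from the
example configuration above):

[source,bash]
----
# check whether the ring address resolves
getent hosts due

# if it does not, add a static mapping
echo '10.10.10.2 due' >> /etc/hosts
----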


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in '/etc/corosync/corosync.conf'
so that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split-brain situations. If you are not sure what went
wrong it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines to which interface the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increase availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, service `pve-manager` is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
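
For example, to set the `onboot` flag for a VM or a container (the VMIDs are
placeholders):

[source,bash]
----
qm set 100 --onboot 1    # virtual machine
pct set 101 --onboot 1   # container
----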

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]