ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
include::attributes.txt[]
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and perform various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
hosts

* Fast deployment

* Cluster-wide services like firewall and HA

Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
to communicate between nodes (also see
http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and it must
be enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
least three nodes for reliable quorum. All nodes should run the same
version (see the check below).

* We recommend a dedicated NIC for the cluster traffic, especially if
you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.

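To verify that all nodes run the same version, you can, for example, compare
the package versions reported by `pveversion` on each node. This is only a
quick sketch of such a check, not a required step:

[source,bash]
pveversion -v
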

Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

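Before creating the cluster it can be useful to double check the hostname and
address setup on each node, as neither can be changed later. A minimal sanity
check, assuming standard Debian tools, could look like this:

[source,bash]
----
hostname                   # must already be the final node name
ip addr show               # must show the final IP configuration
getent hosts $(hostname)   # the name should resolve to the node's own address
----
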
Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status


Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP address of an existing cluster node.

CAUTION: A new node cannot hold any VMs, because identical VM IDs would
cause conflicts. Also, all existing configuration in `/etc/pve` is
overwritten when you join a new node to the cluster. As a workaround, use
`vzdump` to back up and restore the VMs under different VMIDs after
adding the node to the cluster.

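A minimal sketch of that workaround for a QEMU VM (VMID `100`, the backup
directory and the target VMID `200` are just placeholder examples; containers
would use `pct restore` instead of `qmrestore`):

[source,bash]
----
# on the node that is about to join, before running 'pvecm add'
vzdump 100 --dumpdir /root/backup

# after the node has joined, restore under a new, unused VMID
# (adjust the archive name to what vzdump actually created)
qmrestore /root/backup/vzdump-qemu-100-<timestamp>.vma 200
----
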
To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter, as shown below.

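For example, joining over both rings of a redundant setup could look like
this (the addresses are placeholders):

[source,bash]
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
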

Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.

Log in to one remaining node via ssh. Issue a `pvecm status` command to
check the cluster state and identify the node IDs:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it
is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Log in to one remaining node via ssh. Issue the delete command (here
deleting node `hp4`):

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As said above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be disrupted and
it could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as it leads to VMID conflicts.

It is suggested that you create a new storage to which only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting this storage up, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
systemctl stop pve-cluster
systemctl stop corosync

Start the cluster filesystem again in local mode:
[source,bash]
pmxcfs -l

Delete the corosync configuration files:
[source,bash]
rm /etc/pve/corosync.conf
rm /etc/corosync/*

You can now start the filesystem again as a normal service:
[source,bash]
killall pmxcfs
systemctl start pve-cluster

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
pvecm delnode oldnode

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
pvecm expected 1

Then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
rm /var/lib/corosync/*

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory recursively from '/etc/pve/nodes/NODENAME', but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_keys' file, which means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

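One possible way to do this, assuming the key lines end with the usual
`root@NODENAME` comment (verify the file contents before and after), is to
filter the file on one of the remaining cluster nodes:

[source,bash]
----
# keep a backup, then drop all key lines mentioning the old node
cp /etc/pve/priv/authorized_keys /root/authorized_keys.bak
grep -v 'root@oldnode' /etc/pve/priv/authorized_keys > /root/authorized_keys.new
mv /root/authorized_keys.new /etc/pve/priv/authorized_keys
----
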
Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with the network
used for storage communication.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
done with the `omping` tool. The final "%loss" number should be < 1%.
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
This uncovers problems where IGMP snooping is activated on the network but
no multicast querier is active. This test has a duration of around 10
minutes.
[source,bash]
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot get
multicast to work.

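If you do go the unicast route, corosync supports this through its `udpu`
transport. As a rough sketch only (check the corosync.conf man page for your
version before relying on it), the `totem` section of 'corosync.conf' would
gain a transport setting, edited as described in the
<<edit-corosync-conf,edit corosync.conf>> section:

----
totem {
  ...
  transport: udpu
}
----
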
Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

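As an illustration only (the interface name and addresses are assumptions,
matching the 10.10.10.1/25 example used below), such an interface could be
configured in '/etc/network/interfaces' roughly like this:

----
auto eth1
iface eth1 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----
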
Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0

To check if everything is working properly execute:
[source,bash]
systemctl status corosync

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' entries respectively. I also set the
bindnetaddr in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, we save it
and again follow the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
systemctl restart corosync

Now check if everything is fine:

[source,bash]
systemctl status corosync

If corosync runs correctly again, restart it on all the other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network bonding.

Corosync itself also offers a possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network; this network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1

RRP On A Created Cluster
~~~~~~~~~~~~~~~~~~~~~~~~

When enabling an already running cluster to use RRP you will take similar steps
as described in <<separate-cluster-net-after-creation,separating the cluster
network>>. You just do it on another ring.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the <<edit-corosync-conf,edit the
corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services are
configured and then stop the corosync service on all nodes. After corosync is
stopped on all nodes, start it again one node after the other, as sketched below.

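A minimal sketch of that sequence, using plain `systemctl` calls run manually
on each node:

[source,bash]
----
# on every node
systemctl stop corosync

# then start it again, one node after the other
systemctl start corosync
----
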
Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
man corosync.conf

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two versions on each cluster node, one in `/etc/pve/corosync.conf` and the other
in `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.

[source,bash]
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new

Then open the config file with your favorite editor, `nano` and `vim.tiny` are
preinstalled on {pve} for example.

NOTE: Always increment the 'config_version' number on configuration changes,
omitting this can lead to problems.

After making the necessary changes create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak

Then move the new configuration file over the old one:
[source,bash]
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf

You can check whether the changes were applied automatically with the commands
[source,bash]
systemctl status corosync
journalctl -b -u corosync

If not, you may have to restart the corosync service via:
[source,bash]
systemctl restart corosync

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

it means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.

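You can check name resolution on each node, for example with `getent` (replace
`NODENAME` with the ring address in question); the name should resolve to the
IP address you expect corosync to use:

[source,bash]
getent hosts NODENAME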

Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
pvecm expected 1

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. Here it is best to edit the
local copy of the corosync configuration in '/etc/corosync/corosync.conf', so
that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split-brain situations. If you are not sure what went
wrong it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines to which interface the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, service `pve-manager` is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

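The `onboot` flag can be set in the web interface or on the command line, for
example as follows (the VMIDs are placeholders):

[source,bash]
----
qm set 100 --onboot 1    # virtual machine
pct set 101 --onboot 1   # container
----
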
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes will boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]