include::attributes.txt[]

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

include::attributes.txt[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical hosts

* Cluster-wide services like firewall and HA

Requirements
------------

* All nodes must be in the same network as `corosync` uses IP Multicast
to communicate between nodes (also see
http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
ports 5404 and 5405 for cluster communication.

NOTE: Some switches do not support IP multicast by default and it must be
enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
least three nodes for reliable quorum. All nodes should have the
same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.

Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

----
hp1# pvecm create YOUR-CLUSTER-NAME
----

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

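----
hp1# pvecm status
----
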
Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

----
hp2# pvecm add IP-ADDRESS-CLUSTER
----

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore to a different VMID after
adding the node to the cluster.

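For example, a guest could be backed up and later restored under a free VMID
like this (the VMID, dump directory and archive name are purely illustrative):

----
vzdump 100 -dumpdir /mnt/backup
qmrestore /mnt/backup/vzdump-qemu-100.vma 200
----
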
To check the state of the cluster:

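----
# pvecm status
----
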
.Cluster status after adding 4 nodes
----
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

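----
# pvecm nodes
----
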
.List nodes in a cluster
----
Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter.

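For example (a sketch extending the command above with the second ring's
address):

----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----
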
Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.

Log in to one remaining node via ssh. Issue a `pvecm nodes` command to
identify the node ID:

----
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2
         3          1 hp3
----

Log in to one remaining node via ssh. Issue the delete command (here
deleting node `hp4`):

----
hp1# pvecm delnode hp4
----

If the operation succeeds no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As said above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above-mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as this leads to VMID conflicts.

It's suggested that you create a new storage which only the node that you want
to separate has access to. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage,
move all data from the node and its VMs to it. Then you are ready to separate
the node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you
will run into conflicts and problems.

First, stop the corosync and pve-cluster services on the node:

----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:

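----
pmxcfs -l
----
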
Delete the corosync configuration files:

----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:

----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:

----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a
workaround:

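----
pvecm expected 1
----
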
And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_keys' file; this
means the nodes can still connect to each other with public key
authentication. This should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have
to be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low
overhead, high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~

This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes it's **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network
where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
done with the `omping` tool. The final "%loss" number should be < 1%.

----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
This uncovers problems where IGMP snooping is activated on the network but
no multicast querier is active. This test has a duration of around 10
minutes.

----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it's also an option to use unicast if you really cannot
get multicast to work.

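As a rough sketch, unicast is selected via the 'transport' property in the
`totem` section of corosync.conf (verify against the corosync.conf man page
before applying this):

----
totem {
  transport: udpu
  [...] # rest of the totem section stays unchanged
}
----
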
Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is
generally shared with the Web UI and the VMs and their traffic. Depending on
your setup even storage traffic may get sent over the same network. It's
recommended to change that, as corosync is a time-critical, real-time
application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

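For illustration only, such an interface could be configured in
`/etc/network/interfaces` like this (the interface name and addresses are
assumptions, matching the example in the next section):

----
auto eth1
iface eth1 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----
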
Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:

----
systemctl status corosync
----

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
nodelist {

  node {
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.30.50
  }

  [...] # other cluster nodes here

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}

[...] # other remaining config sections here
----

The first thing you want to do is add the 'name' properties to the node
entries if you do not see them already. Those *must* match the node name.

Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

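For example, a quick resolvability check on each node (`node1` is a
placeholder):

----
getent hosts node1
----
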
In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' entries respectively. I also set the
'bindnetaddr' in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----
nodelist {

  node {
    name: node2   # the 'name' values here are placeholders
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}

[...] # other remaining config sections here
----

Now, after a final check that all the changed information is correct, we save
it; see the <<edit-corosync-conf,edit corosync.conf file>> section again to
learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:

----
systemctl restart corosync
----

Now check if everything is fine:

----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~

To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network
bonding.

Corosync itself also offers a possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second
totem ring on another network; this network should be physically separated
from the other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On A Created Cluster
~~~~~~~~~~~~~~~~~~~~~~~~

When enabling an already running cluster to use RRP you will take similar
steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>>. You
just do it on another ring.

First add a new `interface` subsection in the `totem` section, and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: CLUSTERNAME
  config_version: 2
  rrp_mode: passive
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }

}

nodelist {

  node {
    name: node1   # the 'name' values here are placeholders
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here

}

[...] # other remaining config sections here
----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it again one after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:

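----
man corosync.conf
----
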
For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are two on
each cluster node: one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes through an intermediate save.

----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:

----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands

----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to
restart the corosync service via:

----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
---------------

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system
log:

----
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV ] Service engine 'corosync_quorum' failed to load for reason
                 'configuration error: nodelist or quorum.expected_votes must be configured!'
----

It means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.

Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you know what you are doing, use:

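----
pvecm expected 1
----
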
This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. Here it's best to edit
the local copy of the corosync configuration in '/etc/corosync/corosync.conf'
so that corosync can start again. Ensure that on all nodes this configuration
has the same content to avoid split brains. If you are not sure what went
wrong it's best to ask the Proxmox Community to help you.

[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines the interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general, it's
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active
or none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.

Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, service `pve-manager` is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.

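For example, the flag can be set for a VM via `qm` (VMID 100 is just an
example):

----
qm set 100 -onboot 1
----
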
include::pve-copyright.adoc[]