4 include::attributes.txt[]
11 pvecm - Proxmox VE Cluster Manager
16 include::pvecm.1-synopsis.adoc[]
25 include::attributes.txt[]
32 The {PVE} cluster manager `pvecm` is a tool to create a group of
33 physical servers. Such a group is called a *cluster*. We use the
34 http://www.corosync.org[Corosync Cluster Engine] for reliable group
35 communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).
38 `pvecm` can be used to create a new cluster, join nodes to a cluster,
39 leave the cluster, get status information and do various other cluster
40 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster nodes.
44 Grouping nodes into a cluster has the following advantages:
* Centralized, web-based management
* Multi-master clusters: each node can do all management tasks
50 * `pmxcfs`: database-driven file system for storing configuration files,
51 replicated in real-time on all nodes using `corosync`.
* Easy migration of virtual machines and containers between physical hosts
58 * Cluster-wide services like firewall and HA
* All nodes must be in the same network, as `corosync` uses IP Multicast
65 to communicate between nodes (also see
66 http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
67 ports 5404 and 5405 for cluster communication.
NOTE: Some switches do not support IP multicast by default, and it must be
enabled manually first.
72 * Date and time have to be synchronized.
74 * SSH tunnel on TCP port 22 between nodes is used.
76 * If you are interested in High Availability, you need to have at
least three nodes for reliable quorum. All nodes should have the same version.
80 * We recommend a dedicated NIC for the cluster traffic, especially if
81 you use shared storage.
83 NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
84 Proxmox VE 4.0 cluster nodes.
90 First, install {PVE} on all nodes. Make sure that each node is
91 installed with the final hostname and IP configuration. Changing the
92 hostname and IP is not possible after cluster creation.
94 Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.
Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
101 This name cannot be changed later.
103 hp1# pvecm create YOUR-CLUSTER-NAME
105 CAUTION: The cluster name is used to compute the default multicast
106 address. Please use unique cluster names if you run more than one
107 cluster inside your network.
109 To check the state of your cluster use:
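----
hp1# pvecm status
----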
114 Adding Nodes to the Cluster
115 ---------------------------
Log in via `ssh` to the node you want to add.
119 hp2# pvecm add IP-ADDRESS-CLUSTER
121 For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up the VMs beforehand and restore them
under different VMIDs after adding the node to the cluster.
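A rough sketch of that workaround, assuming a QEMU guest with VMID `100` that
should become VMID `200` and the default dump directory (for containers use
`pct restore` instead of `qmrestore`):

----
# on the joining node, before the join: back up the guest
vzdump 100 -dumpdir /var/lib/vz/dump
# after the node has joined the cluster: restore it under a free VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma 200
----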
To check the state of the cluster:
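----
hp2# pvecm status
----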
133 .Cluster status after adding 4 nodes
138 Date: Mon Apr 20 12:30:13 2015
139 Quorum provider: corosync_votequorum
145 Votequorum information
146 ~~~~~~~~~~~~~~~~~~~~~~
153 Membership information
154 ~~~~~~~~~~~~~~~~~~~~~~
156 0x00000001 1 192.168.15.91
157 0x00000002 1 192.168.15.92 (local)
158 0x00000003 1 192.168.15.93
159 0x00000004 1 192.168.15.94
If you only want the list of all nodes, use:
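----
hp1# pvecm nodes
----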
166 .List nodes in a cluster
170 Membership information
171 ~~~~~~~~~~~~~~~~~~~~~~
179 Adding Nodes With Separated Cluster Network
180 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
182 When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:
187 pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
190 If you want to use the Redundant Ring Protocol you will also want to pass the
191 'ring1_addr' parameter.
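For example, a join over both rings could look like this (all addresses are
placeholders):

----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----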
194 Remove a Cluster Node
195 ---------------------
CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.
200 Move all virtual machines from the node. Make sure you have no local
201 data or backups you want to keep, or save them accordingly.
203 Log in to one remaining node via ssh. Issue a `pvecm nodes` command to
204 identify the node ID:
211 Date: Mon Apr 20 12:30:13 2015
212 Quorum provider: corosync_votequorum
218 Votequorum information
219 ~~~~~~~~~~~~~~~~~~~~~~
226 Membership information
227 ~~~~~~~~~~~~~~~~~~~~~~
229 0x00000001 1 192.168.15.91 (local)
230 0x00000002 1 192.168.15.92
231 0x00000003 1 192.168.15.93
232 0x00000004 1 192.168.15.94
IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it is.
242 Membership information
243 ~~~~~~~~~~~~~~~~~~~~~~
251 Log in to one remaining node via ssh. Issue the delete command (here
252 deleting node `hp4`):
254 hp1# pvecm delnode hp4
If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:
265 Date: Mon Apr 20 12:44:28 2015
266 Quorum provider: corosync_votequorum
272 Votequorum information
273 ~~~~~~~~~~~~~~~~~~~~~~
280 Membership information
281 ~~~~~~~~~~~~~~~~~~~~~~
283 0x00000001 1 192.168.15.90 (local)
284 0x00000002 1 192.168.15.91
285 0x00000003 1 192.168.15.92
IMPORTANT: As said above, it is very important to power off the node
*before* removal, and to make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will end up in a broken
state, and it could be difficult to restore a clean cluster state.
If, for whatever reason, you want this server to join the same
cluster again, you have to
298 * reinstall {pve} on it from scratch
300 * then join it, as explained in the previous section.
302 Separate A Node Without Reinstalling
303 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CAUTION: This is *not* the recommended method, proceed with caution. Use the
method described above if you are unsure.
You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to the shared storage! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as this leads to VMID conflicts.
It is suggested that you create a new storage to which only the node that you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
is not accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.
324 First stop the corosync and the pve-cluster services on the node:
327 systemctl stop pve-cluster
328 systemctl stop corosync
331 Start the cluster filesystem again in local mode:
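----
pmxcfs -l
----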
337 Delete the corosync configuration files:
340 rm /etc/pve/corosync.conf
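# also remove the local corosync configuration, so the old cluster settings
# cannot be picked up again (standard corosync configuration directory)
rm -rf /etc/corosync/*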
You can now start the filesystem again as a normal service:
348 systemctl start pve-cluster
The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
355 pvecm delnode oldnode
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
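----
pvecm expected 1
----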
Then repeat the 'pvecm delnode' command.
Now switch back to the separated node and delete all remaining files left
over from the old cluster there. This ensures that the node can be added to
another cluster again without problems.
373 rm /var/lib/corosync/*
As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you picked the correct one before deleting it.
CAUTION: The node's SSH keys are still in the 'authorized_keys' file, which
means the nodes can still connect to each other with public key
authentication. Fix this by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
392 [quote, from Wikipedia, Quorum (distributed computing)]
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum. For example, a cluster of four nodes (with one vote
each) needs at least three votes to stay quorate.
403 NOTE: {pve} assigns a single vote to each node by default.
The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).
414 [[cluster-network-requirements]]
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast-capable
network. The network should not be used heavily by other members;
ideally corosync runs on its own network.
*Never* share it with a network where storage communicates too.
Before setting up a cluster it is good practice to check if the network is fit
for that purpose.
427 * Ensure that all nodes are in the same subnet. This must only be true for the
428 network interfaces used for cluster communication (corosync).
430 * Ensure all nodes can reach each other over those interfaces, using `ping` is
431 enough for a basic test.
* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
437 omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10 minutes.
446 omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.
In smaller clusters it is also an option to use unicast if you really cannot
get multicast working.
457 Separate Cluster Network
458 ~~~~~~~~~~~~~~~~~~~~~~~~
When creating a cluster without any parameters, the cluster network is generally
shared with the web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical, real-time application.
465 Setting Up A New Network
466 ^^^^^^^^^^^^^^^^^^^^^^^^
First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.
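A minimal sketch of such an interface in '/etc/network/interfaces', assuming
the new NIC is called `eth1` and should carry the 10.10.10.1/25 address used
in the following example (adapt name and address to your setup):

----
auto eth1
iface eth1 inet static
        address 10.10.10.1
        netmask 255.255.255.128
----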
472 Separate On Cluster Creation
473 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.
If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:
484 pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
487 To check if everything is working properly execute:
490 systemctl status corosync
493 [[separate-cluster-net-after-creation]]
494 Separate After Cluster Creation
495 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.
Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:
537 provider: corosync_votequorum
541 cluster_name: thomas-testcluster
547 bindnetaddr: 192.168.30.50
The first thing you want to do is add the 'name' properties in the node entries,
if you do not see them already. Those *must* match the node name.
Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.
In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' entries respectively. I also set the
bindnetaddr in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.
After you have increased the 'config_version' property, the new configuration file
should look like:
582 ring0_addr: 10.10.10.2
589 ring0_addr: 10.10.10.3
596 ring0_addr: 10.10.10.1
602 provider: corosync_votequorum
606 cluster_name: thomas-testcluster
612 bindnetaddr: 10.10.10.1
Now, after a final check that all the changed information is correct, we save
it and follow the <<edit-corosync-conf,edit the corosync.conf file>> section
again to learn how to bring it into effect.
As our change cannot be applied live by corosync, we have to restart it.
625 On a single node execute:
628 systemctl restart corosync
631 Now check if everything is fine:
635 systemctl status corosync
If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.
641 Redundant Ring Protocol
642 ~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure, you should implement countermeasures.
This can be done on the hardware and operating system level through network bonding.
Corosync itself also offers the possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.
651 RRP On Cluster Creation
652 ~~~~~~~~~~~~~~~~~~~~~~~
The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.
657 NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.
So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:
664 pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
665 -bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
668 RRP On A Created Cluster
669 ~~~~~~~~~~~~~~~~~~~~~~~~
When enabling an already running cluster to use RRP, you will take similar steps
as described in <<separate-cluster-net-after-creation,separating the cluster
network>>. You just do it on another ring.
First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.
Then add the new `ring1_addr` property, with the node's additional ring
address, to each node entry in the `nodelist` section.
683 So if you have two networks, one on the 10.10.10.1/24 and the other on the
684 10.10.20.1/24 subnet, the final configuration file should look like:
695 bindnetaddr: 10.10.10.1
699 bindnetaddr: 10.10.20.1
709 ring0_addr: 10.10.10.1
710 ring1_addr: 10.10.20.1
717 ring0_addr: 10.10.10.2
718 ring1_addr: 10.10.20.2
721 [...] # other cluster nodes here
724 [...] # other remaining config sections here
Bring it into effect as described in the <<edit-corosync-conf,edit the
corosync.conf file>> section.
This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.
If you cannot reboot the whole cluster, ensure that no High Availability services
are configured and then stop the corosync service on all nodes. After corosync is
stopped on all nodes, start it again one node after the other.
738 Corosync Configuration
739 ----------------------
The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
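----
man corosync.conf
----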
749 For node membership you should always use the `pvecm` tool provided by {pve}.
750 You may have to edit the configuration file manually for other changes.
751 Here are a few best practice tips for doing this.
753 [[edit-corosync-conf]]
Editing the corosync.conf file is not always straightforward. There are
two of them on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.
The configuration will get updated automatically as soon as the file changes.
This means that changes which can be integrated into a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.
769 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
Then open the config file with your favorite editor; `nano` and `vim.tiny`, for
example, are preinstalled on {pve}.
775 NOTE: Always increment the 'config_version' number on configuration changes,
776 omitting this can lead to problems.
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.
784 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
787 Then move the new configuration file over the old one:
790 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
Use the following commands to check whether the change was applied automatically:
796 systemctl status corosync
797 journalctl -b -u corosync
If not, you may have to restart the
corosync service via:
804 systemctl restart corosync
807 On errors check the troubleshooting section below.
812 Issue: 'quorum.expected_votes must be configured'
813 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
815 When corosync starts to fail and you get the following message in the system log:
819 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
820 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
821 'configuration error: nodelist or quorum.expected_votes must be configured!'
825 It means that the hostname you set for corosync 'ringX_addr' in the
826 configuration could not be resolved.
829 Write Configuration When Not Quorate
830 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
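----
pvecm expected 1
----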
839 This sets the expected vote count to 1 and makes the cluster quorate. You can
840 now fix your configuration, or revert it back to the last working backup.
This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in '/etc/corosync/corosync.conf',
so that corosync can start again. Ensure that this configuration has the same
content on all nodes to avoid split-brain situations. If you are not sure what
went wrong, it's best to ask the Proxmox Community to help you.
849 [[corosync-conf-glossary]]
850 Corosync Configuration Glossary
851 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.
bindnetX_addr::
Defines to which interface the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address a node uses on this interface.
rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.
872 It is obvious that a cluster is not quorate when all nodes are
873 offline. This is a common case after a power failure.
NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.
On node startup, the `pve-manager` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
889 include::pve-copyright.adoc[]