4 include::attributes.txt[]
9 pvecm - Proxmox VE Cluster Manager
14 include::pvecm.1-synopsis.adoc[]
23 include::attributes.txt[]
26 The {PVE} cluster manager `pvecm` is a tool to create a group of
27 physical servers. Such a group is called a *cluster*. We use the
28 http://www.corosync.org[Corosync Cluster Engine] for reliable group
29 communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).
32 `pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information, and do various other cluster-related
tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.
38 Grouping nodes into a cluster has the following advantages:
40 * Centralized, web based management
* Multi-master clusters: each node can do all management tasks
44 * `pmxcfs`: database-driven file system for storing configuration files,
45 replicated in real-time on all nodes using `corosync`.
* Easy migration of virtual machines and containers between physical hosts
52 * Cluster-wide services like firewall and HA
58 * All nodes must be in the same network as `corosync` uses IP Multicast
59 to communicate between nodes (also see
60 http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
61 ports 5404 and 5405 for cluster communication.
NOTE: Some switches do not support IP multicast by default, so it must be
enabled manually first.
66 * Date and time have to be synchronized.
68 * SSH tunnel on TCP port 22 between nodes is used.
70 * If you are interested in High Availability, you need to have at
least three nodes for reliable quorum. All nodes should have the same version.
74 * We recommend a dedicated NIC for the cluster traffic, especially if
75 you use shared storage.
77 NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
78 Proxmox VE 4.0 cluster nodes.
84 First, install {PVE} on all nodes. Make sure that each node is
85 installed with the final hostname and IP configuration. Changing the
86 hostname and IP is not possible after cluster creation.
88 Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.
Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
95 This name cannot be changed later.
97 hp1# pvecm create YOUR-CLUSTER-NAME
99 CAUTION: The cluster name is used to compute the default multicast
100 address. Please use unique cluster names if you run more than one
101 cluster inside your network.
103 To check the state of your cluster use:
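 hp1# pvecm status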
108 Adding Nodes to the Cluster
109 ---------------------------
Log in via `ssh` to the node you want to add.
113 hp2# pvecm add IP-ADDRESS-CLUSTER
115 For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
117 CAUTION: A new node cannot hold any VMs, because you would get
118 conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up the guests and restore them with different
VMIDs after adding the node to the cluster.
To check the state of the cluster:
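 # pvecm status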
127 .Cluster status after adding 4 nodes
132 Date: Mon Apr 20 12:30:13 2015
133 Quorum provider: corosync_votequorum
139 Votequorum information
140 ~~~~~~~~~~~~~~~~~~~~~~
147 Membership information
148 ~~~~~~~~~~~~~~~~~~~~~~
150 0x00000001 1 192.168.15.91
151 0x00000002 1 192.168.15.92 (local)
152 0x00000003 1 192.168.15.93
153 0x00000004 1 192.168.15.94
156 If you only want the list of all nodes use:
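 # pvecm nodes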
160 .List nodes in a cluster
164 Membership information
165 ~~~~~~~~~~~~~~~~~~~~~~
173 Adding Nodes With Separated Cluster Network
174 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
176 When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:
180 pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
182 If you want to use the Redundant Ring Protocol you will also want to pass the
183 'ring1_addr' parameter.
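For example, reusing the placeholder style from above ('IP-ADDRESS-RING1'
stands for the new node's address on the second ring network):

 pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1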
186 Remove a Cluster Node
187 ---------------------
CAUTION: Read the procedure carefully before proceeding, as it may not be
what you want or need.
192 Move all virtual machines from the node. Make sure you have no local
193 data or backups you want to keep, or save them accordingly.
195 Log in to one remaining node via ssh. Issue a `pvecm nodes` command to
identify the node ID to remove:
203 Date: Mon Apr 20 12:30:13 2015
204 Quorum provider: corosync_votequorum
210 Votequorum information
211 ~~~~~~~~~~~~~~~~~~~~~~
218 Membership information
219 ~~~~~~~~~~~~~~~~~~~~~~
221 0x00000001 1 192.168.15.91 (local)
222 0x00000002 1 192.168.15.92
223 0x00000003 1 192.168.15.93
224 0x00000004 1 192.168.15.94
227 IMPORTANT: at this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it is.
234 Membership information
235 ~~~~~~~~~~~~~~~~~~~~~~
243 Log in to one remaining node via ssh. Issue the delete command (here
244 deleting node `hp4`):
246 hp1# pvecm delnode hp4
If the operation succeeds, no output is returned. Check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:
257 Date: Mon Apr 20 12:44:28 2015
258 Quorum provider: corosync_votequorum
264 Votequorum information
265 ~~~~~~~~~~~~~~~~~~~~~~
272 Membership information
273 ~~~~~~~~~~~~~~~~~~~~~~
275 0x00000001 1 192.168.15.90 (local)
276 0x00000002 1 192.168.15.91
277 0x00000003 1 192.168.15.92
280 IMPORTANT: as said above, it is very important to power off the node
281 *before* removal, and make sure that it will *never* power on again
282 (in the existing cluster network) as it is.
284 If you power on the node as it is, your cluster will be screwed up and
285 it could be difficult to restore a clean cluster state.
If, for whatever reason, you want this server to join the same
cluster again, you have to
290 * reinstall {pve} on it from scratch
292 * then join it, as explained in the previous section.
294 Separate A Node Without Reinstalling
295 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
297 CAUTION: This is *not* the recommended method, proceed with caution. Use the
above-mentioned method if you're unsure.
300 You can also separate a node from a cluster without reinstalling it from
301 scratch. But after removing the node from the cluster it will still have
302 access to the shared storages! This must be resolved before you start removing
303 the node from the cluster. A {pve} cluster cannot share the exact same
304 storage with another cluster, as it leads to VMID conflicts.
Move the guests which you want to keep on this node now; after the removal you
can do this only via backup and restore. It is suggested that you create a new
storage to which only the node you want to separate has access. This can be
a new export on your NFS or a new Ceph pool, to name a few examples. It is just
important that the exact same storage does not get accessed by multiple
clusters. After setting up this storage, move all data from the node and its VMs
to it. Then you are ready to separate the node from the cluster.
WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.
First, stop the corosync and the pve-cluster services on the node:
319 systemctl stop pve-cluster
320 systemctl stop corosync
322 Start the cluster filesystem again in local mode:
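 pmxcfs -l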
326 Delete the corosync configuration files:
328 rm /etc/pve/corosync.conf
You can now start the filesystem again as a normal service:
334 systemctl start pve-cluster
The node is now separated from the cluster. You can delete it from a remaining
337 node of the cluster with:
339 pvecm delnode oldnode
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
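 pvecm expected 1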
And then repeat the 'pvecm delnode' command.
Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
350 cluster again without problems.
353 rm /var/lib/corosync/*
355 As the configuration files from the other nodes are still in the cluster
filesystem you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you picked the correct one before deleting it.
CAUTION: The node's SSH keys are still in the 'authorized_keys' file; this
means the nodes can still connect to each other with public key
authentication. Fix this by removing the respective keys from the
363 '/etc/pve/priv/authorized_keys' file.
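One minimal way to do this (a sketch; it assumes the separated node's name,
here 'oldnode', appears in the comment of its key entries):

 sed -i '/oldnode/d' /etc/pve/priv/authorized_keys

Alternatively, open the file in an editor and remove the respective lines by hand.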
{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
371 [quote, from Wikipedia, Quorum (distributed computing)]
373 A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum. For example, a cluster of four nodes, each with one vote,
needs at least three nodes online to be quorate; after a 2/2 network split,
neither half has a majority.
382 NOTE: {pve} assigns a single vote to each node by default.
387 The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low
overhead, high availability development toolkit. It serves our decentralized
391 configuration file system (`pmxcfs`).
393 [[cluster-network-requirements]]
396 This needs a reliable network with latencies under 2 milliseconds (LAN
397 performance) to work properly. While corosync can also use unicast for
communication between nodes, it's **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network where
storage communicates too.
Before setting up a cluster it is good practice to check if the network is fit
for that purpose.
406 * Ensure that all nodes are in the same subnet. This must only be true for the
407 network interfaces used for cluster communication (corosync).
409 * Ensure all nodes can reach each other over those interfaces, using `ping` is
410 enough for a basic test.
* Ensure that multicast works in general and with high package rates. This can be
413 done with the `omping` tool. The final "%loss" number should be < 1%.
416 omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
419 * Ensure that multicast communication works over an extended period of time.
This uncovers problems where IGMP snooping is activated on the network but
no multicast querier is active. This test has a duration of around 10
minutes.
424 omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no active IGMP
querier.
In smaller clusters it's also an option to use unicast if you really cannot get
multicast to work.
434 Separate Cluster Network
435 ~~~~~~~~~~~~~~~~~~~~~~~~
437 When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It's recommended to
change that, as corosync is a time-critical, real-time application.
442 Setting Up A New Network
443 ^^^^^^^^^^^^^^^^^^^^^^^^
First, you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
447 <<cluster-network-requirements,cluster network requirements>>.
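For example, a dedicated NIC could get a static address in
'/etc/network/interfaces', matching the 10.10.10.1/25 network used in the
examples below (the interface name is an assumption and depends on your
hardware):

 auto eth1
 iface eth1 inet static
     address 10.10.10.1
     netmask 255.255.255.128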
449 Separate On Cluster Creation
450 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
453 the 'pvecm create' command used for creating a new cluster.
If you have set up an additional NIC with a static address of 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:
460 pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
462 To check if everything is working properly execute:
464 systemctl status corosync
466 [[separate-cluster-net-after-creation]]
467 Separate After Cluster Creation
468 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can also do this if you have already created a cluster and want to switch
471 its communication to another network, without rebuilding the whole cluster.
472 This change may lead to short durations of quorum loss in the cluster, as nodes
473 have to restart corosync and come up one after the other on the new network.
475 Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:
510 provider: corosync_votequorum
514 cluster_name: thomas-testcluster
520 bindnetaddr: 192.168.30.50
The first thing you want to do is add the 'name' properties to the node entries
if you do not see them already. They *must* match the node name.
Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.
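One way to ensure this (an illustration only; the hostnames are placeholders)
is an entry for every node in '/etc/hosts' on all nodes:

 10.10.10.1 node1-corosync
 10.10.10.2 node2-corosync
 10.10.10.3 node3-corosync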
534 In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' properties accordingly. I also set the
'bindnetaddr' in the totem section of the config to an address of the new
network. It can be
537 any address from the subnet configured on the new network interface.
After increasing the 'config_version' property, the new configuration file
should look like:
555 ring0_addr: 10.10.10.2
562 ring0_addr: 10.10.10.3
569 ring0_addr: 10.10.10.1
575 provider: corosync_votequorum
579 cluster_name: thomas-testcluster
585 bindnetaddr: 10.10.10.1
Now, after a final check that all changed information is correct, we save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.
As our change cannot be applied live by corosync, we have to do a restart.
598 On a single node execute:
600 systemctl restart corosync
602 Now check if everything is fine:
605 systemctl status corosync
If corosync runs correctly again, restart it on all other nodes too.
608 They will then join the cluster membership one by one on the new network.
610 Redundant Ring Protocol
611 ~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network bonding.
Corosync itself also offers the possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network; this network should be physically separated from the
other ring's network to actually increase availability.
620 RRP On Cluster Creation
621 ~~~~~~~~~~~~~~~~~~~~~~~
623 The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.
626 NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.
628 So if you have two networks, one on the 10.10.10.1/24 and the other on the
629 10.10.20.1/24 subnet you would execute:
632 pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
633 -bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
635 RRP On A Created Cluster
636 ~~~~~~~~~~~~~~~~~~~~~~~~
638 When enabling an already running cluster to use RRP you will take similar steps
as described in <<separate-cluster-net-after-creation,separating the cluster
640 network>>. You just do it on another ring.
First, add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.
647 Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.
650 So if you have two networks, one on the 10.10.10.1/24 and the other on the
651 10.10.20.1/24 subnet, the final configuration file should look like:
662 bindnetaddr: 10.10.10.1
666 bindnetaddr: 10.10.20.1
676 ring0_addr: 10.10.10.1
677 ring1_addr: 10.10.20.1
684 ring0_addr: 10.10.10.2
685 ring1_addr: 10.10.20.2
688 [...] # other cluster nodes here
691 [...] # other remaining config sections here
Bring it into effect as described in the <<edit-corosync-conf,edit the
696 corosync.conf file>> section.
This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.
If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it again one node after the other.
705 Corosync Configuration
706 ----------------------
The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
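 man corosync.conf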
714 For node membership you should always use the `pvecm` tool provided by {pve}.
715 You may have to edit the configuration file manually for other changes.
716 Here are a few best practice tips for doing this.
718 [[edit-corosync-conf]]
Editing the corosync.conf file is not always straightforward. There are
two copies on each cluster node, one in `/etc/pve/corosync.conf` and the other in
724 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
725 propagate the changes to the local one, but not vice versa.
727 The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated into a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.
733 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
Then open the config file with your favorite editor; `nano` and `vim.tiny`,
for example, are preinstalled on {pve}.
NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.
746 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
748 Then move the new configuration file over the old one:
750 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
You may check with the following commands whether the change was applied
automatically:
754 systemctl status corosync
755 journalctl -b -u corosync
If it was not applied automatically, you may have to restart the
corosync service via:
760 systemctl restart corosync
762 On errors check the troubleshooting section below.
767 Issue: 'quorum.expected_votes must be configured'
768 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
770 When corosync starts to fail and you get the following message in the system log:
774 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
775 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
776 'configuration error: nodelist or quorum.expected_votes must be configured!'
This means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.
784 Write Configuration When Not Quorate
785 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
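 pvecm expected 1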
792 This sets the expected vote count to 1 and makes the cluster quorate. You can
793 now fix your configuration, or revert it back to the last working backup.
This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on
all nodes this configuration has the same content to avoid split-brain
situations. If you are not sure what went wrong
799 it's best to ask the Proxmox Community to help you.
802 [[corosync-conf-glossary]]
803 Corosync Configuration Glossary
804 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
807 This names the different ring addresses for the corosync totem rings used for
808 the cluster communication.
Defines the interface to which the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address a node uses on this interface.
816 Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increase availability.
825 It is obvious that a cluster is not quorate when all nodes are
826 offline. This is a common case after a power failure.
828 NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.
832 On node startup, service `pve-manager` is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
836 When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
838 mind that guest startup is delayed until you reach quorum.
842 include::pve-copyright.adoc[]