ifdef::manvolnum[]
pvecm(1)
========
include::attributes.txt[]
:pve-toplevel:

NAME
----
pvecm - Proxmox VE Cluster Manager
SYNOPSIS
--------
include::pvecm.1-synopsis.adoc[]
include::attributes.txt[]
endif::manvolnum[]
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication.

When adding a node to a cluster with a separated cluster network, use the
'ringX_addr' parameters to set the node's addresses on those networks:
[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----
If you want to use the Redundant Ring Protocol, you will also want to pass the
'ring1_addr' parameter.
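For example, a join command that sets an address on both rings could look like
this (the placeholders below are illustrative, mirroring the command above):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----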
But after the removal, the node will still have access to any shared storage!
This must be resolved before you start removing the node from the cluster. A
{pve} cluster cannot share the exact same storage with another cluster, as it
leads to VMID conflicts.
Move the guests which you want to keep on this node now; after the removal, you
can only do this via backup and restore. It's suggested that you create a new
storage where only the node which you want to separate has access. This can be
a new export on your NFS or a new Ceph pool, to name a few examples. It's just
important that the exact same storage does not get accessed by multiple
clusters. After setting this storage up, move all data from the node and its
VMs to it. Then you are ready to separate the node from the cluster.
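As a rough sketch of such a setup, a new NFS export could be added as a storage
that only the node to be separated may use; the storage name, server address,
and export path below are made-up placeholders:

[source,bash]
----
# Add an NFS storage restricted to the node 'nodeX' (all names are placeholders)
pvesm add nfs separate-storage --path /mnt/pve/separate-storage \
    --server 192.168.1.100 --export /export/separate \
    --content images,rootdir --nodes nodeX
----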
WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.
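Before you start, it can help to review which storages the node currently has
access to, for example with (a simple sanity check, not part of the official
procedure):

[source,bash]
----
pvesm status
----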
First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----
Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----
Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----
You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----
The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----
If the command fails because the remaining node in the cluster lost quorum when
the now separated node exited, you may set the expected votes to 1 as a
workaround:
[source,bash]
----
pvecm expected 1
----
And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left over
from the old cluster. This ensures that the node can be added to another
cluster again without problems.
[source,bash]
----
rm /var/lib/corosync/*
----
As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
'/etc/pve/nodes/NODENAME' directory recursively, but check three times that you
are deleting the correct one before doing so.
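A minimal sketch of that cleanup, assuming the leftover directory of another
node is called 'NODENAME' (triple-check the name before running it):

[source,bash]
----
# Remove the stale configuration directory of a former cluster node
rm -rf /etc/pve/nodes/NODENAME
----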
This covers cases where IGMP snooping is activated on the network but no
multicast querier is active. This test has a duration of around 10 minutes.
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----
Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled without an active
multicast querier.
For example, to create a cluster named 'test', using 10.10.10.1 as the ring0
address and binding to the 10.10.10.0 network, you would execute:
[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----
To check if everything is working properly, execute:
[source,bash]
----
systemctl status corosync
----
[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On a single node execute:
[source,bash]
----
systemctl restart corosync
----
Now check if everything is fine:
[source,bash]
----
systemctl status corosync
----
If corosync runs correctly again, restart it on all other nodes too. They will
then join the cluster membership one by one on the new network.
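To verify that the nodes have rejoined, you can, for example, inspect the
membership and quorum information with:

[source,bash]
----
pvecm status
----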
So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:
[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----
RRP On A Created Cluster
~~~~~~~~~~~~~~~~~~~~~~~~
The '/etc/pve/corosync.conf' file controls the cluster membership and its
network. To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----
For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
You should always make a copy of the file and edit that copy instead, to avoid
triggering unwanted changes by an in-between save:
[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----
Then open the config file with your favorite editor; for example, `nano` and
`vim.tiny` come preinstalled on {pve}.
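For example, assuming `nano`, you would then edit the copy created above:

[source,bash]
----
nano /etc/pve/corosync.conf.new
----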
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways:
[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----
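Before activating the new file, you may also want to compare it against the
currently active configuration; this is just an optional sanity check, not part
of the documented procedure:

[source,bash]
----
diff -u /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----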
Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----
You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----
whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----
On errors check the troubleshooting section below.
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you know what you are doing, use:
[source,bash]
----
pvecm expected 1
----
This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.
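For example, to revert to the backup created earlier (assuming you made one as
described above), copy it back over the active file and let the change apply as
usual:

[source,bash]
----
cp /etc/pve/corosync.conf.bak /etc/pve/corosync.conf
----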