ifdef::manvolnum[]
-PVE({manvolnum})
-================
-include::attributes.txt[]
+pvecm(1)
+========
+:pve-toplevel:
NAME
----
pvecm - Proxmox VE Cluster Manager
-SYNOPSYS
+SYNOPSIS
--------
include::pvecm.1-synopsis.adoc[]
ifndef::manvolnum[]
Cluster Manager
===============
-include::attributes.txt[]
+:pve-toplevel:
endif::manvolnum[]
The {PVE} cluster manager `pvecm` is a tool to create a group of
use the 'ringX_addr' parameters to set the node's address on those networks:
[source,bash]
+----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
+----
If you want to use the Redundant Ring Protocol, you will also want to pass the
'ring1_addr' parameter.
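For example, a join that passes an address for both rings could look like the
following sketch, reusing the placeholder addresses from above; `IP-ADDRESS-RING1`
stands for the node's address on the second ring network:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----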
* then join it, as explained in the previous section.
+[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First stop the corosync and the pve-cluster services on the node:
[source,bash]
+----
systemctl stop pve-cluster
systemctl stop corosync
+----
Start the cluster filesystem again in local mode:
[source,bash]
+----
pmxcfs -l
+----
Delete the corosync configuration files:
[source,bash]
+----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
+----
You can now start the filesystem again as a normal service:
[source,bash]
+----
killall pmxcfs
systemctl start pve-cluster
+----
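If you want to verify that the cluster filesystem service came back up cleanly,
you can check its state; this is an optional sanity check, not a required step:

[source,bash]
----
systemctl status pve-cluster
----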
The node is now separated from the cluster. You can delete it from any remaining
node of the cluster with:
[source,bash]
+----
pvecm delnode oldnode
+----
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
+----
pvecm expected 1
+----
Then repeat the 'pvecm delnode' command.
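Afterwards you can verify on one of the remaining cluster nodes that the
separated node no longer shows up in the member list, for example with this
quick, optional check:

[source,bash]
----
pvecm nodes
----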
Finally, switch back to the separated node and delete all remaining files left
over from the old cluster there. This ensures that the node can be added to another
cluster again without problems.
[source,bash]
+----
rm /var/lib/corosync/*
+----
As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
no multicast querier is active. This test has a duration of around 10
minutes.
[source,bash]
+----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
+----
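If you want a quick sanity check before starting the long-running test above, a
short high-frequency omping burst can be run first. This is only a sketch: it
assumes `omping` is installed and started on all nodes in parallel, with the node
list adapted to your setup:

[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----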
Your network is not ready for clustering if any of these tests fail. Recheck
your network configuration. Especially switches are notorious for having
you would execute:
[source,bash]
+----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
+----
To check if everything is working properly, execute:
[source,bash]
+----
systemctl status corosync
+----
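Besides the corosync service state, you can also look at the cluster and quorum
status as {pve} sees it, for example with:

[source,bash]
----
pvecm status
----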
[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On a single node execute:
[source,bash]
+----
systemctl restart corosync
+----
Now check if everything is fine:
[source,bash]
+----
systemctl status corosync
+----
If corosync runs correctly again, restart it on all other nodes as well.
They will then join the cluster membership one by one on the new network.
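To confirm on a given node that corosync is healthy on the new network, you can
also inspect the local ring status with corosync's own tooling; an optional check:

[source,bash]
----
corosync-cfgtool -s
----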
10.10.20.1/24 subnet you would execute:
[source,bash]
+----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
+----
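A node joining this cluster would then also pass its own address on both
networks. A sketch, assuming the joining node uses 10.10.10.2 and 10.10.20.2;
adapt the addresses to your setup:

[source,bash]
----
pvecm add 10.10.10.1 -ring0_addr 10.10.10.2 -ring1_addr 10.10.20.2
----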
RRP On A Created Cluster
~~~~~~~~~~~~~~~~~~~~~~~~
When enabling an already running cluster to use RRP, you will take similar steps
-as describe in <<separate-cluster-net-after-creation,separating the cluster
-network>>. You just do it on another ring.
+as described in
+<<separate-cluster-net-after-creation,separating the cluster network>>. You
+just do it on another ring.
First, add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
----
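Purely as an illustration (not a complete configuration file), such an
`interface` subsection inside the `totem` section could look like the
following, assuming the second ring should use the 10.10.20.0/24 network from
the example above; adapt `bindnetaddr` to your own setup:

----
totem {
  interface {
    ringnumber: 1
    bindnetaddr: 10.10.20.0
  }
}
----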
-Bring it in effect like described in the <<edit-corosync-conf,edit the
-corosync.conf file>> section.
+Bring it into effect as described in the
+<<edit-corosync-conf,edit the corosync.conf file>> section.
This change cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
+----
man corosync.conf
+----
For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
avoid triggering unwanted changes through an intermediate save.
[source,bash]
+----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
+----
Then open the config file with your favorite editor; for example, `nano` and
`vim.tiny` come preinstalled on {pve}.
apply or causes problems in other ways.
[source,bash]
+----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
+----
Then move the new configuration file over the old one:
[source,bash]
+----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
+----
You can use the following commands to check
[source,bash]
+----
systemctl status corosync
journalctl -b -u corosync
+----
whether the change could be applied automatically. If not, you may have to restart the
corosync service via:
[source,bash]
+----
systemctl restart corosync
+----
On errors, check the troubleshooting section below.
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
+----
pvecm expected 1
+----
This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it to the last working backup.
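For example, to go back to the backup created earlier in this section (assuming
you kept the `.bak` copy from above):

[source,bash]
----
cp /etc/pve/corosync.conf.bak /etc/pve/corosync.conf
----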