+[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
* We recommend a dedicated NIC for the cluster traffic, especially if
you use shared storage.
-NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
-Proxmox VE 4.0 cluster nodes.
+* The root password of a cluster node is required for adding nodes.
+
+NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
+nodes.
+
+NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, this is not
+supported as a production configuration and should only be done temporarily,
+while upgrading the whole cluster from one major version to another.
Preparing Nodes
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.
-Currently the cluster creation has to be done on the console, so you
-need to login via `ssh`.
+Currently the cluster creation can either be done on the console (login via
+`ssh`) or through the API, for which we have a GUI implementation
+(__Datacenter -> Cluster__).
+
+While it's common practice to reference all other node names in `/etc/hosts`
+with their IPs, this is not strictly necessary for a cluster, which normally
+uses multicast, to work. It may be useful though, as you can then connect from
+one node to another via SSH using the easier to remember node name.
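+
+As a purely illustrative sketch (host names and addresses are made up for this
+example), such `/etc/hosts` entries could look like:
+
+----
+# /etc/hosts on each cluster node (example addresses)
+192.168.10.11 hp1.example.org hp1
+192.168.10.12 hp2.example.org hp2
+----
+
+With these entries in place, `ssh hp2` from hp1 resolves to 192.168.10.12
+without having to remember the IP address.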
+[[pvecm_create_cluster]]
Create the Cluster
------------------
Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
-This name cannot be changed later.
+This name cannot be changed later. The cluster name follows the same rules as
+node names.
- hp1# pvecm create YOUR-CLUSTER-NAME
+----
+ hp1# pvecm create CLUSTERNAME
+----
-CAUTION: The cluster name is used to compute the default multicast
-address. Please use unique cluster names if you run more than one
-cluster inside your network.
+CAUTION: The cluster name is used to compute the default multicast address.
+Please use unique cluster names if you run more than one cluster inside your
+network. To avoid human confusion, it is also recommended to choose different
+names even if clusters do not share the cluster network.
To check the state of your cluster use:
+----
hp1# pvecm status
+----
Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
to endpoints of the respective member nodes.
+[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------
Login via `ssh` to the node you want to add.
+----
hp2# pvecm add IP-ADDRESS-CLUSTER
+----
For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
To check the state of the cluster:
+----
# pvecm status
+----
.Cluster status after adding 4 nodes
----
If you only want the list of all nodes use:
+----
# pvecm nodes
+----
.List nodes in a cluster
----
After powering off the node hp4, we can safely remove it from the cluster.
+----
hp1# pvecm delnode hp4
+----
If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
systemctl status corosync
----
-Follow the section to add
-<<adding-nodes-with-separated-cluster-network,nodes to separated cluster network>>.
+Afterwards, proceed as described in the section to
+<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.
[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
If corosync runs correctly again, restart it on all the other nodes as well.
They will then join the cluster membership one by one on the new network.
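+
+Such a rolling restart could, for example, look like this (the node names are
+placeholders for your remaining cluster members):
+
+----
+hp1# ssh hp2 'systemctl restart corosync'
+hp1# ssh hp2 'systemctl status corosync'
+----
+
+Verify on each node that corosync is active again before continuing with the
+next one.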
+[[pvecm_rrp]]
Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.