NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
nodes.
-NOTE: While it's possible for {pve} 4.4 and {pve} 5.0 this is not supported as
-production configuration and should only used temporarily during upgrading the
-whole cluster from one to another major version.
+NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
+not supported as a production configuration and should only be used
+temporarily, while upgrading the whole cluster from one major version to
+another.
NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
cluster protocol (corosync) between {pve} 6.x and earlier versions changed
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.
-Currently the cluster creation can either be done on the console (login via
-`ssh`) or the API, which we have a GUI implementation for (__Datacenter ->
-Cluster__).
-
While it's common to reference all nodenames and their IPs in `/etc/hosts` (or
make their names resolvable through other means), this is not necessary for a
cluster to work. It may be useful however, as you can then connect from one node
[[pvecm_create_cluster]]
-Create the Cluster
-------------------
+Create a Cluster
+----------------
+
+You can either create a cluster on the console (login via `ssh`), or through
+the API using the {pve} web interface (__Datacenter -> Cluster__).
-Use a unique name for your cluster. This name cannot be changed later. The
-cluster name follows the same rules as node names.
+NOTE: Use a unique name for your cluster. This name cannot be changed later.
+The cluster name follows the same rules as node names.
+[[pvecm_cluster_create_via_gui]]
Create via Web GUI
~~~~~~~~~~~~~~~~~~
+[thumbnail="screenshot/gui-cluster-create.png"]
+
Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
name and select a network connection from the dropdown to serve as the main
cluster network (Link 0). It defaults to the IP resolved via the node's
choose an additional network interface (Link 1, see also
xref:pvecm_redundancy[Corosync Redundancy]).
+NOTE: Ensure that the network selected for cluster communication is not used
+for any high traffic loads, like (network) storage or live migration.
+While the cluster network itself produces small amounts of data, it is very
+sensitive to latency. Check out the full
+xref:pvecm_cluster_network_requirements[cluster network requirements].
+
+[[pvecm_cluster_create_via_cli]]
Create via Command Line
~~~~~~~~~~~~~~~~~~~~~~~
guest (`vzdump`) and restore it as a different ID after the node has been added
to the cluster.
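+
+For example, a conflicting guest could be moved like this (a sketch; the VMID
+`100`, the new VMID `120` and the dump directory are assumptions, and the
+actual archive name will differ):
+
+----
+ # vzdump 100 --dumpdir /var/tmp
+ # qmrestore /var/tmp/vzdump-qemu-100-<timestamp>.vma 120
+----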
-Add Node via GUI
-~~~~~~~~~~~~~~~~
+Join Node to Cluster via GUI
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-cluster-join-information.png"]
Login to the web interface on an existing cluster node. Under __Datacenter ->
Cluster__, click the button *Join Information* at the top. Then, click on the
button *Copy Information*. Alternatively, copy the string from the 'Information'
field manually.
+[thumbnail="screenshot/gui-cluster-join.png"]
+
Next, login to the web interface on the node you want to add.
Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
-'Information' field with the text you copied earlier.
-
-For security reasons, the cluster password has to be entered manually.
+'Information' field with the 'Join Information' text you copied earlier.
+Most settings required for joining the cluster will be filled out
+automatically. For security reasons, the cluster password has to be entered
+manually.
NOTE: To enter all required data manually, you can disable the 'Assisted Join'
checkbox.
-After clicking on *Join* the node will immediately be added to the cluster. You
-might need to reload the web page and re-login with the cluster credentials.
+After clicking the *Join* button, the cluster join process will start
+immediately. After the node has joined the cluster, its current node
+certificate will be replaced by one signed by the cluster certificate
+authority (CA). This means that the current session will stop working after a
+few seconds. You might then need to force-reload the web interface and log in
+again with the cluster credentials.
-Confirm that your node is visible under __Datacenter -> Cluster__.
+Now your node should be visible under __Datacenter -> Cluster__.
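+
+You can also verify the membership on the console of any cluster node (a
+sketch; node names and addresses will differ in your setup):
+
+----
+ # pvecm status
+ # pvecm nodes
+----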
-Add Node via Command Line
-~~~~~~~~~~~~~~~~~~~~~~~~~
+Join Node to Cluster via Command Line
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Login via `ssh` to the node you want to add.
+Login via `ssh` to the node you want to join into an existing cluster.
----
hp2# pvecm add IP-ADDRESS-CLUSTER
which may lead to a situation where an address is changed without thinking
about implications for corosync.
-A seperate, static hostname specifically for corosync is recommended, if
+A separate, static hostname specifically for corosync is recommended, if
hostnames are preferred. Also, make sure that every node in the cluster can
resolve all hostnames correctly.
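+
+For example, `/etc/hosts` entries on each node could look like this (a
+sketch; the `.corosync` names and addresses are assumptions for a dedicated
+cluster network):
+
+----
+10.10.10.1 node1.corosync
+10.10.10.2 node2.corosync
+10.10.10.3 node3.corosync
+----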
Nodes that joined the cluster on earlier versions likely still use their
unresolved hostname in `corosync.conf`. It might be a good idea to replace
-them with IPs or a seperate hostname, as mentioned above.
+them with IPs or a separate hostname, as mentioned above.
[[pvecm_redundancy]]
Links are used according to a priority setting. You can configure this priority
by setting 'knet_link_priority' in the corresponding interface section in
-`corosync.conf`, or, preferrably, using the 'priority' parameter when creating
+`corosync.conf`, or, preferably, using the 'priority' parameter when creating
your cluster with `pvecm`:
----
- # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=20 --link1 10.20.20.1,priority=15
+ # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
----
-This would cause 'link1' to be used first, since it has the lower priority.
+This would cause 'link1' to be used first, since it has the higher priority.
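+
+The same priorities could also be set directly in the interface sections of
+`corosync.conf` (a sketch; all other settings are omitted):
+
+----
+totem {
+  interface {
+    linknumber: 0
+    knet_link_priority: 15
+  }
+  interface {
+    linknumber: 1
+    knet_link_priority: 20
+  }
+}
+----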
If no priorities are configured manually (or two links have the same priority),
links will be used in order of their number, with the lower number having higher
QDevice Technical Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~
-The Corosync Quroum Device (QDevice) is a daemon which runs on each cluster
+The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem based on an external running third-party arbitrator's decision.
Its primary use is to allow a cluster to sustain more node failures than