main cluster network (Link 0). It defaults to the IP resolved via the node's
hostname.
-To add a second link as fallback, you can select the 'Advanced' checkbox and
-choose an additional network interface (Link 1, see also
-xref:pvecm_redundancy[Corosync Redundancy]).
+As of {pve} 6.2, up to 8 fallback links can be added to a cluster. To add a
+redundant link, click the 'Add' button and select a link number and IP address
+from the respective fields. Prior to {pve} 6.2, a second fallback link could be
+added by selecting the 'Advanced' checkbox and choosing an additional network
+interface (Link 1, see also xref:pvecm_redundancy[Corosync Redundancy]).
NOTE: Ensure that the network selected for cluster communication is not used for
any high traffic purposes, like network storage or live-migration.
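The same link configuration can also be given on the command line when creating
a cluster. The following is a minimal sketch; the addresses 10.10.10.1 and
10.20.20.1 are hypothetical and assume that the node has interfaces in two
separate networks:

----
 # pvecm create CLUSTERNAME --link0 10.10.10.1 --link1 10.20.20.1
----

A link priority can optionally be appended to each address (for example,
`--link0 10.10.10.1,priority=20`); see xref:pvecm_redundancy[Corosync
Redundancy] for details.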
Log in to the node you want to join into an existing cluster via `ssh`.
----
- hp2# pvecm add IP-ADDRESS-CLUSTER
+ # pvecm add IP-ADDRESS-CLUSTER
----
For `IP-ADDRESS-CLUSTER`, use the IP or hostname of an existing cluster node.
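For example, with a hypothetical existing cluster node at 192.168.10.11, and
assuming the joining node should use its address 10.10.10.5 on a dedicated
cluster network for link 0, the call could look like this:

----
 # pvecm add 192.168.10.11 --link0 10.10.10.5
----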
.Cluster status after adding 4 nodes
----
-hp2# pvecm status
+ # pvecm status
+Cluster information
+~~~~~~~~~~~~~~~~~~~
+Name: prod-central
+Config Version: 3
+Transport: knet
+Secure auth: on
+
Quorum information
~~~~~~~~~~~~~~~~~~
-Date: Mon Apr 20 12:30:13 2015
+Date: Tue Sep 14 11:06:47 2021
Quorum provider: corosync_votequorum
Nodes: 4
Node ID: 0x00000001
-Ring ID: 1/8
+Ring ID: 1.1a8
Quorate: Yes
Votequorum information
.List nodes in a cluster
----
-hp2# pvecm nodes
+ # pvecm nodes
Membership information
~~~~~~~~~~~~~~~~~~~~~~
CAUTION: Read the procedure carefully before proceeding, as it may
not be what you want or need.
-Move all virtual machines from the node. Make sure you have made copies of any
-local data or backups that you want to keep. In the following example, we will
-remove the node hp4 from the cluster.
+Move all virtual machines from the node. Ensure that you have made copies of any
+local data or backups that you want to keep. In addition, make sure to remove
+any scheduled replication jobs to the node to be removed.
+
+CAUTION: If you do not remove a node's replication jobs before deleting that
+node, the jobs become irremovable. Note especially that replication
+automatically switches direction if a replicated VM is migrated, so migrating
+a replicated VM away from a node that is about to be deleted will
+automatically set up replication jobs to that node.
+
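+Scheduled replication jobs can be reviewed and removed on the command line
+with the `pvesr` tool. As a brief sketch, assuming a hypothetical job '100-0'
+that still targets the node about to be removed:
+
+----
+ # pvesr list
+ # pvesr delete 100-0
+----
+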
+In the following example, we will remove the node hp4 from the cluster.
Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:
----
-hp1# pvecm nodes
+ hp1# pvecm nodes
Membership information
~~~~~~~~~~~~~~~~~~~~~~
Killing node 4
----
+NOTE: At this point, it is possible that you will receive an error message
+stating `Could not kill node (error = CS_ERR_NOT_EXIST)`. This does not
+signify an actual failure in the deletion of the node, but rather a failure in
+corosync trying to kill an offline node. Thus, it can be safely ignored.
+
Use `pvecm nodes` or `pvecm status` to check the node list again. It should
look something like:
----
hp1# pvecm status
-Quorum information
-~~~~~~~~~~~~~~~~~~
-Date: Mon Apr 20 12:44:28 2015
-Quorum provider: corosync_votequorum
-Nodes: 3
-Node ID: 0x00000001
-Ring ID: 1/8
-Quorate: Yes
+...
Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
# public network
auto vmbr0
iface vmbr0 inet static
- address 192.X.Y.57
- netmask 255.255.250.0
+ address 192.X.Y.57/24
gateway 192.X.Y.1
bridge-ports eno1
bridge-stp off
# cluster network
auto eno2
iface eno2 inet static
- address 10.1.1.1
- netmask 255.255.255.0
+ address 10.1.1.1/24
# fast network
auto eno3
iface eno3 inet static
- address 10.1.2.1
- netmask 255.255.255.0
+ address 10.1.2.1/24
----
Here, we will use the network 10.1.2.0/24 as a migration network. For