X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pvecm.adoc;h=0b1857e48c0148577000c73dfa29af369c6a6cd4;hb=fc9c969da4295500759c2576b761cb4b1007a73b;hp=2d1c744c201c3cae5c64d6f2ad3ae4a255d1f186;hpb=65a0aa490b3c6d81cbb0b76c8f8b16c2fd63881b;p=pve-docs.git

diff --git a/pvecm.adoc b/pvecm.adoc
index 2d1c744..0b1857e 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -123,9 +123,11 @@ name and select a network connection from the drop-down list to serve as the
 main cluster network (Link 0). It defaults to the IP resolved via the node's
 hostname.
 
-To add a second link as fallback, you can select the 'Advanced' checkbox and
-choose an additional network interface (Link 1, see also
-xref:pvecm_redundancy[Corosync Redundancy]).
+As of {pve} 6.2, up to 8 fallback links can be added to a cluster. To add a
+redundant link, click the 'Add' button and select a link number and IP address
+from the respective fields. Prior to {pve} 6.2, to add a second link as
+fallback, you can select the 'Advanced' checkbox and choose an additional
+network interface (Link 1, see also xref:pvecm_redundancy[Corosync Redundancy]).
 
 NOTE: Ensure that the network selected for cluster communication is not used
 for any high traffic purposes, like network storage or live-migration.
@@ -210,7 +212,7 @@ Join Node to Cluster via Command Line
 Log in to the node you want to join into an existing cluster via `ssh`.
 
 ----
- hp2# pvecm add IP-ADDRESS-CLUSTER
+ # pvecm add IP-ADDRESS-CLUSTER
 ----
 
 For `IP-ADDRESS-CLUSTER`, use the IP or hostname of an existing cluster node.
@@ -225,14 +227,21 @@ To check the state of the cluster use:
 
 .Cluster status after adding 4 nodes
 ----
-hp2# pvecm status
+ # pvecm status
+Cluster information
+~~~~~~~~~~~~~~~~~~~
+Name:             prod-central
+Config Version:   3
+Transport:        knet
+Secure auth:      on
+
 Quorum information
 ~~~~~~~~~~~~~~~~~~
-Date:             Mon Apr 20 12:30:13 2015
+Date:             Tue Sep 14 11:06:47 2021
 Quorum provider:  corosync_votequorum
 Nodes:            4
 Node ID:          0x00000001
-Ring ID:          1/8
+Ring ID:          1.1a8
 Quorate:          Yes
 
 Votequorum information
@@ -260,7 +269,7 @@ If you only want a list of all nodes, use:
 
 .List nodes in a cluster
 ----
-hp2# pvecm nodes
+ # pvecm nodes
 
 Membership information
 ~~~~~~~~~~~~~~~~~~~~~~
@@ -295,15 +304,23 @@ Remove a Cluster Node
 CAUTION: Read the procedure carefully before proceeding, as it may not be
 what you want or need.
 
-Move all virtual machines from the node. Make sure you have made copies of any
-local data or backups that you want to keep. In the following example, we will
-remove the node hp4 from the cluster.
+Move all virtual machines from the node. Ensure that you have made copies of any
+local data or backups that you want to keep. In addition, make sure to remove
+any scheduled replication jobs to the node to be removed.
+
+CAUTION: Failure to remove replication jobs to a node before removing said node
+will result in the replication job becoming irremovable. Note in particular that
+replication automatically switches direction if a replicated VM is migrated, so
+migrating a replicated VM away from a node that is about to be deleted will
+automatically set up a replication job targeting that node.
+
+In the following example, we will remove the node hp4 from the cluster.
 
 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
 command to identify the node ID to remove:
 
 ----
-hp1# pvecm nodes
+ hp1# pvecm nodes
 
 Membership information
 ~~~~~~~~~~~~~~~~~~~~~~
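The new CAUTION above concerns scheduled replication jobs, which are managed on
the command line with the `pvesr` tool. A minimal sketch of the cleanup step,
not part of the commit itself; the job ID `100-0` is illustrative, use the IDs
actually reported by `pvesr list`:

----
 # pvesr list          # list replication jobs; check each job's target node
 # pvesr delete 100-0  # remove a job that targets the node being deleted
----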
@@ -331,20 +348,18 @@ After powering off the node hp4, we can safely remove it from the cluster.
 Killing node 4
 ----
 
+NOTE: At this point, it is possible that you will receive an error message
+stating `Could not kill node (error = CS_ERR_NOT_EXIST)`. This does not
+signify an actual failure in the deletion of the node, but rather corosync
+failing to kill an offline node. Thus, the message can be safely ignored.
+
 Use `pvecm nodes` or `pvecm status` to check the node list again. It should
 look something like:
 
 ----
 hp1# pvecm status
 
-Quorum information
-~~~~~~~~~~~~~~~~~~
-Date:             Mon Apr 20 12:44:28 2015
-Quorum provider:  corosync_votequorum
-Nodes:            3
-Node ID:          0x00000001
-Ring ID:          1/8
-Quorate:          Yes
+...
 
 Votequorum information
 ~~~~~~~~~~~~~~~~~~~~~~
@@ -1308,8 +1323,7 @@ iface eno1 inet manual
 # public network
 auto vmbr0
 iface vmbr0 inet static
-        address 192.X.Y.57
-        netmask 255.255.250.0
+        address 192.X.Y.57/24
         gateway 192.X.Y.1
         bridge-ports eno1
         bridge-stp off
@@ -1318,14 +1332,12 @@ iface vmbr0 inet static
 # cluster network
 auto eno2
 iface eno2 inet static
-        address 10.1.1.1
-        netmask 255.255.255.0
+        address 10.1.1.1/24
 
 # fast network
 auto eno3
 iface eno3 inet static
-        address 10.1.2.1
-        netmask 255.255.255.0
+        address 10.1.2.1/24
 ----
 
 Here, we will use the network 10.1.2.0/24 as a migration network. For
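The truncated context line above leads into the migration network example that
follows this hunk in pvecm.adoc. As a short, hedged sketch of the setting it
goes on to describe, the dedicated network can be made the default for live
migration in `/etc/pve/datacenter.cfg` (check the exact syntax against the
datacenter.cfg documentation):

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----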