X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pvecm.adoc;h=3d62c0b6be5dafce2c3d8e27a352bb2ea62243d9;hp=ca53e3cd7ba776e39cb2679fbd556260cd46cc8a;hb=94958b8b9230d5b9b5e2e70c481f115b18a5fa0b;hpb=82445c4eec12b8b41e55ffd87beeef3ae4c2bbd1

diff --git a/pvecm.adoc b/pvecm.adoc
index ca53e3c..3d62c0b 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -1,3 +1,4 @@
+[[chapter_pvecm]]
 ifdef::manvolnum[]
 pvecm(1)
 ========
@@ -74,8 +75,14 @@ manually enabled first.
 * We recommend a dedicated NIC for the cluster traffic, especially if
   you use shared storage.
 
-NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
-Proxmox VE 4.0 cluster nodes.
+* The root password of a cluster node is required for adding nodes.
+
+NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.x cluster
+nodes.
+
+NOTE: While mixing {pve} 4.4 and {pve} 5.0 nodes is possible, this is not
+supported as a production configuration and should only be done temporarily
+while upgrading the whole cluster from one major version to another.
 
 
 Preparing Nodes
@@ -85,24 +92,37 @@ First, install {PVE} on all nodes. Make sure that each node is
 installed with the final hostname and IP configuration. Changing the
 hostname and IP is not possible after cluster creation.
 
-Currently the cluster creation has to be done on the console, so you
-need to login via `ssh`.
+Currently, the cluster creation can either be done on the console (login via
+`ssh`) or through the API, for which we have a GUI implementation
+(__Datacenter -> Cluster__).
 
+While it is common practice to list all other node names with their IPs in
+`/etc/hosts`, this is not strictly necessary for a cluster, which normally
+uses multicast, to work. It may still be useful, as it lets you connect from
+one node to another via SSH using the easier to remember node name.
+
+[[pvecm_create_cluster]]
 Create the Cluster
 ------------------
 
 Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
-This name cannot be changed later.
+This name cannot be changed later. The cluster name follows the same rules as
+node names.
 
- hp1# pvecm create YOUR-CLUSTER-NAME
+----
+ hp1# pvecm create CLUSTERNAME
+----
 
-CAUTION: The cluster name is used to compute the default multicast
-address. Please use unique cluster names if you run more than one
-cluster inside your network.
+CAUTION: The cluster name is used to compute the default multicast address.
+Please use unique cluster names if you run more than one cluster inside your
+network. To avoid human confusion, it is also recommended to choose different
+names even if clusters do not share the cluster network.
 
 To check the state of your cluster use:
 
+----
 hp1# pvecm status
+----
 
 Multiple Clusters In Same Network
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -119,12 +139,15 @@ load of the network significantly because multicast packets are only
 delivered to endpoints of the respective member nodes.
 
 
+[[pvecm_join_node_to_cluster]]
 Adding Nodes to the Cluster
 ---------------------------
 
 Login via `ssh` to the node you want to add.
 
+----
 hp2# pvecm add IP-ADDRESS-CLUSTER
+----
 
 For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
 
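As a concrete illustration of the `pvecm add` call in the hunk above, and not part of the patch itself, joining a second node could look as follows; `192.168.15.91` is only a placeholder for the address of any existing cluster node:

----
hp2# pvecm add 192.168.15.91
hp2# pvecm status
----

After the join finishes, `pvecm status` run on the new node should list it as a member of the cluster.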
@@ -136,7 +159,9 @@ adding the node to the cluster.
 
 To check the state of the cluster:
 
+----
 # pvecm status
+----
 
 .Cluster status after adding 4 nodes
 ----
@@ -169,7 +194,9 @@ Membership information
 
 If you only want the list of all nodes use:
 
+----
 # pvecm nodes
+----
 
 .List nodes in a cluster
 ----
@@ -184,6 +211,7 @@ Membership information
          4          1 hp4
 ----
 
+[[adding-nodes-with-separated-cluster-network]]
 Adding Nodes With Separated Cluster Network
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -237,7 +265,9 @@ it could be difficult to restore a clean cluster state.
 After powering off the node hp4, we can safely remove it from the
 cluster.
 
+----
 hp1# pvecm delnode hp4
+----
 
 If the operation succeeds no output is returned, just check the node
 list again with `pvecm nodes` or `pvecm status`. You should see
@@ -473,6 +503,9 @@ To check if everything is working properly execute:
 systemctl status corosync
 ----
 
+Afterwards, proceed as described in the section
+<<adding-nodes-with-separated-cluster-network>>.
+
 [[separate-cluster-net-after-creation]]
 Separate After Cluster Creation
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
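The hunk above touches the step where the separated cluster network is verified with `systemctl status corosync`. As an illustrative sketch that is not taken from the patch, multicast connectivity on such a network can additionally be tested with `omping`, run in parallel on every node; `hp1 hp2 hp3` are placeholders for your real node names:

----
omping -c 10000 -i 0.001 -F -q hp1 hp2 hp3
----

High packet loss here usually points to a switch with IGMP snooping enabled but no IGMP querier, a condition that would affect corosync in the same way.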