X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pvecm.adoc;h=3d62c0b6be5dafce2c3d8e27a352bb2ea62243d9;hp=491b2ac9cb25a080a651ce85da94a04c22dade12;hb=a75eeddebed864bde81358463958ed1d166935e1;hpb=da6c7dee9c59f7ccaa746a5bc644fc0a4c8c94c1

diff --git a/pvecm.adoc b/pvecm.adoc
index 491b2ac..3d62c0b 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -1,3 +1,4 @@
+[[chapter_pvecm]]
 ifdef::manvolnum[]
 pvecm(1)
 ========
@@ -74,8 +75,14 @@ manually enabled first.
 * We recommend a dedicated NIC for the cluster traffic, especially if
   you use shared storage.
 
-NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
-Proxmox VE 4.0 cluster nodes.
+* The root password of a cluster node is required for adding nodes.
+
+NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
+nodes.
+
+NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, this is not
+supported as a production configuration and should only be done temporarily,
+while upgrading the whole cluster from one major version to another.
 
 
 Preparing Nodes
@@ -85,32 +92,62 @@ First, install {PVE} on all nodes. Make sure that each node is
 installed with the final hostname and IP configuration. Changing the
 hostname and IP is not possible after cluster creation.
 
-Currently the cluster creation has to be done on the console, so you
-need to login via `ssh`.
+Currently the cluster creation can either be done on the console (login via
+`ssh`) or through the API, for which we have a GUI implementation
+(__Datacenter -> Cluster__).
+While it is common practice to reference all other node names in `/etc/hosts`
+with their IP, this is not strictly necessary for a cluster to work, as it
+normally uses multicast. It may still be useful, as you can then connect from
+one node to another via SSH using the easier to remember node name.
+
+[[pvecm_create_cluster]]
 Create the Cluster
 ------------------
 
 Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
-This name cannot be changed later.
+This name cannot be changed later. The cluster name follows the same rules as
+node names.
 
- hp1# pvecm create YOUR-CLUSTER-NAME
+----
+ hp1# pvecm create CLUSTERNAME
+----
 
-CAUTION: The cluster name is used to compute the default multicast
-address. Please use unique cluster names if you run more than one
-cluster inside your network.
+CAUTION: The cluster name is used to compute the default multicast address.
+Please use unique cluster names if you run more than one cluster inside your
+network. To avoid human confusion, it is also recommended to choose different
+names even if clusters do not share the cluster network.
 
 To check the state of your cluster use:
 
+----
  hp1# pvecm status
+----
+
+Multiple Clusters In Same Network
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It is possible to create multiple clusters in the same physical or logical
+network. Each cluster must have a unique name, which is used to generate the
+cluster's multicast group address. As long as no duplicate cluster names are
+configured in one network segment, the different clusters won't interfere with
+each other.
+If multiple clusters operate in a single network, it may be beneficial to set
+up an IGMP querier and enable IGMP snooping in said network. This may reduce
+the load of the network significantly, because multicast packets are only
+delivered to endpoints of the respective member nodes.
+
+[[pvecm_join_node_to_cluster]]
 Adding Nodes to the Cluster
 ---------------------------
 
 Login via `ssh` to the node you want to add.
 
+----
  hp2# pvecm add IP-ADDRESS-CLUSTER
+----
 
 For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
 
@@ -122,7 +159,9 @@ adding the node to the cluster.
 
 To check the state of cluster:
 
+----
  # pvecm status
+----
 
 .Cluster status after adding 4 nodes
 ----
@@ -155,7 +194,9 @@ Membership information
 
 If you only want the list of all nodes use:
 
+----
  # pvecm nodes
+----
 
 .List nodes in a cluster
 ----
@@ -170,6 +211,7 @@ Membership information
          4          1 hp4
 ----
 
+[[adding-nodes-with-separated-cluster-network]]
 Adding Nodes With Separated Cluster Network
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -223,7 +265,9 @@ it could be difficult to restore a clean cluster state.
 
 After powering off the node hp4, we can safely remove it from the cluster.
 
+----
  hp1# pvecm delnode hp4
+----
 
 If the operation succeeds no output is returned, just check the node
 list again with `pvecm nodes` or `pvecm status`. You should see
@@ -406,7 +450,7 @@ omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
 ----
 
 * Ensure that multicast communication works over an extended period of time.
-  This covers up problems where IGMP snooping is activated on the network but
+  This uncovers problems where IGMP snooping is activated on the network but
   no multicast querier is active. This test has a duration of around 10
   minutes.
 +
@@ -444,7 +488,7 @@ Separate On Cluster Creation
 This is possible through the 'ring0_addr' and 'bindnet0_addr' parameter of
 the 'pvecm create' command used for creating a new cluster.
 
-If you have setup a additional NIC with a static address on 10.10.10.1/25
+If you have set up an additional NIC with a static address on 10.10.10.1/25
 and want to send and receive all cluster communication over this interface
 you would execute:
 
@@ -459,6 +503,9 @@ To check if everything is working properly execute:
 systemctl status corosync
 ----
 
+Afterwards, proceed as described in the section to
+<<adding-nodes-with-separated-cluster-network,add a node to the separated cluster network>>.
+
 [[separate-cluster-net-after-creation]]
 Separate After Cluster Creation
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -528,7 +575,7 @@ addresses. You may use plain IP addresses or also hostnames here. If you use
 hostnames ensure that they are resolvable from all nodes.
 
 In my example I want to switch my cluster communication to the 10.10.10.1/25
-network. So I replace all 'ring0_addr' respectively. I also set the bindetaddr
+network. So I replace all 'ring0_addr' respectively. I also set the bindnetaddr
 in the totem section of the config to an address of the new network. It can be
 any address from the subnet configured on the new network interface.
 
@@ -607,6 +654,7 @@ systemctl status corosync
 If corosync runs again correct restart corosync also on all other nodes.
 They will then join the cluster membership one by one on the new network.
 
+[[pvecm_rrp]]
 Redundant Ring Protocol
 ~~~~~~~~~~~~~~~~~~~~~~~
 To avoid a single point of failure you should implement counter measurements.
@@ -708,7 +756,7 @@ stopped on all nodes start it one after the other again.
 Corosync Configuration
 ----------------------
 
-The `/ect/pve/corosync.conf` file plays a central role in {pve} cluster. It
+The `/etc/pve/corosync.conf` file plays a central role in {pve} cluster. It
 controls the cluster member ship and its network. For reading more about it
 check the corosync.conf man page:
 [source,bash]
@@ -846,7 +894,7 @@ NOTE: It is always a good idea to use an uninterruptible power supply
 (``UPS'', also called ``battery backup'') to avoid this state, especially if
 you want HA.
 
-On node startup, service `pve-manager` is started and waits for
+On node startup, the `pve-guests` service is started and waits for
 quorum. Once quorate, it starts all guests which have the `onboot`
 flag set.
 
@@ -876,10 +924,10 @@ xref:pct_migration[Container Migration Chapter]
 Migration Type
 ~~~~~~~~~~~~~~
 
-The migration type defines if the migration data should be sent over a
+The migration type defines if the migration data should be sent over an
 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
 Setting the migration type to insecure means that the RAM content of a
-virtual guest gets also transfered unencrypted, which can lead to
+virtual guest gets also transferred unencrypted, which can lead to
 information disclosure of critical data from inside the guest (for
 example passwords or encryption keys).
 
@@ -928,7 +976,7 @@ dedicated network for migration.
 A network configuration for such a setup might look as follows:
 
 ----
-iface eth0 inet manual
+iface eno1 inet manual
 
 # public network
 auto vmbr0
@@ -936,19 +984,19 @@ iface vmbr0 inet static
         address 192.X.Y.57
         netmask 255.255.250.0
         gateway 192.X.Y.1
-        bridge_ports eth0
+        bridge_ports eno1
         bridge_stp off
         bridge_fd 0
 
 # cluster network
-auto eth1
-iface eth1 inet static
+auto eno2
+iface eno2 inet static
         address 10.1.1.1
         netmask 255.255.255.0
 
 # fast network
-auto eth2
-iface eth2 inet static
+auto eno3
+iface eno3 inet static
         address 10.1.2.1
         netmask 255.255.255.0
 
 ----
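
The ``Separate On Cluster Creation'' hunk above only shows the changed prose around the 'ring0_addr' and 'bindnet0_addr' parameters. As a rough sketch of the command that section describes (not part of the diff; the cluster name is a placeholder, and the exact option spelling and bind address should be checked against `pvecm help create`), creating a cluster bound to the dedicated NIC on 10.10.10.1/25 might look like:

----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1
----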
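
Likewise, the ``Separate After Cluster Creation'' hunk only carries the bindetaddr/bindnetaddr spelling fix. For orientation, here is a minimal sketch (not part of the diff; cluster name, node name, and version number are invented) of the `corosync.conf` pieces that section tells you to edit when moving to the 10.10.10.1/25 network:

----
totem {
  cluster_name: CLUSTERNAME
  config_version: 4           # must be increased on every edit
  version: 2
  interface {
    ringnumber: 0
    bindnetaddr: 10.10.10.1   # an address from the new network
  }
}

nodelist {
  node {
    name: hp1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1    # replaced accordingly for every node
  }
  # ... further node entries ...
}
----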
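
The final hunk ends inside the example `/etc/network/interfaces` listing. As a usage sketch (not shown in the diff; the VM ID and target node are invented), the ``fast network'' from that example could then be selected as the dedicated migration network, either per migration or cluster-wide via the `migration` property in `/etc/pve/datacenter.cfg`:

----
# one-off: online-migrate VM 106 to node hp2 over the fast network
qm migrate 106 hp2 --online --migration_network 10.1.2.0/24

# cluster-wide default, set in /etc/pve/datacenter.cfg:
#   migration: secure,network=10.1.2.0/24
----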