X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pvecm.adoc;h=34e5520884c48353bdf1fa543871b5ebcffa9376;hp=84eb411bde9ec50b2283d0c07bb2f9b7df08da1f;hb=26ca7ff55309331b9b11b10b64fab2d819454909;hpb=ceabe189d9594e11e1e9795ebcf5810b1e346505

diff --git a/pvecm.adoc b/pvecm.adoc
index 84eb411..34e5520 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -6,7 +6,7 @@ include::attributes.txt[]
 NAME
 ----
 
-pvecm - Proxmox VE Cluster Manager
+pvecm - {pve} Cluster Manager
 
 SYNOPSYS
 --------
@@ -23,29 +23,28 @@ Cluster Manager
 include::attributes.txt[]
 endif::manvolnum[]
 
-The {PVE} cluster manager 'pvecm' is a tool to create a group of
-physical servers. Such group is called a *cluster*. We use the
+The {PVE} cluster manager `pvecm` is a tool to create a group of
+physical servers. Such a group is called a *cluster*. We use the
 http://www.corosync.org[Corosync Cluster Engine] for reliable group
-communication, and such cluster can consists of up to 32 physical nodes
+communication, and such clusters can consist of up to 32 physical nodes
 (probably more, dependent on network latency).
 
-'pvecm' can be used to create a new cluster, join nodes to a cluster,
+`pvecm` can be used to create a new cluster, join nodes to a cluster,
 leave the cluster, get status information and do various other cluster
-related tasks. The Proxmox Cluster file system (pmxcfs) is used to
-transparently distribute the cluster configuration to all cluster
+related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
+is used to transparently distribute the cluster configuration to all cluster
 nodes.
 
 Grouping nodes into a cluster has the following advantages:
 
 * Centralized, web based management
 
-* Multi-master clusters: Each node can do all management task
+* Multi-master clusters: each node can do all management tasks
 
-* Proxmox Cluster file system (pmxcfs): Database-driven file system
-  for storing configuration files, replicated in real-time on all
-  nodes using corosync.
+* `pmxcfs`: database-driven file system for storing configuration files,
+  replicated in real-time on all nodes using `corosync`.
 
-* Easy migration of Virtual Machines and Containers between physical
+* Easy migration of virtual machines and containers between physical
   hosts
 
 * Fast deployment
@@ -56,10 +55,10 @@ Grouping nodes into a cluster has the following advantages:
 Requirements
 ------------
 
-* All nodes must be in the same network as corosync uses IP Multicast
+* All nodes must be in the same network as `corosync` uses IP Multicast
   to communicate between nodes (also see
   http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
-  ports 5404and 5405 for cluster communication.
+  ports 5404 and 5405 for cluster communication.
 +
 NOTE: Some switches do not support IP multicast by default and must be
 manually enabled first.
@@ -87,17 +86,20 @@ installed with the final hostname and IP configuration. Changing the
 hostname and IP is not possible after cluster creation.
 
 Currently the cluster creation has to be done on the console, so you
-need to login via 'ssh'.
-
+need to login via `ssh`.
 
 Create the Cluster
 ------------------
 
-Login via 'ssh' to the first Proxmox VE node. Use a unique name for
-your cluster. This name cannot be changed later.
+Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
+This name cannot be changed later.
 
  hp1# pvecm create YOUR-CLUSTER-NAME
 
+CAUTION: The cluster name is used to compute the default multicast
+address. Please use unique cluster names if you run more than one
+cluster inside your network.
+
 To check the state of your cluster use:
 
  hp1# pvecm status
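
The requirements hunk above notes that `corosync` relies on IP multicast on UDP
ports 5404 and 5405. Before running `pvecm create`, it can be worth
sanity-checking that multicast actually works between the future cluster
members. The following is a minimal sketch, not part of the patch above: it
assumes the `omping` package is installed on every node and reuses the example
hosts `hp1`, `hp2` and `hp3` from this document; run the same command on all
nodes in parallel:

----
# one multicast probe per second for about ten minutes, long enough
# to notice IGMP snooping timeouts on the switch
hp1# omping -c 600 -i 1 -q hp1 hp2 hp3
----

If `omping` reports packet loss, fix the switch configuration (IGMP
snooping/querier) before creating the cluster.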
@@ -106,17 +108,17 @@
 Adding Nodes to the Cluster
 ---------------------------
 
-Login via 'ssh' to the node you want to add.
+Login via `ssh` to the node you want to add.
 
  hp2# pvecm add IP-ADDRESS-CLUSTER
 
 For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
 
-CAUTION: A new node cannot hold any VM´s, because you would get
-conflicts about identical VM IDs. Also, all existing configuration is
-overwritten when you join a new node to the cluster. To workaround,
-use vzdump to backup and restore to a different VMID after adding
-the node to the cluster.
+CAUTION: A new node cannot hold any VMs, because you would get
+conflicts about identical VM IDs. Also, all existing configuration in
+`/etc/pve` is overwritten when you join a new node to the cluster. As a
+workaround, use `vzdump` to back up and restore to a different VMID after
+adding the node to the cluster.
 
 To check the state of cluster:
 
@@ -155,7 +157,7 @@ If you only want the list of all nodes use:
 
  # pvecm nodes
 
-.List Nodes in a Cluster
+.List nodes in a cluster
 ----
 hp2# pvecm nodes
 
@@ -178,8 +180,8 @@ not be what you want or need.
 Move all virtual machines from the node. Make sure you have no local
 data or backups you want to keep, or save them accordingly.
 
-Log in to one remaining node via ssh. Issue a 'pvecm nodes' command to
-identify the nodeID:
+Log in to one remaining node via ssh. Issue a `pvecm nodes` command to
+identify the node ID:
 
 ----
 hp1# pvecm status
 
@@ -227,12 +229,12 @@ Membership information
 ----
 
 Log in to one remaining node via ssh. Issue the delete command (here
-deleting node hp4):
+deleting node `hp4`):
 
  hp1# pvecm delnode hp4
 
 If the operation succeeds no output is returned, just check the node
-list again with 'pvecm nodes' or 'pvecm status'. You should see
+list again with `pvecm nodes` or `pvecm status`. You should see
 something like:
 
 ----
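
The CAUTION in the ``Adding Nodes to the Cluster'' hunk above mentions backing
up conflicting guests with `vzdump` and restoring them under a new VMID. A
rough sketch of that workaround, not taken from the patch: the VMIDs `100` and
`101`, the dump directory and the archive name are purely illustrative, and the
actual path depends on your backup storage:

----
# on the joining node, before it enters the cluster
hp2# vzdump 100

# after the join, restore the backup under a free VMID
hp2# qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma 101
----

For containers, `pct restore` takes the place of `qmrestore`.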
@@ -273,11 +275,50 @@ it could be difficult to restore a clean cluster state.
 If, for whatever reason, you want that this server joins the same
 cluster again, you have to
 
-* reinstall pve on it from scratch
+* reinstall {pve} on it from scratch
 
 * then join it, as explained in the previous section.
 
 
+Quorum
+------
+
+{pve} uses a quorum-based technique to provide a consistent state among
+all cluster nodes.
+
+[quote, from Wikipedia, Quorum (distributed computing)]
+____
+A quorum is the minimum number of votes that a distributed transaction
+has to obtain in order to be allowed to perform an operation in a
+distributed system.
+____
+
+In case of network partitioning, state changes require that a
+majority of nodes are online. The cluster switches to read-only mode
+if it loses quorum.
+
+NOTE: {pve} assigns a single vote to each node by default.
+
+
+Cluster Cold Start
+------------------
+
+It is obvious that a cluster is not quorate when all nodes are
+offline. This is a common case after a power failure.
+
+NOTE: It is always a good idea to use an uninterruptible power supply
+(``UPS'', also called ``battery backup'') to avoid this state, especially if
+you want HA.
+
+On node startup, service `pve-manager` is started and waits for
+quorum. Once quorate, it starts all guests which have the `onboot`
+flag set.
+
+When you turn on nodes, or when power comes back after power failure,
+it is likely that some nodes boot faster than others. Please keep in
+mind that guest startup is delayed until you reach quorum.
+
+
 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
 endif::manvolnum[]
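
A closing note on the new Quorum and Cluster Cold Start sections: with the
default of one vote per node, a cluster is quorate while more than half of the
votes are present, i.e. `floor(N/2) + 1` votes for `N` nodes. The four-node
example used throughout this document therefore keeps quorum with one node
down, but turns read-only as soon as two nodes are missing. After a cold start
you can verify that quorum has been reached, for example with the commands
below (output omitted; the exact fields depend on the corosync version):

----
# look for "Quorate: Yes" in the quorum information block
hp1# pvecm status

# the same information straight from corosync
hp1# corosync-quorumtool -s
----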