ifdef::manvolnum[]
pvecm({manvolnum})
==================
include::attributes.txt[]

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
include::attributes.txt[]
endif::manvolnum[]
The {PVE} cluster manager 'pvecm' is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such a cluster can consist of up to 32 physical
nodes (probably more, depending on network latency).
'pvecm' can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other
cluster-related tasks. The Proxmox Cluster file system (pmxcfs) is
used to transparently distribute the cluster configuration to all
cluster nodes.
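
As a quick reference, these are the 'pvecm' subcommands used
throughout this chapter (the placeholder arguments are illustrative):

----
pvecm create <clustername>    # create a new cluster
pvecm add <cluster-node-ip>   # join the local node to an existing cluster
pvecm delnode <nodename>      # remove a node from the cluster
pvecm status                  # show quorum and membership details
pvecm nodes                   # list the cluster nodes
----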
Grouping nodes into a cluster has the following advantages:
* Centralized, web based management

* Multi-master clusters: Each node can do all management tasks

* Proxmox Cluster file system (pmxcfs): Database-driven file system
  for storing configuration files, replicated in real-time on all
  nodes using corosync.

* Easy migration of Virtual Machines and Containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA
Requirements
------------

* All nodes must be in the same network, as corosync uses IP multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication. A quick way to verify
  multicast connectivity is sketched after this list.

NOTE: Some switches do not support IP multicast by default and it must
be enabled manually first.
* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.
NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.
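
To verify that multicast actually works between all nodes, one common
check uses 'omping'. This is only a sketch: it assumes the `omping`
package is installed on every node and that `hp1` to `hp3` resolve to
your cluster nodes. Run the same command on all nodes at roughly the
same time; every node should report close to 0% loss for both unicast
and multicast:

----
omping -c 600 -i 1 -q hp1 hp2 hp3
----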
Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.
Currently the cluster creation has to be done on the console, so you
need to log in via 'ssh'.
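
Since the hostname and IP configuration must be final, it is worth
verifying on each node that the hostname resolves to the address the
cluster should use. A minimal check ('hp1' and its address are
illustrative):

----
hp1# hostname --ip-address
192.168.15.91
----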
Create the Cluster
------------------

Log in via 'ssh' to the first Proxmox VE node. Use a unique name for
your cluster. This name cannot be changed later.
 hp1# pvecm create YOUR-CLUSTER-NAME
To check the state of your cluster use:

 hp1# pvecm status
Adding Nodes to the Cluster
---------------------------
Log in via 'ssh' to the node you want to add.
 hp2# pvecm add IP-ADDRESS-CLUSTER
For `IP-ADDRESS-CLUSTER` use the IP address of an existing cluster node.
CAUTION: A new node cannot hold any VMs, because this would cause
conflicts with identical VM IDs. Also, all existing configuration in
'/etc/pve' is overwritten when you join a new node to the cluster. As
a workaround, use 'vzdump' to back up the guests and restore them with
different VM IDs after adding the node to the cluster (see the sketch
after this note).
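
A minimal sketch of that workaround for a qemu VM, assuming the VM on
the joining node has ID 100 and that ID 200 is unused in the cluster
(IDs, storage name and archive path are illustrative):

----
# on the node, before joining: back up VM 100 to local storage
vzdump 100 --storage local --mode stop

# after the node has joined: restore the backup under a new, unique VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma 200
----

For containers, 'pct restore' serves the same purpose.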
To check the state of the cluster:

 # pvecm status
.Cluster status after adding 4 nodes
----
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000002
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----
If you only want a list of all nodes, use:

 # pvecm nodes

.List Nodes in a Cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----
Remove a Cluster Node
---------------------
CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.
Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
Log in to one remaining node via ssh. Issue a 'pvecm nodes' command to
identify the node ID:
----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----
IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it is.
----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
----
Log in to one remaining node via ssh. Issue the delete command (here
deleting node hp4):

 hp1# pvecm delnode hp4
If the operation succeeds, no output is returned; just check the node
list again with 'pvecm nodes' or 'pvecm status'. You should see
something like:
----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----
IMPORTANT: As mentioned above, it is very important to power off the
node *before* removal, and make sure that it will *never* power on
again (in the existing cluster network) as it is.
If you power on the node as it is, the cluster will end up in an
inconsistent state, and it can be difficult to restore a clean cluster
state.
If, for whatever reason, you want this server to join the same cluster
again, you have to

* reinstall {PVE} on it from scratch

* then join it, as explained in the previous section and sketched
  below.
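
Using the hypothetical node names and addresses from the examples
above, and assuming 192.168.15.91 is still a cluster member, that
second step would look like:

----
hp4# pvecm add 192.168.15.91
----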
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]