include::attributes.txt[]

pvecm - Proxmox VE Cluster Manager

include::pvecm.1-synopsis.adoc[]

include::attributes.txt[]

The {PVE} cluster manager 'pvecm' is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such a cluster can consist of up to 32 physical
nodes (probably more, depending on network latency).

'pvecm' can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The Proxmox Cluster file system (pmxcfs) is used to
transparently distribute the cluster configuration to all cluster
nodes.

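For example, the replicated configuration tree is mounted at
'/etc/pve' on every node, so a file created on one node is immediately
visible on all others. A minimal illustration, using the node names
from the examples below:

----
hp1# echo test > /etc/pve/note.txt    # write on one node ...
hp2# cat /etc/pve/note.txt            # ... read it on any other
test
----
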
Grouping nodes into a cluster has the following advantages:

* Centralized, web based management

* Multi-master clusters: each node can do all management tasks

* Proxmox Cluster file system (pmxcfs): database-driven file system
  for storing configuration files, replicated in real-time on all
  nodes using corosync

* Easy migration of Virtual Machines and Containers between physical
  hosts

* Cluster-wide services like firewall and HA

Requirements
~~~~~~~~~~~~

* All nodes must be in the same network as corosync uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). NOTE: Some
  switches do not support IP multicast by default and must be manually
  enabled first. A quick way to verify multicast is sketched below.

* Date and time have to be synchronized (see the check after this
  list).

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability too, for reliable quorum
  you must have at least 3 nodes (all nodes should have the same
  version).

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

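To verify the time requirement, a quick check could look like this (a
sketch; the node names are the ones used in the examples below):

----
hp1# timedatectl | grep -i ntp    # NTP should be enabled and synchronized
hp1# date; ssh hp2 date           # clocks on all nodes should agree
----
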
NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.

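To verify that IP multicast actually works between all nodes, one
option is the 'omping' utility (an assumption: the package must be
installed on every node first). Run the same command on all nodes at
roughly the same time; a loss rate near 0% indicates working
multicast:

----
hp1# omping -c 600 -i 1 -q hp1 hp2 hp3
----
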
Preparing Nodes
~~~~~~~~~~~~~~~

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via 'ssh'.

Create the Cluster
~~~~~~~~~~~~~~~~~~

Log in via 'ssh' to the first Proxmox VE node. Use a unique name for
your cluster. This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

To check the state of your cluster use:

 hp1# pvecm status

Adding Nodes to the Cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Log in via 'ssh' to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP address of an existing cluster
node.

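For example, if an existing node has the IP address 192.168.15.91 (as
in the status output below):

 hp2# pvecm add 192.168.15.91
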
CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. As a workaround, use 'vzdump' to back
up the guests and restore them under different VMIDs after adding the
node to the cluster.

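A minimal sketch of that workaround (the VMIDs 100 and 200 are
hypothetical; 'vzdump' writes to '/var/lib/vz/dump' by default and the
archive name contains a timestamp):

----
# on the new node, before joining: back up and remove the conflicting VM
hp2# vzdump 100
hp2# qm destroy 100

# after joining the cluster: restore the backup under a free VMID
hp2# qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma 200
----

For containers, the counterpart of 'qmrestore' is 'pct restore'.
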
To check the state of the cluster:

.Check Cluster Status
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000002
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

 hp2# pvecm nodes

.List Nodes in a Cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

Remove a Cluster Node
~~~~~~~~~~~~~~~~~~~~~

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.

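For example, moving guests away from the node to be removed (here
hp4; the VMIDs are hypothetical, and live migration with '--online'
requires shared storage):

----
hp4# qm migrate 100 hp1 --online   # live-migrate VM 100 to node hp1
hp4# pct migrate 101 hp2           # move the stopped container 101 to hp2
----
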
Log in to one remaining node via ssh. Issue a 'pvecm status' command
to identify the node ID to remove:

----
hp1# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it is.

After the node is powered off, its entry disappears from the
membership list:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
----

Log in to one remaining node via ssh. Issue the delete command (here
deleting node hp4):

 hp1# pvecm delnode hp4

If the operation succeeds no output is returned; just check the node
list again with 'pvecm nodes' or 'pvecm status'. You should see
something like:

----
hp1# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As said above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same cluster
again, you have to

* reinstall {PVE} on it from scratch

* then join it, as explained in the previous section.

include::pve-copyright.adoc[]