ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
include::attributes.txt[]
endif::manvolnum[]

The {PVE} cluster manager 'pvecm' is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical
nodes (probably more, depending on network latency).

'pvecm' can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other
cluster-related tasks. The Proxmox Cluster file system (pmxcfs) is
used to transparently distribute the cluster configuration to all
cluster nodes.

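To illustrate what pmxcfs does: the shared configuration is mounted
under '/etc/pve', and a file written there on one node becomes visible
on all other nodes of the cluster. The node names and the file name
below are only examples, assuming a cluster like the one built in the
following sections:

----
# write an example file on one node (the file name is arbitrary)
hp1# echo "hello from hp1" > /etc/pve/example.txt

# read it on another node - pmxcfs has already replicated it
hp2# cat /etc/pve/example.txt
hello from hp1

# remove the example file again
hp1# rm /etc/pve/example.txt
----

The same mechanism keeps files like 'corosync.conf' and the guest
configuration files consistent across all nodes.
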
Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: Each node can do all management tasks

* Proxmox Cluster file system (pmxcfs): Database-driven file system
  for storing configuration files, replicated in real-time on all
  nodes using corosync.

* Easy migration of Virtual Machines and Containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as corosync uses IP multicast
to communicate between nodes (also see
http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
ports 5404 and 5405 for cluster communication. A quick way to verify
that multicast works is sketched at the end of this section.
+
NOTE: Some switches do not support IP multicast by default, so it must
be enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
least three nodes for reliable quorum. All nodes should have the
same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.

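Whether multicast actually works between the prospective cluster nodes
can be tested before creating the cluster, for example with the
'omping' tool (usually not installed by default, available via
'apt-get install omping'). The node names below are examples; run the
same command on all nodes at roughly the same time:

----
# list every node that should become part of the cluster
hp1# omping -c 600 -i 1 -q hp1 hp2 hp3
----

If multicast works, the summary printed at the end should report a
loss rate close to 0% for both unicast and multicast packets.
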

Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with its final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently, cluster creation has to be done on the console, so you need
to log in via 'ssh'.

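A quick sanity check before creating the cluster is to verify on each
node that the hostname resolves to the address you actually want to
use for cluster communication (the names and addresses shown are only
examples):

----
hp1# hostname
hp1
hp1# hostname --ip-address
192.168.15.91
----

If the second command prints a loopback address such as 127.0.1.1, fix
the entry in '/etc/hosts' before creating or joining the cluster.
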

Create the Cluster
------------------

Log in via 'ssh' to the first Proxmox VE node. Use a unique name for
your cluster. This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

To check the state of your cluster use:

 hp1# pvecm status

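If 'pvecm status' complains or hangs, it can help to check whether the
underlying services came up correctly. The unit names below are an
assumption based on a standard {PVE} installation:

----
hp1# systemctl status corosync pve-cluster
hp1# journalctl -b -u corosync -u pve-cluster
----
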

Adding Nodes to the Cluster
---------------------------

Log in via 'ssh' to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. Also, all existing configuration in
'/etc/pve' is overwritten when you join a new node to the cluster. As
a workaround, use 'vzdump' to back up each VM, and restore it under a
different VMID after adding the node to the cluster (see the sketch
below).

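A minimal sketch of that workaround, assuming a single QEMU VM with ID
100 on the node that is about to join and a VMID 1100 that is still
free in the cluster (the backup file name is abbreviated; the real
name contains a timestamp):

----
# on the node to be added, before joining the cluster
hp2# vzdump 100 --dumpdir /root/backup

# after joining, restore the guest under a VMID that is free
# cluster-wide
hp2# qmrestore /root/backup/vzdump-qemu-100-<timestamp>.vma 1100
----

For containers, the corresponding restore command is 'pct restore'.
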
To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

 # pvecm nodes

.List Nodes in a Cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines and containers off the node, for example by
migrating them to other nodes as sketched below. Make sure you have no
local data or backups you want to keep, or save them accordingly.

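A minimal sketch of moving guests away, assuming VM 100 and container
101 currently run on 'hp4' and should move to 'hp3' (the IDs and node
names are examples):

----
# live-migrate a running QEMU VM (shared storage assumed)
hp4# qm migrate 100 hp3 --online

# migrate a container (done offline here)
hp4# pct migrate 101 hp3
----
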
Log in to one of the remaining nodes via ssh. Check the cluster state
and identify the ID of the node you want to remove:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it does not power on again (in the existing cluster
network) as it is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Still on the remaining node, issue the delete command (here deleting
node hp4):

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned. Check the node list
again with 'pvecm nodes' or 'pvecm status'. You should see something
like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As said above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be broken and it
could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same cluster
again, you have to

* reinstall {PVE} on it from scratch

* then join it, as explained in the previous section.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]