ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
include::attributes.txt[]
endif::manvolnum[]

The {PVE} cluster manager 'pvecm' is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such a cluster can consist of up to 32 physical
nodes (probably more, depending on network latency).

'pvecm' can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The Proxmox Cluster file system (pmxcfs) is used to
transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web based management

* Multi-master clusters: Each node can do all management tasks

* Proxmox Cluster file system (pmxcfs): Database-driven file system
  for storing configuration files, replicated in real-time on all
  nodes using corosync.

* Easy migration of Virtual Machines and Containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as corosync uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). NOTE: Some
  switches do not support IP multicast by default and must be manually
  enabled first. See the multicast test example after this list.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between the nodes is used.

* If you are also interested in High Availability, you need at least
  three nodes for reliable quorum (all nodes should have the same
  version).

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

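
Before creating the cluster you can verify that multicast actually
works between all nodes. A minimal sketch, assuming the 'omping'
package is installed on every node and using the example addresses
from the status output further below (replace them with your own);
run the same command on all nodes at roughly the same time:

----
# apt-get install omping
# omping -c 600 -i 1 -q 192.168.15.91 192.168.15.92 192.168.15.93
----

If the multicast part of the result reports close to 100% loss,
multicast is most likely blocked by a switch and must be enabled
there first.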

NOTE: It is not possible to mix Proxmox VE 3.x (and earlier) nodes
with Proxmox VE 4.0 cluster nodes.


Cluster Setup
-------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.
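
A quick way to double-check this before creating the cluster (a
sketch only; the node name 'hp1' matches the examples used in the
rest of this chapter):

----
hp1# hostname --fqdn
hp1# grep hp1 /etc/hosts
hp1# ip -4 addr show
----

The host name should resolve to the IP address you want the cluster
to use, and that address should be configured on the node.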

Currently the cluster creation has to be done on the console, so you
need to log in via 'ssh'.


Create the Cluster
~~~~~~~~~~~~~~~~~~

Log in via 'ssh' to the first Proxmox VE node. Use a unique name for
your cluster. This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

To check the state of your cluster, use:

 hp1# pvecm status

Adding Nodes to the Cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Log in via 'ssh' to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP of an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. As a workaround, use vzdump to back
up each VM, then restore it under a different VMID after adding the
node to the cluster.
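
A minimal sketch of that workaround, assuming the new node still holds
a single VM with the (hypothetical) ID 100 and that /mnt/backup has
enough space for the dump; the exact archive name is the one vzdump
prints when it finishes:

----
# 100, 200 and /mnt/backup are example values, adjust to your setup
hp2# vzdump 100 --dumpdir /mnt/backup --mode stop

# after the node has joined the cluster, restore under a free VMID
hp2# qmrestore /mnt/backup/vzdump-qemu-100-<timestamp>.vma 200
----

For containers, use 'pct restore' instead of 'qmrestore'.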

To check the state of the cluster:

 # pvecm status

.Check Cluster Status
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want a list of all nodes, use:

 # pvecm nodes

.List Nodes in a Cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----


Remove a Cluster Node
~~~~~~~~~~~~~~~~~~~~~

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
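
A minimal sketch for moving the guests off the node, assuming the node
to be removed is hp4 and it holds a VM with the (hypothetical) ID 100
and a container with the (hypothetical) ID 101, both to be moved to
hp1:

----
# 100 and 101 are example IDs, adjust to your setup
hp4# qm migrate 100 hp1 --online
hp4# pct migrate 101 hp1
----

Live migration generally requires shared storage; otherwise migrate
the guests offline or back them up and restore them on another node.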

Log in to one of the remaining nodes via 'ssh'. Check the cluster
state and identify the node ID of the node you want to remove:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it does not power on again (in the existing cluster
network) as it is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Log in to one of the remaining nodes via 'ssh'. Issue the delete
command (here deleting node hp4):

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned. Check the node list
again with 'pvecm nodes' or 'pvecm status'. You should see something
like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As said above, it is very important to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.

If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same cluster
again, you have to

* reinstall {PVE} on it from scratch

* then join it, as explained in the previous section.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]