ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
include::attributes.txt[]
endif::manvolnum[]

The {PVE} cluster manager 'pvecm' is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such a cluster can consist of up to 32 physical
nodes (probably more, depending on network latency).

'pvecm' can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other
cluster-related tasks. The Proxmox Cluster file system (pmxcfs) is used
to transparently distribute the cluster configuration to all cluster
nodes.
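
For orientation, here is a minimal sketch of the typical command flow
described in the sections below; the host names, cluster name, and IP
address are placeholders:

----
# on the first node: create the cluster
hp1# pvecm create YOUR-CLUSTER-NAME

# on each further node: join the cluster
hp2# pvecm add IP-ADDRESS-CLUSTER

# on any node: inspect the cluster
hp1# pvecm status
hp1# pvecm nodes

# on a remaining node: remove a node that was powered off for good
hp1# pvecm delnode hp4
----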

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: Each node can do all management tasks

* Proxmox Cluster file system (pmxcfs): Database-driven file system
  for storing configuration files, replicated in real-time on all
  nodes using corosync.

* Easy migration of Virtual Machines and Containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as corosync uses IP multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication. A quick multicast test
  is sketched below.
+
NOTE: Some switches do not have IP multicast enabled by default, so it
must be turned on manually first.

* Date and time have to be synchronized.

* SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.
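
If you are unsure whether IP multicast works on your network, you can
test it before creating the cluster, for example with 'omping'. The
following is a minimal sketch only; it assumes the 'omping' package is
installed on every node and reuses the example host names from this
document. Run the command on all nodes at roughly the same time; every
node should report multicast responses from all the others:

----
omping -c 600 -i 1 -q hp1 hp2 hp3
----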


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.
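
A quick way to check this on each node is sketched below (the host
name is just an example); the name the node was installed with should
resolve to the IP address that will be used for cluster communication,
for example via an entry in '/etc/hosts':

----
# the host name this node was installed with
hp1# hostname

# it should resolve to the node's cluster IP address
hp1# getent hosts hp1
----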

Currently the cluster creation has to be done on the console, so you
need to log in via 'ssh'.


Create the Cluster
------------------

Log in via 'ssh' to the first Proxmox VE node. Use a unique name for
your cluster. This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

To check the state of your cluster, use:

 hp1# pvecm status


Adding Nodes to the Cluster
---------------------------

Log in via 'ssh' to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER`, use the IP of an existing cluster node.
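
For example, to join hp2 to a cluster whose existing node hp1 has the
address 192.168.15.91 (the example address used in the status output
below), you would run:

 hp2# pvecm add 192.168.15.91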

CAUTION: A new node cannot hold any VMs, because you would get
conflicts due to identical VM IDs. Also, all existing configuration in
'/etc/pve' is overwritten when you join a new node to the cluster. As a
workaround, use vzdump to back up the guests and restore them under
different VMIDs after adding the node to the cluster.
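
A rough sketch of that workaround for a QEMU guest is shown below; the
VMIDs, storage name, and dump file name are hypothetical, and container
backups would be restored with 'pct restore' instead:

----
# before joining: back up the guest while its configuration still exists
hp2# vzdump 100 --storage local --compress lzo

# after joining the cluster: restore it under a new, unused VMID
hp2# qmrestore /var/lib/vz/dump/vzdump-qemu-100-2015_04_20-12_00_00.vma.lzo 200
----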

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes, use:

 # pvecm nodes

.List Nodes in a Cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.

Log in to one remaining node via 'ssh'. Use the 'pvecm status' and
'pvecm nodes' commands to identify the node ID of the node to remove:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91 (local)
0x00000002          1 192.168.15.92
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

IMPORTANT: At this point you must power off the node to be removed and
make sure that it will not power on again (in the network) as it is.

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

Log in to one remaining node via 'ssh'. Issue the delete command (here
deleting node hp4):

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned. Check the node list
again with 'pvecm nodes' or 'pvecm status'. You should see something
like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

IMPORTANT: As mentioned above, it is very important to power off the
node *before* removal, and to make sure that it will *never* power on
again (in the existing cluster network) as it is.

If you power on the node as it is, the cluster will be damaged and it
could be difficult to restore a clean cluster state.

If, for whatever reason, you want this server to join the same cluster
again, you have to:

* reinstall {PVE} on it from scratch

* then join it, as explained in the previous section (sketched below).
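
A brief sketch of that procedure, assuming the reinstalled node is
again called hp4 and that 192.168.15.90 is the address of an existing
cluster node (as in the status output above):

----
# on the freshly reinstalled node, after Proxmox VE is set up again
hp4# pvecm add 192.168.15.90

# on any other cluster node, verify that it shows up again
hp1# pvecm nodes
----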


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]