bde0e57d | 1 | [[chapter_pvecm]] |
d8742b0c | 2 | ifdef::manvolnum[] |
b2f242ab DM |
3 | pvecm(1) |
4 | ======== | |
5f09af76 DM |
5 | :pve-toplevel: |
6 | ||
d8742b0c DM |
7 | NAME |
8 | ---- | |
9 | ||
74026b8f | 10 | pvecm - Proxmox VE Cluster Manager |
d8742b0c | 11 | |
49a5e11c | 12 | SYNOPSIS |
d8742b0c DM |
13 | -------- |
14 | ||
15 | include::pvecm.1-synopsis.adoc[] | |
16 | ||
17 | DESCRIPTION | |
18 | ----------- | |
19 | endif::manvolnum[] | |
20 | ||
21 | ifndef::manvolnum[] | |
22 | Cluster Manager | |
23 | =============== | |
5f09af76 | 24 | :pve-toplevel: |
194d2f29 | 25 | endif::manvolnum[] |
5f09af76 | 26 | |
65a0aa49 | 27 | The {pve} cluster manager `pvecm` is a tool to create a group of |
8c1189b6 | 28 | physical servers. Such a group is called a *cluster*. We use the |
8a865621 | 29 | http://www.corosync.org[Corosync Cluster Engine] for reliable group |
communication. There's no explicit limit on the number of nodes in a cluster.
31 | In practice, the actual possible node count may be limited by the host and | |
79bb0794 | 32 | network performance. Currently (2021), there are reports of clusters (using |
fdf1dd36 | 33 | high-end enterprise hardware) with over 50 nodes in production. |
8a865621 | 34 | |
8c1189b6 | 35 | `pvecm` can be used to create a new cluster, join nodes to a cluster, |
a37d539f | 36 | leave the cluster, get status information, and do various other cluster-related |
60ed554f | 37 | tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'') |
e300cf7d | 38 | is used to transparently distribute the cluster configuration to all cluster |
8a865621 DM |
39 | nodes. |
40 | ||
41 | Grouping nodes into a cluster has the following advantages: | |
42 | ||
a37d539f | 43 | * Centralized, web-based management |
8a865621 | 44 | |
6d3c0b34 | 45 | * Multi-master clusters: each node can do all management tasks |
8a865621 | 46 | |
a37d539f DW |
47 | * Use of `pmxcfs`, a database-driven file system, for storing configuration |
48 | files, replicated in real-time on all nodes using `corosync` | |
8a865621 | 49 | |
5eba0743 | 50 | * Easy migration of virtual machines and containers between physical |
8a865621 DM |
51 | hosts |
52 | ||
53 | * Fast deployment | |
54 | ||
55 | * Cluster-wide services like firewall and HA | |
56 | ||
57 | ||
58 | Requirements | |
59 | ------------ | |
60 | ||
337a2d42 | 61 | * All nodes must be able to connect to each other via UDP ports 5405-5412 |
a9e7c3aa | 62 | for corosync to work. |
8a865621 | 63 | |
a37d539f | 64 | * Date and time must be synchronized. |
8a865621 | 65 | |
a37d539f | 66 | * An SSH tunnel on TCP port 22 between nodes is required. |
8a865621 | 67 | |
ceabe189 DM |
68 | * If you are interested in High Availability, you need to have at |
69 | least three nodes for reliable quorum. All nodes should have the | |
70 | same version. | |
8a865621 DM |
71 | |
72 | * We recommend a dedicated NIC for the cluster traffic, especially if | |
73 | you use shared storage. | |
74 | ||
a37d539f | 75 | * The root password of a cluster node is required for adding nodes. |
d4a9910f | 76 | |
8e0e0bcf FE |
77 | * Online migration of virtual machines is only supported when nodes have CPUs |
78 | from the same vendor. It might work otherwise, but this is never guaranteed. | |
79 | ||
e4b62d04 TL |
80 | NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster |
81 | nodes. | |
82 | ||
a37d539f DW |
83 | NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is |
84 | not supported as a production configuration and should only be done temporarily, | |
85 | during an upgrade of the whole cluster from one major version to another. | |
8a865621 | 86 | |
a9e7c3aa SR |
87 | NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The |
88 | cluster protocol (corosync) between {pve} 6.x and earlier versions changed | |
89 | fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the | |
90 | upgrade procedure to {pve} 6.0. | |
91 | ||
8a865621 | 92 | |
ceabe189 DM |
93 | Preparing Nodes |
94 | --------------- | |
8a865621 | 95 | |
65a0aa49 | 96 | First, install {pve} on all nodes. Make sure that each node is |
8a865621 DM |
97 | installed with the final hostname and IP configuration. Changing the |
98 | hostname and IP is not possible after cluster creation. | |
99 | ||
a37d539f | 100 | While it's common to reference all node names and their IPs in `/etc/hosts` (or |
a9e7c3aa SR |
101 | make their names resolvable through other means), this is not necessary for a |
cluster to work. It may be useful, however, as you can then connect from one node
to another via SSH, using the easier-to-remember node name (see also
a9e7c3aa | 104 | xref:pvecm_corosync_addresses[Link Address Types]). Note that we always |
a37d539f | 105 | recommend referencing nodes by their IP addresses in the cluster configuration. |
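
For illustration, such `/etc/hosts` entries could look like the following
sketch (node names and addresses are placeholders, matching the examples used
later in this chapter):

----
# /etc/hosts on every node - adapt names and addresses to your setup
192.168.15.91 hp1
192.168.15.92 hp2
192.168.15.93 hp3
----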
a9e7c3aa | 106 | |
9a7396aa | 107 | |
11202f1d | 108 | [[pvecm_create_cluster]] |
6cab1704 TL |
109 | Create a Cluster |
110 | ---------------- | |
111 | ||
112 | You can either create a cluster on the console (login via `ssh`), or through | |
a37d539f | 113 | the API using the {pve} web interface (__Datacenter -> Cluster__). |
8a865621 | 114 | |
6cab1704 TL |
115 | NOTE: Use a unique name for your cluster. This name cannot be changed later. |
116 | The cluster name follows the same rules as node names. | |
3e380ce0 | 117 | |
6cab1704 | 118 | [[pvecm_cluster_create_via_gui]] |
3e380ce0 SR |
119 | Create via Web GUI |
120 | ~~~~~~~~~~~~~~~~~~ | |
121 | ||
24398259 SR |
122 | [thumbnail="screenshot/gui-cluster-create.png"] |
123 | ||
3e380ce0 | 124 | Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster |
a37d539f DW |
125 | name and select a network connection from the drop-down list to serve as the |
126 | main cluster network (Link 0). It defaults to the IP resolved via the node's | |
3e380ce0 SR |
127 | hostname. |
128 | ||
663ae2bf DW |
129 | As of {pve} 6.2, up to 8 fallback links can be added to a cluster. To add a |
130 | redundant link, click the 'Add' button and select a link number and IP address | |
131 | from the respective fields. Prior to {pve} 6.2, to add a second link as | |
132 | fallback, you can select the 'Advanced' checkbox and choose an additional | |
133 | network interface (Link 1, see also xref:pvecm_redundancy[Corosync Redundancy]). | |
3e380ce0 | 134 | |
a37d539f DW |
135 | NOTE: Ensure that the network selected for cluster communication is not used for |
136 | any high traffic purposes, like network storage or live-migration. | |
6cab1704 TL |
137 | While the cluster network itself produces small amounts of data, it is very |
sensitive to latency. Check out the full
xref:pvecm_cluster_network_requirements[cluster network requirements].
140 | ||
141 | [[pvecm_cluster_create_via_cli]] | |
a37d539f DW |
142 | Create via the Command Line |
143 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
3e380ce0 SR |
144 | |
145 | Login via `ssh` to the first {pve} node and run the following command: | |
8a865621 | 146 | |
c15cdfba TL |
147 | ---- |
148 | hp1# pvecm create CLUSTERNAME | |
149 | ---- | |
8a865621 | 150 | |
3e380ce0 | 151 | To check the state of the new cluster use: |
8a865621 | 152 | |
c15cdfba | 153 | ---- |
8a865621 | 154 | hp1# pvecm status |
c15cdfba | 155 | ---- |
8a865621 | 156 | |
a37d539f DW |
157 | Multiple Clusters in the Same Network |
158 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
dd1aa0e0 TL |
159 | |
160 | It is possible to create multiple clusters in the same physical or logical | |
a37d539f DW |
161 | network. In this case, each cluster must have a unique name to avoid possible |
162 | clashes in the cluster communication stack. Furthermore, this helps avoid human | |
163 | confusion by making clusters clearly distinguishable. | |
dd1aa0e0 TL |
164 | |
While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
factors. Different clusters in the same network can compete with each other for
168 | these resources, so it may still make sense to use separate physical network | |
169 | infrastructure for bigger clusters. | |
8a865621 | 170 | |
11202f1d | 171 | [[pvecm_join_node_to_cluster]] |
8a865621 | 172 | Adding Nodes to the Cluster |
ceabe189 | 173 | --------------------------- |
8a865621 | 174 | |
aaf632d5 FE |
175 | CAUTION: All existing configuration in `/etc/pve` is overwritten when joining a |
176 | cluster. In particular, a joining node cannot hold any guests, since guest IDs | |
177 | could otherwise conflict, and the node will inherit the cluster's storage | |
configuration. To join a node with existing guests, as a workaround, you can
179 | create a backup of each guest (using `vzdump`) and restore it under a different | |
180 | ID after joining. If the node's storage layout differs, you will need to re-add | |
181 | the node's storages, and adapt each storage's node restriction to reflect on | |
182 | which nodes the storage is actually available. | |
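
As a rough sketch of this workaround for a single VM, assuming a placeholder
VMID `100`, a new VMID `1100`, and a dump file path that will differ on your
system:

----
# on the node that is going to join, before joining the cluster
vzdump 100 --mode stop --storage local
# (the guest itself must then be removed, as a joining node cannot hold guests)

# after joining, restore the backup under a new, unused VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 1100
----

For containers, `pct restore` can be used instead of `qmrestore` in the same way.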
3e380ce0 | 183 | |
6cab1704 TL |
184 | Join Node to Cluster via GUI |
185 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
3e380ce0 | 186 | |
24398259 SR |
187 | [thumbnail="screenshot/gui-cluster-join-information.png"] |
188 | ||
a37d539f DW |
189 | Log in to the web interface on an existing cluster node. Under __Datacenter -> |
190 | Cluster__, click the *Join Information* button at the top. Then, click on the | |
3e380ce0 SR |
191 | button *Copy Information*. Alternatively, copy the string from the 'Information' |
192 | field manually. | |
193 | ||
24398259 SR |
194 | [thumbnail="screenshot/gui-cluster-join.png"] |
195 | ||
a37d539f | 196 | Next, log in to the web interface on the node you want to add. |
3e380ce0 | 197 | Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the |
6cab1704 TL |
198 | 'Information' field with the 'Join Information' text you copied earlier. |
199 | Most settings required for joining the cluster will be filled out | |
200 | automatically. For security reasons, the cluster password has to be entered | |
201 | manually. | |
3e380ce0 SR |
202 | |
203 | NOTE: To enter all required data manually, you can disable the 'Assisted Join' | |
204 | checkbox. | |
205 | ||
6cab1704 | 206 | After clicking the *Join* button, the cluster join process will start |
a37d539f DW |
207 | immediately. After the node has joined the cluster, its current node certificate |
will be replaced by one signed by the cluster certificate authority (CA).
209 | This means that the current session will stop working after a few seconds. You | |
210 | then might need to force-reload the web interface and log in again with the | |
211 | cluster credentials. | |
3e380ce0 | 212 | |
6cab1704 | 213 | Now your node should be visible under __Datacenter -> Cluster__. |
3e380ce0 | 214 | |
6cab1704 TL |
215 | Join Node to Cluster via Command Line |
216 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
3e380ce0 | 217 | |
a37d539f | 218 | Log in to the node you want to join into an existing cluster via `ssh`. |
8a865621 | 219 | |
c15cdfba | 220 | ---- |
8673c878 | 221 | # pvecm add IP-ADDRESS-CLUSTER |
c15cdfba | 222 | ---- |
8a865621 | 223 | |
a37d539f | 224 | For `IP-ADDRESS-CLUSTER`, use the IP or hostname of an existing cluster node. |
a9e7c3aa | 225 | An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]). |
8a865621 | 226 | |
8a865621 | 227 | |
a9e7c3aa | 228 | To check the state of the cluster use: |
8a865621 | 229 | |
c15cdfba | 230 | ---- |
8a865621 | 231 | # pvecm status |
c15cdfba | 232 | ---- |
8a865621 | 233 | |
ceabe189 | 234 | .Cluster status after adding 4 nodes |
8a865621 | 235 | ---- |
8673c878 DW |
236 | # pvecm status |
237 | Cluster information | |
238 | ~~~~~~~~~~~~~~~~~~~ | |
239 | Name: prod-central | |
240 | Config Version: 3 | |
241 | Transport: knet | |
242 | Secure auth: on | |
243 | ||
8a865621 DM |
244 | Quorum information |
245 | ~~~~~~~~~~~~~~~~~~ | |
8673c878 | 246 | Date: Tue Sep 14 11:06:47 2021 |
8a865621 DM |
247 | Quorum provider: corosync_votequorum |
248 | Nodes: 4 | |
249 | Node ID: 0x00000001 | |
8673c878 | 250 | Ring ID: 1.1a8 |
8a865621 DM |
251 | Quorate: Yes |
252 | ||
253 | Votequorum information | |
254 | ~~~~~~~~~~~~~~~~~~~~~~ | |
255 | Expected votes: 4 | |
256 | Highest expected: 4 | |
257 | Total votes: 4 | |
91f3edd0 | 258 | Quorum: 3 |
8a865621 DM |
259 | Flags: Quorate |
260 | ||
261 | Membership information | |
262 | ~~~~~~~~~~~~~~~~~~~~~~ | |
263 | Nodeid Votes Name | |
264 | 0x00000001 1 192.168.15.91 | |
265 | 0x00000002 1 192.168.15.92 (local) | |
266 | 0x00000003 1 192.168.15.93 | |
267 | 0x00000004 1 192.168.15.94 | |
268 | ---- | |
269 | ||
a37d539f | 270 | If you only want a list of all nodes, use: |
8a865621 | 271 | |
c15cdfba | 272 | ---- |
8a865621 | 273 | # pvecm nodes |
c15cdfba | 274 | ---- |
8a865621 | 275 | |
5eba0743 | 276 | .List nodes in a cluster |
8a865621 | 277 | ---- |
8673c878 | 278 | # pvecm nodes |
8a865621 DM |
279 | |
280 | Membership information | |
281 | ~~~~~~~~~~~~~~~~~~~~~~ | |
282 | Nodeid Votes Name | |
283 | 1 1 hp1 | |
284 | 2 1 hp2 (local) | |
285 | 3 1 hp3 | |
286 | 4 1 hp4 | |
287 | ---- | |
288 | ||
3254bfdd | 289 | [[pvecm_adding_nodes_with_separated_cluster_network]] |
a37d539f | 290 | Adding Nodes with Separated Cluster Network |
e4ec4154 TL |
291 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
292 | ||
a37d539f | 293 | When adding a node to a cluster with a separated cluster network, you need to |
a9e7c3aa | 294 | use the 'link0' parameter to set the nodes address on that network: |
e4ec4154 TL |
295 | |
296 | [source,bash] | |
4d19cb00 | 297 | ---- |
a9e7c3aa | 298 | pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0 |
4d19cb00 | 299 | ---- |
e4ec4154 | 300 | |
a9e7c3aa | 301 | If you want to use the built-in xref:pvecm_redundancy[redundancy] of the |
a37d539f | 302 | Kronosnet transport layer, also use the 'link1' parameter. |
e4ec4154 | 303 | |
a37d539f DW |
304 | Using the GUI, you can select the correct interface from the corresponding |
305 | 'Link X' fields in the *Cluster Join* dialog. | |
8a865621 DM |
306 | |
307 | Remove a Cluster Node | |
ceabe189 | 308 | --------------------- |
8a865621 | 309 | |
a37d539f | 310 | CAUTION: Read the procedure carefully before proceeding, as it may |
8a865621 DM |
311 | not be what you want or need. |
312 | ||
7ec7bcee DW |
313 | Move all virtual machines from the node. Ensure that you have made copies of any |
314 | local data or backups that you want to keep. In addition, make sure to remove | |
315 | any scheduled replication jobs to the node to be removed. | |
316 | ||
317 | CAUTION: Failure to remove replication jobs to a node before removing said node | |
318 | will result in the replication job becoming irremovable. Especially note that | |
319 | replication automatically switches direction if a replicated VM is migrated, so | |
320 | by migrating a replicated VM from a node to be deleted, replication jobs will be | |
321 | set up to that node automatically. | |
322 | ||
323 | In the following example, we will remove the node hp4 from the cluster. | |
8a865621 | 324 | |
e8503c6c EK |
325 | Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes` |
326 | command to identify the node ID to remove: | |
8a865621 DM |
327 | |
328 | ---- | |
8673c878 | 329 | hp1# pvecm nodes |
8a865621 DM |
330 | |
331 | Membership information | |
332 | ~~~~~~~~~~~~~~~~~~~~~~ | |
333 | Nodeid Votes Name | |
334 | 1 1 hp1 (local) | |
335 | 2 1 hp2 | |
336 | 3 1 hp3 | |
337 | 4 1 hp4 | |
338 | ---- | |
339 | ||
e8503c6c | 340 | |
a37d539f DW |
341 | At this point, you must power off hp4 and ensure that it will not power on |
342 | again (in the network) with its current configuration. | |
e8503c6c | 343 | |
a37d539f DW |
344 | IMPORTANT: As mentioned above, it is critical to power off the node |
345 | *before* removal, and make sure that it will *not* power on again | |
346 | (in the existing cluster network) with its current configuration. | |
347 | If you power on the node as it is, the cluster could end up broken, | |
348 | and it could be difficult to restore it to a functioning state. | |
e8503c6c EK |
349 | |
350 | After powering off the node hp4, we can safely remove it from the cluster. | |
8a865621 | 351 | |
c15cdfba | 352 | ---- |
8a865621 | 353 | hp1# pvecm delnode hp4 |
10da5ce1 | 354 | Killing node 4 |
c15cdfba | 355 | ---- |
8a865621 | 356 | |
249fd833 DW |
357 | NOTE: At this point, it is possible that you will receive an error message |
358 | stating `Could not kill node (error = CS_ERR_NOT_EXIST)`. This does not | |
359 | signify an actual failure in the deletion of the node, but rather a failure in | |
360 | corosync trying to kill an offline node. Thus, it can be safely ignored. | |
361 | ||
10da5ce1 DJ |
362 | Use `pvecm nodes` or `pvecm status` to check the node list again. It should |
363 | look something like: | |
8a865621 DM |
364 | |
365 | ---- | |
366 | hp1# pvecm status | |
367 | ||
8673c878 | 368 | ... |
8a865621 DM |
369 | |
370 | Votequorum information | |
371 | ~~~~~~~~~~~~~~~~~~~~~~ | |
372 | Expected votes: 3 | |
373 | Highest expected: 3 | |
374 | Total votes: 3 | |
91f3edd0 | 375 | Quorum: 2 |
8a865621 DM |
376 | Flags: Quorate |
377 | ||
378 | Membership information | |
379 | ~~~~~~~~~~~~~~~~~~~~~~ | |
380 | Nodeid Votes Name | |
381 | 0x00000001 1 192.168.15.90 (local) | |
382 | 0x00000002 1 192.168.15.91 | |
383 | 0x00000003 1 192.168.15.92 | |
384 | ---- | |
385 | ||
a9e7c3aa | 386 | If, for whatever reason, you want this server to join the same cluster again, |
a37d539f | 387 | you have to: |
8a865621 | 388 | |
a37d539f | 389 | * do a fresh install of {pve} on it, |
8a865621 DM |
390 | |
391 | * then join it, as explained in the previous section. | |
d8742b0c | 392 | |
41925ede SR |
393 | NOTE: After removal of the node, its SSH fingerprint will still reside in the |
394 | 'known_hosts' of the other nodes. If you receive an SSH error after rejoining | |
9121b45b TL |
a node with the same IP or hostname, run `pvecm updatecerts` once on the
re-added node to update its fingerprint cluster-wide.
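
For example, after re-joining a node under the same name or IP:

----
# run on the re-added node
pvecm updatecerts
----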
41925ede | 397 | |
38ae8db3 | 398 | [[pvecm_separate_node_without_reinstall]] |
a37d539f | 399 | Separate a Node Without Reinstalling |
555e966b TL |
400 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
401 | ||
402 | CAUTION: This is *not* the recommended method, proceed with caution. Use the | |
a37d539f | 403 | previous method if you're unsure. |
555e966b TL |
404 | |
405 | You can also separate a node from a cluster without reinstalling it from | |
a37d539f DW |
406 | scratch. But after removing the node from the cluster, it will still have |
407 | access to any shared storage. This must be resolved before you start removing | |
555e966b | 408 | the node from the cluster. A {pve} cluster cannot share the exact same |
60ed554f | 409 | storage with another cluster, as storage locking doesn't work over the cluster |
a37d539f | 410 | boundary. Furthermore, it may also lead to VMID conflicts. |
555e966b | 411 | |
It's suggested that you create a new storage, to which only the node that you
want to separate has access. This can be a new export on your NFS or a new Ceph
a37d539f DW |
414 | pool, to name a few examples. It's just important that the exact same storage |
415 | does not get accessed by multiple clusters. After setting up this storage, move | |
416 | all data and VMs from the node to it. Then you are ready to separate the | |
3be22308 | 417 | node from the cluster. |
555e966b | 418 | |
a37d539f DW |
419 | WARNING: Ensure that all shared resources are cleanly separated! Otherwise you |
420 | will run into conflicts and problems. | |
555e966b | 421 | |
a37d539f | 422 | First, stop the corosync and pve-cluster services on the node: |
555e966b | 423 | [source,bash] |
4d19cb00 | 424 | ---- |
555e966b TL |
425 | systemctl stop pve-cluster |
426 | systemctl stop corosync | |
4d19cb00 | 427 | ---- |
555e966b | 428 | |
a37d539f | 429 | Start the cluster file system again in local mode: |
555e966b | 430 | [source,bash] |
4d19cb00 | 431 | ---- |
555e966b | 432 | pmxcfs -l |
4d19cb00 | 433 | ---- |
555e966b TL |
434 | |
435 | Delete the corosync configuration files: | |
436 | [source,bash] | |
4d19cb00 | 437 | ---- |
555e966b | 438 | rm /etc/pve/corosync.conf |
838081cd | 439 | rm -r /etc/corosync/* |
4d19cb00 | 440 | ---- |
555e966b | 441 | |
a37d539f | 442 | You can now start the file system again as a normal service: |
555e966b | 443 | [source,bash] |
4d19cb00 | 444 | ---- |
555e966b TL |
445 | killall pmxcfs |
446 | systemctl start pve-cluster | |
4d19cb00 | 447 | ---- |
555e966b | 448 | |
a37d539f DW |
The node is now separated from the cluster. You can delete it from any
450 | remaining node of the cluster with: | |
555e966b | 451 | [source,bash] |
4d19cb00 | 452 | ---- |
555e966b | 453 | pvecm delnode oldnode |
4d19cb00 | 454 | ---- |
555e966b | 455 | |
a37d539f DW |
456 | If the command fails due to a loss of quorum in the remaining node, you can set |
457 | the expected votes to 1 as a workaround: | |
555e966b | 458 | [source,bash] |
4d19cb00 | 459 | ---- |
555e966b | 460 | pvecm expected 1 |
4d19cb00 | 461 | ---- |
555e966b | 462 | |
96d698db | 463 | And then repeat the 'pvecm delnode' command. |
555e966b | 464 | |
a37d539f DW |
465 | Now switch back to the separated node and delete all the remaining cluster |
466 | files on it. This ensures that the node can be added to another cluster again | |
467 | without problems. | |
555e966b TL |
468 | |
469 | [source,bash] | |
4d19cb00 | 470 | ---- |
555e966b | 471 | rm /var/lib/corosync/* |
4d19cb00 | 472 | ---- |
555e966b TL |
473 | |
474 | As the configuration files from the other nodes are still in the cluster | |
a37d539f DW |
475 | file system, you may want to clean those up too. After making absolutely sure |
476 | that you have the correct node name, you can simply remove the entire | |
477 | directory recursively from '/etc/pve/nodes/NODENAME'. | |
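
For example, a minimal sketch using the node name `oldnode` from the `delnode`
command above:

[source,bash]
----
rm -rf /etc/pve/nodes/oldnode
----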
555e966b | 478 | |
a37d539f DW |
CAUTION: The node's SSH keys will remain in the 'authorized_keys' file. This
480 | means that the nodes can still connect to each other with public key | |
481 | authentication. You should fix this by removing the respective keys from the | |
555e966b | 482 | '/etc/pve/priv/authorized_keys' file. |
d8742b0c | 483 | |
a9e7c3aa | 484 | |
806ef12d DM |
485 | Quorum |
486 | ------ | |
487 | ||
{pve} uses a quorum-based technique to provide a consistent state among
489 | all cluster nodes. | |
490 | ||
491 | [quote, from Wikipedia, Quorum (distributed computing)] | |
492 | ____ | |
493 | A quorum is the minimum number of votes that a distributed transaction | |
494 | has to obtain in order to be allowed to perform an operation in a | |
495 | distributed system. | |
496 | ____ | |
497 | ||
In case of network partitioning, state changes require that a
499 | majority of nodes are online. The cluster switches to read-only mode | |
5eba0743 | 500 | if it loses quorum. |
806ef12d DM |
501 | |
502 | NOTE: {pve} assigns a single vote to each node by default. | |
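
For example, in the four-node cluster shown earlier (one vote per node), a
strict majority of 3 votes is required, which matches the `Quorum: 3` line in
the `pvecm status` output above; a partition containing only 2 of the 4 nodes
would therefore lose quorum and become read-only.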
503 | ||
a9e7c3aa | 504 | |
e4ec4154 TL |
505 | Cluster Network |
506 | --------------- | |
507 | ||
508 | The cluster network is the core of a cluster. All messages sent over it have to | |
a9e7c3aa | 509 | be delivered reliably to all nodes in their respective order. In {pve} this |
a37d539f DW |
510 | part is done by corosync, an implementation of a high performance, low overhead, |
511 | high availability development toolkit. It serves our decentralized configuration | |
512 | file system (`pmxcfs`). | |
e4ec4154 | 513 | |
3254bfdd | 514 | [[pvecm_cluster_network_requirements]] |
e4ec4154 TL |
515 | Network Requirements |
516 | ~~~~~~~~~~~~~~~~~~~~ | |
c43c999f TL |
517 | |
518 | The {pve} cluster stack requires a reliable network with latencies under 5 | |
519 | milliseconds (LAN performance) between all nodes to operate stably. While on | |
520 | setups with a small node count a network with higher latencies _may_ work, this | |
521 | is not guaranteed and gets rather unlikely with more than three nodes and | |
522 | latencies above around 10 ms. | |
523 | ||
The network should not be used heavily by other services; while corosync does
not use much bandwidth, it is sensitive to latency jitter. Ideally, corosync
runs on its own physically separated network. In particular, do not use a shared
network for corosync and storage (except as a potential low-priority fallback
in a xref:pvecm_redundancy[redundant] configuration).
e4ec4154 | 529 | |
a9e7c3aa | 530 | Before setting up a cluster, it is good practice to check if the network is fit |
a37d539f | 531 | for that purpose. To ensure that the nodes can connect to each other on the |
a9e7c3aa SR |
532 | cluster network, you can test the connectivity between them with the `ping` |
533 | tool. | |
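
For example, a quick latency check from one node to another could look like
this (the address is a placeholder for the peer's cluster network address):

----
# check reachability and round-trip times on the prospective cluster network
ping -c 10 192.168.15.92
----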
e4ec4154 | 534 | |
a9e7c3aa SR |
535 | If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically |
536 | be generated - no manual action is required. | |
e4ec4154 | 537 | |
a9e7c3aa SR |
538 | NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0). |
539 | Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster | |
540 | communication, which, for now, only supports regular UDP unicast. | |
e4ec4154 | 541 | |
a9e7c3aa SR |
542 | CAUTION: You can still enable Multicast or legacy unicast by setting your |
543 | transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf], | |
544 | but keep in mind that this will disable all cryptography and redundancy support. | |
545 | This is therefore not recommended. | |
e4ec4154 TL |
546 | |
547 | Separate Cluster Network | |
548 | ~~~~~~~~~~~~~~~~~~~~~~~~ | |
549 | ||
a37d539f DW |
550 | When creating a cluster without any parameters, the corosync cluster network is |
551 | generally shared with the web interface and the VMs' network. Depending on | |
552 | your setup, even storage traffic may get sent over the same network. It's | |
553 | recommended to change that, as corosync is a time-critical, real-time | |
a9e7c3aa | 554 | application. |
e4ec4154 | 555 | |
a37d539f | 556 | Setting Up a New Network |
e4ec4154 TL |
557 | ^^^^^^^^^^^^^^^^^^^^^^^^ |
558 | ||
9ffebff5 | 559 | First, you have to set up a new network interface. It should be on a physically |
e4ec4154 | 560 | separate network. Ensure that your network fulfills the |
3254bfdd | 561 | xref:pvecm_cluster_network_requirements[cluster network requirements]. |
e4ec4154 TL |
562 | |
563 | Separate On Cluster Creation | |
564 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
565 | ||
a9e7c3aa | 566 | This is possible via the 'linkX' parameters of the 'pvecm create' |
a37d539f | 567 | command, used for creating a new cluster. |
e4ec4154 | 568 | |
a9e7c3aa SR |
569 | If you have set up an additional NIC with a static address on 10.10.10.1/25, |
570 | and want to send and receive all cluster communication over this interface, | |
e4ec4154 TL |
571 | you would execute: |
572 | ||
573 | [source,bash] | |
4d19cb00 | 574 | ---- |
a9e7c3aa | 575 | pvecm create test --link0 10.10.10.1 |
4d19cb00 | 576 | ---- |
e4ec4154 | 577 | |
a37d539f | 578 | To check if everything is working properly, execute: |
e4ec4154 | 579 | [source,bash] |
4d19cb00 | 580 | ---- |
e4ec4154 | 581 | systemctl status corosync |
4d19cb00 | 582 | ---- |
e4ec4154 | 583 | |
a9e7c3aa | 584 | Afterwards, proceed as described above to |
3254bfdd | 585 | xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network]. |
82d52451 | 586 | |
3254bfdd | 587 | [[pvecm_separate_cluster_net_after_creation]] |
e4ec4154 TL |
588 | Separate After Cluster Creation |
589 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
590 | ||
a9e7c3aa | 591 | You can do this if you have already created a cluster and want to switch |
e4ec4154 | 592 | its communication to another network, without rebuilding the whole cluster. |
a37d539f | 593 | This change may lead to short periods of quorum loss in the cluster, as nodes |
e4ec4154 TL |
594 | have to restart corosync and come up one after the other on the new network. |
595 | ||
3254bfdd | 596 | Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first. |
a9e7c3aa | 597 | Then, open it and you should see a file similar to: |
e4ec4154 TL |
598 | |
599 | ---- | |
600 | logging { | |
601 | debug: off | |
602 | to_syslog: yes | |
603 | } | |
604 | ||
605 | nodelist { | |
606 | ||
607 | node { | |
608 | name: due | |
609 | nodeid: 2 | |
610 | quorum_votes: 1 | |
611 | ring0_addr: due | |
612 | } | |
613 | ||
614 | node { | |
615 | name: tre | |
616 | nodeid: 3 | |
617 | quorum_votes: 1 | |
618 | ring0_addr: tre | |
619 | } | |
620 | ||
621 | node { | |
622 | name: uno | |
623 | nodeid: 1 | |
624 | quorum_votes: 1 | |
625 | ring0_addr: uno | |
626 | } | |
627 | ||
628 | } | |
629 | ||
630 | quorum { | |
631 | provider: corosync_votequorum | |
632 | } | |
633 | ||
634 | totem { | |
a9e7c3aa | 635 | cluster_name: testcluster |
e4ec4154 | 636 | config_version: 3 |
a9e7c3aa | 637 | ip_version: ipv4-6 |
e4ec4154 TL |
638 | secauth: on |
639 | version: 2 | |
640 | interface { | |
a9e7c3aa | 641 | linknumber: 0 |
e4ec4154 TL |
642 | } |
643 | ||
644 | } | |
645 | ---- | |
646 | ||
a37d539f | 647 | NOTE: `ringX_addr` actually specifies a corosync *link address*. The name "ring" |
a9e7c3aa SR |
648 | is a remnant of older corosync versions that is kept for backwards |
649 | compatibility. | |
650 | ||
a37d539f | 651 | The first thing you want to do is add the 'name' properties in the node entries, |
a9e7c3aa | 652 | if you do not see them already. Those *must* match the node name. |
e4ec4154 | 653 | |
a9e7c3aa SR |
654 | Then replace all addresses from the 'ring0_addr' properties of all nodes with |
655 | the new addresses. You may use plain IP addresses or hostnames here. If you use | |
a37d539f DW |
656 | hostnames, ensure that they are resolvable from all nodes (see also |
657 | xref:pvecm_corosync_addresses[Link Address Types]). | |
e4ec4154 | 658 | |
a37d539f DW |
659 | In this example, we want to switch cluster communication to the |
660 | 10.10.10.1/25 network, so we change the 'ring0_addr' of each node respectively. | |
e4ec4154 | 661 | |
a9e7c3aa | 662 | NOTE: The exact same procedure can be used to change other 'ringX_addr' values |
a37d539f DW |
663 | as well. However, we recommend only changing one link address at a time, so |
664 | that it's easier to recover if something goes wrong. | |
a9e7c3aa SR |
665 | |
666 | After we increase the 'config_version' property, the new configuration file | |
e4ec4154 TL |
667 | should look like: |
668 | ||
669 | ---- | |
e4ec4154 TL |
670 | logging { |
671 | debug: off | |
672 | to_syslog: yes | |
673 | } | |
674 | ||
675 | nodelist { | |
676 | ||
677 | node { | |
678 | name: due | |
679 | nodeid: 2 | |
680 | quorum_votes: 1 | |
681 | ring0_addr: 10.10.10.2 | |
682 | } | |
683 | ||
684 | node { | |
685 | name: tre | |
686 | nodeid: 3 | |
687 | quorum_votes: 1 | |
688 | ring0_addr: 10.10.10.3 | |
689 | } | |
690 | ||
691 | node { | |
692 | name: uno | |
693 | nodeid: 1 | |
694 | quorum_votes: 1 | |
695 | ring0_addr: 10.10.10.1 | |
696 | } | |
697 | ||
698 | } | |
699 | ||
700 | quorum { | |
701 | provider: corosync_votequorum | |
702 | } | |
703 | ||
704 | totem { | |
a9e7c3aa | 705 | cluster_name: testcluster |
e4ec4154 | 706 | config_version: 4 |
a9e7c3aa | 707 | ip_version: ipv4-6 |
e4ec4154 TL |
708 | secauth: on |
709 | version: 2 | |
710 | interface { | |
a9e7c3aa | 711 | linknumber: 0 |
e4ec4154 TL |
712 | } |
713 | ||
714 | } | |
715 | ---- | |
716 | ||
a37d539f DW |
717 | Then, after a final check to see that all changed information is correct, we |
718 | save it and once again follow the | |
719 | xref:pvecm_edit_corosync_conf[edit corosync.conf file] section to bring it into | |
720 | effect. | |
e4ec4154 | 721 | |
a9e7c3aa SR |
722 | The changes will be applied live, so restarting corosync is not strictly |
723 | necessary. If you changed other settings as well, or notice corosync | |
724 | complaining, you can optionally trigger a restart. | |
e4ec4154 TL |
725 | |
726 | On a single node execute: | |
a9e7c3aa | 727 | |
e4ec4154 | 728 | [source,bash] |
4d19cb00 | 729 | ---- |
e4ec4154 | 730 | systemctl restart corosync |
4d19cb00 | 731 | ---- |
e4ec4154 | 732 | |
a37d539f | 733 | Now check if everything is okay: |
e4ec4154 TL |
734 | |
735 | [source,bash] | |
4d19cb00 | 736 | ---- |
e4ec4154 | 737 | systemctl status corosync |
4d19cb00 | 738 | ---- |
e4ec4154 | 739 | |
a37d539f | 740 | If corosync begins to work again, restart it on all other nodes too. |
e4ec4154 TL |
741 | They will then join the cluster membership one by one on the new network. |
742 | ||
3254bfdd | 743 | [[pvecm_corosync_addresses]] |
a37d539f | 744 | Corosync Addresses |
270757a1 SR |
745 | ~~~~~~~~~~~~~~~~~~ |
746 | ||
a9e7c3aa SR |
747 | A corosync link address (for backwards compatibility denoted by 'ringX_addr' in |
748 | `corosync.conf`) can be specified in two ways: | |
270757a1 | 749 | |
a37d539f | 750 | * **IPv4/v6 addresses** can be used directly. They are recommended, since they |
270757a1 SR |
751 | are static and usually not changed carelessly. |
752 | ||
a37d539f | 753 | * **Hostnames** will be resolved using `getaddrinfo`, which means that by |
270757a1 SR |
754 | default, IPv6 addresses will be used first, if available (see also |
755 | `man gai.conf`). Keep this in mind, especially when upgrading an existing | |
756 | cluster to IPv6. | |
757 | ||
a37d539f | 758 | CAUTION: Hostnames should be used with care, since the addresses they |
270757a1 SR |
759 | resolve to can be changed without touching corosync or the node it runs on - |
760 | which may lead to a situation where an address is changed without thinking | |
761 | about implications for corosync. | |
762 | ||
5f318cc0 | 763 | A separate, static hostname specifically for corosync is recommended, if |
270757a1 SR |
764 | hostnames are preferred. Also, make sure that every node in the cluster can |
765 | resolve all hostnames correctly. | |
766 | ||
767 | Since {pve} 5.1, while supported, hostnames will be resolved at the time of | |
a37d539f | 768 | entry. Only the resolved IP is saved to the configuration. |
270757a1 SR |
769 | |
770 | Nodes that joined the cluster on earlier versions likely still use their | |
771 | unresolved hostname in `corosync.conf`. It might be a good idea to replace | |
5f318cc0 | 772 | them with IPs or a separate hostname, as mentioned above. |
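
For example, dedicated corosync hostnames could be maintained as additional
`/etc/hosts` entries on every node (names are placeholders; the addresses match
the separate cluster network used elsewhere in this chapter):

----
# /etc/hosts on every node - stable names used only for corosync
10.10.10.1 corosync-uno
10.10.10.2 corosync-due
10.10.10.3 corosync-tre
----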
270757a1 | 773 | |
e4ec4154 | 774 | |
a9e7c3aa SR |
775 | [[pvecm_redundancy]] |
776 | Corosync Redundancy | |
777 | ------------------- | |
e4ec4154 | 778 | |
a37d539f | 779 | Corosync supports redundant networking via its integrated Kronosnet layer by |
a9e7c3aa SR |
780 | default (it is not supported on the legacy udp/udpu transports). It can be |
781 | enabled by specifying more than one link address, either via the '--linkX' | |
3e380ce0 SR |
782 | parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or |
783 | adding a new node) or by specifying more than one 'ringX_addr' in | |
784 | `corosync.conf`. | |
e4ec4154 | 785 | |
a9e7c3aa SR |
786 | NOTE: To provide useful failover, every link should be on its own |
787 | physical network connection. | |
e4ec4154 | 788 | |
a9e7c3aa SR |
789 | Links are used according to a priority setting. You can configure this priority |
790 | by setting 'knet_link_priority' in the corresponding interface section in | |
5f318cc0 | 791 | `corosync.conf`, or, preferably, using the 'priority' parameter when creating |
a9e7c3aa | 792 | your cluster with `pvecm`: |
e4ec4154 | 793 | |
4d19cb00 | 794 | ---- |
fcf0226e | 795 | # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20 |
4d19cb00 | 796 | ---- |
e4ec4154 | 797 | |
fcf0226e | 798 | This would cause 'link1' to be used first, since it has the higher priority. |
a9e7c3aa SR |
799 | |
800 | If no priorities are configured manually (or two links have the same priority), | |
801 | links will be used in order of their number, with the lower number having higher | |
802 | priority. | |
803 | ||
804 | Even if all links are working, only the one with the highest priority will see | |
a37d539f DW |
805 | corosync traffic. Link priorities cannot be mixed, meaning that links with |
806 | different priorities will not be able to communicate with each other. | |
e4ec4154 | 807 | |
a9e7c3aa | 808 | Since lower priority links will not see traffic unless all higher priorities |
a37d539f DW |
809 | have failed, it becomes a useful strategy to specify networks used for |
810 | other tasks (VMs, storage, etc.) as low-priority links. If worst comes to | |
811 | worst, a higher latency or more congested connection might be better than no | |
a9e7c3aa | 812 | connection at all. |
e4ec4154 | 813 | |
a9e7c3aa SR |
814 | Adding Redundant Links To An Existing Cluster |
815 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
e4ec4154 | 816 | |
a9e7c3aa SR |
817 | To add a new link to a running configuration, first check how to |
818 | xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. | |
e4ec4154 | 819 | |
a9e7c3aa SR |
820 | Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make |
821 | sure that your 'X' is the same for every node you add it to, and that it is | |
822 | unique for each node. | |
823 | ||
824 | Lastly, add a new 'interface', as shown below, to your `totem` | |
a37d539f | 825 | section, replacing 'X' with the link number chosen above. |
a9e7c3aa SR |
826 | |
827 | Assuming you added a link with number 1, the new configuration file could look | |
828 | like this: | |
e4ec4154 TL |
829 | |
830 | ---- | |
a9e7c3aa SR |
831 | logging { |
832 | debug: off | |
833 | to_syslog: yes | |
e4ec4154 TL |
834 | } |
835 | ||
836 | nodelist { | |
a9e7c3aa | 837 | |
e4ec4154 | 838 | node { |
a9e7c3aa SR |
839 | name: due |
840 | nodeid: 2 | |
e4ec4154 | 841 | quorum_votes: 1 |
a9e7c3aa SR |
842 | ring0_addr: 10.10.10.2 |
843 | ring1_addr: 10.20.20.2 | |
e4ec4154 TL |
844 | } |
845 | ||
a9e7c3aa SR |
846 | node { |
847 | name: tre | |
848 | nodeid: 3 | |
e4ec4154 | 849 | quorum_votes: 1 |
a9e7c3aa SR |
850 | ring0_addr: 10.10.10.3 |
851 | ring1_addr: 10.20.20.3 | |
e4ec4154 TL |
852 | } |
853 | ||
a9e7c3aa SR |
854 | node { |
855 | name: uno | |
856 | nodeid: 1 | |
857 | quorum_votes: 1 | |
858 | ring0_addr: 10.10.10.1 | |
859 | ring1_addr: 10.20.20.1 | |
860 | } | |
861 | ||
862 | } | |
863 | ||
864 | quorum { | |
865 | provider: corosync_votequorum | |
866 | } | |
867 | ||
868 | totem { | |
869 | cluster_name: testcluster | |
870 | config_version: 4 | |
871 | ip_version: ipv4-6 | |
872 | secauth: on | |
873 | version: 2 | |
874 | interface { | |
875 | linknumber: 0 | |
876 | } | |
877 | interface { | |
878 | linknumber: 1 | |
879 | } | |
e4ec4154 | 880 | } |
a9e7c3aa | 881 | ---- |
e4ec4154 | 882 | |
a9e7c3aa SR |
883 | The new link will be enabled as soon as you follow the last steps to |
884 | xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not | |
885 | be necessary. You can check that corosync loaded the new link using: | |
e4ec4154 | 886 | |
a9e7c3aa SR |
887 | ---- |
888 | journalctl -b -u corosync | |
e4ec4154 TL |
889 | ---- |
890 | ||
a9e7c3aa SR |
891 | It might be a good idea to test the new link by temporarily disconnecting the |
892 | old link on one node and making sure that its status remains online while | |
893 | disconnected: | |
e4ec4154 | 894 | |
a9e7c3aa SR |
895 | ---- |
896 | pvecm status | |
897 | ---- | |
898 | ||
899 | If you see a healthy cluster state, it means that your new link is being used. | |
e4ec4154 | 900 | |
e4ec4154 | 901 | |
65a0aa49 | 902 | Role of SSH in {pve} Clusters |
9d999d1b | 903 | ----------------------------- |
39aa8892 | 904 | |
65a0aa49 | 905 | {pve} utilizes SSH tunnels for various features. |
39aa8892 | 906 | |
4e8fe2a9 | 907 | * Proxying console/shell sessions (node and guests) |
9d999d1b | 908 | + |
4e8fe2a9 FG |
When using the shell for node B while being connected to node A, the session
connects to a terminal proxy on node A, which is in turn connected to the login
shell on node B via a non-interactive SSH tunnel.
39aa8892 | 912 | |
4e8fe2a9 FG |
913 | * VM and CT memory and local-storage migration in 'secure' mode. |
914 | + | |
a37d539f | 915 | During the migration, one or more SSH tunnel(s) are established between the |
4e8fe2a9 FG |
916 | source and target nodes, in order to exchange migration information and |
917 | transfer memory and disk contents. | |
9d999d1b TL |
918 | |
919 | * Storage replication | |
39aa8892 | 920 | |
9d999d1b TL |
921 | .Pitfalls due to automatic execution of `.bashrc` and siblings |
922 | [IMPORTANT] | |
923 | ==== | |
924 | In case you have a custom `.bashrc`, or similar files that get executed on | |
925 | login by the configured shell, `ssh` will automatically run it once the session | |
is established successfully. This can cause unexpected behavior, as those
commands may be executed with root permissions during any of the operations
described above, which can have problematic side effects!
39aa8892 OB |
929 | |
930 | In order to avoid such complications, it's recommended to add a check in | |
931 | `/root/.bashrc` to make sure the session is interactive, and only then run | |
932 | `.bashrc` commands. | |
933 | ||
934 | You can add this snippet at the beginning of your `.bashrc` file: | |
935 | ||
936 | ---- | |
9d999d1b | 937 | # Early exit if not running interactively to avoid side-effects! |
39aa8892 OB |
938 | case $- in |
939 | *i*) ;; | |
940 | *) return;; | |
941 | esac | |
942 | ---- | |
9d999d1b | 943 | ==== |
39aa8892 OB |
944 | |
945 | ||
c21d2cbe OB |
946 | Corosync External Vote Support |
947 | ------------------------------ | |
948 | ||
949 | This section describes a way to deploy an external voter in a {pve} cluster. | |
950 | When configured, the cluster can sustain more node failures without | |
951 | violating safety properties of the cluster communication. | |
952 | ||
a37d539f | 953 | For this to work, there are two services involved: |
c21d2cbe | 954 | |
a37d539f | 955 | * A QDevice daemon which runs on each {pve} node |
c21d2cbe | 956 | |
a37d539f | 957 | * An external vote daemon which runs on an independent server |
c21d2cbe | 958 | |
a37d539f | 959 | As a result, you can achieve higher availability, even in smaller setups (for |
c21d2cbe OB |
960 | example 2+1 nodes). |
961 | ||
962 | QDevice Technical Overview | |
963 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
964 | ||
5f318cc0 | 965 | The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster |
a37d539f DW |
966 | node. It provides a configured number of votes to the cluster's quorum |
967 | subsystem, based on an externally running third-party arbitrator's decision. | |
c21d2cbe OB |
968 | Its primary use is to allow a cluster to sustain more node failures than |
969 | standard quorum rules allow. This can be done safely as the external device | |
970 | can see all nodes and thus choose only one set of nodes to give its vote. | |
a37d539f | 971 | This will only be done if said set of nodes can have quorum (again) after |
c21d2cbe OB |
972 | receiving the third-party vote. |
973 | ||
a37d539f DW |
974 | Currently, only 'QDevice Net' is supported as a third-party arbitrator. This is |
975 | a daemon which provides a vote to a cluster partition, if it can reach the | |
976 | partition members over the network. It will only give votes to one partition | |
c21d2cbe OB |
977 | of a cluster at any time. |
978 | It's designed to support multiple clusters and is almost configuration and | |
979 | state free. New clusters are handled dynamically and no configuration file | |
980 | is needed on the host running a QDevice. | |
981 | ||
a37d539f DW |
982 | The only requirements for the external host are that it needs network access to |
983 | the cluster and to have a corosync-qnetd package available. We provide a package | |
984 | for Debian based hosts, and other Linux distributions should also have a package | |
c21d2cbe OB |
985 | available through their respective package manager. |
986 | ||
c43c999f TL |
987 | NOTE: Unlike corosync itself, a QDevice connects to the cluster over TCP/IP. |
The daemon can also run outside the LAN of the cluster and isn't limited to the
low latency requirements of corosync.
c21d2cbe OB |
990 | |
991 | Supported Setups | |
992 | ~~~~~~~~~~~~~~~~ | |
993 | ||
We support QDevices for clusters with an even number of nodes and recommend
it for 2-node clusters, if higher availability is required.
a37d539f DW |
996 | For clusters with an odd node count, we currently discourage the use of |
997 | QDevices. The reason for this is the difference in the votes which the QDevice | |
998 | provides for each cluster type. Even numbered clusters get a single additional | |
999 | vote, which only increases availability, because if the QDevice | |
1000 | itself fails, you are in the same position as with no QDevice at all. | |
1001 | ||
1002 | On the other hand, with an odd numbered cluster size, the QDevice provides | |
1003 | '(N-1)' votes -- where 'N' corresponds to the cluster node count. This | |
1004 | alternative behavior makes sense; if it had only one additional vote, the | |
1005 | cluster could get into a split-brain situation. This algorithm allows for all | |
1006 | nodes but one (and naturally the QDevice itself) to fail. However, there are two | |
1007 | drawbacks to this: | |
c21d2cbe OB |
1008 | |
1009 | * If the QNet daemon itself fails, no other node may fail or the cluster | |
a37d539f | 1010 | immediately loses quorum. For example, in a cluster with 15 nodes, 7 |
c21d2cbe | 1011 | could fail before the cluster becomes inquorate. But, if a QDevice is |
a37d539f DW |
1012 | configured here and it itself fails, **no single node** of the 15 may fail. |
1013 | The QDevice acts almost as a single point of failure in this case. | |
c21d2cbe | 1014 | |
a37d539f DW |
1015 | * The fact that all but one node plus QDevice may fail sounds promising at |
1016 | first, but this may result in a mass recovery of HA services, which could | |
1017 | overload the single remaining node. Furthermore, a Ceph server will stop | |
1018 | providing services if only '((N-1)/2)' nodes or less remain online. | |
c21d2cbe | 1019 | |
a37d539f DW |
1020 | If you understand the drawbacks and implications, you can decide yourself if |
1021 | you want to use this technology in an odd numbered cluster setup. | |
c21d2cbe | 1022 | |
c21d2cbe OB |
1023 | QDevice-Net Setup |
1024 | ~~~~~~~~~~~~~~~~~ | |
1025 | ||
a37d539f | 1026 | We recommend running any daemon which provides votes to corosync-qdevice as an |
7c039095 | 1027 | unprivileged user. {pve} and Debian provide a package which is already |
e34c3e91 | 1028 | configured to do so. |
c21d2cbe | 1029 | The traffic between the daemon and the cluster must be encrypted to ensure a |
a37d539f | 1030 | safe and secure integration of the QDevice in {pve}. |
c21d2cbe | 1031 | |
41a37193 DJ |
1032 | First, install the 'corosync-qnetd' package on your external server |
1033 | ||
1034 | ---- | |
1035 | external# apt install corosync-qnetd | |
1036 | ---- | |
1037 | ||
1038 | and the 'corosync-qdevice' package on all cluster nodes | |
1039 | ||
1040 | ---- | |
1041 | pve# apt install corosync-qdevice | |
1042 | ---- | |
c21d2cbe | 1043 | |
a37d539f | 1044 | After doing this, ensure that all the nodes in the cluster are online. |
c21d2cbe | 1045 | |
a37d539f | 1046 | You can now set up your QDevice by running the following command on one |
c21d2cbe OB |
1047 | of the {pve} nodes: |
1048 | ||
1049 | ---- | |
1050 | pve# pvecm qdevice setup <QDEVICE-IP> | |
1051 | ---- | |
1052 | ||
1b80fbaa DJ |
1053 | The SSH key from the cluster will be automatically copied to the QDevice. |
1054 | ||
1055 | NOTE: Make sure that the SSH configuration on your external server allows root | |
1056 | login via password, if you are asked for a password during this step. | |
16162db8 OB |
1057 | If you receive an error such as 'Host key verification failed.' at this |
1058 | stage, running `pvecm updatecerts` could fix the issue. | |
c21d2cbe | 1059 | |
a37d539f DW |
1060 | After you enter the password and all the steps have successfully completed, you |
1061 | will see "Done". You can verify that the QDevice has been set up with: | |
c21d2cbe OB |
1062 | |
1063 | ---- | |
1064 | pve# pvecm status | |
1065 | ||
1066 | ... | |
1067 | ||
1068 | Votequorum information | |
1069 | ~~~~~~~~~~~~~~~~~~~~~ | |
1070 | Expected votes: 3 | |
1071 | Highest expected: 3 | |
1072 | Total votes: 3 | |
1073 | Quorum: 2 | |
1074 | Flags: Quorate Qdevice | |
1075 | ||
1076 | Membership information | |
1077 | ~~~~~~~~~~~~~~~~~~~~~~ | |
1078 | Nodeid Votes Qdevice Name | |
1079 | 0x00000001 1 A,V,NMW 192.168.22.180 (local) | |
1080 | 0x00000002 1 A,V,NMW 192.168.22.181 | |
1081 | 0x00000000 1 Qdevice | |
1082 | ||
1083 | ---- | |
1084 | ||
c21d2cbe | 1085 | |
c21d2cbe OB |
1086 | Frequently Asked Questions |
1087 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
1088 | ||
1089 | Tie Breaking | |
1090 | ^^^^^^^^^^^^ | |
1091 | ||
00821894 | 1092 | In case of a tie, where two same-sized cluster partitions cannot see each other |
a37d539f DW |
1093 | but can see the QDevice, the QDevice chooses one of those partitions randomly |
1094 | and provides a vote to it. | |
c21d2cbe | 1095 | |
d31de328 TL |
1096 | Possible Negative Implications |
1097 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
1098 | ||
a37d539f DW |
1099 | For clusters with an even node count, there are no negative implications when |
1100 | using a QDevice. If it fails to work, it is the same as not having a QDevice | |
1101 | at all. | |
d31de328 | 1102 | |
870c2817 OB |
1103 | Adding/Deleting Nodes After QDevice Setup |
1104 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
d31de328 TL |
1105 | |
1106 | If you want to add a new node or remove an existing one from a cluster with a | |
00821894 TL |
1107 | QDevice setup, you need to remove the QDevice first. After that, you can add or |
1108 | remove nodes normally. Once you have a cluster with an even node count again, | |
a37d539f | 1109 | you can set up the QDevice again as described previously. |
870c2817 OB |
1110 | |
1111 | Removing the QDevice | |
1112 | ^^^^^^^^^^^^^^^^^^^^ | |
1113 | ||
00821894 | 1114 | If you used the official `pvecm` tool to add the QDevice, you can remove it |
a37d539f | 1115 | by running: |
870c2817 OB |
1116 | |
1117 | ---- | |
1118 | pve# pvecm qdevice remove | |
1119 | ---- | |
d31de328 | 1120 | |
51730d56 TL |
1121 | //Still TODO |
1122 | //^^^^^^^^^^ | |
a9e7c3aa | 1123 | //There is still stuff to add here |
c21d2cbe OB |
1124 | |
1125 | ||
e4ec4154 TL |
1126 | Corosync Configuration |
1127 | ---------------------- | |
1128 | ||
a9e7c3aa SR |
1129 | The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It |
1130 | controls the cluster membership and its network. | |
1131 | For further information about it, check the corosync.conf man page: | |
e4ec4154 | 1132 | [source,bash] |
4d19cb00 | 1133 | ---- |
e4ec4154 | 1134 | man corosync.conf |
4d19cb00 | 1135 | ---- |
e4ec4154 | 1136 | |
a37d539f | 1137 | For node membership, you should always use the `pvecm` tool provided by {pve}. |
e4ec4154 TL |
1138 | You may have to edit the configuration file manually for other changes. |
1139 | Here are a few best practice tips for doing this. | |
1140 | ||
3254bfdd | 1141 | [[pvecm_edit_corosync_conf]] |
e4ec4154 TL |
1142 | Edit corosync.conf |
1143 | ~~~~~~~~~~~~~~~~~~ | |
1144 | ||
a9e7c3aa SR |
1145 | Editing the corosync.conf file is not always very straightforward. There are |
1146 | two on each cluster node, one in `/etc/pve/corosync.conf` and the other in | |
e4ec4154 TL |
1147 | `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will |
1148 | propagate the changes to the local one, but not vice versa. | |
1149 | ||
a37d539f DW |
1150 | The configuration will get updated automatically, as soon as the file changes. |
1151 | This means that changes which can be integrated in a running corosync will take | |
1152 | effect immediately. Thus, you should always make a copy and edit that instead, | |
1153 | to avoid triggering unintended changes when saving the file while editing. | |
e4ec4154 TL |
1154 | |
1155 | [source,bash] | |
4d19cb00 | 1156 | ---- |
e4ec4154 | 1157 | cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new |
4d19cb00 | 1158 | ---- |
e4ec4154 | 1159 | |
a37d539f DW |
1160 | Then, open the config file with your favorite editor, such as `nano` or |
1161 | `vim.tiny`, which come pre-installed on every {pve} node. | |
e4ec4154 | 1162 | |
a37d539f | 1163 | NOTE: Always increment the 'config_version' number after configuration changes; |
e4ec4154 TL |
1164 | omitting this can lead to problems. |
1165 | ||
a37d539f | 1166 | After making the necessary changes, create another copy of the current working |
e4ec4154 | 1167 | configuration file. This serves as a backup if the new configuration fails to |
a37d539f | 1168 | apply or causes other issues. |
e4ec4154 TL |
1169 | |
1170 | [source,bash] | |
4d19cb00 | 1171 | ---- |
e4ec4154 | 1172 | cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak |
4d19cb00 | 1173 | ---- |
e4ec4154 | 1174 | |
a37d539f | 1175 | Then replace the old configuration file with the new one: |
e4ec4154 | 1176 | [source,bash] |
4d19cb00 | 1177 | ---- |
e4ec4154 | 1178 | mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf |
4d19cb00 | 1179 | ---- |
e4ec4154 | 1180 | |
a37d539f DW |
1181 | You can check if the changes could be applied automatically, using the following |
1182 | commands: | |
e4ec4154 | 1183 | [source,bash] |
4d19cb00 | 1184 | ---- |
e4ec4154 TL |
1185 | systemctl status corosync |
1186 | journalctl -b -u corosync | |
4d19cb00 | 1187 | ---- |
e4ec4154 | 1188 | |
a37d539f | 1189 | If the changes could not be applied automatically, you may have to restart the |
e4ec4154 TL |
1190 | corosync service via: |
1191 | [source,bash] | |
4d19cb00 | 1192 | ---- |
e4ec4154 | 1193 | systemctl restart corosync |
4d19cb00 | 1194 | ---- |
e4ec4154 | 1195 | |
a37d539f | 1196 | On errors, check the troubleshooting section below. |
e4ec4154 TL |
1197 | |
1198 | Troubleshooting | |
1199 | ~~~~~~~~~~~~~~~ | |
1200 | ||
1201 | Issue: 'quorum.expected_votes must be configured' | |
1202 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
1203 | ||
1204 | When corosync starts to fail and you get the following message in the system log: | |
1205 | ||
1206 | ---- | |
1207 | [...] | |
1208 | corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize. | |
1209 | corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason | |
1210 | 'configuration error: nodelist or quorum.expected_votes must be configured!' | |
1211 | [...] | |
1212 | ---- | |
1213 | ||
a37d539f | 1214 | It means that the hostname you set for a corosync 'ringX_addr' in the |
e4ec4154 TL |
1215 | configuration could not be resolved. |
1216 | ||
e4ec4154 TL |
1217 | Write Configuration When Not Quorate |
1218 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
1219 | ||
a37d539f DW |
1220 | If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you |
1221 | understand what you are doing, use: | |
e4ec4154 | 1222 | [source,bash] |
4d19cb00 | 1223 | ---- |
e4ec4154 | 1224 | pvecm expected 1 |
4d19cb00 | 1225 | ---- |
e4ec4154 TL |
1226 | |
1227 | This sets the expected vote count to 1 and makes the cluster quorate. You can | |
a37d539f | 1228 | then fix your configuration, or revert it back to the last working backup. |
e4ec4154 | 1229 | |
a37d539f DW |
1230 | This is not enough if corosync cannot start anymore. In that case, it is best to |
1231 | edit the local copy of the corosync configuration in | |
1232 | '/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on | |
1233 | all nodes, this configuration has the same content to avoid split-brain | |
1234 | situations. | |


[[pvecm_corosync_conf_glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different link addresses for the Kronosnet connections between
nodes.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
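
Whether a guest is started automatically once the node is quorate is controlled
by its `onboot` option, which you can set, for example, on the command line
(the VMID and CTID below are placeholders):

[source,bash]
----
qm set 101 --onboot 1     # virtual machine
pct set 200 --onboot 1    # container
----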

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes will boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


[[pvecm_next_id_range]]
Guest VMID Auto-Selection
-------------------------

When creating new guests, the web interface will automatically ask the backend
for a free VMID. The default range for searching is `100` to `1000000` (lower
than the maximal allowed VMID enforced by the schema).

Sometimes admins want to allocate new VMIDs in a separate range, for example to
easily separate temporary VMs from those that get a VMID picked manually. Other
times, it is simply desirable to have VMIDs with a stable length, for which
setting the lower boundary to, for example, `100000` provides much more room.

To accommodate this use case, you can set either the lower, the upper, or both
boundaries via the `datacenter.cfg` configuration file, which can be edited in
the web interface under 'Datacenter' -> 'Options'.

NOTE: The range is only used for the next-id API call, so it isn't a hard
limit.
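
For example, assuming the boundaries are exposed as a `next-id` property of
`/etc/pve/datacenter.cfg` (check the `datacenter.cfg` manual page of your
installation for the exact option name), restricting newly suggested VMIDs to
six-digit numbers could look like this:

----
next-id: lower=100000,upper=200000
----

You can then query which VMID would be handed out next via the API, for example
with `pvesh get /cluster/nextid`.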

Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines whether the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to `insecure` means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.

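For example, to allow an unencrypted transfer for a single migration on a fully
trusted network, you can pass the type directly on the command line, analogous
to the `migration_network` example below (the VMID and target node name are
placeholders taken from that example):

----
# qm migrate 106 tre --online --migration_type insecure
----
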
Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal, both because
sensitive cluster traffic can be disrupted and because this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for all migration traffic. In addition to the memory content,
this also affects the storage traffic for offline migrations.

The migration network is set as a network using CIDR notation. This
has the advantage that you don't have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has exactly one
IP in the respective network.

Example
^^^^^^^

We assume that we have a three-node setup, with three separate
networks. One for public communication with the Internet, one for
cluster communication, and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
        address 192.X.Y.57/24
        gateway 192.X.Y.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# cluster network
auto eno2
iface eno2 inet static
        address 10.1.1.1/24

# fast network
auto eno3
iface eno3 inet static
        address 10.1.2.1/24
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
is set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]