[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and it must be
enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

* The root password of a cluster node is required for adding nodes.

NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
nodes.

NOTE: While mixing {pve} 4.4 and {pve} 5.0 nodes is possible, it is not
supported as a production configuration and should only be done temporarily,
while upgrading the whole cluster from one major version to another.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

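For example, you can verify the configured hostname and the address it
resolves to with the standard `hostname` utility:

[source,bash]
----
# print the fully qualified hostname and the address it resolves to
hostname --fqdn
hostname --ip-address
----
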
Currently the cluster creation can either be done on the console (login via
`ssh`) or through the API, for which we have a GUI implementation
(__Datacenter -> Cluster__).

While it is common to reference all node names and their IPs in `/etc/hosts`,
this is not strictly necessary for a cluster, which normally uses multicast,
to work. It may be useful, however, as you can then connect from one node to
another via SSH using the easier to remember node names.

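As an illustration, such `/etc/hosts` entries could look like this (host names
and addresses are placeholders; adapt them to your setup):

----
10.10.10.1 hp1.example.org hp1
10.10.10.2 hp2.example.org hp2
----
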
[[pvecm_create_cluster]]
Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later. The cluster name follows the same rules as
node names.

----
hp1# pvecm create CLUSTERNAME
----

CAUTION: The cluster name is used to compute the default multicast address.
Please use unique cluster names if you run more than one cluster inside your
network. To avoid human confusion, it is also recommended to choose different
names even if clusters do not share the cluster network.

To check the state of your cluster use:

----
hp1# pvecm status
----

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network it may be beneficial to set up
an IGMP querier and enable IGMP Snooping in said network. This may reduce the
load of the network significantly because multicast packets are only delivered
to endpoints of the respective member nodes.

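On hosts where the cluster network runs over a Linux bridge, snooping and a
querier can, for example, be toggled via sysfs (a sketch; `vmbr0` is a
placeholder for your bridge, and switch-side settings may be needed too):

[source,bash]
----
# enable IGMP snooping and an IGMP querier on the bridge
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----
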

[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

----
hp2# pvecm add IP-ADDRESS-CLUSTER
----

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore each VM to a different VMID
after adding the node to the cluster.

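A minimal sketch of that backup/restore round trip (VMID 100 and the dump path
are only examples; the archive name depends on the backup timestamp and
compression):

[source,bash]
----
# back up the conflicting VM before joining the node
vzdump 100 -dumpdir /var/lib/vz/dump
# after the join, restore it under a free VMID (here 200)
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma 200
----
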
To check the state of the cluster:

----
# pvecm status
----

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

----
# pvecm nodes
----

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter.


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----


At this point you must power off hp4 and make sure that it will not
power on again (in the network) as it is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, the cluster will end up in a broken
state and it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

----
hp1# pvecm delnode hp4
----

If the operation succeeds no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same cluster
again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across cluster
boundaries. Further, it may also lead to VMID conflicts.

It's suggested that you create a new storage which only the node that you want
to separate has access to. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting this storage up, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails, because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node, and delete all remaining files left
from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_keys' file, which
means the nodes can still connect to each other with public key
authentication. This should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum. For example, a five-node cluster needs at least
three nodes online to stay quorate.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low
overhead, high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes it's **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network
where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it's also an option to use unicast if you really cannot
get multicast to work.

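Switching corosync to unicast is done via the `transport` property in the
`totem` section of corosync.conf (a sketch; edit the file as described in the
<<edit-corosync-conf,edit corosync.conf>> section, increment 'config_version'
and restart corosync afterwards):

----
totem {
  [...]
  transport: udpu
}
----
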
Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It's recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

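For example, a static address for the new NIC could be configured in
`/etc/network/interfaces` like this (interface name and address are
placeholders, matching the 10.10.10.1/25 example below):

----
auto eno2
iface eno2 inet static
    address  10.10.10.1
    netmask  255.255.255.128
----
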
Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames ensure that they are resolvable from all nodes.

In this example we want to switch the cluster communication to the
10.10.10.1/25 network, so we replace all 'ring0_addr' respectively. We also set
the bindnetaddr in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you increased the 'config_version' property the new configuration file
should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now after a final check that all the changed information is correct, we save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section
to learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

[[pvecm_rrp]]
Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network
bonding.

Corosync itself also offers a possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second
totem ring on another network. This network should be physically separated
from the other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The single difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it again one after the other.

Corosync External Vote Support
------------------------------

This section describes a way to deploy an external voter in a {pve} cluster.
When configured, the cluster can sustain more node failures without
violating safety properties of the cluster communication.

For this to work there are two services involved:

* a so-called qdevice daemon which runs on each {pve} node

* an external vote daemon which runs on an independent server.

As a result you can achieve higher availability even in smaller setups (for
example 2+1 nodes).

QDevice Technical Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~

The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem based on an externally running third-party arbitrator's decision.
Its primary use is to allow a cluster to sustain more node failures than
standard quorum rules allow. This can be done safely as the external device
can see all nodes and thus choose only one set of nodes to give its vote.
This will only be done if said set of nodes can have quorum (again) when
receiving the third-party vote.

Currently only 'QDevice Net' is supported as a third-party arbitrator. It is
a daemon which provides a vote to a cluster partition if it can reach the
partition members over the network. It will give only votes to one partition
of a cluster at any time.
It's designed to support multiple clusters and is almost configuration and
state free. New clusters are handled dynamically and no configuration file
is needed on the host running a QDevice.

The only requirements for the external host are that it needs network access
to the cluster and has a corosync-qnetd package available. We provide such a
package for Debian based hosts; other Linux distributions should also have a
package available through their respective package manager.

NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP and thus does not need a multicast capable network between itself and
the cluster. In fact the daemon may run outside of the LAN and can have
longer latencies than 2 ms.


Supported Setups
~~~~~~~~~~~~~~~~

We support QDevices for clusters with an even number of nodes and recommend
it for 2 node clusters, if they should provide higher availability.
For clusters with an odd node count we currently discourage the use of
QDevices. The reason for this is the difference in the votes the QDevice
provides for each cluster type. Even numbered clusters get a single additional
vote, which only increases availability: if the QDevice itself fails we are in
the same situation as with no QDevice at all.

Now, with an odd numbered cluster size the QDevice provides '(N-1)' votes --
where 'N' corresponds to the cluster node count. This difference makes
sense: if we had only one additional vote the cluster could get into a split
brain situation.
This algorithm allows all nodes but one (and naturally the
QDevice itself) to fail.
There are two drawbacks with this:

* If the QNet daemon itself fails, no other node may fail or the cluster
  immediately loses quorum. For example, in a cluster with 15 nodes 7
  could fail before the cluster becomes inquorate. But, if a QDevice is
  configured here and said QDevice fails itself **no single node** of
  the 15 may fail. The QDevice acts almost as a single point of failure in
  this case.

* The fact that all but one node plus QDevice may fail sounds promising at
  first, but this may result in a mass recovery of HA services that would
  overload the single node left. Also a Ceph server will stop providing
  services after only '((N-1)/2)' nodes are online.

If you understand the drawbacks and implications you can decide yourself if
you should use this technology in an odd numbered cluster setup.


QDevice-Net Setup
~~~~~~~~~~~~~~~~~

We recommend to run any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
configured to do so.
The traffic between the daemon and the cluster must be encrypted to ensure a
safe and secure QDevice integration in {pve}.

First install the 'corosync-qnetd' package on your external server and
the 'corosync-qdevice' package on all cluster nodes.

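On Debian based hosts this boils down to (shown with `apt` as an example):

[source,bash]
----
# on the external server
apt install corosync-qnetd

# on all {pve} cluster nodes
apt install corosync-qdevice
----
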
After that, ensure that all your nodes on the cluster are online.

You can now easily set up your QDevice by running the following command on one
of the {pve} nodes:

----
pve# pvecm qdevice setup <QDEVICE-IP>
----

The SSH key from the cluster will be automatically copied to the QDevice. You
might need to enter an SSH password during this step.

After you enter the password and all the steps are successfully completed, you
will see "Done". You can check the status now:

----
pve# pvecm status

...

Votequorum information
~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes    Qdevice Name
    0x00000001      1    A,V,NMW 192.168.22.180 (local)
    0x00000002      1    A,V,NMW 192.168.22.181
    0x00000000      1            Qdevice

----

which means the QDevice is set up.


Frequently Asked Questions
~~~~~~~~~~~~~~~~~~~~~~~~~~

Tie Breaking
^^^^^^^^^^^^

In case of a tie, where two same-sized cluster partitions cannot see each other
but can see the QDevice, the QDevice chooses one of those partitions randomly
and provides a vote to it.

Possible Negative Implications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For clusters with an even node count there are no negative implications when
setting up a QDevice. If it fails to work, you are as good as without a
QDevice at all.

Adding/Deleting Nodes After QDevice Setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you want to add a new node or remove an existing one from a cluster with a
QDevice setup, you need to remove the QDevice first. After that, you can add or
remove nodes normally. Once you have a cluster with an even node count again,
you can set up the QDevice again as described above.

Removing the QDevice
^^^^^^^^^^^^^^^^^^^^

If you used the official `pvecm` tool to add the QDevice, you can remove it
trivially by running:

----
pve# pvecm qdevice remove
----

//Still TODO
//^^^^^^^^^^
//There is still stuff to add here


Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

if the change could be applied automatically. If not, you may have to restart
the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

it means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.

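You can check name resolution on each node, for example with `getent`
('NODENAME' being a placeholder for the ring address in question):

[source,bash]
----
getent hosts NODENAME
----
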

Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. Here it's best to edit
the local copy of the corosync configuration in '/etc/corosync/corosync.conf'
so that corosync can start again. Ensure that on all nodes this configuration
has the same content to avoid split brains. If you are not sure what went
wrong it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines to which interface the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that the use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

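For example, the flag can be set per VM (VMID 100 used for illustration):

[source,bash]
----
qm set 100 --onboot 1
----
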
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest also gets transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and can not guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.

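As a sketch, the type can be overridden for a single migration on the command
line (VMID and node name follow the example below; the `migration_type`
parameter can also be set cluster-wide in `datacenter.cfg`):

----
# qm migrate 106 tre --online --migration_type insecure
----
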

Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in the CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has one,
but only one IP in the respective network.


Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address 192.X.Y.57
    netmask 255.255.255.0
    gateway 192.X.Y.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
    address  10.1.1.1
    netmask  255.255.255.0

# fast network
auto eno3
iface eno3 inet static
    address  10.1.2.1
    netmask  255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]