1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {pve} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication. There's no explicit limit for the number of nodes in a cluster.
31 In practice, the actual possible node count may be limited by the host and
32 network performance. Currently (2021), there are reports of clusters (using
33 high-end enterprise hardware) with over 50 nodes in production.
34
35 `pvecm` can be used to create a new cluster, join nodes to a cluster,
36 leave the cluster, get status information, and do various other cluster-related
37 tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
38 is used to transparently distribute the cluster configuration to all cluster
39 nodes.
40
41 Grouping nodes into a cluster has the following advantages:
42
43 * Centralized, web-based management
44
45 * Multi-master clusters: each node can do all management tasks
46
47 * Use of `pmxcfs`, a database-driven file system, for storing configuration
48 files, replicated in real-time on all nodes using `corosync`
49
50 * Easy migration of virtual machines and containers between physical
51 hosts
52
53 * Fast deployment
54
55 * Cluster-wide services like firewall and HA
56
57
58 Requirements
59 ------------
60
61 * All nodes must be able to connect to each other via UDP ports 5405-5412
62 for corosync to work.
63
64 * Date and time must be synchronized (a basic check is sketched after this list).
65
66 * An SSH tunnel on TCP port 22 between nodes is required.
67
68 * If you are interested in High Availability, you need to have at
69 least three nodes for reliable quorum. All nodes should have the
70 same version.
71
72 * We recommend a dedicated NIC for the cluster traffic, especially if
73 you use shared storage.
74
75 * The root password of a cluster node is required for adding nodes.
76
77 * Online migration of virtual machines is only supported when nodes have CPUs
78 from the same vendor. It might work otherwise, but this is never guaranteed.
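
As a basic sanity check before creating or joining a cluster, you can, for
example, verify time synchronization and SSH reachability from each node. The
node name below is a placeholder; in the `timedatectl` output, check that
'System clock synchronized' reports 'yes':

----
# timedatectl status
# ssh root@<other-node> date
----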
79
80 NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
81 nodes.
82
83 NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
84 not supported as a production configuration and should only be done temporarily,
85 during an upgrade of the whole cluster from one major version to another.
86
87 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
88 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
89 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
90 upgrade procedure to {pve} 6.0.
91
92
93 Preparing Nodes
94 ---------------
95
96 First, install {pve} on all nodes. Make sure that each node is
97 installed with the final hostname and IP configuration. Changing the
98 hostname and IP is not possible after cluster creation.
99
100 While it's common to reference all node names and their IPs in `/etc/hosts` (or
101 make their names resolvable through other means), this is not necessary for a
102 cluster to work. It may be useful however, as you can then connect from one node
103 to another via SSH, using the easier-to-remember node name (see also
104 xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
105 recommend referencing nodes by their IP addresses in the cluster configuration.
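
If you do want to resolve node names via `/etc/hosts`, entries like the
following are sufficient; the names and addresses here are only examples and
must be adapted to your setup. The cluster configuration itself should still
reference the IP addresses:

----
192.168.15.91 hp1.example.local hp1
192.168.15.92 hp2.example.local hp2
192.168.15.93 hp3.example.local hp3
----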
106
107
108 [[pvecm_create_cluster]]
109 Create a Cluster
110 ----------------
111
112 You can either create a cluster on the console (login via `ssh`), or through
113 the API using the {pve} web interface (__Datacenter -> Cluster__).
114
115 NOTE: Use a unique name for your cluster. This name cannot be changed later.
116 The cluster name follows the same rules as node names.
117
118 [[pvecm_cluster_create_via_gui]]
119 Create via Web GUI
120 ~~~~~~~~~~~~~~~~~~
121
122 [thumbnail="screenshot/gui-cluster-create.png"]
123
124 Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
125 name and select a network connection from the drop-down list to serve as the
126 main cluster network (Link 0). It defaults to the IP resolved via the node's
127 hostname.
128
129 As of {pve} 6.2, up to 8 fallback links can be added to a cluster. To add a
130 redundant link, click the 'Add' button and select a link number and IP address
131 from the respective fields. Prior to {pve} 6.2, to add a second link as
132 fallback, you can select the 'Advanced' checkbox and choose an additional
133 network interface (Link 1, see also xref:pvecm_redundancy[Corosync Redundancy]).
134
135 NOTE: Ensure that the network selected for cluster communication is not used for
136 any high traffic purposes, like network storage or live-migration.
137 While the cluster network itself produces small amounts of data, it is very
138 sensitive to latency. See the full
139 xref:pvecm_cluster_network_requirements[cluster network requirements].
140
141 [[pvecm_cluster_create_via_cli]]
142 Create via the Command Line
143 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
144
145 Login via `ssh` to the first {pve} node and run the following command:
146
147 ----
148 hp1# pvecm create CLUSTERNAME
149 ----
150
151 To check the state of the new cluster use:
152
153 ----
154 hp1# pvecm status
155 ----
156
157 Multiple Clusters in the Same Network
158 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
159
160 It is possible to create multiple clusters in the same physical or logical
161 network. In this case, each cluster must have a unique name to avoid possible
162 clashes in the cluster communication stack. Furthermore, this helps avoid human
163 confusion by making clusters clearly distinguishable.
164
165 While the bandwidth requirement of a corosync cluster is relatively low, the
166 latency of packets and the packets per second (PPS) rate are the limiting
167 factors. Different clusters in the same network can compete with each other for
168 these resources, so it may still make sense to use separate physical network
169 infrastructure for bigger clusters.
170
171 [[pvecm_join_node_to_cluster]]
172 Adding Nodes to the Cluster
173 ---------------------------
174
175 CAUTION: All existing configuration in `/etc/pve` is overwritten when joining a
176 cluster. In particular, a joining node cannot hold any guests, since guest IDs
177 could otherwise conflict, and the node will inherit the cluster's storage
178 configuration. To join a node with existing guests, as a workaround, you can
179 create a backup of each guest (using `vzdump`) and restore it under a different
180 ID after joining. If the node's storage layout differs, you will need to re-add
181 the node's storages, and adapt each storage's node restriction to reflect on
182 which nodes the storage is actually available.
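
As a sketch of this workaround, a VM with ID 100 on the joining node could be
backed up before the join and restored under a free ID afterwards. The dump
directory, target storage and new VMID below are examples only:

----
# before joining: back up the guest
vzdump 100 --dumpdir /root/guest-backups

# after joining: restore it under a new, unused VMID
qmrestore /root/guest-backups/vzdump-qemu-100-<timestamp>.vma 1100 --storage local-lvm
----

Containers can be handled the same way with `vzdump` and `pct restore`.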
183
184 Join Node to Cluster via GUI
185 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
186
187 [thumbnail="screenshot/gui-cluster-join-information.png"]
188
189 Log in to the web interface on an existing cluster node. Under __Datacenter ->
190 Cluster__, click the *Join Information* button at the top. Then, click on the
191 button *Copy Information*. Alternatively, copy the string from the 'Information'
192 field manually.
193
194 [thumbnail="screenshot/gui-cluster-join.png"]
195
196 Next, log in to the web interface on the node you want to add.
197 Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
198 'Information' field with the 'Join Information' text you copied earlier.
199 Most settings required for joining the cluster will be filled out
200 automatically. For security reasons, the cluster password has to be entered
201 manually.
202
203 NOTE: To enter all required data manually, you can disable the 'Assisted Join'
204 checkbox.
205
206 After clicking the *Join* button, the cluster join process will start
207 immediately. After the node has joined the cluster, its current node certificate
208 will be replaced by one signed from the cluster certificate authority (CA).
209 This means that the current session will stop working after a few seconds. You
210 then might need to force-reload the web interface and log in again with the
211 cluster credentials.
212
213 Now your node should be visible under __Datacenter -> Cluster__.
214
215 Join Node to Cluster via Command Line
216 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
217
218 Log in to the node you want to join into an existing cluster via `ssh`.
219
220 ----
221 # pvecm add IP-ADDRESS-CLUSTER
222 ----
223
224 For `IP-ADDRESS-CLUSTER`, use the IP or hostname of an existing cluster node.
225 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
226
227
228 To check the state of the cluster use:
229
230 ----
231 # pvecm status
232 ----
233
234 .Cluster status after adding 4 nodes
235 ----
236 # pvecm status
237 Cluster information
238 ~~~~~~~~~~~~~~~~~~~
239 Name: prod-central
240 Config Version: 3
241 Transport: knet
242 Secure auth: on
243
244 Quorum information
245 ~~~~~~~~~~~~~~~~~~
246 Date: Tue Sep 14 11:06:47 2021
247 Quorum provider: corosync_votequorum
248 Nodes: 4
249 Node ID: 0x00000001
250 Ring ID: 1.1a8
251 Quorate: Yes
252
253 Votequorum information
254 ~~~~~~~~~~~~~~~~~~~~~~
255 Expected votes: 4
256 Highest expected: 4
257 Total votes: 4
258 Quorum: 3
259 Flags: Quorate
260
261 Membership information
262 ~~~~~~~~~~~~~~~~~~~~~~
263 Nodeid Votes Name
264 0x00000001 1 192.168.15.91
265 0x00000002 1 192.168.15.92 (local)
266 0x00000003 1 192.168.15.93
267 0x00000004 1 192.168.15.94
268 ----
269
270 If you only want a list of all nodes, use:
271
272 ----
273 # pvecm nodes
274 ----
275
276 .List nodes in a cluster
277 ----
278 # pvecm nodes
279
280 Membership information
281 ~~~~~~~~~~~~~~~~~~~~~~
282 Nodeid Votes Name
283 1 1 hp1
284 2 1 hp2 (local)
285 3 1 hp3
286 4 1 hp4
287 ----
288
289 [[pvecm_adding_nodes_with_separated_cluster_network]]
290 Adding Nodes with Separated Cluster Network
291 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
292
293 When adding a node to a cluster with a separated cluster network, you need to
294 use the 'link0' parameter to set the node's address on that network:
295
296 [source,bash]
297 ----
298 # pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0
299 ----
300
301 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
302 Kronosnet transport layer, also use the 'link1' parameter.
303
304 Using the GUI, you can select the correct interface from the corresponding
305 'Link X' fields in the *Cluster Join* dialog.
306
307 Remove a Cluster Node
308 ---------------------
309
310 CAUTION: Read the procedure carefully before proceeding, as it may
311 not be what you want or need.
312
313 Move all virtual machines from the node. Ensure that you have made copies of any
314 local data or backups that you want to keep. In addition, make sure to remove
315 any scheduled replication jobs to the node to be removed.
316
317 CAUTION: Failure to remove replication jobs to a node before removing said node
318 will result in the replication job becoming irremovable. Especially note that
319 replication automatically switches direction if a replicated VM is migrated, so
320 by migrating a replicated VM from a node to be deleted, replication jobs will be
321 set up to that node automatically.
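
Replication jobs can be reviewed and removed with the `pvesr` tool before the
node removal; the job ID below is only an example:

----
# list all configured replication jobs
pvesr list

# delete a job replicating to the node that will be removed
pvesr delete 100-0
----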
322
323 In the following example, we will remove the node hp4 from the cluster.
324
325 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
326 command to identify the node ID to remove:
327
328 ----
329 hp1# pvecm nodes
330
331 Membership information
332 ~~~~~~~~~~~~~~~~~~~~~~
333 Nodeid Votes Name
334 1 1 hp1 (local)
335 2 1 hp2
336 3 1 hp3
337 4 1 hp4
338 ----
339
340
341 At this point, you must power off hp4 and ensure that it will not power on
342 again (in the network) with its current configuration.
343
344 IMPORTANT: As mentioned above, it is critical to power off the node
345 *before* removal, and make sure that it will *not* power on again
346 (in the existing cluster network) with its current configuration.
347 If you power on the node as it is, the cluster could end up broken,
348 and it could be difficult to restore it to a functioning state.
349
350 After powering off the node hp4, we can safely remove it from the cluster.
351
352 ----
353 hp1# pvecm delnode hp4
354 Killing node 4
355 ----
356
357 NOTE: At this point, it is possible that you will receive an error message
358 stating `Could not kill node (error = CS_ERR_NOT_EXIST)`. This does not
359 signify an actual failure in the deletion of the node, but rather a failure in
360 corosync trying to kill an offline node. Thus, it can be safely ignored.
361
362 Use `pvecm nodes` or `pvecm status` to check the node list again. It should
363 look something like:
364
365 ----
366 hp1# pvecm status
367
368 ...
369
370 Votequorum information
371 ~~~~~~~~~~~~~~~~~~~~~~
372 Expected votes: 3
373 Highest expected: 3
374 Total votes: 3
375 Quorum: 2
376 Flags: Quorate
377
378 Membership information
379 ~~~~~~~~~~~~~~~~~~~~~~
380 Nodeid Votes Name
381 0x00000001 1 192.168.15.90 (local)
382 0x00000002 1 192.168.15.91
383 0x00000003 1 192.168.15.92
384 ----
385
386 If, for whatever reason, you want this server to join the same cluster again,
387 you have to:
388
389 * do a fresh install of {pve} on it,
390
391 * then join it, as explained in the previous section.
392
393 NOTE: After removal of the node, its SSH fingerprint will still reside in the
394 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
395 a node with the same IP or hostname, run `pvecm updatecerts` once on the
396 re-added node to update its fingerprint cluster wide.
397
398 [[pvecm_separate_node_without_reinstall]]
399 Separate a Node Without Reinstalling
400 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
401
402 CAUTION: This is *not* the recommended method, proceed with caution. Use the
403 previous method if you're unsure.
404
405 You can also separate a node from a cluster without reinstalling it from
406 scratch. But after removing the node from the cluster, it will still have
407 access to any shared storage. This must be resolved before you start removing
408 the node from the cluster. A {pve} cluster cannot share the exact same
409 storage with another cluster, as storage locking doesn't work over the cluster
410 boundary. Furthermore, it may also lead to VMID conflicts.
411
412 It's suggested that you create a new storage, to which only the node that you
413 want to separate has access. This can be a new export on your NFS or a new Ceph
414 pool, to name a few examples. It's just important that the exact same storage
415 does not get accessed by multiple clusters. After setting up this storage, move
416 all data and VMs from the node to it. Then you are ready to separate the
417 node from the cluster.
418
419 WARNING: Ensure that all shared resources are cleanly separated! Otherwise you
420 will run into conflicts and problems.
421
422 First, stop the corosync and pve-cluster services on the node:
423 [source,bash]
424 ----
425 systemctl stop pve-cluster
426 systemctl stop corosync
427 ----
428
429 Start the cluster file system again in local mode:
430 [source,bash]
431 ----
432 pmxcfs -l
433 ----
434
435 Delete the corosync configuration files:
436 [source,bash]
437 ----
438 rm /etc/pve/corosync.conf
439 rm -r /etc/corosync/*
440 ----
441
442 You can now start the file system again as a normal service:
443 [source,bash]
444 ----
445 killall pmxcfs
446 systemctl start pve-cluster
447 ----
448
449 The node is now separated from the cluster. You can delete it from any
450 remaining node of the cluster with:
451 [source,bash]
452 ----
453 pvecm delnode oldnode
454 ----
455
456 If the command fails due to a loss of quorum in the remaining node, you can set
457 the expected votes to 1 as a workaround:
458 [source,bash]
459 ----
460 pvecm expected 1
461 ----
462
463 And then repeat the 'pvecm delnode' command.
464
465 Now switch back to the separated node and delete all the remaining cluster
466 files on it. This ensures that the node can be added to another cluster again
467 without problems.
468
469 [source,bash]
470 ----
471 rm /var/lib/corosync/*
472 ----
473
474 As the configuration files from the other nodes are still in the cluster
475 file system, you may want to clean those up too. After making absolutely sure
476 that you have the correct node name, you can simply remove the entire
477 directory recursively from '/etc/pve/nodes/NODENAME'.
478
479 CAUTION: The node's SSH keys will remain in the 'authorized_keys' file. This
480 means that the nodes can still connect to each other with public key
481 authentication. You should fix this by removing the respective keys from the
482 '/etc/pve/priv/authorized_keys' file.
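
For example, assuming the separated node was named 'oldnode' and its key
comments contain that name, you can locate the offending lines like this and
then remove them with an editor:

[source,bash]
----
grep -n oldnode /etc/pve/priv/authorized_keys
----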
483
484
485 Quorum
486 ------
487
488 {pve} uses a quorum-based technique to provide a consistent state among
489 all cluster nodes.
490
491 [quote, from Wikipedia, Quorum (distributed computing)]
492 ____
493 A quorum is the minimum number of votes that a distributed transaction
494 has to obtain in order to be allowed to perform an operation in a
495 distributed system.
496 ____
497
498 In case of network partitioning, state changes require that a
499 majority of nodes are online. The cluster switches to read-only mode
500 if it loses quorum.
501
502 NOTE: {pve} assigns a single vote to each node by default.
503
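With one vote per node, the required majority is 'floor(total_votes / 2) + 1'.
A few examples of the resulting fault tolerance:

----
3 nodes -> quorum = 2 (1 node may fail)
4 nodes -> quorum = 3 (1 node may fail)
5 nodes -> quorum = 3 (2 nodes may fail)
----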
504
505 Cluster Network
506 ---------------
507
508 The cluster network is the core of a cluster. All messages sent over it have to
509 be delivered reliably to all nodes in their respective order. In {pve} this
510 part is handled by corosync, a high performance, low overhead, high
511 availability cluster engine. It serves our decentralized configuration
512 file system (`pmxcfs`).
513
514 [[pvecm_cluster_network_requirements]]
515 Network Requirements
516 ~~~~~~~~~~~~~~~~~~~~
517
518 The {pve} cluster stack requires a reliable network with latencies under 5
519 milliseconds (LAN performance) between all nodes to operate stably. While on
520 setups with a small node count a network with higher latencies _may_ work, this
521 is not guaranteed and gets rather unlikely with more than three nodes and
522 latencies above around 10 ms.
523
524 The network should not be used heavily by other members, as while corosync does
525 not use much bandwidth, it is sensitive to latency jitter; ideally corosync
526 runs on its own physically separated network. Especially do not use a shared
527 network for corosync and storage (except as a potential low-priority fallback
528 in a xref:pvecm_redundancy[redundant] configuration).
529
530 Before setting up a cluster, it is good practice to check if the network is fit
531 for that purpose. To ensure that the nodes can connect to each other on the
532 cluster network, you can test the connectivity between them with the `ping`
533 tool.
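
For example, a short latency test against each of the other nodes (the address
is a placeholder) should consistently report round-trip times well below 5 ms:

----
# ping -c 10 192.168.15.92
----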
534
535 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
536 be generated - no manual action is required.
537
538 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
539 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
540 communication, which, for now, only supports regular UDP unicast.
541
542 CAUTION: You can still enable Multicast or legacy unicast by setting your
543 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
544 but keep in mind that this will disable all cryptography and redundancy support.
545 This is therefore not recommended.
546
547 Separate Cluster Network
548 ~~~~~~~~~~~~~~~~~~~~~~~~
549
550 When creating a cluster without any parameters, the corosync cluster network is
551 generally shared with the web interface and the VMs' network. Depending on
552 your setup, even storage traffic may get sent over the same network. It's
553 recommended to change that, as corosync is a time-critical, real-time
554 application.
555
556 Setting Up a New Network
557 ^^^^^^^^^^^^^^^^^^^^^^^^
558
559 First, you have to set up a new network interface. It should be on a physically
560 separate network. Ensure that your network fulfills the
561 xref:pvecm_cluster_network_requirements[cluster network requirements].
562
563 Separate On Cluster Creation
564 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
565
566 This is possible via the 'linkX' parameters of the 'pvecm create'
567 command, used for creating a new cluster.
568
569 If you have set up an additional NIC with a static address on 10.10.10.1/25,
570 and want to send and receive all cluster communication over this interface,
571 you would execute:
572
573 [source,bash]
574 ----
575 pvecm create test --link0 10.10.10.1
576 ----
577
578 To check if everything is working properly, execute:
579 [source,bash]
580 ----
581 systemctl status corosync
582 ----
583
584 Afterwards, proceed as described above to
585 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
586
587 [[pvecm_separate_cluster_net_after_creation]]
588 Separate After Cluster Creation
589 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
590
591 You can do this if you have already created a cluster and want to switch
592 its communication to another network, without rebuilding the whole cluster.
593 This change may lead to short periods of quorum loss in the cluster, as nodes
594 have to restart corosync and come up one after the other on the new network.
595
596 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
597 Then, open it and you should see a file similar to:
598
599 ----
600 logging {
601 debug: off
602 to_syslog: yes
603 }
604
605 nodelist {
606
607 node {
608 name: due
609 nodeid: 2
610 quorum_votes: 1
611 ring0_addr: due
612 }
613
614 node {
615 name: tre
616 nodeid: 3
617 quorum_votes: 1
618 ring0_addr: tre
619 }
620
621 node {
622 name: uno
623 nodeid: 1
624 quorum_votes: 1
625 ring0_addr: uno
626 }
627
628 }
629
630 quorum {
631 provider: corosync_votequorum
632 }
633
634 totem {
635 cluster_name: testcluster
636 config_version: 3
637 ip_version: ipv4-6
638 secauth: on
639 version: 2
640 interface {
641 linknumber: 0
642 }
643
644 }
645 ----
646
647 NOTE: `ringX_addr` actually specifies a corosync *link address*. The name "ring"
648 is a remnant of older corosync versions that is kept for backwards
649 compatibility.
650
651 The first thing you want to do is add the 'name' properties in the node entries,
652 if you do not see them already. Those *must* match the node name.
653
654 Then replace all addresses from the 'ring0_addr' properties of all nodes with
655 the new addresses. You may use plain IP addresses or hostnames here. If you use
656 hostnames, ensure that they are resolvable from all nodes (see also
657 xref:pvecm_corosync_addresses[Link Address Types]).
658
659 In this example, we want to switch cluster communication to the
660 10.10.10.1/25 network, so we change the 'ring0_addr' of each node respectively.
661
662 NOTE: The exact same procedure can be used to change other 'ringX_addr' values
663 as well. However, we recommend only changing one link address at a time, so
664 that it's easier to recover if something goes wrong.
665
666 After we increase the 'config_version' property, the new configuration file
667 should look like:
668
669 ----
670 logging {
671 debug: off
672 to_syslog: yes
673 }
674
675 nodelist {
676
677 node {
678 name: due
679 nodeid: 2
680 quorum_votes: 1
681 ring0_addr: 10.10.10.2
682 }
683
684 node {
685 name: tre
686 nodeid: 3
687 quorum_votes: 1
688 ring0_addr: 10.10.10.3
689 }
690
691 node {
692 name: uno
693 nodeid: 1
694 quorum_votes: 1
695 ring0_addr: 10.10.10.1
696 }
697
698 }
699
700 quorum {
701 provider: corosync_votequorum
702 }
703
704 totem {
705 cluster_name: testcluster
706 config_version: 4
707 ip_version: ipv4-6
708 secauth: on
709 version: 2
710 interface {
711 linknumber: 0
712 }
713
714 }
715 ----
716
717 Then, after a final check to see that all changed information is correct, we
718 save it and once again follow the
719 xref:pvecm_edit_corosync_conf[edit corosync.conf file] section to bring it into
720 effect.
721
722 The changes will be applied live, so restarting corosync is not strictly
723 necessary. If you changed other settings as well, or notice corosync
724 complaining, you can optionally trigger a restart.
725
726 On a single node execute:
727
728 [source,bash]
729 ----
730 systemctl restart corosync
731 ----
732
733 Now check if everything is okay:
734
735 [source,bash]
736 ----
737 systemctl status corosync
738 ----
739
740 If corosync begins to work again, restart it on all other nodes too.
741 They will then join the cluster membership one by one on the new network.
742
743 [[pvecm_corosync_addresses]]
744 Corosync Addresses
745 ~~~~~~~~~~~~~~~~~~
746
747 A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
748 `corosync.conf`) can be specified in two ways:
749
750 * **IPv4/v6 addresses** can be used directly. They are recommended, since they
751 are static and usually not changed carelessly.
752
753 * **Hostnames** will be resolved using `getaddrinfo`, which means that by
754 default, IPv6 addresses will be used first, if available (see also
755 `man gai.conf`). Keep this in mind, especially when upgrading an existing
756 cluster to IPv6.
757
758 CAUTION: Hostnames should be used with care, since the addresses they
759 resolve to can be changed without touching corosync or the node it runs on -
760 which may lead to a situation where an address is changed without thinking
761 about implications for corosync.
762
763 A separate, static hostname specifically for corosync is recommended, if
764 hostnames are preferred. Also, make sure that every node in the cluster can
765 resolve all hostnames correctly.
766
767 Since {pve} 5.1, hostnames, while still supported, are resolved at the time of
768 entry; only the resolved IP is saved to the configuration.
769
770 Nodes that joined the cluster on earlier versions likely still use their
771 unresolved hostname in `corosync.conf`. It might be a good idea to replace
772 them with IPs or a separate hostname, as mentioned above.
773
774
775 [[pvecm_redundancy]]
776 Corosync Redundancy
777 -------------------
778
779 Corosync supports redundant networking via its integrated Kronosnet layer by
780 default (it is not supported on the legacy udp/udpu transports). It can be
781 enabled by specifying more than one link address, either via the '--linkX'
782 parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
783 adding a new node) or by specifying more than one 'ringX_addr' in
784 `corosync.conf`.
785
786 NOTE: To provide useful failover, every link should be on its own
787 physical network connection.
788
789 Links are used according to a priority setting. You can configure this priority
790 by setting 'knet_link_priority' in the corresponding interface section in
791 `corosync.conf`, or, preferably, using the 'priority' parameter when creating
792 your cluster with `pvecm`:
793
794 ----
795 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
796 ----
797
798 This would cause 'link1' to be used first, since it has the higher priority.
799
800 If no priorities are configured manually (or two links have the same priority),
801 links will be used in order of their number, with the lower number having higher
802 priority.
803
804 Even if all links are working, only the one with the highest priority will see
805 corosync traffic. Link priorities cannot be mixed, meaning that links with
806 different priorities will not be able to communicate with each other.
807
808 Since lower priority links will not see traffic unless all higher priorities
809 have failed, it becomes a useful strategy to specify networks used for
810 other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
811 worst, a higher latency or more congested connection might be better than no
812 connection at all.
813
814 Adding Redundant Links To An Existing Cluster
815 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
816
817 To add a new link to a running configuration, first check how to
818 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
819
820 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
821 sure that your 'X' is the same for every node you add it to, and that it is
822 unique for each node.
823
824 Lastly, add a new 'interface', as shown below, to your `totem`
825 section, replacing 'X' with the link number chosen above.
826
827 Assuming you added a link with number 1, the new configuration file could look
828 like this:
829
830 ----
831 logging {
832 debug: off
833 to_syslog: yes
834 }
835
836 nodelist {
837
838 node {
839 name: due
840 nodeid: 2
841 quorum_votes: 1
842 ring0_addr: 10.10.10.2
843 ring1_addr: 10.20.20.2
844 }
845
846 node {
847 name: tre
848 nodeid: 3
849 quorum_votes: 1
850 ring0_addr: 10.10.10.3
851 ring1_addr: 10.20.20.3
852 }
853
854 node {
855 name: uno
856 nodeid: 1
857 quorum_votes: 1
858 ring0_addr: 10.10.10.1
859 ring1_addr: 10.20.20.1
860 }
861
862 }
863
864 quorum {
865 provider: corosync_votequorum
866 }
867
868 totem {
869 cluster_name: testcluster
870 config_version: 4
871 ip_version: ipv4-6
872 secauth: on
873 version: 2
874 interface {
875 linknumber: 0
876 }
877 interface {
878 linknumber: 1
879 }
880 }
881 ----
882
883 The new link will be enabled as soon as you follow the last steps to
884 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
885 be necessary. You can check that corosync loaded the new link using:
886
887 ----
888 journalctl -b -u corosync
889 ----
890
891 It might be a good idea to test the new link by temporarily disconnecting the
892 old link on one node and making sure that its status remains online while
893 disconnected:
894
895 ----
896 pvecm status
897 ----
898
899 If you see a healthy cluster state, it means that your new link is being used.
900
901
902 Role of SSH in {pve} Clusters
903 -----------------------------
904
905 {pve} utilizes SSH tunnels for various features.
906
907 * Proxying console/shell sessions (node and guests)
908 +
909 When using the shell for node B while being connected to node A, the
910 connection goes to a terminal proxy on node A, which is in turn connected
911 to the login shell on node B via a non-interactive SSH tunnel.
912
913 * VM and CT memory and local-storage migration in 'secure' mode.
914 +
915 During the migration, one or more SSH tunnel(s) are established between the
916 source and target nodes, in order to exchange migration information and
917 transfer memory and disk contents.
918
919 * Storage replication
920
921 .Pitfalls due to automatic execution of `.bashrc` and siblings
922 [IMPORTANT]
923 ====
924 In case you have a custom `.bashrc`, or similar files that get executed on
925 login by the configured shell, `ssh` will automatically run it once the session
926 is established successfully. This can cause some unexpected behavior, as those
927 commands may be executed with root permissions on any of the operations
928 described above. This can cause possible problematic side-effects!
929
930 In order to avoid such complications, it's recommended to add a check in
931 `/root/.bashrc` to make sure the session is interactive, and only then run
932 `.bashrc` commands.
933
934 You can add this snippet at the beginning of your `.bashrc` file:
935
936 ----
937 # Early exit if not running interactively to avoid side-effects!
938 case $- in
939 *i*) ;;
940 *) return;;
941 esac
942 ----
943 ====
944
945
946 Corosync External Vote Support
947 ------------------------------
948
949 This section describes a way to deploy an external voter in a {pve} cluster.
950 When configured, the cluster can sustain more node failures without
951 violating safety properties of the cluster communication.
952
953 For this to work, there are two services involved:
954
955 * A QDevice daemon which runs on each {pve} node
956
957 * An external vote daemon which runs on an independent server
958
959 As a result, you can achieve higher availability, even in smaller setups (for
960 example 2+1 nodes).
961
962 QDevice Technical Overview
963 ~~~~~~~~~~~~~~~~~~~~~~~~~~
964
965 The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
966 node. It provides a configured number of votes to the cluster's quorum
967 subsystem, based on an externally running third-party arbitrator's decision.
968 Its primary use is to allow a cluster to sustain more node failures than
969 standard quorum rules allow. This can be done safely as the external device
970 can see all nodes and thus choose only one set of nodes to give its vote.
971 This will only be done if said set of nodes can have quorum (again) after
972 receiving the third-party vote.
973
974 Currently, only 'QDevice Net' is supported as a third-party arbitrator. This is
975 a daemon which provides a vote to a cluster partition, if it can reach the
976 partition members over the network. It will only give votes to one partition
977 of a cluster at any time.
978 It's designed to support multiple clusters and is almost configuration and
979 state free. New clusters are handled dynamically and no configuration file
980 is needed on the host running a QDevice.
981
982 The only requirements for the external host are network access to the cluster
983 and an available corosync-qnetd package. We provide a package
984 for Debian based hosts, and other Linux distributions should also have a package
985 available through their respective package manager.
986
987 NOTE: Unlike corosync itself, a QDevice connects to the cluster over TCP/IP.
988 The daemon can also run outside the LAN of the cluster and isn't limited to the
989 low latency requirements of corosync.
990
991 Supported Setups
992 ~~~~~~~~~~~~~~~~
993
994 We support QDevices for clusters with an even number of nodes and recommend
995 them for 2-node clusters, if higher availability is desired.
996 For clusters with an odd node count, we currently discourage the use of
997 QDevices. The reason for this is the difference in the votes which the QDevice
998 provides for each cluster type. Even numbered clusters get a single additional
999 vote, which only increases availability, because if the QDevice
1000 itself fails, you are in the same position as with no QDevice at all.
1001
1002 On the other hand, with an odd numbered cluster size, the QDevice provides
1003 '(N-1)' votes -- where 'N' corresponds to the cluster node count. This
1004 alternative behavior makes sense; if it had only one additional vote, the
1005 cluster could get into a split-brain situation. This algorithm allows for all
1006 nodes but one (and naturally the QDevice itself) to fail. However, there are two
1007 drawbacks to this:
1008
1009 * If the QNet daemon itself fails, no other node may fail or the cluster
1010 immediately loses quorum. For example, in a cluster with 15 nodes, 7
1011 could fail before the cluster becomes inquorate. But, if a QDevice is
1012 configured here and it itself fails, **no single node** of the 15 may fail.
1013 The QDevice acts almost as a single point of failure in this case.
1014
1015 * The fact that all but one node plus QDevice may fail sounds promising at
1016 first, but this may result in a mass recovery of HA services, which could
1017 overload the single remaining node. Furthermore, a Ceph server will stop
1018 providing services if only '((N-1)/2)' nodes or fewer remain online.
1019
1020 If you understand the drawbacks and implications, you can decide yourself if
1021 you want to use this technology in an odd numbered cluster setup.
1022
1023 QDevice-Net Setup
1024 ~~~~~~~~~~~~~~~~~
1025
1026 We recommend running any daemon which provides votes to corosync-qdevice as an
1027 unprivileged user. {pve} and Debian provide a package which is already
1028 configured to do so.
1029 The traffic between the daemon and the cluster must be encrypted to ensure a
1030 safe and secure integration of the QDevice in {pve}.
1031
1032 First, install the 'corosync-qnetd' package on your external server
1033
1034 ----
1035 external# apt install corosync-qnetd
1036 ----
1037
1038 and the 'corosync-qdevice' package on all cluster nodes
1039
1040 ----
1041 pve# apt install corosync-qdevice
1042 ----
1043
1044 After doing this, ensure that all the nodes in the cluster are online.
1045
1046 You can now set up your QDevice by running the following command on one
1047 of the {pve} nodes:
1048
1049 ----
1050 pve# pvecm qdevice setup <QDEVICE-IP>
1051 ----
1052
1053 The SSH key from the cluster will be automatically copied to the QDevice.
1054
1055 NOTE: Make sure that the SSH configuration on your external server allows root
1056 login via password, if you are asked for a password during this step.
1057 If you receive an error such as 'Host key verification failed.' at this
1058 stage, running `pvecm updatecerts` could fix the issue.
1059
1060 After you enter the password and all the steps have successfully completed, you
1061 will see "Done". You can verify that the QDevice has been set up with:
1062
1063 ----
1064 pve# pvecm status
1065
1066 ...
1067
1068 Votequorum information
1069 ~~~~~~~~~~~~~~~~~~~~~~
1070 Expected votes: 3
1071 Highest expected: 3
1072 Total votes: 3
1073 Quorum: 2
1074 Flags: Quorate Qdevice
1075
1076 Membership information
1077 ~~~~~~~~~~~~~~~~~~~~~~
1078 Nodeid Votes Qdevice Name
1079 0x00000001 1 A,V,NMW 192.168.22.180 (local)
1080 0x00000002 1 A,V,NMW 192.168.22.181
1081 0x00000000 1 Qdevice
1082
1083 ----
1084
1085
1086 Frequently Asked Questions
1087 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1088
1089 Tie Breaking
1090 ^^^^^^^^^^^^
1091
1092 In case of a tie, where two same-sized cluster partitions cannot see each other
1093 but can see the QDevice, the QDevice chooses one of those partitions randomly
1094 and provides a vote to it.
1095
1096 Possible Negative Implications
1097 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1098
1099 For clusters with an even node count, there are no negative implications when
1100 using a QDevice. If it fails to work, it is the same as not having a QDevice
1101 at all.
1102
1103 Adding/Deleting Nodes After QDevice Setup
1104 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1105
1106 If you want to add a new node or remove an existing one from a cluster with a
1107 QDevice setup, you need to remove the QDevice first. After that, you can add or
1108 remove nodes normally. Once you have a cluster with an even node count again,
1109 you can set up the QDevice again as described previously.
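
A sketch of the sequence, using commands shown earlier in this chapter (the
addresses are placeholders):

----
# on an existing cluster node: remove the QDevice first
pvecm qdevice remove

# then add or remove nodes as usual, for example on a new node:
pvecm add IP-ADDRESS-CLUSTER

# once the cluster has an even node count again, re-add the QDevice
pvecm qdevice setup <QDEVICE-IP>
----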
1110
1111 Removing the QDevice
1112 ^^^^^^^^^^^^^^^^^^^^
1113
1114 If you used the official `pvecm` tool to add the QDevice, you can remove it
1115 by running:
1116
1117 ----
1118 pve# pvecm qdevice remove
1119 ----
1120
1121 //Still TODO
1122 //^^^^^^^^^^
1123 //There is still stuff to add here
1124
1125
1126 Corosync Configuration
1127 ----------------------
1128
1129 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
1130 controls the cluster membership and its network.
1131 For further information about it, check the corosync.conf man page:
1132 [source,bash]
1133 ----
1134 man corosync.conf
1135 ----
1136
1137 For node membership, you should always use the `pvecm` tool provided by {pve}.
1138 You may have to edit the configuration file manually for other changes.
1139 Here are a few best practice tips for doing this.
1140
1141 [[pvecm_edit_corosync_conf]]
1142 Edit corosync.conf
1143 ~~~~~~~~~~~~~~~~~~
1144
1145 Editing the corosync.conf file is not always very straightforward. There are
1146 two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
1147 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
1148 propagate the changes to the local one, but not vice versa.
1149
1150 The configuration will get updated automatically, as soon as the file changes.
1151 This means that changes which can be integrated in a running corosync will take
1152 effect immediately. Thus, you should always make a copy and edit that instead,
1153 to avoid triggering unintended changes when saving the file while editing.
1154
1155 [source,bash]
1156 ----
1157 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1158 ----
1159
1160 Then, open the config file with your favorite editor, such as `nano` or
1161 `vim.tiny`, which come pre-installed on every {pve} node.
1162
1163 NOTE: Always increment the 'config_version' number after configuration changes;
1164 omitting this can lead to problems.
1165
1166 After making the necessary changes, create another copy of the current working
1167 configuration file. This serves as a backup if the new configuration fails to
1168 apply or causes other issues.
1169
1170 [source,bash]
1171 ----
1172 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1173 ----
1174
1175 Then replace the old configuration file with the new one:
1176 [source,bash]
1177 ----
1178 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1179 ----
1180
1181 You can check if the changes could be applied automatically, using the following
1182 commands:
1183 [source,bash]
1184 ----
1185 systemctl status corosync
1186 journalctl -b -u corosync
1187 ----
1188
1189 If the changes could not be applied automatically, you may have to restart the
1190 corosync service via:
1191 [source,bash]
1192 ----
1193 systemctl restart corosync
1194 ----
1195
1196 On errors, check the troubleshooting section below.
1197
1198 Troubleshooting
1199 ~~~~~~~~~~~~~~~
1200
1201 Issue: 'quorum.expected_votes must be configured'
1202 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1203
1204 When corosync starts to fail and you get the following message in the system log:
1205
1206 ----
1207 [...]
1208 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1209 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1210 'configuration error: nodelist or quorum.expected_votes must be configured!'
1211 [...]
1212 ----
1213
1214 It means that the hostname you set for a corosync 'ringX_addr' in the
1215 configuration could not be resolved.
1216
1217 Write Configuration When Not Quorate
1218 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1219
1220 If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
1221 understand what you are doing, use:
1222 [source,bash]
1223 ----
1224 pvecm expected 1
1225 ----
1226
1227 This sets the expected vote count to 1 and makes the cluster quorate. You can
1228 then fix your configuration, or revert it back to the last working backup.
1229
1230 This is not enough if corosync cannot start anymore. In that case, it is best to
1231 edit the local copy of the corosync configuration in
1232 '/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on
1233 all nodes, this configuration has the same content to avoid split-brain
1234 situations.
1235
1236
1237 [[pvecm_corosync_conf_glossary]]
1238 Corosync Configuration Glossary
1239 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1240
1241 ringX_addr::
1242 This names the different link addresses for the Kronosnet connections between
1243 nodes.
1244
1245
1246 Cluster Cold Start
1247 ------------------
1248
1249 It is obvious that a cluster is not quorate when all nodes are
1250 offline. This is a common case after a power failure.
1251
1252 NOTE: It is always a good idea to use an uninterruptible power supply
1253 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1254 you want HA.
1255
1256 On node startup, the `pve-guests` service is started and waits for
1257 quorum. Once quorate, it starts all guests which have the `onboot`
1258 flag set.
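
The `onboot` flag is set per guest, for example (VMID 100 is arbitrary):

----
# qm set 100 --onboot 1
----

For containers, the corresponding command is `pct set 100 --onboot 1`.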
1259
1260 When you turn on nodes, or when power comes back after power failure,
1261 it is likely that some nodes will boot faster than others. Please keep in
1262 mind that guest startup is delayed until you reach quorum.
1263
1264
1265 [[pvecm_next_id_range]]
1266 Guest VMID Auto-Selection
1267 -------------------------
1268
1269 When creating new guests, the web interface will ask the backend for a free VMID
1270 automatically. The default range for searching is `100` to `1000000` (lower
1271 than the maximal allowed VMID enforced by the schema).
1272
1273 Sometimes admins want to allocate new VMIDs in a separate range, for
1274 example to easily separate temporary VMs from ones that choose a VMID manually.
1275 Other times it's simply desired to provide a VMID with a stable length, for which
1276 setting the lower boundary to, for example, `100000` gives much more room.
1277
1278 To accommodate this use case, one can set either the lower, upper, or both boundaries
1279 via the `datacenter.cfg` configuration file, which can be edited in the web
1280 interface under 'Datacenter' -> 'Options'.
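
For example, to let automatic selection start at `100000`, an entry along the
following lines can be added to `/etc/pve/datacenter.cfg`; check the
`datacenter.cfg` man page for the exact property syntax of your version:

----
next-id: lower=100000,upper=200000
----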
1281
1282 NOTE: The range is only used for the next-id API call, so it isn't a hard
1283 limit.
1284
1285 Guest Migration
1286 ---------------
1287
1288 Migrating virtual guests to other nodes is a useful feature in a
1289 cluster. There are settings to control the behavior of such
1290 migrations. This can be done via the configuration file
1291 `datacenter.cfg` or for a specific migration via API or command line
1292 parameters.
1293
1294 It makes a difference if a guest is online or offline, or if it has
1295 local resources (like a local disk).
1296
1297 For details about virtual machine migration, see the
1298 xref:qm_migration[QEMU/KVM Migration Chapter].
1299
1300 For details about container migration, see the
1301 xref:pct_migration[Container Migration Chapter].
1302
1303 Migration Type
1304 ~~~~~~~~~~~~~~
1305
1306 The migration type defines if the migration data should be sent over an
1307 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
1308 Setting the migration type to `insecure` means that the RAM content of a
1309 virtual guest is also transferred unencrypted, which can lead to
1310 information disclosure of critical data from inside the guest (for
1311 example, passwords or encryption keys).
1312
1313 Therefore, we strongly recommend using the secure channel if you do
1314 not have full control over the network and can not guarantee that no
1315 one is eavesdropping on it.
1316
1317 NOTE: Storage migration does not follow this setting. Currently, it
1318 always sends the storage content over a secure channel.
1319
1320 Encryption requires a lot of computing power, so this setting is often
1321 changed to `insecure` to achieve better performance. The impact on
1322 modern systems is lower because they implement AES encryption in
1323 hardware. The performance impact is particularly evident in fast
1324 networks, where you can transfer 10 Gbps or more.
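
The type can be set cluster-wide via the `migration` property in
`/etc/pve/datacenter.cfg` (see the example in the next section), or per
migration on the command line, for example:

----
# qm migrate 106 tre --online --migration_type insecure
----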
1325
1326 Migration Network
1327 ~~~~~~~~~~~~~~~~~
1328
1329 By default, {pve} uses the network in which cluster communication
1330 takes place to send the migration traffic. This is not optimal, both because
1331 sensitive cluster traffic can be disrupted and because this network may not
1332 have the best bandwidth available on the node.
1333
1334 Setting the migration network parameter allows the use of a dedicated
1335 network for all migration traffic. In addition to the memory,
1336 this also affects the storage traffic for offline migrations.
1337
1338 The migration network is set as a network using CIDR notation. This
1339 has the advantage that you don't have to set individual IP addresses
1340 for each node. {pve} can determine the real address on the
1341 destination node from the network specified in the CIDR form. To
1342 enable this, the network must be specified so that each node has exactly one
1343 IP in the respective network.
1344
1345 Example
1346 ^^^^^^^
1347
1348 We assume that we have a three-node setup, with three separate
1349 networks. One for public communication with the Internet, one for
1350 cluster communication, and a very fast one, which we want to use as a
1351 dedicated network for migration.
1352
1353 A network configuration for such a setup might look as follows:
1354
1355 ----
1356 iface eno1 inet manual
1357
1358 # public network
1359 auto vmbr0
1360 iface vmbr0 inet static
1361 address 192.X.Y.57/24
1362 gateway 192.X.Y.1
1363 bridge-ports eno1
1364 bridge-stp off
1365 bridge-fd 0
1366
1367 # cluster network
1368 auto eno2
1369 iface eno2 inet static
1370 address 10.1.1.1/24
1371
1372 # fast network
1373 auto eno3
1374 iface eno3 inet static
1375 address 10.1.2.1/24
1376 ----
1377
1378 Here, we will use the network 10.1.2.0/24 as a migration network. For
1379 a single migration, you can do this using the `migration_network`
1380 parameter of the command line tool:
1381
1382 ----
1383 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1384 ----
1385
1386 To configure this as the default network for all migrations in the
1387 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1388 file:
1389
1390 ----
1391 # use dedicated migration network
1392 migration: secure,network=10.1.2.0/24
1393 ----
1394
1395 NOTE: The migration type must always be set when the migration network
1396 is set in `/etc/pve/datacenter.cfg`.
1397
1398
1399 ifdef::manvolnum[]
1400 include::pve-copyright.adoc[]
1401 endif::manvolnum[]