1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {pve} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication. There's no explicit limit for the number of nodes in a cluster.
31 In practice, the actual possible node count may be limited by the host and
32 network performance. Currently (2021), there are reports of clusters (using
33 high-end enterprise hardware) with over 50 nodes in production.
34
35 `pvecm` can be used to create a new cluster, join nodes to a cluster,
36 leave the cluster, get status information, and do various other cluster-related
37 tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
38 is used to transparently distribute the cluster configuration to all cluster
39 nodes.
40
41 Grouping nodes into a cluster has the following advantages:
42
43 * Centralized, web-based management
44
45 * Multi-master clusters: each node can do all management tasks
46
47 * Use of `pmxcfs`, a database-driven file system, for storing configuration
48 files, replicated in real-time on all nodes using `corosync`
49
50 * Easy migration of virtual machines and containers between physical
51 hosts
52
53 * Fast deployment
54
55 * Cluster-wide services like firewall and HA
56
57
58 Requirements
59 ------------
60
61 * All nodes must be able to connect to each other via UDP ports 5404 and 5405
62 for corosync to work.
63
64 * Date and time must be synchronized.
65
66 * An SSH tunnel on TCP port 22 between nodes is required.
67
68 * If you are interested in High Availability, you need to have at
69 least three nodes for reliable quorum. All nodes should have the
70 same version.
71
72 * We recommend a dedicated NIC for the cluster traffic, especially if
73 you use shared storage.
74
75 * The root password of a cluster node is required for adding nodes.
76
77 * Online migration of virtual machines is only supported when nodes have CPUs
78 from the same vendor. It might work otherwise, but this is never guaranteed.
79
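Some of these requirements can be checked quickly from the shell before creating
a cluster. The following is only a sketch; `192.0.2.11` is a placeholder for a
peer node's address:

[source,bash]
----
# verify that the system clock is synchronized (run on every node)
timedatectl status | grep synchronized

# verify that the peer node is reachable via SSH on TCP port 22
ssh root@192.0.2.11 true
----
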
NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.x cluster
81 nodes.
82
83 NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
84 not supported as a production configuration and should only be done temporarily,
85 during an upgrade of the whole cluster from one major version to another.
86
87 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
88 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
89 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
90 upgrade procedure to {pve} 6.0.
91
92
93 Preparing Nodes
94 ---------------
95
96 First, install {pve} on all nodes. Make sure that each node is
97 installed with the final hostname and IP configuration. Changing the
98 hostname and IP is not possible after cluster creation.
99
100 While it's common to reference all node names and their IPs in `/etc/hosts` (or
101 make their names resolvable through other means), this is not necessary for a
102 cluster to work. It may be useful however, as you can then connect from one node
103 to another via SSH, using the easier to remember node name (see also
104 xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
105 recommend referencing nodes by their IP addresses in the cluster configuration.
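
If you do want name resolution via `/etc/hosts`, a hypothetical three-node setup
could look like this (names and addresses are placeholders):

----
10.10.10.1 pve-node1.example.com pve-node1
10.10.10.2 pve-node2.example.com pve-node2
10.10.10.3 pve-node3.example.com pve-node3
----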
106
107
108 [[pvecm_create_cluster]]
109 Create a Cluster
110 ----------------
111
112 You can either create a cluster on the console (login via `ssh`), or through
113 the API using the {pve} web interface (__Datacenter -> Cluster__).
114
115 NOTE: Use a unique name for your cluster. This name cannot be changed later.
116 The cluster name follows the same rules as node names.
117
118 [[pvecm_cluster_create_via_gui]]
119 Create via Web GUI
120 ~~~~~~~~~~~~~~~~~~
121
122 [thumbnail="screenshot/gui-cluster-create.png"]
123
124 Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
125 name and select a network connection from the drop-down list to serve as the
126 main cluster network (Link 0). It defaults to the IP resolved via the node's
127 hostname.
128
129 As of {pve} 6.2, up to 8 fallback links can be added to a cluster. To add a
130 redundant link, click the 'Add' button and select a link number and IP address
131 from the respective fields. Prior to {pve} 6.2, to add a second link as
132 fallback, you can select the 'Advanced' checkbox and choose an additional
133 network interface (Link 1, see also xref:pvecm_redundancy[Corosync Redundancy]).
134
135 NOTE: Ensure that the network selected for cluster communication is not used for
136 any high traffic purposes, like network storage or live-migration.
137 While the cluster network itself produces small amounts of data, it is very
sensitive to latency. Check out the full
139 xref:pvecm_cluster_network_requirements[cluster network requirements].
140
141 [[pvecm_cluster_create_via_cli]]
142 Create via the Command Line
143 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
144
145 Login via `ssh` to the first {pve} node and run the following command:
146
147 ----
148 hp1# pvecm create CLUSTERNAME
149 ----
150
151 To check the state of the new cluster use:
152
153 ----
154 hp1# pvecm status
155 ----
156
157 Multiple Clusters in the Same Network
158 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
159
160 It is possible to create multiple clusters in the same physical or logical
161 network. In this case, each cluster must have a unique name to avoid possible
162 clashes in the cluster communication stack. Furthermore, this helps avoid human
163 confusion by making clusters clearly distinguishable.
164
165 While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
factors. Different clusters in the same network can compete with each other for
168 these resources, so it may still make sense to use separate physical network
169 infrastructure for bigger clusters.
170
171 [[pvecm_join_node_to_cluster]]
172 Adding Nodes to the Cluster
173 ---------------------------
174
175 CAUTION: A node that is about to be added to the cluster cannot hold any guests.
176 All existing configuration in `/etc/pve` is overwritten when joining a cluster,
177 since guest IDs could otherwise conflict. As a workaround, you can create a
178 backup of the guest (`vzdump`) and restore it under a different ID, after the
179 node has been added to the cluster.
180
181 Join Node to Cluster via GUI
182 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
183
184 [thumbnail="screenshot/gui-cluster-join-information.png"]
185
186 Log in to the web interface on an existing cluster node. Under __Datacenter ->
187 Cluster__, click the *Join Information* button at the top. Then, click on the
188 button *Copy Information*. Alternatively, copy the string from the 'Information'
189 field manually.
190
191 [thumbnail="screenshot/gui-cluster-join.png"]
192
193 Next, log in to the web interface on the node you want to add.
194 Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
195 'Information' field with the 'Join Information' text you copied earlier.
196 Most settings required for joining the cluster will be filled out
197 automatically. For security reasons, the cluster password has to be entered
198 manually.
199
200 NOTE: To enter all required data manually, you can disable the 'Assisted Join'
201 checkbox.
202
203 After clicking the *Join* button, the cluster join process will start
204 immediately. After the node has joined the cluster, its current node certificate
205 will be replaced by one signed from the cluster certificate authority (CA).
206 This means that the current session will stop working after a few seconds. You
207 then might need to force-reload the web interface and log in again with the
208 cluster credentials.
209
210 Now your node should be visible under __Datacenter -> Cluster__.
211
212 Join Node to Cluster via Command Line
213 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
214
215 Log in to the node you want to join into an existing cluster via `ssh`.
216
217 ----
218 # pvecm add IP-ADDRESS-CLUSTER
219 ----
220
221 For `IP-ADDRESS-CLUSTER`, use the IP or hostname of an existing cluster node.
222 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
223
224
225 To check the state of the cluster use:
226
227 ----
228 # pvecm status
229 ----
230
231 .Cluster status after adding 4 nodes
232 ----
233 # pvecm status
234 Cluster information
235 ~~~~~~~~~~~~~~~~~~~
236 Name: prod-central
237 Config Version: 3
238 Transport: knet
239 Secure auth: on
240
241 Quorum information
242 ~~~~~~~~~~~~~~~~~~
243 Date: Tue Sep 14 11:06:47 2021
244 Quorum provider: corosync_votequorum
245 Nodes: 4
246 Node ID: 0x00000001
247 Ring ID: 1.1a8
248 Quorate: Yes
249
250 Votequorum information
251 ~~~~~~~~~~~~~~~~~~~~~~
252 Expected votes: 4
253 Highest expected: 4
254 Total votes: 4
255 Quorum: 3
256 Flags: Quorate
257
258 Membership information
259 ~~~~~~~~~~~~~~~~~~~~~~
260 Nodeid Votes Name
261 0x00000001 1 192.168.15.91
262 0x00000002 1 192.168.15.92 (local)
263 0x00000003 1 192.168.15.93
264 0x00000004 1 192.168.15.94
265 ----
266
267 If you only want a list of all nodes, use:
268
269 ----
270 # pvecm nodes
271 ----
272
273 .List nodes in a cluster
274 ----
275 # pvecm nodes
276
277 Membership information
278 ~~~~~~~~~~~~~~~~~~~~~~
279 Nodeid Votes Name
280 1 1 hp1
281 2 1 hp2 (local)
282 3 1 hp3
283 4 1 hp4
284 ----
285
286 [[pvecm_adding_nodes_with_separated_cluster_network]]
287 Adding Nodes with Separated Cluster Network
288 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
289
290 When adding a node to a cluster with a separated cluster network, you need to
use the 'link0' parameter to set the node's address on that network:
292
293 [source,bash]
294 ----
295 pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
296 ----
297
298 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
299 Kronosnet transport layer, also use the 'link1' parameter.
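
For example, assuming a separate cluster network on 10.10.10.0/24 and a second
one on 10.20.20.0/24 (both addresses are placeholders), a join with a redundant
link could look like this:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -link0 10.10.10.5 -link1 10.20.20.5
----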
300
301 Using the GUI, you can select the correct interface from the corresponding
302 'Link X' fields in the *Cluster Join* dialog.
303
304 Remove a Cluster Node
305 ---------------------
306
307 CAUTION: Read the procedure carefully before proceeding, as it may
308 not be what you want or need.
309
310 Move all virtual machines from the node. Ensure that you have made copies of any
311 local data or backups that you want to keep. In addition, make sure to remove
312 any scheduled replication jobs to the node to be removed.
313
314 CAUTION: Failure to remove replication jobs to a node before removing said node
315 will result in the replication job becoming irremovable. Especially note that
316 replication automatically switches direction if a replicated VM is migrated, so
317 by migrating a replicated VM from a node to be deleted, replication jobs will be
318 set up to that node automatically.
319
320 In the following example, we will remove the node hp4 from the cluster.
321
322 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
323 command to identify the node ID to remove:
324
325 ----
326 hp1# pvecm nodes
327
328 Membership information
329 ~~~~~~~~~~~~~~~~~~~~~~
330 Nodeid Votes Name
331 1 1 hp1 (local)
332 2 1 hp2
333 3 1 hp3
334 4 1 hp4
335 ----
336
337
338 At this point, you must power off hp4 and ensure that it will not power on
339 again (in the network) with its current configuration.
340
341 IMPORTANT: As mentioned above, it is critical to power off the node
342 *before* removal, and make sure that it will *not* power on again
343 (in the existing cluster network) with its current configuration.
344 If you power on the node as it is, the cluster could end up broken,
345 and it could be difficult to restore it to a functioning state.
346
347 After powering off the node hp4, we can safely remove it from the cluster.
348
349 ----
350 hp1# pvecm delnode hp4
351 Killing node 4
352 ----
353
354 NOTE: At this point, it is possible that you will receive an error message
355 stating `Could not kill node (error = CS_ERR_NOT_EXIST)`. This does not
356 signify an actual failure in the deletion of the node, but rather a failure in
357 corosync trying to kill an offline node. Thus, it can be safely ignored.
358
359 Use `pvecm nodes` or `pvecm status` to check the node list again. It should
360 look something like:
361
362 ----
363 hp1# pvecm status
364
365 ...
366
367 Votequorum information
368 ~~~~~~~~~~~~~~~~~~~~~~
369 Expected votes: 3
370 Highest expected: 3
371 Total votes: 3
372 Quorum: 2
373 Flags: Quorate
374
375 Membership information
376 ~~~~~~~~~~~~~~~~~~~~~~
377 Nodeid Votes Name
378 0x00000001 1 192.168.15.90 (local)
379 0x00000002 1 192.168.15.91
380 0x00000003 1 192.168.15.92
381 ----
382
383 If, for whatever reason, you want this server to join the same cluster again,
384 you have to:
385
386 * do a fresh install of {pve} on it,
387
388 * then join it, as explained in the previous section.
389
390 NOTE: After removal of the node, its SSH fingerprint will still reside in the
391 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
392 a node with the same IP or hostname, run `pvecm updatecerts` once on the
393 re-added node to update its fingerprint cluster wide.
394
395 [[pvecm_separate_node_without_reinstall]]
396 Separate a Node Without Reinstalling
397 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
398
CAUTION: This is *not* the recommended method; proceed with caution. Use the
400 previous method if you're unsure.
401
402 You can also separate a node from a cluster without reinstalling it from
403 scratch. But after removing the node from the cluster, it will still have
404 access to any shared storage. This must be resolved before you start removing
405 the node from the cluster. A {pve} cluster cannot share the exact same
406 storage with another cluster, as storage locking doesn't work over the cluster
407 boundary. Furthermore, it may also lead to VMID conflicts.
408
It's suggested that you create a new storage to which only the node you want
410 to separate has access. This can be a new export on your NFS or a new Ceph
411 pool, to name a few examples. It's just important that the exact same storage
412 does not get accessed by multiple clusters. After setting up this storage, move
413 all data and VMs from the node to it. Then you are ready to separate the
414 node from the cluster.
415
416 WARNING: Ensure that all shared resources are cleanly separated! Otherwise you
417 will run into conflicts and problems.
418
419 First, stop the corosync and pve-cluster services on the node:
420 [source,bash]
421 ----
422 systemctl stop pve-cluster
423 systemctl stop corosync
424 ----
425
426 Start the cluster file system again in local mode:
427 [source,bash]
428 ----
429 pmxcfs -l
430 ----
431
432 Delete the corosync configuration files:
433 [source,bash]
434 ----
435 rm /etc/pve/corosync.conf
436 rm -r /etc/corosync/*
437 ----
438
439 You can now start the file system again as a normal service:
440 [source,bash]
441 ----
442 killall pmxcfs
443 systemctl start pve-cluster
444 ----
445
The node is now separated from the cluster. You can delete it from any
447 remaining node of the cluster with:
448 [source,bash]
449 ----
450 pvecm delnode oldnode
451 ----
452
453 If the command fails due to a loss of quorum in the remaining node, you can set
454 the expected votes to 1 as a workaround:
455 [source,bash]
456 ----
457 pvecm expected 1
458 ----
459
460 And then repeat the 'pvecm delnode' command.
461
462 Now switch back to the separated node and delete all the remaining cluster
463 files on it. This ensures that the node can be added to another cluster again
464 without problems.
465
466 [source,bash]
467 ----
468 rm /var/lib/corosync/*
469 ----
470
471 As the configuration files from the other nodes are still in the cluster
472 file system, you may want to clean those up too. After making absolutely sure
that you have the correct node name, you can simply remove the
'/etc/pve/nodes/NODENAME' directory recursively.
475
CAUTION: The node's SSH keys will remain in the 'authorized_keys' file. This
477 means that the nodes can still connect to each other with public key
478 authentication. You should fix this by removing the respective keys from the
479 '/etc/pve/priv/authorized_keys' file.
480
481
482 Quorum
483 ------
484
{pve} uses a quorum-based technique to provide a consistent state among
486 all cluster nodes.
487
488 [quote, from Wikipedia, Quorum (distributed computing)]
489 ____
490 A quorum is the minimum number of votes that a distributed transaction
491 has to obtain in order to be allowed to perform an operation in a
492 distributed system.
493 ____
494
In case of network partitioning, state changes require that a
majority of nodes are online. For example, in a five-node cluster at least
three nodes must be online for changes to be possible. The cluster switches
to read-only mode if it loses quorum.
498
499 NOTE: {pve} assigns a single vote to each node by default.
500
501
502 Cluster Network
503 ---------------
504
505 The cluster network is the core of a cluster. All messages sent over it have to
506 be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized configuration
509 file system (`pmxcfs`).
510
511 [[pvecm_cluster_network_requirements]]
512 Network Requirements
513 ~~~~~~~~~~~~~~~~~~~~
514
515 The {pve} cluster stack requires a reliable network with latencies under 5
516 milliseconds (LAN performance) between all nodes to operate stably. While on
517 setups with a small node count a network with higher latencies _may_ work, this
518 is not guaranteed and gets rather unlikely with more than three nodes and
519 latencies above around 10 ms.
520
521 The network should not be used heavily by other members, as while corosync does
not use much bandwidth, it is sensitive to latency jitter; ideally corosync
523 runs on its own physically separated network. Especially do not use a shared
524 network for corosync and storage (except as a potential low-priority fallback
525 in a xref:pvecm_redundancy[redundant] configuration).
526
527 Before setting up a cluster, it is good practice to check if the network is fit
528 for that purpose. To ensure that the nodes can connect to each other on the
529 cluster network, you can test the connectivity between them with the `ping`
530 tool.
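
For example, a quick latency check against a prospective peer (the address is a
placeholder) could look like this; the average round-trip time should stay well
below 5 ms:

[source,bash]
----
ping -c 10 10.10.10.2
----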
531
532 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
533 be generated - no manual action is required.
534
535 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
536 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
537 communication, which, for now, only supports regular UDP unicast.
538
539 CAUTION: You can still enable Multicast or legacy unicast by setting your
540 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
541 but keep in mind that this will disable all cryptography and redundancy support.
542 This is therefore not recommended.
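
For reference only, such a legacy setup boils down to the `transport` option in
the `totem` section of `corosync.conf`; a minimal, discouraged sketch with the
remaining settings left as in the examples below:

----
totem {
    # ... existing totem settings ...
    transport: udp
}
----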
543
544 Separate Cluster Network
545 ~~~~~~~~~~~~~~~~~~~~~~~~
546
547 When creating a cluster without any parameters, the corosync cluster network is
548 generally shared with the web interface and the VMs' network. Depending on
549 your setup, even storage traffic may get sent over the same network. It's
550 recommended to change that, as corosync is a time-critical, real-time
551 application.
552
553 Setting Up a New Network
554 ^^^^^^^^^^^^^^^^^^^^^^^^
555
556 First, you have to set up a new network interface. It should be on a physically
557 separate network. Ensure that your network fulfills the
558 xref:pvecm_cluster_network_requirements[cluster network requirements].
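
A minimal sketch of such an interface in `/etc/network/interfaces`, assuming the
NIC is named `eno2` and using the address from the example below:

----
auto eno2
iface eno2 inet static
        address 10.10.10.1/25
----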
559
560 Separate On Cluster Creation
561 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
562
563 This is possible via the 'linkX' parameters of the 'pvecm create'
564 command, used for creating a new cluster.
565
566 If you have set up an additional NIC with a static address on 10.10.10.1/25,
567 and want to send and receive all cluster communication over this interface,
568 you would execute:
569
570 [source,bash]
571 ----
572 pvecm create test --link0 10.10.10.1
573 ----
574
575 To check if everything is working properly, execute:
576 [source,bash]
577 ----
578 systemctl status corosync
579 ----
580
581 Afterwards, proceed as described above to
582 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
583
584 [[pvecm_separate_cluster_net_after_creation]]
585 Separate After Cluster Creation
586 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
587
588 You can do this if you have already created a cluster and want to switch
589 its communication to another network, without rebuilding the whole cluster.
590 This change may lead to short periods of quorum loss in the cluster, as nodes
591 have to restart corosync and come up one after the other on the new network.
592
593 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
594 Then, open it and you should see a file similar to:
595
596 ----
597 logging {
598 debug: off
599 to_syslog: yes
600 }
601
602 nodelist {
603
604 node {
605 name: due
606 nodeid: 2
607 quorum_votes: 1
608 ring0_addr: due
609 }
610
611 node {
612 name: tre
613 nodeid: 3
614 quorum_votes: 1
615 ring0_addr: tre
616 }
617
618 node {
619 name: uno
620 nodeid: 1
621 quorum_votes: 1
622 ring0_addr: uno
623 }
624
625 }
626
627 quorum {
628 provider: corosync_votequorum
629 }
630
631 totem {
632 cluster_name: testcluster
633 config_version: 3
634 ip_version: ipv4-6
635 secauth: on
636 version: 2
637 interface {
638 linknumber: 0
639 }
640
641 }
642 ----
643
644 NOTE: `ringX_addr` actually specifies a corosync *link address*. The name "ring"
645 is a remnant of older corosync versions that is kept for backwards
646 compatibility.
647
648 The first thing you want to do is add the 'name' properties in the node entries,
649 if you do not see them already. Those *must* match the node name.
650
651 Then replace all addresses from the 'ring0_addr' properties of all nodes with
652 the new addresses. You may use plain IP addresses or hostnames here. If you use
653 hostnames, ensure that they are resolvable from all nodes (see also
654 xref:pvecm_corosync_addresses[Link Address Types]).
655
656 In this example, we want to switch cluster communication to the
657 10.10.10.1/25 network, so we change the 'ring0_addr' of each node respectively.
658
659 NOTE: The exact same procedure can be used to change other 'ringX_addr' values
660 as well. However, we recommend only changing one link address at a time, so
661 that it's easier to recover if something goes wrong.
662
663 After we increase the 'config_version' property, the new configuration file
664 should look like:
665
666 ----
667 logging {
668 debug: off
669 to_syslog: yes
670 }
671
672 nodelist {
673
674 node {
675 name: due
676 nodeid: 2
677 quorum_votes: 1
678 ring0_addr: 10.10.10.2
679 }
680
681 node {
682 name: tre
683 nodeid: 3
684 quorum_votes: 1
685 ring0_addr: 10.10.10.3
686 }
687
688 node {
689 name: uno
690 nodeid: 1
691 quorum_votes: 1
692 ring0_addr: 10.10.10.1
693 }
694
695 }
696
697 quorum {
698 provider: corosync_votequorum
699 }
700
701 totem {
702 cluster_name: testcluster
703 config_version: 4
704 ip_version: ipv4-6
705 secauth: on
706 version: 2
707 interface {
708 linknumber: 0
709 }
710
711 }
712 ----
713
714 Then, after a final check to see that all changed information is correct, we
715 save it and once again follow the
716 xref:pvecm_edit_corosync_conf[edit corosync.conf file] section to bring it into
717 effect.
718
719 The changes will be applied live, so restarting corosync is not strictly
720 necessary. If you changed other settings as well, or notice corosync
721 complaining, you can optionally trigger a restart.
722
723 On a single node execute:
724
725 [source,bash]
726 ----
727 systemctl restart corosync
728 ----
729
730 Now check if everything is okay:
731
732 [source,bash]
733 ----
734 systemctl status corosync
735 ----
736
737 If corosync begins to work again, restart it on all other nodes too.
738 They will then join the cluster membership one by one on the new network.
739
740 [[pvecm_corosync_addresses]]
741 Corosync Addresses
742 ~~~~~~~~~~~~~~~~~~
743
744 A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
745 `corosync.conf`) can be specified in two ways:
746
747 * **IPv4/v6 addresses** can be used directly. They are recommended, since they
748 are static and usually not changed carelessly.
749
750 * **Hostnames** will be resolved using `getaddrinfo`, which means that by
751 default, IPv6 addresses will be used first, if available (see also
752 `man gai.conf`). Keep this in mind, especially when upgrading an existing
753 cluster to IPv6.
754
755 CAUTION: Hostnames should be used with care, since the addresses they
756 resolve to can be changed without touching corosync or the node it runs on -
757 which may lead to a situation where an address is changed without thinking
758 about implications for corosync.
759
760 A separate, static hostname specifically for corosync is recommended, if
761 hostnames are preferred. Also, make sure that every node in the cluster can
762 resolve all hostnames correctly.
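
For example, such corosync-specific hostnames could be maintained in
`/etc/hosts` on every node (names and addresses are placeholders):

----
10.10.10.1 corosync-uno
10.10.10.2 corosync-due
10.10.10.3 corosync-tre
----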
763
Since {pve} 5.1, hostnames, while still supported, are resolved at the time of
entry; only the resolved IP is saved to the configuration.
766
767 Nodes that joined the cluster on earlier versions likely still use their
768 unresolved hostname in `corosync.conf`. It might be a good idea to replace
769 them with IPs or a separate hostname, as mentioned above.
770
771
772 [[pvecm_redundancy]]
773 Corosync Redundancy
774 -------------------
775
776 Corosync supports redundant networking via its integrated Kronosnet layer by
777 default (it is not supported on the legacy udp/udpu transports). It can be
778 enabled by specifying more than one link address, either via the '--linkX'
779 parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
780 adding a new node) or by specifying more than one 'ringX_addr' in
781 `corosync.conf`.
782
783 NOTE: To provide useful failover, every link should be on its own
784 physical network connection.
785
786 Links are used according to a priority setting. You can configure this priority
787 by setting 'knet_link_priority' in the corresponding interface section in
788 `corosync.conf`, or, preferably, using the 'priority' parameter when creating
789 your cluster with `pvecm`:
790
791 ----
792 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
793 ----
794
795 This would cause 'link1' to be used first, since it has the higher priority.
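
The same priorities could also be set directly in the `interface` sections of
`corosync.conf`; a sketch matching the command above:

----
totem {
    interface {
        linknumber: 0
        knet_link_priority: 15
    }
    interface {
        linknumber: 1
        knet_link_priority: 20
    }
}
----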
796
797 If no priorities are configured manually (or two links have the same priority),
798 links will be used in order of their number, with the lower number having higher
799 priority.
800
801 Even if all links are working, only the one with the highest priority will see
802 corosync traffic. Link priorities cannot be mixed, meaning that links with
803 different priorities will not be able to communicate with each other.
804
805 Since lower priority links will not see traffic unless all higher priorities
806 have failed, it becomes a useful strategy to specify networks used for
807 other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
808 worst, a higher latency or more congested connection might be better than no
809 connection at all.
810
811 Adding Redundant Links To An Existing Cluster
812 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
813
814 To add a new link to a running configuration, first check how to
815 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
816
817 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
818 sure that your 'X' is the same for every node you add it to, and that it is
819 unique for each node.
820
821 Lastly, add a new 'interface', as shown below, to your `totem`
822 section, replacing 'X' with the link number chosen above.
823
824 Assuming you added a link with number 1, the new configuration file could look
825 like this:
826
827 ----
828 logging {
829 debug: off
830 to_syslog: yes
831 }
832
833 nodelist {
834
835 node {
836 name: due
837 nodeid: 2
838 quorum_votes: 1
839 ring0_addr: 10.10.10.2
840 ring1_addr: 10.20.20.2
841 }
842
843 node {
844 name: tre
845 nodeid: 3
846 quorum_votes: 1
847 ring0_addr: 10.10.10.3
848 ring1_addr: 10.20.20.3
849 }
850
851 node {
852 name: uno
853 nodeid: 1
854 quorum_votes: 1
855 ring0_addr: 10.10.10.1
856 ring1_addr: 10.20.20.1
857 }
858
859 }
860
861 quorum {
862 provider: corosync_votequorum
863 }
864
865 totem {
866 cluster_name: testcluster
867 config_version: 4
868 ip_version: ipv4-6
869 secauth: on
870 version: 2
871 interface {
872 linknumber: 0
873 }
874 interface {
875 linknumber: 1
876 }
877 }
878 ----
879
880 The new link will be enabled as soon as you follow the last steps to
881 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
882 be necessary. You can check that corosync loaded the new link using:
883
884 ----
885 journalctl -b -u corosync
886 ----
887
888 It might be a good idea to test the new link by temporarily disconnecting the
889 old link on one node and making sure that its status remains online while
890 disconnected:
891
892 ----
893 pvecm status
894 ----
895
896 If you see a healthy cluster state, it means that your new link is being used.
897
898
899 Role of SSH in {pve} Clusters
900 -----------------------------
901
902 {pve} utilizes SSH tunnels for various features.
903
904 * Proxying console/shell sessions (node and guests)
905 +
When using the shell for node B while being connected to node A, the
connection goes to a
907 terminal proxy on node A, which is in turn connected to the login shell on node
908 B via a non-interactive SSH tunnel.
909
910 * VM and CT memory and local-storage migration in 'secure' mode.
911 +
912 During the migration, one or more SSH tunnel(s) are established between the
913 source and target nodes, in order to exchange migration information and
914 transfer memory and disk contents.
915
916 * Storage replication
917
918 .Pitfalls due to automatic execution of `.bashrc` and siblings
919 [IMPORTANT]
920 ====
921 In case you have a custom `.bashrc`, or similar files that get executed on
922 login by the configured shell, `ssh` will automatically run it once the session
is established successfully. This can cause unexpected behavior, as those
commands may be executed with root permissions on any of the operations
described above, possibly with problematic side effects!
926
927 In order to avoid such complications, it's recommended to add a check in
928 `/root/.bashrc` to make sure the session is interactive, and only then run
929 `.bashrc` commands.
930
931 You can add this snippet at the beginning of your `.bashrc` file:
932
933 ----
934 # Early exit if not running interactively to avoid side-effects!
935 case $- in
936 *i*) ;;
937 *) return;;
938 esac
939 ----
940 ====
941
942
943 Corosync External Vote Support
944 ------------------------------
945
946 This section describes a way to deploy an external voter in a {pve} cluster.
947 When configured, the cluster can sustain more node failures without
948 violating safety properties of the cluster communication.
949
950 For this to work, there are two services involved:
951
952 * A QDevice daemon which runs on each {pve} node
953
954 * An external vote daemon which runs on an independent server
955
956 As a result, you can achieve higher availability, even in smaller setups (for
957 example 2+1 nodes).
958
959 QDevice Technical Overview
960 ~~~~~~~~~~~~~~~~~~~~~~~~~~
961
962 The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
963 node. It provides a configured number of votes to the cluster's quorum
964 subsystem, based on an externally running third-party arbitrator's decision.
965 Its primary use is to allow a cluster to sustain more node failures than
966 standard quorum rules allow. This can be done safely as the external device
967 can see all nodes and thus choose only one set of nodes to give its vote.
968 This will only be done if said set of nodes can have quorum (again) after
969 receiving the third-party vote.
970
971 Currently, only 'QDevice Net' is supported as a third-party arbitrator. This is
972 a daemon which provides a vote to a cluster partition, if it can reach the
973 partition members over the network. It will only give votes to one partition
974 of a cluster at any time.
975 It's designed to support multiple clusters and is almost configuration and
976 state free. New clusters are handled dynamically and no configuration file
977 is needed on the host running a QDevice.
978
979 The only requirements for the external host are that it needs network access to
980 the cluster and to have a corosync-qnetd package available. We provide a package
981 for Debian based hosts, and other Linux distributions should also have a package
982 available through their respective package manager.
983
984 NOTE: Unlike corosync itself, a QDevice connects to the cluster over TCP/IP.
985 The daemon can also run outside the LAN of the cluster and isn't limited to the
low latency requirements of corosync.
987
988 Supported Setups
989 ~~~~~~~~~~~~~~~~
990
991 We support QDevices for clusters with an even number of nodes and recommend
them for 2-node clusters, if higher availability is desired.
993 For clusters with an odd node count, we currently discourage the use of
994 QDevices. The reason for this is the difference in the votes which the QDevice
995 provides for each cluster type. Even numbered clusters get a single additional
996 vote, which only increases availability, because if the QDevice
997 itself fails, you are in the same position as with no QDevice at all.
998
999 On the other hand, with an odd numbered cluster size, the QDevice provides
1000 '(N-1)' votes -- where 'N' corresponds to the cluster node count. This
1001 alternative behavior makes sense; if it had only one additional vote, the
1002 cluster could get into a split-brain situation. This algorithm allows for all
1003 nodes but one (and naturally the QDevice itself) to fail. However, there are two
1004 drawbacks to this:
1005
1006 * If the QNet daemon itself fails, no other node may fail or the cluster
1007 immediately loses quorum. For example, in a cluster with 15 nodes, 7
1008 could fail before the cluster becomes inquorate. But, if a QDevice is
1009 configured here and it itself fails, **no single node** of the 15 may fail.
1010 The QDevice acts almost as a single point of failure in this case.
1011
1012 * The fact that all but one node plus QDevice may fail sounds promising at
1013 first, but this may result in a mass recovery of HA services, which could
1014 overload the single remaining node. Furthermore, a Ceph server will stop
1015 providing services if only '((N-1)/2)' nodes or less remain online.
1016
1017 If you understand the drawbacks and implications, you can decide yourself if
1018 you want to use this technology in an odd numbered cluster setup.
1019
1020 QDevice-Net Setup
1021 ~~~~~~~~~~~~~~~~~
1022
1023 We recommend running any daemon which provides votes to corosync-qdevice as an
1024 unprivileged user. {pve} and Debian provide a package which is already
1025 configured to do so.
1026 The traffic between the daemon and the cluster must be encrypted to ensure a
1027 safe and secure integration of the QDevice in {pve}.
1028
1029 First, install the 'corosync-qnetd' package on your external server
1030
1031 ----
1032 external# apt install corosync-qnetd
1033 ----
1034
1035 and the 'corosync-qdevice' package on all cluster nodes
1036
1037 ----
1038 pve# apt install corosync-qdevice
1039 ----
1040
1041 After doing this, ensure that all the nodes in the cluster are online.
1042
1043 You can now set up your QDevice by running the following command on one
1044 of the {pve} nodes:
1045
1046 ----
1047 pve# pvecm qdevice setup <QDEVICE-IP>
1048 ----
1049
1050 The SSH key from the cluster will be automatically copied to the QDevice.
1051
1052 NOTE: Make sure that the SSH configuration on your external server allows root
1053 login via password, if you are asked for a password during this step.
1054 If you receive an error such as 'Host key verification failed.' at this
1055 stage, running `pvecm updatecerts` could fix the issue.
1056
1057 After you enter the password and all the steps have successfully completed, you
1058 will see "Done". You can verify that the QDevice has been set up with:
1059
1060 ----
1061 pve# pvecm status
1062
1063 ...
1064
1065 Votequorum information
1066 ~~~~~~~~~~~~~~~~~~~~~
1067 Expected votes: 3
1068 Highest expected: 3
1069 Total votes: 3
1070 Quorum: 2
1071 Flags: Quorate Qdevice
1072
1073 Membership information
1074 ~~~~~~~~~~~~~~~~~~~~~~
1075 Nodeid Votes Qdevice Name
1076 0x00000001 1 A,V,NMW 192.168.22.180 (local)
1077 0x00000002 1 A,V,NMW 192.168.22.181
1078 0x00000000 1 Qdevice
1079
1080 ----
1081
1082
1083 Frequently Asked Questions
1084 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1085
1086 Tie Breaking
1087 ^^^^^^^^^^^^
1088
1089 In case of a tie, where two same-sized cluster partitions cannot see each other
1090 but can see the QDevice, the QDevice chooses one of those partitions randomly
1091 and provides a vote to it.
1092
1093 Possible Negative Implications
1094 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1095
1096 For clusters with an even node count, there are no negative implications when
1097 using a QDevice. If it fails to work, it is the same as not having a QDevice
1098 at all.
1099
1100 Adding/Deleting Nodes After QDevice Setup
1101 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1102
1103 If you want to add a new node or remove an existing one from a cluster with a
1104 QDevice setup, you need to remove the QDevice first. After that, you can add or
1105 remove nodes normally. Once you have a cluster with an even node count again,
1106 you can set up the QDevice again as described previously.
1107
1108 Removing the QDevice
1109 ^^^^^^^^^^^^^^^^^^^^
1110
1111 If you used the official `pvecm` tool to add the QDevice, you can remove it
1112 by running:
1113
1114 ----
1115 pve# pvecm qdevice remove
1116 ----
1117
1118 //Still TODO
1119 //^^^^^^^^^^
1120 //There is still stuff to add here
1121
1122
1123 Corosync Configuration
1124 ----------------------
1125
1126 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
1127 controls the cluster membership and its network.
1128 For further information about it, check the corosync.conf man page:
1129 [source,bash]
1130 ----
1131 man corosync.conf
1132 ----
1133
1134 For node membership, you should always use the `pvecm` tool provided by {pve}.
1135 You may have to edit the configuration file manually for other changes.
1136 Here are a few best practice tips for doing this.
1137
1138 [[pvecm_edit_corosync_conf]]
1139 Edit corosync.conf
1140 ~~~~~~~~~~~~~~~~~~
1141
1142 Editing the corosync.conf file is not always very straightforward. There are
1143 two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
1144 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
1145 propagate the changes to the local one, but not vice versa.
1146
1147 The configuration will get updated automatically, as soon as the file changes.
1148 This means that changes which can be integrated in a running corosync will take
1149 effect immediately. Thus, you should always make a copy and edit that instead,
1150 to avoid triggering unintended changes when saving the file while editing.
1151
1152 [source,bash]
1153 ----
1154 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1155 ----
1156
1157 Then, open the config file with your favorite editor, such as `nano` or
1158 `vim.tiny`, which come pre-installed on every {pve} node.
1159
1160 NOTE: Always increment the 'config_version' number after configuration changes;
1161 omitting this can lead to problems.
1162
1163 After making the necessary changes, create another copy of the current working
1164 configuration file. This serves as a backup if the new configuration fails to
1165 apply or causes other issues.
1166
1167 [source,bash]
1168 ----
1169 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1170 ----
1171
1172 Then replace the old configuration file with the new one:
1173 [source,bash]
1174 ----
1175 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1176 ----
1177
1178 You can check if the changes could be applied automatically, using the following
1179 commands:
1180 [source,bash]
1181 ----
1182 systemctl status corosync
1183 journalctl -b -u corosync
1184 ----
1185
1186 If the changes could not be applied automatically, you may have to restart the
1187 corosync service via:
1188 [source,bash]
1189 ----
1190 systemctl restart corosync
1191 ----
1192
1193 On errors, check the troubleshooting section below.
1194
1195 Troubleshooting
1196 ~~~~~~~~~~~~~~~
1197
1198 Issue: 'quorum.expected_votes must be configured'
1199 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1200
1201 When corosync starts to fail and you get the following message in the system log:
1202
1203 ----
1204 [...]
1205 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1206 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1207 'configuration error: nodelist or quorum.expected_votes must be configured!'
1208 [...]
1209 ----
1210
1211 It means that the hostname you set for a corosync 'ringX_addr' in the
1212 configuration could not be resolved.
1213
1214 Write Configuration When Not Quorate
1215 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1216
1217 If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
1218 understand what you are doing, use:
1219 [source,bash]
1220 ----
1221 pvecm expected 1
1222 ----
1223
1224 This sets the expected vote count to 1 and makes the cluster quorate. You can
1225 then fix your configuration, or revert it back to the last working backup.
1226
1227 This is not enough if corosync cannot start anymore. In that case, it is best to
1228 edit the local copy of the corosync configuration in
1229 '/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on
1230 all nodes, this configuration has the same content to avoid split-brain
1231 situations.
1232
1233
1234 [[pvecm_corosync_conf_glossary]]
1235 Corosync Configuration Glossary
1236 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1237
1238 ringX_addr::
1239 This names the different link addresses for the Kronosnet connections between
1240 nodes.
1241
1242
1243 Cluster Cold Start
1244 ------------------
1245
1246 It is obvious that a cluster is not quorate when all nodes are
1247 offline. This is a common case after a power failure.
1248
1249 NOTE: It is always a good idea to use an uninterruptible power supply
1250 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1251 you want HA.
1252
1253 On node startup, the `pve-guests` service is started and waits for
1254 quorum. Once quorate, it starts all guests which have the `onboot`
1255 flag set.
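
The `onboot` flag is part of each guest's configuration and can, for example, be
set on the command line (the VMIDs are placeholders):

[source,bash]
----
qm set 100 --onboot 1
pct set 101 --onboot 1
----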
1256
1257 When you turn on nodes, or when power comes back after power failure,
1258 it is likely that some nodes will boot faster than others. Please keep in
1259 mind that guest startup is delayed until you reach quorum.
1260
1261
1262 Guest Migration
1263 ---------------
1264
1265 Migrating virtual guests to other nodes is a useful feature in a
1266 cluster. There are settings to control the behavior of such
1267 migrations. This can be done via the configuration file
1268 `datacenter.cfg` or for a specific migration via API or command line
1269 parameters.
1270
1271 It makes a difference if a guest is online or offline, or if it has
1272 local resources (like a local disk).
1273
1274 For details about virtual machine migration, see the
1275 xref:qm_migration[QEMU/KVM Migration Chapter].
1276
1277 For details about container migration, see the
1278 xref:pct_migration[Container Migration Chapter].
1279
1280 Migration Type
1281 ~~~~~~~~~~~~~~
1282
1283 The migration type defines if the migration data should be sent over an
1284 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
1285 Setting the migration type to `insecure` means that the RAM content of a
1286 virtual guest is also transferred unencrypted, which can lead to
1287 information disclosure of critical data from inside the guest (for
1288 example, passwords or encryption keys).
1289
1290 Therefore, we strongly recommend using the secure channel if you do
1291 not have full control over the network and can not guarantee that no
1292 one is eavesdropping on it.
1293
1294 NOTE: Storage migration does not follow this setting. Currently, it
1295 always sends the storage content over a secure channel.
1296
1297 Encryption requires a lot of computing power, so this setting is often
1298 changed to `insecure` to achieve better performance. The impact on
1299 modern systems is lower because they implement AES encryption in
1300 hardware. The performance impact is particularly evident in fast
1301 networks, where you can transfer 10 Gbps or more.
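
For example, a single online migration could be switched to the unencrypted
channel like this (VMID and target node are placeholders):

----
# qm migrate 106 tre --online --migration_type insecure
----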
1302
1303 Migration Network
1304 ~~~~~~~~~~~~~~~~~
1305
1306 By default, {pve} uses the network in which cluster communication
1307 takes place to send the migration traffic. This is not optimal both because
sensitive cluster traffic can be disrupted and because this network may not
1309 have the best bandwidth available on the node.
1310
1311 Setting the migration network parameter allows the use of a dedicated
1312 network for all migration traffic. In addition to the memory,
1313 this also affects the storage traffic for offline migrations.
1314
1315 The migration network is set as a network using CIDR notation. This
1316 has the advantage that you don't have to set individual IP addresses
1317 for each node. {pve} can determine the real address on the
1318 destination node from the network specified in the CIDR form. To
1319 enable this, the network must be specified so that each node has exactly one
1320 IP in the respective network.
1321
1322 Example
1323 ^^^^^^^
1324
1325 We assume that we have a three-node setup, with three separate
1326 networks. One for public communication with the Internet, one for
1327 cluster communication, and a very fast one, which we want to use as a
1328 dedicated network for migration.
1329
1330 A network configuration for such a setup might look as follows:
1331
1332 ----
1333 iface eno1 inet manual
1334
1335 # public network
1336 auto vmbr0
1337 iface vmbr0 inet static
1338 address 192.X.Y.57/24
1339 gateway 192.X.Y.1
1340 bridge-ports eno1
1341 bridge-stp off
1342 bridge-fd 0
1343
1344 # cluster network
1345 auto eno2
1346 iface eno2 inet static
1347 address 10.1.1.1/24
1348
1349 # fast network
1350 auto eno3
1351 iface eno3 inet static
1352 address 10.1.2.1/24
1353 ----
1354
1355 Here, we will use the network 10.1.2.0/24 as a migration network. For
1356 a single migration, you can do this using the `migration_network`
1357 parameter of the command line tool:
1358
1359 ----
1360 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1361 ----
1362
1363 To configure this as the default network for all migrations in the
1364 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1365 file:
1366
1367 ----
1368 # use dedicated migration network
1369 migration: secure,network=10.1.2.0/24
1370 ----
1371
1372 NOTE: The migration type must always be set when the migration network
1373 is set in `/etc/pve/datacenter.cfg`.
1374
1375
1376 ifdef::manvolnum[]
1377 include::pve-copyright.adoc[]
1378 endif::manvolnum[]