1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {PVE} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication. There's no explicit limit for the number of nodes in a cluster.
31 In practice, the actual possible node count may be limited by the host and
32 network performance. Currently (2021), there are reports of clusters (using
33 high-end enterprise hardware) with over 50 nodes in production.
34
35 `pvecm` can be used to create a new cluster, join nodes to a cluster,
36 leave the cluster, get status information, and do various other cluster-related
37 tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
38 is used to transparently distribute the cluster configuration to all cluster
39 nodes.
40
41 Grouping nodes into a cluster has the following advantages:
42
43 * Centralized, web-based management
44
45 * Multi-master clusters: each node can do all management tasks
46
47 * Use of `pmxcfs`, a database-driven file system, for storing configuration
48 files, replicated in real-time on all nodes using `corosync`
49
50 * Easy migration of virtual machines and containers between physical
51 hosts
52
53 * Fast deployment
54
55 * Cluster-wide services like firewall and HA
56
57
58 Requirements
59 ------------
60
* All nodes must be able to connect to each other via UDP ports 5404 and 5405
  for corosync to work (a quick check is sketched after this list).
63
64 * Date and time must be synchronized.
65
66 * An SSH tunnel on TCP port 22 between nodes is required.
67
68 * If you are interested in High Availability, you need to have at
69 least three nodes for reliable quorum. All nodes should have the
70 same version.
71
72 * We recommend a dedicated NIC for the cluster traffic, especially if
73 you use shared storage.
74
75 * The root password of a cluster node is required for adding nodes.
76
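For a quick sanity check of the first two points, you could, for example, run
the following on each node ('192.0.2.10' stands in for another node's address;
`nmap` is not installed by default):

[source,bash]
----
# confirm that the system clock is synchronized on this node
timedatectl status | grep -i synchronized

# optionally probe the corosync ports on another node
# (UDP scans need root and may report "open|filtered")
nmap -sU -p 5404,5405 192.0.2.10
----
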
77 NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
78 nodes.
79
80 NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
81 not supported as a production configuration and should only be done temporarily,
82 during an upgrade of the whole cluster from one major version to another.
83
84 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
85 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
86 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
87 upgrade procedure to {pve} 6.0.
88
89
90 Preparing Nodes
91 ---------------
92
93 First, install {PVE} on all nodes. Make sure that each node is
94 installed with the final hostname and IP configuration. Changing the
95 hostname and IP is not possible after cluster creation.
96
97 While it's common to reference all node names and their IPs in `/etc/hosts` (or
98 make their names resolvable through other means), this is not necessary for a
99 cluster to work. It may be useful however, as you can then connect from one node
100 to another via SSH, using the easier to remember node name (see also
101 xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
102 recommend referencing nodes by their IP addresses in the cluster configuration.
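
If you do use `/etc/hosts`, the entries could, for example, look like this
(names and addresses are placeholders):

----
10.10.10.1 pve-node1.example.com pve-node1
10.10.10.2 pve-node2.example.com pve-node2
----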
103
104
105 [[pvecm_create_cluster]]
106 Create a Cluster
107 ----------------
108
109 You can either create a cluster on the console (login via `ssh`), or through
110 the API using the {pve} web interface (__Datacenter -> Cluster__).
111
112 NOTE: Use a unique name for your cluster. This name cannot be changed later.
113 The cluster name follows the same rules as node names.
114
115 [[pvecm_cluster_create_via_gui]]
116 Create via Web GUI
117 ~~~~~~~~~~~~~~~~~~
118
119 [thumbnail="screenshot/gui-cluster-create.png"]
120
121 Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
122 name and select a network connection from the drop-down list to serve as the
123 main cluster network (Link 0). It defaults to the IP resolved via the node's
124 hostname.
125
126 To add a second link as fallback, you can select the 'Advanced' checkbox and
127 choose an additional network interface (Link 1, see also
128 xref:pvecm_redundancy[Corosync Redundancy]).
129
130 NOTE: Ensure that the network selected for cluster communication is not used for
131 any high traffic purposes, like network storage or live-migration.
132 While the cluster network itself produces small amounts of data, it is very
133 sensitive to latency. Check out full
134 xref:pvecm_cluster_network_requirements[cluster network requirements].
135
136 [[pvecm_cluster_create_via_cli]]
137 Create via the Command Line
138 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
139
Log in via `ssh` to the first {pve} node and run the following command:
141
142 ----
143 hp1# pvecm create CLUSTERNAME
144 ----
145
146 To check the state of the new cluster use:
147
148 ----
149 hp1# pvecm status
150 ----
151
152 Multiple Clusters in the Same Network
153 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
154
155 It is possible to create multiple clusters in the same physical or logical
156 network. In this case, each cluster must have a unique name to avoid possible
157 clashes in the cluster communication stack. Furthermore, this helps avoid human
158 confusion by making clusters clearly distinguishable.
159
160 While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
factors. Different clusters in the same network can compete with each other for
163 these resources, so it may still make sense to use separate physical network
164 infrastructure for bigger clusters.
165
166 [[pvecm_join_node_to_cluster]]
167 Adding Nodes to the Cluster
168 ---------------------------
169
170 CAUTION: A node that is about to be added to the cluster cannot hold any guests.
171 All existing configuration in `/etc/pve` is overwritten when joining a cluster,
172 since guest IDs could otherwise conflict. As a workaround, you can create a
173 backup of the guest (`vzdump`) and restore it under a different ID, after the
174 node has been added to the cluster.
175
176 Join Node to Cluster via GUI
177 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
178
179 [thumbnail="screenshot/gui-cluster-join-information.png"]
180
181 Log in to the web interface on an existing cluster node. Under __Datacenter ->
182 Cluster__, click the *Join Information* button at the top. Then, click on the
183 button *Copy Information*. Alternatively, copy the string from the 'Information'
184 field manually.
185
186 [thumbnail="screenshot/gui-cluster-join.png"]
187
188 Next, log in to the web interface on the node you want to add.
189 Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
190 'Information' field with the 'Join Information' text you copied earlier.
191 Most settings required for joining the cluster will be filled out
192 automatically. For security reasons, the cluster password has to be entered
193 manually.
194
195 NOTE: To enter all required data manually, you can disable the 'Assisted Join'
196 checkbox.
197
198 After clicking the *Join* button, the cluster join process will start
199 immediately. After the node has joined the cluster, its current node certificate
200 will be replaced by one signed from the cluster certificate authority (CA).
201 This means that the current session will stop working after a few seconds. You
202 then might need to force-reload the web interface and log in again with the
203 cluster credentials.
204
205 Now your node should be visible under __Datacenter -> Cluster__.
206
207 Join Node to Cluster via Command Line
208 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
209
210 Log in to the node you want to join into an existing cluster via `ssh`.
211
212 ----
213 hp2# pvecm add IP-ADDRESS-CLUSTER
214 ----
215
216 For `IP-ADDRESS-CLUSTER`, use the IP or hostname of an existing cluster node.
217 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
218
219
220 To check the state of the cluster use:
221
222 ----
223 # pvecm status
224 ----
225
226 .Cluster status after adding 4 nodes
227 ----
228 hp2# pvecm status
229 Quorum information
230 ~~~~~~~~~~~~~~~~~~
231 Date: Mon Apr 20 12:30:13 2015
232 Quorum provider: corosync_votequorum
233 Nodes: 4
234 Node ID: 0x00000001
235 Ring ID: 1/8
236 Quorate: Yes
237
238 Votequorum information
239 ~~~~~~~~~~~~~~~~~~~~~~
240 Expected votes: 4
241 Highest expected: 4
242 Total votes: 4
243 Quorum: 3
244 Flags: Quorate
245
246 Membership information
247 ~~~~~~~~~~~~~~~~~~~~~~
248 Nodeid Votes Name
249 0x00000001 1 192.168.15.91
250 0x00000002 1 192.168.15.92 (local)
251 0x00000003 1 192.168.15.93
252 0x00000004 1 192.168.15.94
253 ----
254
255 If you only want a list of all nodes, use:
256
257 ----
258 # pvecm nodes
259 ----
260
261 .List nodes in a cluster
262 ----
263 hp2# pvecm nodes
264
265 Membership information
266 ~~~~~~~~~~~~~~~~~~~~~~
267 Nodeid Votes Name
268 1 1 hp1
269 2 1 hp2 (local)
270 3 1 hp3
271 4 1 hp4
272 ----
273
274 [[pvecm_adding_nodes_with_separated_cluster_network]]
275 Adding Nodes with Separated Cluster Network
276 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
277
When adding a node to a cluster with a separated cluster network, you need to
use the 'link0' parameter to set the node's address on that network:
280
281 [source,bash]
282 ----
pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0
284 ----
285
286 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
287 Kronosnet transport layer, also use the 'link1' parameter.
288
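For example, with placeholder addresses for both networks, the full command
could look like this:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0 --link1 LOCAL-IP-ADDRESS-LINK1
----
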
289 Using the GUI, you can select the correct interface from the corresponding
290 'Link X' fields in the *Cluster Join* dialog.
291
292 Remove a Cluster Node
293 ---------------------
294
295 CAUTION: Read the procedure carefully before proceeding, as it may
296 not be what you want or need.
297
298 Move all virtual machines from the node. Make sure you have made copies of any
299 local data or backups that you want to keep. In the following example, we will
300 remove the node hp4 from the cluster.
301
302 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
303 command to identify the node ID to remove:
304
305 ----
306 hp1# pvecm nodes
307
308 Membership information
309 ~~~~~~~~~~~~~~~~~~~~~~
310 Nodeid Votes Name
311 1 1 hp1 (local)
312 2 1 hp2
313 3 1 hp3
314 4 1 hp4
315 ----
316
317
318 At this point, you must power off hp4 and ensure that it will not power on
319 again (in the network) with its current configuration.
320
321 IMPORTANT: As mentioned above, it is critical to power off the node
322 *before* removal, and make sure that it will *not* power on again
323 (in the existing cluster network) with its current configuration.
324 If you power on the node as it is, the cluster could end up broken,
325 and it could be difficult to restore it to a functioning state.
326
327 After powering off the node hp4, we can safely remove it from the cluster.
328
329 ----
330 hp1# pvecm delnode hp4
331 Killing node 4
332 ----
333
334 Use `pvecm nodes` or `pvecm status` to check the node list again. It should
335 look something like:
336
337 ----
338 hp1# pvecm status
339
340 Quorum information
341 ~~~~~~~~~~~~~~~~~~
342 Date: Mon Apr 20 12:44:28 2015
343 Quorum provider: corosync_votequorum
344 Nodes: 3
345 Node ID: 0x00000001
346 Ring ID: 1/8
347 Quorate: Yes
348
349 Votequorum information
350 ~~~~~~~~~~~~~~~~~~~~~~
351 Expected votes: 3
352 Highest expected: 3
353 Total votes: 3
354 Quorum: 2
355 Flags: Quorate
356
357 Membership information
358 ~~~~~~~~~~~~~~~~~~~~~~
359 Nodeid Votes Name
360 0x00000001 1 192.168.15.90 (local)
361 0x00000002 1 192.168.15.91
362 0x00000003 1 192.168.15.92
363 ----
364
365 If, for whatever reason, you want this server to join the same cluster again,
366 you have to:
367
368 * do a fresh install of {pve} on it,
369
370 * then join it, as explained in the previous section.
371
372 NOTE: After removal of the node, its SSH fingerprint will still reside in the
373 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
374 a node with the same IP or hostname, run `pvecm updatecerts` once on the
375 re-added node to update its fingerprint cluster wide.
376
377 [[pvecm_separate_node_without_reinstall]]
378 Separate a Node Without Reinstalling
379 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
380
381 CAUTION: This is *not* the recommended method, proceed with caution. Use the
382 previous method if you're unsure.
383
384 You can also separate a node from a cluster without reinstalling it from
385 scratch. But after removing the node from the cluster, it will still have
386 access to any shared storage. This must be resolved before you start removing
387 the node from the cluster. A {pve} cluster cannot share the exact same
388 storage with another cluster, as storage locking doesn't work over the cluster
389 boundary. Furthermore, it may also lead to VMID conflicts.
390
391 It's suggested that you create a new storage, where only the node which you want
392 to separate has access. This can be a new export on your NFS or a new Ceph
393 pool, to name a few examples. It's just important that the exact same storage
394 does not get accessed by multiple clusters. After setting up this storage, move
395 all data and VMs from the node to it. Then you are ready to separate the
396 node from the cluster.
397
398 WARNING: Ensure that all shared resources are cleanly separated! Otherwise you
399 will run into conflicts and problems.
400
401 First, stop the corosync and pve-cluster services on the node:
402 [source,bash]
403 ----
404 systemctl stop pve-cluster
405 systemctl stop corosync
406 ----
407
408 Start the cluster file system again in local mode:
409 [source,bash]
410 ----
411 pmxcfs -l
412 ----
413
414 Delete the corosync configuration files:
415 [source,bash]
416 ----
417 rm /etc/pve/corosync.conf
418 rm -r /etc/corosync/*
419 ----
420
421 You can now start the file system again as a normal service:
422 [source,bash]
423 ----
424 killall pmxcfs
425 systemctl start pve-cluster
426 ----
427
The node is now separated from the cluster. You can now delete it from any
remaining node of the cluster with:
430 [source,bash]
431 ----
432 pvecm delnode oldnode
433 ----
434
435 If the command fails due to a loss of quorum in the remaining node, you can set
436 the expected votes to 1 as a workaround:
437 [source,bash]
438 ----
439 pvecm expected 1
440 ----
441
442 And then repeat the 'pvecm delnode' command.
443
444 Now switch back to the separated node and delete all the remaining cluster
445 files on it. This ensures that the node can be added to another cluster again
446 without problems.
447
448 [source,bash]
449 ----
450 rm /var/lib/corosync/*
451 ----
452
453 As the configuration files from the other nodes are still in the cluster
454 file system, you may want to clean those up too. After making absolutely sure
that you have the correct node name, you can simply remove the entire
'/etc/pve/nodes/NODENAME' directory recursively.
457
CAUTION: The node's SSH keys will remain in the 'authorized_keys' file. This
459 means that the nodes can still connect to each other with public key
460 authentication. You should fix this by removing the respective keys from the
461 '/etc/pve/priv/authorized_keys' file.
462
463
464 Quorum
465 ------
466
{pve} uses a quorum-based technique to provide a consistent state among
468 all cluster nodes.
469
470 [quote, from Wikipedia, Quorum (distributed computing)]
471 ____
472 A quorum is the minimum number of votes that a distributed transaction
473 has to obtain in order to be allowed to perform an operation in a
474 distributed system.
475 ____
476
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum. For example, a cluster of five nodes, each with one
vote, stays quorate as long as at least three nodes are online.
480
481 NOTE: {pve} assigns a single vote to each node by default.
482
483
484 Cluster Network
485 ---------------
486
487 The cluster network is the core of a cluster. All messages sent over it have to
488 be delivered reliably to all nodes in their respective order. In {pve} this
489 part is done by corosync, an implementation of a high performance, low overhead,
490 high availability development toolkit. It serves our decentralized configuration
491 file system (`pmxcfs`).
492
493 [[pvecm_cluster_network_requirements]]
494 Network Requirements
495 ~~~~~~~~~~~~~~~~~~~~
496 This needs a reliable network with latencies under 2 milliseconds (LAN
497 performance) to work properly. The network should not be used heavily by other
498 members; ideally corosync runs on its own network. Do not use a shared network
499 for corosync and storage (except as a potential low-priority fallback in a
500 xref:pvecm_redundancy[redundant] configuration).
501
502 Before setting up a cluster, it is good practice to check if the network is fit
503 for that purpose. To ensure that the nodes can connect to each other on the
504 cluster network, you can test the connectivity between them with the `ping`
505 tool.
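
For example, to get a rough idea of the latency between two prospective cluster
nodes (the address is a placeholder):

[source,bash]
----
ping -c 10 10.10.10.2
----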
506
507 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
508 be generated - no manual action is required.
509
510 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
511 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
512 communication, which, for now, only supports regular UDP unicast.
513
514 CAUTION: You can still enable Multicast or legacy unicast by setting your
515 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
516 but keep in mind that this will disable all cryptography and redundancy support.
517 This is therefore not recommended.
518
519 Separate Cluster Network
520 ~~~~~~~~~~~~~~~~~~~~~~~~
521
522 When creating a cluster without any parameters, the corosync cluster network is
523 generally shared with the web interface and the VMs' network. Depending on
524 your setup, even storage traffic may get sent over the same network. It's
525 recommended to change that, as corosync is a time-critical, real-time
526 application.
527
528 Setting Up a New Network
529 ^^^^^^^^^^^^^^^^^^^^^^^^
530
531 First, you have to set up a new network interface. It should be on a physically
532 separate network. Ensure that your network fulfills the
533 xref:pvecm_cluster_network_requirements[cluster network requirements].
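
For example, a dedicated NIC with a static address on the 10.10.10.0/25 network
could be configured in `/etc/network/interfaces` like this (the interface name
is a placeholder):

----
auto eno2
iface eno2 inet static
        address 10.10.10.1
        netmask 255.255.255.128
----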
534
535 Separate On Cluster Creation
536 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
537
538 This is possible via the 'linkX' parameters of the 'pvecm create'
539 command, used for creating a new cluster.
540
541 If you have set up an additional NIC with a static address on 10.10.10.1/25,
542 and want to send and receive all cluster communication over this interface,
543 you would execute:
544
545 [source,bash]
546 ----
547 pvecm create test --link0 10.10.10.1
548 ----
549
550 To check if everything is working properly, execute:
551 [source,bash]
552 ----
553 systemctl status corosync
554 ----
555
556 Afterwards, proceed as described above to
557 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
558
559 [[pvecm_separate_cluster_net_after_creation]]
560 Separate After Cluster Creation
561 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
562
563 You can do this if you have already created a cluster and want to switch
564 its communication to another network, without rebuilding the whole cluster.
565 This change may lead to short periods of quorum loss in the cluster, as nodes
566 have to restart corosync and come up one after the other on the new network.
567
568 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
569 Then, open it and you should see a file similar to:
570
571 ----
572 logging {
573 debug: off
574 to_syslog: yes
575 }
576
577 nodelist {
578
579 node {
580 name: due
581 nodeid: 2
582 quorum_votes: 1
583 ring0_addr: due
584 }
585
586 node {
587 name: tre
588 nodeid: 3
589 quorum_votes: 1
590 ring0_addr: tre
591 }
592
593 node {
594 name: uno
595 nodeid: 1
596 quorum_votes: 1
597 ring0_addr: uno
598 }
599
600 }
601
602 quorum {
603 provider: corosync_votequorum
604 }
605
606 totem {
607 cluster_name: testcluster
608 config_version: 3
609 ip_version: ipv4-6
610 secauth: on
611 version: 2
612 interface {
613 linknumber: 0
614 }
615
616 }
617 ----
618
619 NOTE: `ringX_addr` actually specifies a corosync *link address*. The name "ring"
620 is a remnant of older corosync versions that is kept for backwards
621 compatibility.
622
623 The first thing you want to do is add the 'name' properties in the node entries,
624 if you do not see them already. Those *must* match the node name.
625
626 Then replace all addresses from the 'ring0_addr' properties of all nodes with
627 the new addresses. You may use plain IP addresses or hostnames here. If you use
628 hostnames, ensure that they are resolvable from all nodes (see also
629 xref:pvecm_corosync_addresses[Link Address Types]).
630
631 In this example, we want to switch cluster communication to the
632 10.10.10.1/25 network, so we change the 'ring0_addr' of each node respectively.
633
634 NOTE: The exact same procedure can be used to change other 'ringX_addr' values
635 as well. However, we recommend only changing one link address at a time, so
636 that it's easier to recover if something goes wrong.
637
638 After we increase the 'config_version' property, the new configuration file
639 should look like:
640
641 ----
642 logging {
643 debug: off
644 to_syslog: yes
645 }
646
647 nodelist {
648
649 node {
650 name: due
651 nodeid: 2
652 quorum_votes: 1
653 ring0_addr: 10.10.10.2
654 }
655
656 node {
657 name: tre
658 nodeid: 3
659 quorum_votes: 1
660 ring0_addr: 10.10.10.3
661 }
662
663 node {
664 name: uno
665 nodeid: 1
666 quorum_votes: 1
667 ring0_addr: 10.10.10.1
668 }
669
670 }
671
672 quorum {
673 provider: corosync_votequorum
674 }
675
676 totem {
677 cluster_name: testcluster
678 config_version: 4
679 ip_version: ipv4-6
680 secauth: on
681 version: 2
682 interface {
683 linknumber: 0
684 }
685
686 }
687 ----
688
689 Then, after a final check to see that all changed information is correct, we
690 save it and once again follow the
691 xref:pvecm_edit_corosync_conf[edit corosync.conf file] section to bring it into
692 effect.
693
694 The changes will be applied live, so restarting corosync is not strictly
695 necessary. If you changed other settings as well, or notice corosync
696 complaining, you can optionally trigger a restart.
697
698 On a single node execute:
699
700 [source,bash]
701 ----
702 systemctl restart corosync
703 ----
704
705 Now check if everything is okay:
706
707 [source,bash]
708 ----
709 systemctl status corosync
710 ----
711
712 If corosync begins to work again, restart it on all other nodes too.
713 They will then join the cluster membership one by one on the new network.
714
715 [[pvecm_corosync_addresses]]
716 Corosync Addresses
717 ~~~~~~~~~~~~~~~~~~
718
719 A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
720 `corosync.conf`) can be specified in two ways:
721
722 * **IPv4/v6 addresses** can be used directly. They are recommended, since they
723 are static and usually not changed carelessly.
724
725 * **Hostnames** will be resolved using `getaddrinfo`, which means that by
726 default, IPv6 addresses will be used first, if available (see also
727 `man gai.conf`). Keep this in mind, especially when upgrading an existing
728 cluster to IPv6.
729
730 CAUTION: Hostnames should be used with care, since the addresses they
731 resolve to can be changed without touching corosync or the node it runs on -
732 which may lead to a situation where an address is changed without thinking
733 about implications for corosync.
734
735 A separate, static hostname specifically for corosync is recommended, if
736 hostnames are preferred. Also, make sure that every node in the cluster can
737 resolve all hostnames correctly.
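
To verify that a name resolves correctly on a node, you can, for example, query
it via `getaddrinfo` (the hostname is a placeholder):

[source,bash]
----
getent ahosts corosync-node1.example.com
----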
738
Since {pve} 5.1, hostnames, while still supported, are resolved at the time of
entry. Only the resolved IP is then saved to the configuration.
741
742 Nodes that joined the cluster on earlier versions likely still use their
743 unresolved hostname in `corosync.conf`. It might be a good idea to replace
744 them with IPs or a separate hostname, as mentioned above.
745
746
747 [[pvecm_redundancy]]
748 Corosync Redundancy
749 -------------------
750
751 Corosync supports redundant networking via its integrated Kronosnet layer by
752 default (it is not supported on the legacy udp/udpu transports). It can be
753 enabled by specifying more than one link address, either via the '--linkX'
754 parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
755 adding a new node) or by specifying more than one 'ringX_addr' in
756 `corosync.conf`.
757
758 NOTE: To provide useful failover, every link should be on its own
759 physical network connection.
760
761 Links are used according to a priority setting. You can configure this priority
762 by setting 'knet_link_priority' in the corresponding interface section in
763 `corosync.conf`, or, preferably, using the 'priority' parameter when creating
764 your cluster with `pvecm`:
765
766 ----
767 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
768 ----
769
770 This would cause 'link1' to be used first, since it has the higher priority.
771
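Alternatively, the priorities can be set directly in the `interface` sections
of xref:pvecm_edit_corosync_conf[corosync.conf] (a sketch showing only the
relevant `totem` part):

----
totem {
  interface {
    linknumber: 0
    knet_link_priority: 15
  }
  interface {
    linknumber: 1
    knet_link_priority: 20
  }
}
----
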
772 If no priorities are configured manually (or two links have the same priority),
773 links will be used in order of their number, with the lower number having higher
774 priority.
775
776 Even if all links are working, only the one with the highest priority will see
777 corosync traffic. Link priorities cannot be mixed, meaning that links with
778 different priorities will not be able to communicate with each other.
779
780 Since lower priority links will not see traffic unless all higher priorities
781 have failed, it becomes a useful strategy to specify networks used for
782 other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
783 worst, a higher latency or more congested connection might be better than no
784 connection at all.
785
786 Adding Redundant Links To An Existing Cluster
787 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
788
789 To add a new link to a running configuration, first check how to
790 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
791
792 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
793 sure that your 'X' is the same for every node you add it to, and that it is
794 unique for each node.
795
796 Lastly, add a new 'interface', as shown below, to your `totem`
797 section, replacing 'X' with the link number chosen above.
798
799 Assuming you added a link with number 1, the new configuration file could look
800 like this:
801
802 ----
803 logging {
804 debug: off
805 to_syslog: yes
806 }
807
808 nodelist {
809
810 node {
811 name: due
812 nodeid: 2
813 quorum_votes: 1
814 ring0_addr: 10.10.10.2
815 ring1_addr: 10.20.20.2
816 }
817
818 node {
819 name: tre
820 nodeid: 3
821 quorum_votes: 1
822 ring0_addr: 10.10.10.3
823 ring1_addr: 10.20.20.3
824 }
825
826 node {
827 name: uno
828 nodeid: 1
829 quorum_votes: 1
830 ring0_addr: 10.10.10.1
831 ring1_addr: 10.20.20.1
832 }
833
834 }
835
836 quorum {
837 provider: corosync_votequorum
838 }
839
840 totem {
841 cluster_name: testcluster
842 config_version: 4
843 ip_version: ipv4-6
844 secauth: on
845 version: 2
846 interface {
847 linknumber: 0
848 }
849 interface {
850 linknumber: 1
851 }
852 }
853 ----
854
855 The new link will be enabled as soon as you follow the last steps to
856 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
857 be necessary. You can check that corosync loaded the new link using:
858
859 ----
860 journalctl -b -u corosync
861 ----
862
863 It might be a good idea to test the new link by temporarily disconnecting the
864 old link on one node and making sure that its status remains online while
865 disconnected:
866
867 ----
868 pvecm status
869 ----
870
871 If you see a healthy cluster state, it means that your new link is being used.
872
873
874 Role of SSH in {PVE} Clusters
875 -----------------------------
876
877 {PVE} utilizes SSH tunnels for various features.
878
879 * Proxying console/shell sessions (node and guests)
880 +
When using the shell for node B while being connected to node A, the connection
goes to a terminal proxy on node A, which is in turn connected to the login
shell on node B via a non-interactive SSH tunnel.
884
885 * VM and CT memory and local-storage migration in 'secure' mode.
886 +
887 During the migration, one or more SSH tunnel(s) are established between the
888 source and target nodes, in order to exchange migration information and
889 transfer memory and disk contents.
890
891 * Storage replication
892
893 .Pitfalls due to automatic execution of `.bashrc` and siblings
894 [IMPORTANT]
895 ====
In case you have a custom `.bashrc`, or similar files that get executed on
login by the configured shell, `ssh` will automatically run it once the session
is established successfully. This can cause unexpected behavior, as those
commands may be executed with root permissions on any of the operations
described above, with possibly problematic side effects!
901
902 In order to avoid such complications, it's recommended to add a check in
903 `/root/.bashrc` to make sure the session is interactive, and only then run
904 `.bashrc` commands.
905
906 You can add this snippet at the beginning of your `.bashrc` file:
907
908 ----
909 # Early exit if not running interactively to avoid side-effects!
910 case $- in
911 *i*) ;;
912 *) return;;
913 esac
914 ----
915 ====
916
917
918 Corosync External Vote Support
919 ------------------------------
920
921 This section describes a way to deploy an external voter in a {pve} cluster.
922 When configured, the cluster can sustain more node failures without
923 violating safety properties of the cluster communication.
924
925 For this to work, there are two services involved:
926
927 * A QDevice daemon which runs on each {pve} node
928
929 * An external vote daemon which runs on an independent server
930
931 As a result, you can achieve higher availability, even in smaller setups (for
932 example 2+1 nodes).
933
934 QDevice Technical Overview
935 ~~~~~~~~~~~~~~~~~~~~~~~~~~
936
937 The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
938 node. It provides a configured number of votes to the cluster's quorum
939 subsystem, based on an externally running third-party arbitrator's decision.
940 Its primary use is to allow a cluster to sustain more node failures than
941 standard quorum rules allow. This can be done safely as the external device
942 can see all nodes and thus choose only one set of nodes to give its vote.
943 This will only be done if said set of nodes can have quorum (again) after
944 receiving the third-party vote.
945
946 Currently, only 'QDevice Net' is supported as a third-party arbitrator. This is
947 a daemon which provides a vote to a cluster partition, if it can reach the
948 partition members over the network. It will only give votes to one partition
949 of a cluster at any time.
950 It's designed to support multiple clusters and is almost configuration and
951 state free. New clusters are handled dynamically and no configuration file
952 is needed on the host running a QDevice.
953
The only requirements for the external host are network access to the cluster
and an available corosync-qnetd package. We provide a package for Debian-based
hosts; other Linux distributions should also have a package available through
their respective package manager.
958
959 NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
960 TCP/IP. The daemon may even run outside of the cluster's LAN and can have longer
961 latencies than 2 ms.
962
963 Supported Setups
964 ~~~~~~~~~~~~~~~~
965
We support QDevices for clusters with an even number of nodes and recommend
it for 2-node clusters, if higher availability is desired.
968 For clusters with an odd node count, we currently discourage the use of
969 QDevices. The reason for this is the difference in the votes which the QDevice
970 provides for each cluster type. Even numbered clusters get a single additional
971 vote, which only increases availability, because if the QDevice
972 itself fails, you are in the same position as with no QDevice at all.
973
974 On the other hand, with an odd numbered cluster size, the QDevice provides
975 '(N-1)' votes -- where 'N' corresponds to the cluster node count. This
976 alternative behavior makes sense; if it had only one additional vote, the
977 cluster could get into a split-brain situation. This algorithm allows for all
978 nodes but one (and naturally the QDevice itself) to fail. However, there are two
979 drawbacks to this:
980
981 * If the QNet daemon itself fails, no other node may fail or the cluster
982 immediately loses quorum. For example, in a cluster with 15 nodes, 7
983 could fail before the cluster becomes inquorate. But, if a QDevice is
984 configured here and it itself fails, **no single node** of the 15 may fail.
985 The QDevice acts almost as a single point of failure in this case.
986
987 * The fact that all but one node plus QDevice may fail sounds promising at
988 first, but this may result in a mass recovery of HA services, which could
989 overload the single remaining node. Furthermore, a Ceph server will stop
990 providing services if only '((N-1)/2)' nodes or less remain online.
991
992 If you understand the drawbacks and implications, you can decide yourself if
993 you want to use this technology in an odd numbered cluster setup.
994
995 QDevice-Net Setup
996 ~~~~~~~~~~~~~~~~~
997
998 We recommend running any daemon which provides votes to corosync-qdevice as an
999 unprivileged user. {pve} and Debian provide a package which is already
1000 configured to do so.
1001 The traffic between the daemon and the cluster must be encrypted to ensure a
1002 safe and secure integration of the QDevice in {pve}.
1003
1004 First, install the 'corosync-qnetd' package on your external server
1005
1006 ----
1007 external# apt install corosync-qnetd
1008 ----
1009
1010 and the 'corosync-qdevice' package on all cluster nodes
1011
1012 ----
1013 pve# apt install corosync-qdevice
1014 ----
1015
1016 After doing this, ensure that all the nodes in the cluster are online.
1017
1018 You can now set up your QDevice by running the following command on one
1019 of the {pve} nodes:
1020
1021 ----
1022 pve# pvecm qdevice setup <QDEVICE-IP>
1023 ----
1024
1025 The SSH key from the cluster will be automatically copied to the QDevice.
1026
1027 NOTE: Make sure that the SSH configuration on your external server allows root
1028 login via password, if you are asked for a password during this step.
1029
1030 After you enter the password and all the steps have successfully completed, you
1031 will see "Done". You can verify that the QDevice has been set up with:
1032
1033 ----
1034 pve# pvecm status
1035
1036 ...
1037
1038 Votequorum information
1039 ~~~~~~~~~~~~~~~~~~~~~
1040 Expected votes: 3
1041 Highest expected: 3
1042 Total votes: 3
1043 Quorum: 2
1044 Flags: Quorate Qdevice
1045
1046 Membership information
1047 ~~~~~~~~~~~~~~~~~~~~~~
1048 Nodeid Votes Qdevice Name
1049 0x00000001 1 A,V,NMW 192.168.22.180 (local)
1050 0x00000002 1 A,V,NMW 192.168.22.181
1051 0x00000000 1 Qdevice
1052
1053 ----
1054
1055
1056 Frequently Asked Questions
1057 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1058
1059 Tie Breaking
1060 ^^^^^^^^^^^^
1061
1062 In case of a tie, where two same-sized cluster partitions cannot see each other
1063 but can see the QDevice, the QDevice chooses one of those partitions randomly
1064 and provides a vote to it.
1065
1066 Possible Negative Implications
1067 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1068
1069 For clusters with an even node count, there are no negative implications when
1070 using a QDevice. If it fails to work, it is the same as not having a QDevice
1071 at all.
1072
1073 Adding/Deleting Nodes After QDevice Setup
1074 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1075
1076 If you want to add a new node or remove an existing one from a cluster with a
1077 QDevice setup, you need to remove the QDevice first. After that, you can add or
1078 remove nodes normally. Once you have a cluster with an even node count again,
1079 you can set up the QDevice again as described previously.
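
As a sketch, the sequence uses the commands described earlier in this chapter
(node name and QDevice address are placeholders):

----
# 1. remove the QDevice from the cluster
pve# pvecm qdevice remove
# 2. add or remove nodes as usual, for example:
pve# pvecm delnode NODENAME
# 3. with an even node count again, re-add the QDevice
pve# pvecm qdevice setup QDEVICE-IP
----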
1080
1081 Removing the QDevice
1082 ^^^^^^^^^^^^^^^^^^^^
1083
1084 If you used the official `pvecm` tool to add the QDevice, you can remove it
1085 by running:
1086
1087 ----
1088 pve# pvecm qdevice remove
1089 ----
1090
1091 //Still TODO
1092 //^^^^^^^^^^
1093 //There is still stuff to add here
1094
1095
1096 Corosync Configuration
1097 ----------------------
1098
1099 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
1100 controls the cluster membership and its network.
1101 For further information about it, check the corosync.conf man page:
1102 [source,bash]
1103 ----
1104 man corosync.conf
1105 ----
1106
1107 For node membership, you should always use the `pvecm` tool provided by {pve}.
1108 You may have to edit the configuration file manually for other changes.
1109 Here are a few best practice tips for doing this.
1110
1111 [[pvecm_edit_corosync_conf]]
1112 Edit corosync.conf
1113 ~~~~~~~~~~~~~~~~~~
1114
1115 Editing the corosync.conf file is not always very straightforward. There are
1116 two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
1117 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
1118 propagate the changes to the local one, but not vice versa.
1119
1120 The configuration will get updated automatically, as soon as the file changes.
1121 This means that changes which can be integrated in a running corosync will take
1122 effect immediately. Thus, you should always make a copy and edit that instead,
1123 to avoid triggering unintended changes when saving the file while editing.
1124
1125 [source,bash]
1126 ----
1127 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1128 ----
1129
1130 Then, open the config file with your favorite editor, such as `nano` or
1131 `vim.tiny`, which come pre-installed on every {pve} node.
1132
1133 NOTE: Always increment the 'config_version' number after configuration changes;
1134 omitting this can lead to problems.
1135
1136 After making the necessary changes, create another copy of the current working
1137 configuration file. This serves as a backup if the new configuration fails to
1138 apply or causes other issues.
1139
1140 [source,bash]
1141 ----
1142 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1143 ----
1144
1145 Then replace the old configuration file with the new one:
1146 [source,bash]
1147 ----
1148 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1149 ----
1150
1151 You can check if the changes could be applied automatically, using the following
1152 commands:
1153 [source,bash]
1154 ----
1155 systemctl status corosync
1156 journalctl -b -u corosync
1157 ----
1158
1159 If the changes could not be applied automatically, you may have to restart the
1160 corosync service via:
1161 [source,bash]
1162 ----
1163 systemctl restart corosync
1164 ----
1165
1166 On errors, check the troubleshooting section below.
1167
1168 Troubleshooting
1169 ~~~~~~~~~~~~~~~
1170
1171 Issue: 'quorum.expected_votes must be configured'
1172 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1173
1174 When corosync starts to fail and you get the following message in the system log:
1175
1176 ----
1177 [...]
1178 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1179 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1180 'configuration error: nodelist or quorum.expected_votes must be configured!'
1181 [...]
1182 ----
1183
1184 It means that the hostname you set for a corosync 'ringX_addr' in the
1185 configuration could not be resolved.
1186
1187 Write Configuration When Not Quorate
1188 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1189
1190 If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
1191 understand what you are doing, use:
1192 [source,bash]
1193 ----
1194 pvecm expected 1
1195 ----
1196
1197 This sets the expected vote count to 1 and makes the cluster quorate. You can
then fix your configuration, or revert it to the last working backup.
1199
1200 This is not enough if corosync cannot start anymore. In that case, it is best to
1201 edit the local copy of the corosync configuration in
1202 '/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on
1203 all nodes, this configuration has the same content to avoid split-brain
1204 situations.
1205
1206
1207 [[pvecm_corosync_conf_glossary]]
1208 Corosync Configuration Glossary
1209 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1210
1211 ringX_addr::
1212 This names the different link addresses for the Kronosnet connections between
1213 nodes.
1214
1215
1216 Cluster Cold Start
1217 ------------------
1218
1219 It is obvious that a cluster is not quorate when all nodes are
1220 offline. This is a common case after a power failure.
1221
1222 NOTE: It is always a good idea to use an uninterruptible power supply
1223 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1224 you want HA.
1225
1226 On node startup, the `pve-guests` service is started and waits for
1227 quorum. Once quorate, it starts all guests which have the `onboot`
1228 flag set.
1229
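For example, to mark a guest for automatic start after boot (the guest IDs are
placeholders):

[source,bash]
----
qm set 100 --onboot 1    # virtual machine
pct set 101 --onboot 1   # container
----
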
1230 When you turn on nodes, or when power comes back after power failure,
1231 it is likely that some nodes will boot faster than others. Please keep in
1232 mind that guest startup is delayed until you reach quorum.
1233
1234
1235 Guest Migration
1236 ---------------
1237
1238 Migrating virtual guests to other nodes is a useful feature in a
1239 cluster. There are settings to control the behavior of such
1240 migrations. This can be done via the configuration file
1241 `datacenter.cfg` or for a specific migration via API or command line
1242 parameters.
1243
1244 It makes a difference if a guest is online or offline, or if it has
1245 local resources (like a local disk).
1246
1247 For details about virtual machine migration, see the
1248 xref:qm_migration[QEMU/KVM Migration Chapter].
1249
1250 For details about container migration, see the
1251 xref:pct_migration[Container Migration Chapter].
1252
1253 Migration Type
1254 ~~~~~~~~~~~~~~
1255
1256 The migration type defines if the migration data should be sent over an
1257 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
1258 Setting the migration type to insecure means that the RAM content of a
1259 virtual guest is also transferred unencrypted, which can lead to
1260 information disclosure of critical data from inside the guest (for
1261 example, passwords or encryption keys).
1262
1263 Therefore, we strongly recommend using the secure channel if you do
1264 not have full control over the network and can not guarantee that no
1265 one is eavesdropping on it.
1266
1267 NOTE: Storage migration does not follow this setting. Currently, it
1268 always sends the storage content over a secure channel.
1269
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
1272 modern systems is lower because they implement AES encryption in
1273 hardware. The performance impact is particularly evident in fast
1274 networks, where you can transfer 10 Gbps or more.
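
For a single migration, the type can also be set explicitly on the command line
(the VM ID and target node are placeholders):

[source,bash]
----
qm migrate 100 targetnode --online --migration_type insecure
----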
1275
1276 Migration Network
1277 ~~~~~~~~~~~~~~~~~
1278
1279 By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal, both because
sensitive cluster traffic can be disrupted and because this network may not
1282 have the best bandwidth available on the node.
1283
1284 Setting the migration network parameter allows the use of a dedicated
1285 network for all migration traffic. In addition to the memory,
1286 this also affects the storage traffic for offline migrations.
1287
1288 The migration network is set as a network using CIDR notation. This
1289 has the advantage that you don't have to set individual IP addresses
1290 for each node. {pve} can determine the real address on the
1291 destination node from the network specified in the CIDR form. To
1292 enable this, the network must be specified so that each node has exactly one
1293 IP in the respective network.
1294
1295 Example
1296 ^^^^^^^
1297
1298 We assume that we have a three-node setup, with three separate
1299 networks. One for public communication with the Internet, one for
1300 cluster communication, and a very fast one, which we want to use as a
1301 dedicated network for migration.
1302
1303 A network configuration for such a setup might look as follows:
1304
1305 ----
1306 iface eno1 inet manual
1307
1308 # public network
1309 auto vmbr0
1310 iface vmbr0 inet static
1311 address 192.X.Y.57
        netmask 255.255.255.0
1313 gateway 192.X.Y.1
1314 bridge-ports eno1
1315 bridge-stp off
1316 bridge-fd 0
1317
1318 # cluster network
1319 auto eno2
1320 iface eno2 inet static
1321 address 10.1.1.1
1322 netmask 255.255.255.0
1323
1324 # fast network
1325 auto eno3
1326 iface eno3 inet static
1327 address 10.1.2.1
1328 netmask 255.255.255.0
1329 ----
1330
1331 Here, we will use the network 10.1.2.0/24 as a migration network. For
1332 a single migration, you can do this using the `migration_network`
1333 parameter of the command line tool:
1334
1335 ----
1336 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1337 ----
1338
1339 To configure this as the default network for all migrations in the
1340 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1341 file:
1342
1343 ----
1344 # use dedicated migration network
1345 migration: secure,network=10.1.2.0/24
1346 ----
1347
1348 NOTE: The migration type must always be set when the migration network
1349 is set in `/etc/pve/datacenter.cfg`.
1350
1351
1352 ifdef::manvolnum[]
1353 include::pve-copyright.adoc[]
1354 endif::manvolnum[]