1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {pve} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication. There's no explicit limit for the number of nodes in a cluster.
31 In practice, the actual possible node count may be limited by the host and
32 network performance. Currently (2021), there are reports of clusters (using
33 high-end enterprise hardware) with over 50 nodes in production.
34
35 `pvecm` can be used to create a new cluster, join nodes to a cluster,
36 leave the cluster, get status information, and do various other cluster-related
37 tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
38 is used to transparently distribute the cluster configuration to all cluster
39 nodes.
40
41 Grouping nodes into a cluster has the following advantages:
42
43 * Centralized, web-based management
44
45 * Multi-master clusters: each node can do all management tasks
46
47 * Use of `pmxcfs`, a database-driven file system, for storing configuration
48 files, replicated in real-time on all nodes using `corosync`
49
50 * Easy migration of virtual machines and containers between physical
51 hosts
52
53 * Fast deployment
54
55 * Cluster-wide services like firewall and HA
56
57
58 Requirements
59 ------------
60
61 * All nodes must be able to connect to each other via UDP ports 5404 and 5405
62 for corosync to work.
63
64 * Date and time must be synchronized (a quick check is shown below this list).
65
66 * An SSH tunnel on TCP port 22 between nodes is required.
67
68 * If you are interested in High Availability, you need to have at
69 least three nodes for reliable quorum. All nodes should have the
70 same version.
71
72 * We recommend a dedicated NIC for the cluster traffic, especially if
73 you use shared storage.
74
75 * The root password of a cluster node is required for adding nodes.
76
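For a quick sanity check of the time synchronization requirement, you can, for
example, inspect the systemd time status (a minimal sketch; the exact output
depends on the time service in use):

[source,bash]
----
# look for "System clock synchronized: yes" in the output
timedatectl status
----
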
77 NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
78 nodes.
79
80 NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
81 not supported as a production configuration and should only be done temporarily,
82 during an upgrade of the whole cluster from one major version to another.
83
84 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
85 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
86 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
87 upgrade procedure to {pve} 6.0.
88
89
90 Preparing Nodes
91 ---------------
92
93 First, install {pve} on all nodes. Make sure that each node is
94 installed with the final hostname and IP configuration. Changing the
95 hostname and IP is not possible after cluster creation.
96
97 While it's common to reference all node names and their IPs in `/etc/hosts` (or
98 make their names resolvable through other means), this is not necessary for a
99 cluster to work. It may be useful however, as you can then connect from one node
100 to another via SSH, using the easier to remember node name (see also
101 xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
102 recommend referencing nodes by their IP addresses in the cluster configuration.
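
If you do add entries to `/etc/hosts`, a minimal sketch could look like the
following (the host names and addresses are placeholders; adjust them to your
setup):

----
192.168.15.91 hp1.example.org hp1
192.168.15.92 hp2.example.org hp2
192.168.15.93 hp3.example.org hp3
----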
103
104
105 [[pvecm_create_cluster]]
106 Create a Cluster
107 ----------------
108
109 You can either create a cluster on the console (login via `ssh`), or through
110 the API using the {pve} web interface (__Datacenter -> Cluster__).
111
112 NOTE: Use a unique name for your cluster. This name cannot be changed later.
113 The cluster name follows the same rules as node names.
114
115 [[pvecm_cluster_create_via_gui]]
116 Create via Web GUI
117 ~~~~~~~~~~~~~~~~~~
118
119 [thumbnail="screenshot/gui-cluster-create.png"]
120
121 Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
122 name and select a network connection from the drop-down list to serve as the
123 main cluster network (Link 0). It defaults to the IP resolved via the node's
124 hostname.
125
126 As of {pve} 6.2, up to 8 fallback links can be added to a cluster. To add a
127 redundant link, click the 'Add' button and select a link number and IP address
128 from the respective fields. Prior to {pve} 6.2, to add a second link as
129 fallback, you can select the 'Advanced' checkbox and choose an additional
130 network interface (Link 1, see also xref:pvecm_redundancy[Corosync Redundancy]).
131
132 NOTE: Ensure that the network selected for cluster communication is not used for
133 any high traffic purposes, like network storage or live-migration.
134 While the cluster network itself produces small amounts of data, it is very
135 sensitive to latency. Check out full
136 xref:pvecm_cluster_network_requirements[cluster network requirements].
137
138 [[pvecm_cluster_create_via_cli]]
139 Create via the Command Line
140 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
141
142 Log in via `ssh` to the first {pve} node and run the following command:
143
144 ----
145 hp1# pvecm create CLUSTERNAME
146 ----
147
148 To check the state of the new cluster use:
149
150 ----
151 hp1# pvecm status
152 ----
153
154 Multiple Clusters in the Same Network
155 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
156
157 It is possible to create multiple clusters in the same physical or logical
158 network. In this case, each cluster must have a unique name to avoid possible
159 clashes in the cluster communication stack. Furthermore, this helps avoid human
160 confusion by making clusters clearly distinguishable.
161
162 While the bandwidth requirement of a corosync cluster is relatively low, the
163 latency of packets and the packet per second (PPS) rate is the limiting
164 factor. Different clusters in the same network can compete with each other for
165 these resources, so it may still make sense to use separate physical network
166 infrastructure for bigger clusters.
167
168 [[pvecm_join_node_to_cluster]]
169 Adding Nodes to the Cluster
170 ---------------------------
171
172 CAUTION: A node that is about to be added to the cluster cannot hold any guests.
173 All existing configuration in `/etc/pve` is overwritten when joining a cluster,
174 since guest IDs could otherwise conflict. As a workaround, you can create a
175 backup of the guest (`vzdump`) and restore it under a different ID, after the
176 node has been added to the cluster.
177
178 Join Node to Cluster via GUI
179 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
180
181 [thumbnail="screenshot/gui-cluster-join-information.png"]
182
183 Log in to the web interface on an existing cluster node. Under __Datacenter ->
184 Cluster__, click the *Join Information* button at the top. Then, click on the
185 button *Copy Information*. Alternatively, copy the string from the 'Information'
186 field manually.
187
188 [thumbnail="screenshot/gui-cluster-join.png"]
189
190 Next, log in to the web interface on the node you want to add.
191 Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
192 'Information' field with the 'Join Information' text you copied earlier.
193 Most settings required for joining the cluster will be filled out
194 automatically. For security reasons, the cluster password has to be entered
195 manually.
196
197 NOTE: To enter all required data manually, you can disable the 'Assisted Join'
198 checkbox.
199
200 After clicking the *Join* button, the cluster join process will start
201 immediately. After the node has joined the cluster, its current node certificate
202 will be replaced by one signed by the cluster certificate authority (CA).
203 This means that the current session will stop working after a few seconds. You
204 then might need to force-reload the web interface and log in again with the
205 cluster credentials.
206
207 Now your node should be visible under __Datacenter -> Cluster__.
208
209 Join Node to Cluster via Command Line
210 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
211
212 Log in to the node you want to join into an existing cluster via `ssh`.
213
214 ----
215 # pvecm add IP-ADDRESS-CLUSTER
216 ----
217
218 For `IP-ADDRESS-CLUSTER`, use the IP or hostname of an existing cluster node.
219 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
220
221
222 To check the state of the cluster use:
223
224 ----
225 # pvecm status
226 ----
227
228 .Cluster status after adding 4 nodes
229 ----
230 # pvecm status
231 Cluster information
232 ~~~~~~~~~~~~~~~~~~~
233 Name: prod-central
234 Config Version: 3
235 Transport: knet
236 Secure auth: on
237
238 Quorum information
239 ~~~~~~~~~~~~~~~~~~
240 Date: Tue Sep 14 11:06:47 2021
241 Quorum provider: corosync_votequorum
242 Nodes: 4
243 Node ID: 0x00000001
244 Ring ID: 1.1a8
245 Quorate: Yes
246
247 Votequorum information
248 ~~~~~~~~~~~~~~~~~~~~~~
249 Expected votes: 4
250 Highest expected: 4
251 Total votes: 4
252 Quorum: 3
253 Flags: Quorate
254
255 Membership information
256 ~~~~~~~~~~~~~~~~~~~~~~
257 Nodeid Votes Name
258 0x00000001 1 192.168.15.91
259 0x00000002 1 192.168.15.92 (local)
260 0x00000003 1 192.168.15.93
261 0x00000004 1 192.168.15.94
262 ----
263
264 If you only want a list of all nodes, use:
265
266 ----
267 # pvecm nodes
268 ----
269
270 .List nodes in a cluster
271 ----
272 # pvecm nodes
273
274 Membership information
275 ~~~~~~~~~~~~~~~~~~~~~~
276 Nodeid Votes Name
277 1 1 hp1
278 2 1 hp2 (local)
279 3 1 hp3
280 4 1 hp4
281 ----
282
283 [[pvecm_adding_nodes_with_separated_cluster_network]]
284 Adding Nodes with Separated Cluster Network
285 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
286
287 When adding a node to a cluster with a separated cluster network, you need to
288 use the 'link0' parameter to set the node's address on that network:
289
290 [source,bash]
291 ----
292 pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
293 ----
294
295 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
296 Kronosnet transport layer, also use the 'link1' parameter.
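
For example, assuming the separate cluster network is 10.10.10.0/24 and a
second, redundant network is 10.20.20.0/24 (all addresses below are
placeholders), the join command could look like this:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -link0 10.10.10.2 -link1 10.20.20.2
----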
297
298 Using the GUI, you can select the correct interface from the corresponding
299 'Link X' fields in the *Cluster Join* dialog.
300
301 Remove a Cluster Node
302 ---------------------
303
304 CAUTION: Read the procedure carefully before proceeding, as it may
305 not be what you want or need.
306
307 Move all virtual machines from the node. Make sure you have made copies of any
308 local data or backups that you want to keep. In the following example, we will
309 remove the node hp4 from the cluster.
310
311 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
312 command to identify the node ID to remove:
313
314 ----
315 hp1# pvecm nodes
316
317 Membership information
318 ~~~~~~~~~~~~~~~~~~~~~~
319 Nodeid Votes Name
320 1 1 hp1 (local)
321 2 1 hp2
322 3 1 hp3
323 4 1 hp4
324 ----
325
326
327 At this point, you must power off hp4 and ensure that it will not power on
328 again (in the network) with its current configuration.
329
330 IMPORTANT: As mentioned above, it is critical to power off the node
331 *before* removal, and make sure that it will *not* power on again
332 (in the existing cluster network) with its current configuration.
333 If you power on the node as it is, the cluster could end up broken,
334 and it could be difficult to restore it to a functioning state.
335
336 After powering off the node hp4, we can safely remove it from the cluster.
337
338 ----
339 hp1# pvecm delnode hp4
340 Killing node 4
341 ----
342
343 NOTE: At this point, it is possible that you will receive an error message
344 stating `Could not kill node (error = CS_ERR_NOT_EXIST)`. This does not
345 signify an actual failure in the deletion of the node, but rather a failure in
346 corosync trying to kill an offline node. Thus, it can be safely ignored.
347
348 Use `pvecm nodes` or `pvecm status` to check the node list again. It should
349 look something like:
350
351 ----
352 hp1# pvecm status
353
354 ...
355
356 Votequorum information
357 ~~~~~~~~~~~~~~~~~~~~~~
358 Expected votes: 3
359 Highest expected: 3
360 Total votes: 3
361 Quorum: 2
362 Flags: Quorate
363
364 Membership information
365 ~~~~~~~~~~~~~~~~~~~~~~
366 Nodeid Votes Name
367 0x00000001 1 192.168.15.90 (local)
368 0x00000002 1 192.168.15.91
369 0x00000003 1 192.168.15.92
370 ----
371
372 If, for whatever reason, you want this server to join the same cluster again,
373 you have to:
374
375 * do a fresh install of {pve} on it,
376
377 * then join it, as explained in the previous section.
378
379 NOTE: After removal of the node, its SSH fingerprint will still reside in the
380 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
381 a node with the same IP or hostname, run `pvecm updatecerts` once on the
382 re-added node to update its fingerprint cluster wide.
383
384 [[pvecm_separate_node_without_reinstall]]
385 Separate a Node Without Reinstalling
386 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
387
388 CAUTION: This is *not* the recommended method, proceed with caution. Use the
389 previous method if you're unsure.
390
391 You can also separate a node from a cluster without reinstalling it from
392 scratch. But after removing the node from the cluster, it will still have
393 access to any shared storage. This must be resolved before you start removing
394 the node from the cluster. A {pve} cluster cannot share the exact same
395 storage with another cluster, as storage locking doesn't work over the cluster
396 boundary. Furthermore, it may also lead to VMID conflicts.
397
398 It's suggested that you create a new storage, where only the node which you want
399 to separate has access. This can be a new export on your NFS or a new Ceph
400 pool, to name a few examples. It's just important that the exact same storage
401 does not get accessed by multiple clusters. After setting up this storage, move
402 all data and VMs from the node to it. Then you are ready to separate the
403 node from the cluster.
404
405 WARNING: Ensure that all shared resources are cleanly separated! Otherwise you
406 will run into conflicts and problems.
407
408 First, stop the corosync and pve-cluster services on the node:
409 [source,bash]
410 ----
411 systemctl stop pve-cluster
412 systemctl stop corosync
413 ----
414
415 Start the cluster file system again in local mode:
416 [source,bash]
417 ----
418 pmxcfs -l
419 ----
420
421 Delete the corosync configuration files:
422 [source,bash]
423 ----
424 rm /etc/pve/corosync.conf
425 rm -r /etc/corosync/*
426 ----
427
428 You can now start the file system again as a normal service:
429 [source,bash]
430 ----
431 killall pmxcfs
432 systemctl start pve-cluster
433 ----
434
435 The node is now separated from the cluster. You can delete it from any
436 remaining node of the cluster with:
437 [source,bash]
438 ----
439 pvecm delnode oldnode
440 ----
441
442 If the command fails due to a loss of quorum in the remaining node, you can set
443 the expected votes to 1 as a workaround:
444 [source,bash]
445 ----
446 pvecm expected 1
447 ----
448
449 And then repeat the 'pvecm delnode' command.
450
451 Now switch back to the separated node and delete all the remaining cluster
452 files on it. This ensures that the node can be added to another cluster again
453 without problems.
454
455 [source,bash]
456 ----
457 rm /var/lib/corosync/*
458 ----
459
460 As the configuration files from the other nodes are still in the cluster
461 file system, you may want to clean those up too. After making absolutely sure
462 that you have the correct node name, you can simply remove the
463 '/etc/pve/nodes/NODENAME' directory recursively.
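
For example, assuming the separated node was called 'oldnode' (double-check the
name before running this, as the deletion cannot be undone):

[source,bash]
----
rm -r /etc/pve/nodes/oldnode
----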
464
465 CAUTION: The node's SSH keys will remain in the 'authorized_keys' file. This
466 means that the nodes can still connect to each other with public key
467 authentication. You should fix this by removing the respective keys from the
468 '/etc/pve/priv/authorized_keys' file.
469
470
471 Quorum
472 ------
473
474 {pve} uses a quorum-based technique to provide a consistent state among
475 all cluster nodes.
476
477 [quote, from Wikipedia, Quorum (distributed computing)]
478 ____
479 A quorum is the minimum number of votes that a distributed transaction
480 has to obtain in order to be allowed to perform an operation in a
481 distributed system.
482 ____
483
484 In case of network partitioning, state changes require that a
485 majority of nodes are online. The cluster switches to read-only mode
486 if it loses quorum.
487
488 NOTE: {pve} assigns a single vote to each node by default.
489
490
491 Cluster Network
492 ---------------
493
494 The cluster network is the core of a cluster. All messages sent over it have to
495 be delivered reliably to all nodes in their respective order. In {pve} this
496 part is done by corosync, a high performance, low overhead, high availability
497 cluster communication toolkit. It serves our decentralized configuration
498 file system (`pmxcfs`).
499
500 [[pvecm_cluster_network_requirements]]
501 Network Requirements
502 ~~~~~~~~~~~~~~~~~~~~
503 This needs a reliable network with latencies under 2 milliseconds (LAN
504 performance) to work properly. The network should not be used heavily by other
505 members; ideally corosync runs on its own network. Do not use a shared network
506 for corosync and storage (except as a potential low-priority fallback in a
507 xref:pvecm_redundancy[redundant] configuration).
508
509 Before setting up a cluster, it is good practice to check if the network is fit
510 for that purpose. To ensure that the nodes can connect to each other on the
511 cluster network, you can test the connectivity between them with the `ping`
512 tool.
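
For example, to get a rough idea of the latency between two nodes (the address
below is a placeholder), you could run:

[source,bash]
----
ping -c 10 -q 10.10.10.2
----

The reported average round-trip time should stay well below 2 milliseconds.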
513
514 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
515 be generated - no manual action is required.
516
517 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
518 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
519 communication, which, for now, only supports regular UDP unicast.
520
521 CAUTION: You can still enable Multicast or legacy unicast by setting your
522 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
523 but keep in mind that this will disable all cryptography and redundancy support.
524 This is therefore not recommended.
525
526 Separate Cluster Network
527 ~~~~~~~~~~~~~~~~~~~~~~~~
528
529 When creating a cluster without any parameters, the corosync cluster network is
530 generally shared with the web interface and the VMs' network. Depending on
531 your setup, even storage traffic may get sent over the same network. It's
532 recommended to change that, as corosync is a time-critical, real-time
533 application.
534
535 Setting Up a New Network
536 ^^^^^^^^^^^^^^^^^^^^^^^^
537
538 First, you have to set up a new network interface. It should be on a physically
539 separate network. Ensure that your network fulfills the
540 xref:pvecm_cluster_network_requirements[cluster network requirements].
541
542 Separate On Cluster Creation
543 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
544
545 This is possible via the 'linkX' parameters of the 'pvecm create'
546 command, used for creating a new cluster.
547
548 If you have set up an additional NIC with a static address on 10.10.10.1/25,
549 and want to send and receive all cluster communication over this interface,
550 you would execute:
551
552 [source,bash]
553 ----
554 pvecm create test --link0 10.10.10.1
555 ----
556
557 To check if everything is working properly, execute:
558 [source,bash]
559 ----
560 systemctl status corosync
561 ----
562
563 Afterwards, proceed as described above to
564 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
565
566 [[pvecm_separate_cluster_net_after_creation]]
567 Separate After Cluster Creation
568 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
569
570 You can do this if you have already created a cluster and want to switch
571 its communication to another network, without rebuilding the whole cluster.
572 This change may lead to short periods of quorum loss in the cluster, as nodes
573 have to restart corosync and come up one after the other on the new network.
574
575 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
576 Then, open it and you should see a file similar to:
577
578 ----
579 logging {
580 debug: off
581 to_syslog: yes
582 }
583
584 nodelist {
585
586 node {
587 name: due
588 nodeid: 2
589 quorum_votes: 1
590 ring0_addr: due
591 }
592
593 node {
594 name: tre
595 nodeid: 3
596 quorum_votes: 1
597 ring0_addr: tre
598 }
599
600 node {
601 name: uno
602 nodeid: 1
603 quorum_votes: 1
604 ring0_addr: uno
605 }
606
607 }
608
609 quorum {
610 provider: corosync_votequorum
611 }
612
613 totem {
614 cluster_name: testcluster
615 config_version: 3
616 ip_version: ipv4-6
617 secauth: on
618 version: 2
619 interface {
620 linknumber: 0
621 }
622
623 }
624 ----
625
626 NOTE: `ringX_addr` actually specifies a corosync *link address*. The name "ring"
627 is a remnant of older corosync versions that is kept for backwards
628 compatibility.
629
630 The first thing you want to do is add the 'name' properties in the node entries,
631 if you do not see them already. Those *must* match the node name.
632
633 Then replace all addresses from the 'ring0_addr' properties of all nodes with
634 the new addresses. You may use plain IP addresses or hostnames here. If you use
635 hostnames, ensure that they are resolvable from all nodes (see also
636 xref:pvecm_corosync_addresses[Link Address Types]).
637
638 In this example, we want to switch cluster communication to the
639 10.10.10.1/25 network, so we change the 'ring0_addr' of each node respectively.
640
641 NOTE: The exact same procedure can be used to change other 'ringX_addr' values
642 as well. However, we recommend only changing one link address at a time, so
643 that it's easier to recover if something goes wrong.
644
645 After we increase the 'config_version' property, the new configuration file
646 should look like:
647
648 ----
649 logging {
650 debug: off
651 to_syslog: yes
652 }
653
654 nodelist {
655
656 node {
657 name: due
658 nodeid: 2
659 quorum_votes: 1
660 ring0_addr: 10.10.10.2
661 }
662
663 node {
664 name: tre
665 nodeid: 3
666 quorum_votes: 1
667 ring0_addr: 10.10.10.3
668 }
669
670 node {
671 name: uno
672 nodeid: 1
673 quorum_votes: 1
674 ring0_addr: 10.10.10.1
675 }
676
677 }
678
679 quorum {
680 provider: corosync_votequorum
681 }
682
683 totem {
684 cluster_name: testcluster
685 config_version: 4
686 ip_version: ipv4-6
687 secauth: on
688 version: 2
689 interface {
690 linknumber: 0
691 }
692
693 }
694 ----
695
696 Then, after a final check to see that all changed information is correct, we
697 save it and once again follow the
698 xref:pvecm_edit_corosync_conf[edit corosync.conf file] section to bring it into
699 effect.
700
701 The changes will be applied live, so restarting corosync is not strictly
702 necessary. If you changed other settings as well, or notice corosync
703 complaining, you can optionally trigger a restart.
704
705 On a single node execute:
706
707 [source,bash]
708 ----
709 systemctl restart corosync
710 ----
711
712 Now check if everything is okay:
713
714 [source,bash]
715 ----
716 systemctl status corosync
717 ----
718
719 If corosync begins to work again, restart it on all other nodes too.
720 They will then join the cluster membership one by one on the new network.
721
722 [[pvecm_corosync_addresses]]
723 Corosync Addresses
724 ~~~~~~~~~~~~~~~~~~
725
726 A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
727 `corosync.conf`) can be specified in two ways:
728
729 * **IPv4/v6 addresses** can be used directly. They are recommended, since they
730 are static and usually not changed carelessly.
731
732 * **Hostnames** will be resolved using `getaddrinfo`, which means that by
733 default, IPv6 addresses will be used first, if available (see also
734 `man gai.conf`). Keep this in mind, especially when upgrading an existing
735 cluster to IPv6.
736
737 CAUTION: Hostnames should be used with care, since the addresses they
738 resolve to can be changed without touching corosync or the node it runs on -
739 which may lead to a situation where an address is changed without thinking
740 about implications for corosync.
741
742 A separate, static hostname specifically for corosync is recommended, if
743 hostnames are preferred. Also, make sure that every node in the cluster can
744 resolve all hostnames correctly.
745
746 Since {pve} 5.1, hostnames, while still supported, are resolved at the time of
747 entry. Only the resolved IP is saved to the configuration.
748
749 Nodes that joined the cluster on earlier versions likely still use their
750 unresolved hostname in `corosync.conf`. It might be a good idea to replace
751 them with IPs or a separate hostname, as mentioned above.
752
753
754 [[pvecm_redundancy]]
755 Corosync Redundancy
756 -------------------
757
758 Corosync supports redundant networking via its integrated Kronosnet layer by
759 default (it is not supported on the legacy udp/udpu transports). It can be
760 enabled by specifying more than one link address, either via the '--linkX'
761 parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
762 adding a new node) or by specifying more than one 'ringX_addr' in
763 `corosync.conf`.
764
765 NOTE: To provide useful failover, every link should be on its own
766 physical network connection.
767
768 Links are used according to a priority setting. You can configure this priority
769 by setting 'knet_link_priority' in the corresponding interface section in
770 `corosync.conf`, or, preferably, using the 'priority' parameter when creating
771 your cluster with `pvecm`:
772
773 ----
774 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
775 ----
776
777 This would cause 'link1' to be used first, since it has the higher priority.
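
As a sketch, the same priorities could also be set via 'knet_link_priority' in
the `totem` section of xref:pvecm_edit_corosync_conf[corosync.conf] (assuming
both links are already configured; other `totem` options are omitted here):

----
totem {
  interface {
    linknumber: 0
    knet_link_priority: 15
  }
  interface {
    linknumber: 1
    knet_link_priority: 20
  }
}
----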
778
779 If no priorities are configured manually (or two links have the same priority),
780 links will be used in order of their number, with the lower number having higher
781 priority.
782
783 Even if all links are working, only the one with the highest priority will see
784 corosync traffic. Link priorities cannot be mixed, meaning that links with
785 different priorities will not be able to communicate with each other.
786
787 Since lower priority links will not see traffic unless all higher priorities
788 have failed, it becomes a useful strategy to specify networks used for
789 other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
790 worst, a higher latency or more congested connection might be better than no
791 connection at all.
792
793 Adding Redundant Links To An Existing Cluster
794 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
795
796 To add a new link to a running configuration, first check how to
797 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
798
799 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
800 sure that your 'X' is the same for every node you add it to, and that it is
801 unique for each node.
802
803 Lastly, add a new 'interface', as shown below, to your `totem`
804 section, replacing 'X' with the link number chosen above.
805
806 Assuming you added a link with number 1, the new configuration file could look
807 like this:
808
809 ----
810 logging {
811 debug: off
812 to_syslog: yes
813 }
814
815 nodelist {
816
817 node {
818 name: due
819 nodeid: 2
820 quorum_votes: 1
821 ring0_addr: 10.10.10.2
822 ring1_addr: 10.20.20.2
823 }
824
825 node {
826 name: tre
827 nodeid: 3
828 quorum_votes: 1
829 ring0_addr: 10.10.10.3
830 ring1_addr: 10.20.20.3
831 }
832
833 node {
834 name: uno
835 nodeid: 1
836 quorum_votes: 1
837 ring0_addr: 10.10.10.1
838 ring1_addr: 10.20.20.1
839 }
840
841 }
842
843 quorum {
844 provider: corosync_votequorum
845 }
846
847 totem {
848 cluster_name: testcluster
849 config_version: 4
850 ip_version: ipv4-6
851 secauth: on
852 version: 2
853 interface {
854 linknumber: 0
855 }
856 interface {
857 linknumber: 1
858 }
859 }
860 ----
861
862 The new link will be enabled as soon as you follow the last steps to
863 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
864 be necessary. You can check that corosync loaded the new link using:
865
866 ----
867 journalctl -b -u corosync
868 ----
869
870 It might be a good idea to test the new link by temporarily disconnecting the
871 old link on one node and making sure that its status remains online while
872 disconnected:
873
874 ----
875 pvecm status
876 ----
877
878 If you see a healthy cluster state, it means that your new link is being used.
879
880
881 Role of SSH in {pve} Clusters
882 -----------------------------
883
884 {pve} utilizes SSH tunnels for various features.
885
886 * Proxying console/shell sessions (node and guests)
887 +
888 When using the shell for node B while being connected to node A, the connection
889 goes to a terminal proxy on node A, which is in turn connected to the login
890 shell on node B via a non-interactive SSH tunnel.
891
892 * VM and CT memory and local-storage migration in 'secure' mode.
893 +
894 During the migration, one or more SSH tunnel(s) are established between the
895 source and target nodes, in order to exchange migration information and
896 transfer memory and disk contents.
897
898 * Storage replication
899
900 .Pitfalls due to automatic execution of `.bashrc` and siblings
901 [IMPORTANT]
902 ====
903 In case you have a custom `.bashrc`, or similar files that get executed on
904 login by the configured shell, `ssh` will automatically run it once the session
905 is established successfully. This can cause some unexpected behavior, as those
906 commands may be executed with root permissions on any of the operations
907 described above. This can cause possible problematic side-effects!
908
909 In order to avoid such complications, it's recommended to add a check in
910 `/root/.bashrc` to make sure the session is interactive, and only then run
911 `.bashrc` commands.
912
913 You can add this snippet at the beginning of your `.bashrc` file:
914
915 ----
916 # Early exit if not running interactively to avoid side-effects!
917 case $- in
918 *i*) ;;
919 *) return;;
920 esac
921 ----
922 ====
923
924
925 Corosync External Vote Support
926 ------------------------------
927
928 This section describes a way to deploy an external voter in a {pve} cluster.
929 When configured, the cluster can sustain more node failures without
930 violating safety properties of the cluster communication.
931
932 For this to work, there are two services involved:
933
934 * A QDevice daemon which runs on each {pve} node
935
936 * An external vote daemon which runs on an independent server
937
938 As a result, you can achieve higher availability, even in smaller setups (for
939 example 2+1 nodes).
940
941 QDevice Technical Overview
942 ~~~~~~~~~~~~~~~~~~~~~~~~~~
943
944 The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
945 node. It provides a configured number of votes to the cluster's quorum
946 subsystem, based on an externally running third-party arbitrator's decision.
947 Its primary use is to allow a cluster to sustain more node failures than
948 standard quorum rules allow. This can be done safely as the external device
949 can see all nodes and thus choose only one set of nodes to give its vote.
950 This will only be done if said set of nodes can have quorum (again) after
951 receiving the third-party vote.
952
953 Currently, only 'QDevice Net' is supported as a third-party arbitrator. This is
954 a daemon which provides a vote to a cluster partition, if it can reach the
955 partition members over the network. It will only give votes to one partition
956 of a cluster at any time.
957 It's designed to support multiple clusters and is almost configuration and
958 state free. New clusters are handled dynamically and no configuration file
959 is needed on the host running a QDevice.
960
961 The only requirements for the external host are that it has network access to
962 the cluster and a corosync-qnetd package available. We provide a package
963 for Debian based hosts, and other Linux distributions should also have a package
964 available through their respective package manager.
965
966 NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
967 TCP/IP. The daemon may even run outside of the cluster's LAN and can have longer
968 latencies than 2 ms.
969
970 Supported Setups
971 ~~~~~~~~~~~~~~~~
972
973 We support QDevices for clusters with an even number of nodes and recommend
974 them for 2 node clusters, if they need to provide higher availability.
975 For clusters with an odd node count, we currently discourage the use of
976 QDevices. The reason for this is the difference in the votes which the QDevice
977 provides for each cluster type. Even numbered clusters get a single additional
978 vote, which only increases availability, because if the QDevice
979 itself fails, you are in the same position as with no QDevice at all.
980
981 On the other hand, with an odd numbered cluster size, the QDevice provides
982 '(N-1)' votes -- where 'N' corresponds to the cluster node count. This
983 alternative behavior makes sense; if it had only one additional vote, the
984 cluster could get into a split-brain situation. This algorithm allows for all
985 nodes but one (and naturally the QDevice itself) to fail. However, there are two
986 drawbacks to this:
987
988 * If the QNet daemon itself fails, no other node may fail or the cluster
989 immediately loses quorum. For example, in a cluster with 15 nodes, 7
990 could fail before the cluster becomes inquorate. But, if a QDevice is
991 configured here and it itself fails, **no single node** of the 15 may fail.
992 The QDevice acts almost as a single point of failure in this case.
993
994 * The fact that all but one node plus QDevice may fail sounds promising at
995 first, but this may result in a mass recovery of HA services, which could
996 overload the single remaining node. Furthermore, a Ceph server will stop
997 providing services if only '((N-1)/2)' nodes or fewer remain online.
998
999 If you understand the drawbacks and implications, you can decide yourself if
1000 you want to use this technology in an odd numbered cluster setup.
1001
1002 QDevice-Net Setup
1003 ~~~~~~~~~~~~~~~~~
1004
1005 We recommend running any daemon which provides votes to corosync-qdevice as an
1006 unprivileged user. {pve} and Debian provide a package which is already
1007 configured to do so.
1008 The traffic between the daemon and the cluster must be encrypted to ensure a
1009 safe and secure integration of the QDevice in {pve}.
1010
1011 First, install the 'corosync-qnetd' package on your external server
1012
1013 ----
1014 external# apt install corosync-qnetd
1015 ----
1016
1017 and the 'corosync-qdevice' package on all cluster nodes
1018
1019 ----
1020 pve# apt install corosync-qdevice
1021 ----
1022
1023 After doing this, ensure that all the nodes in the cluster are online.
1024
1025 You can now set up your QDevice by running the following command on one
1026 of the {pve} nodes:
1027
1028 ----
1029 pve# pvecm qdevice setup <QDEVICE-IP>
1030 ----
1031
1032 The SSH key from the cluster will be automatically copied to the QDevice.
1033
1034 NOTE: Make sure that the SSH configuration on your external server allows root
1035 login via password, if you are asked for a password during this step.
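
If password-based root login is disabled on the external server, a temporary way
to allow it could look like the following (a sketch, assuming a standard Debian
OpenSSH setup; revert the change once the QDevice setup has finished):

----
external# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
external# systemctl reload ssh
----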
1036
1037 After you enter the password and all the steps have successfully completed, you
1038 will see "Done". You can verify that the QDevice has been set up with:
1039
1040 ----
1041 pve# pvecm status
1042
1043 ...
1044
1045 Votequorum information
1046 ~~~~~~~~~~~~~~~~~~~~~~
1047 Expected votes: 3
1048 Highest expected: 3
1049 Total votes: 3
1050 Quorum: 2
1051 Flags: Quorate Qdevice
1052
1053 Membership information
1054 ~~~~~~~~~~~~~~~~~~~~~~
1055 Nodeid Votes Qdevice Name
1056 0x00000001 1 A,V,NMW 192.168.22.180 (local)
1057 0x00000002 1 A,V,NMW 192.168.22.181
1058 0x00000000 1 Qdevice
1059
1060 ----
1061
1062
1063 Frequently Asked Questions
1064 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1065
1066 Tie Breaking
1067 ^^^^^^^^^^^^
1068
1069 In case of a tie, where two same-sized cluster partitions cannot see each other
1070 but can see the QDevice, the QDevice chooses one of those partitions randomly
1071 and provides a vote to it.
1072
1073 Possible Negative Implications
1074 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1075
1076 For clusters with an even node count, there are no negative implications when
1077 using a QDevice. If it fails to work, it is the same as not having a QDevice
1078 at all.
1079
1080 Adding/Deleting Nodes After QDevice Setup
1081 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1082
1083 If you want to add a new node or remove an existing one from a cluster with a
1084 QDevice setup, you need to remove the QDevice first. After that, you can add or
1085 remove nodes normally. Once you have a cluster with an even node count again,
1086 you can set up the QDevice again as described previously.
1087
1088 Removing the QDevice
1089 ^^^^^^^^^^^^^^^^^^^^
1090
1091 If you used the official `pvecm` tool to add the QDevice, you can remove it
1092 by running:
1093
1094 ----
1095 pve# pvecm qdevice remove
1096 ----
1097
1098 //Still TODO
1099 //^^^^^^^^^^
1100 //There is still stuff to add here
1101
1102
1103 Corosync Configuration
1104 ----------------------
1105
1106 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
1107 controls the cluster membership and its network.
1108 For further information about it, check the corosync.conf man page:
1109 [source,bash]
1110 ----
1111 man corosync.conf
1112 ----
1113
1114 For node membership, you should always use the `pvecm` tool provided by {pve}.
1115 You may have to edit the configuration file manually for other changes.
1116 Here are a few best practice tips for doing this.
1117
1118 [[pvecm_edit_corosync_conf]]
1119 Edit corosync.conf
1120 ~~~~~~~~~~~~~~~~~~
1121
1122 Editing the corosync.conf file is not always very straightforward. There are
1123 two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
1124 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
1125 propagate the changes to the local one, but not vice versa.
1126
1127 The configuration will get updated automatically, as soon as the file changes.
1128 This means that changes which can be integrated in a running corosync will take
1129 effect immediately. Thus, you should always make a copy and edit that instead,
1130 to avoid triggering unintended changes when saving the file while editing.
1131
1132 [source,bash]
1133 ----
1134 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1135 ----
1136
1137 Then, open the config file with your favorite editor, such as `nano` or
1138 `vim.tiny`, which come pre-installed on every {pve} node.
1139
1140 NOTE: Always increment the 'config_version' number after configuration changes;
1141 omitting this can lead to problems.
1142
1143 After making the necessary changes, create another copy of the current working
1144 configuration file. This serves as a backup if the new configuration fails to
1145 apply or causes other issues.
1146
1147 [source,bash]
1148 ----
1149 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1150 ----
1151
1152 Then replace the old configuration file with the new one:
1153 [source,bash]
1154 ----
1155 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1156 ----
1157
1158 You can check if the changes could be applied automatically, using the following
1159 commands:
1160 [source,bash]
1161 ----
1162 systemctl status corosync
1163 journalctl -b -u corosync
1164 ----
1165
1166 If the changes could not be applied automatically, you may have to restart the
1167 corosync service via:
1168 [source,bash]
1169 ----
1170 systemctl restart corosync
1171 ----
1172
1173 On errors, check the troubleshooting section below.
1174
1175 Troubleshooting
1176 ~~~~~~~~~~~~~~~
1177
1178 Issue: 'quorum.expected_votes must be configured'
1179 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1180
1181 When corosync starts to fail and you get the following message in the system log:
1182
1183 ----
1184 [...]
1185 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1186 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1187 'configuration error: nodelist or quorum.expected_votes must be configured!'
1188 [...]
1189 ----
1190
1191 It means that the hostname you set for a corosync 'ringX_addr' in the
1192 configuration could not be resolved.
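
You can verify name resolution on the affected node, for example with (the
hostname below is a placeholder):

[source,bash]
----
getent hosts nodename
----

If this returns nothing, fix the name resolution (or use an IP address for the
'ringX_addr') and start corosync again.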
1193
1194 Write Configuration When Not Quorate
1195 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1196
1197 If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
1198 understand what you are doing, use:
1199 [source,bash]
1200 ----
1201 pvecm expected 1
1202 ----
1203
1204 This sets the expected vote count to 1 and makes the cluster quorate. You can
1205 then fix your configuration, or revert it back to the last working backup.
1206
1207 This is not enough if corosync cannot start anymore. In that case, it is best to
1208 edit the local copy of the corosync configuration in
1209 '/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on
1210 all nodes, this configuration has the same content to avoid split-brain
1211 situations.
1212
1213
1214 [[pvecm_corosync_conf_glossary]]
1215 Corosync Configuration Glossary
1216 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1217
1218 ringX_addr::
1219 This names the different link addresses for the Kronosnet connections between
1220 nodes.
1221
1222
1223 Cluster Cold Start
1224 ------------------
1225
1226 It is obvious that a cluster is not quorate when all nodes are
1227 offline. This is a common case after a power failure.
1228
1229 NOTE: It is always a good idea to use an uninterruptible power supply
1230 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1231 you want HA.
1232
1233 On node startup, the `pve-guests` service is started and waits for
1234 quorum. Once quorate, it starts all guests which have the `onboot`
1235 flag set.
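
The flag can be set per guest, for example (the guest IDs below are
placeholders):

[source,bash]
----
qm set 100 --onboot 1
pct set 101 --onboot 1
----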
1236
1237 When you turn on nodes, or when power comes back after power failure,
1238 it is likely that some nodes will boot faster than others. Please keep in
1239 mind that guest startup is delayed until you reach quorum.
1240
1241
1242 Guest Migration
1243 ---------------
1244
1245 Migrating virtual guests to other nodes is a useful feature in a
1246 cluster. There are settings to control the behavior of such
1247 migrations. This can be done via the configuration file
1248 `datacenter.cfg` or for a specific migration via API or command line
1249 parameters.
1250
1251 It makes a difference if a guest is online or offline, or if it has
1252 local resources (like a local disk).
1253
1254 For details about virtual machine migration, see the
1255 xref:qm_migration[QEMU/KVM Migration Chapter].
1256
1257 For details about container migration, see the
1258 xref:pct_migration[Container Migration Chapter].
1259
1260 Migration Type
1261 ~~~~~~~~~~~~~~
1262
1263 The migration type defines if the migration data should be sent over an
1264 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
1265 Setting the migration type to insecure means that the RAM content of a
1266 virtual guest is also transferred unencrypted, which can lead to
1267 information disclosure of critical data from inside the guest (for
1268 example, passwords or encryption keys).
1269
1270 Therefore, we strongly recommend using the secure channel if you do
1271 not have full control over the network and can not guarantee that no
1272 one is eavesdropping on it.
1273
1274 NOTE: Storage migration does not follow this setting. Currently, it
1275 always sends the storage content over a secure channel.
1276
1277 Encryption requires a lot of computing power, so this setting is often
1278 changed to "insecure" to achieve better performance. The impact on
1279 modern systems is lower because they implement AES encryption in
1280 hardware. The performance impact is particularly evident in fast
1281 networks, where you can transfer 10 Gbps or more.
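
For example, on a fully trusted and isolated network, the cluster-wide default
could be switched to the unencrypted channel by setting the `migration` property
in `/etc/pve/datacenter.cfg` (a sketch; only do this if you completely control
the network):

----
migration: insecure
----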
1282
1283 Migration Network
1284 ~~~~~~~~~~~~~~~~~
1285
1286 By default, {pve} uses the network in which cluster communication
1287 takes place to send the migration traffic. This is not optimal, both because
1288 sensitive cluster traffic can be disrupted and because this network may not
1289 have the best bandwidth available on the node.
1290
1291 Setting the migration network parameter allows the use of a dedicated
1292 network for all migration traffic. In addition to the memory,
1293 this also affects the storage traffic for offline migrations.
1294
1295 The migration network is set as a network using CIDR notation. This
1296 has the advantage that you don't have to set individual IP addresses
1297 for each node. {pve} can determine the real address on the
1298 destination node from the network specified in the CIDR form. To
1299 enable this, the network must be specified so that each node has exactly one
1300 IP in the respective network.
1301
1302 Example
1303 ^^^^^^^
1304
1305 We assume that we have a three-node setup, with three separate
1306 networks. One for public communication with the Internet, one for
1307 cluster communication, and a very fast one, which we want to use as a
1308 dedicated network for migration.
1309
1310 A network configuration for such a setup might look as follows:
1311
1312 ----
1313 iface eno1 inet manual
1314
1315 # public network
1316 auto vmbr0
1317 iface vmbr0 inet static
1318 address 192.X.Y.57/24
1319 gateway 192.X.Y.1
1320 bridge-ports eno1
1321 bridge-stp off
1322 bridge-fd 0
1323
1324 # cluster network
1325 auto eno2
1326 iface eno2 inet static
1327 address 10.1.1.1/24
1328
1329 # fast network
1330 auto eno3
1331 iface eno3 inet static
1332 address 10.1.2.1/24
1333 ----
1334
1335 Here, we will use the network 10.1.2.0/24 as a migration network. For
1336 a single migration, you can do this using the `migration_network`
1337 parameter of the command line tool:
1338
1339 ----
1340 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1341 ----
1342
1343 To configure this as the default network for all migrations in the
1344 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1345 file:
1346
1347 ----
1348 # use dedicated migration network
1349 migration: secure,network=10.1.2.0/24
1350 ----
1351
1352 NOTE: The migration type must always be set when the migration network
1353 is set in `/etc/pve/datacenter.cfg`.
1354
1355
1356 ifdef::manvolnum[]
1357 include::pve-copyright.adoc[]
1358 endif::manvolnum[]