1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {PVE} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication. There's no explicit limit for the number of nodes in a cluster.
31 In practice, the actual possible node count may be limited by the host and
32 network performance. Currently (2021), there are reports of clusters (using
33 high-end enterprise hardware) with over 50 nodes in production.
34
35 `pvecm` can be used to create a new cluster, join nodes to a cluster,
36 leave the cluster, get status information and do various other cluster-related
37 tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
38 is used to transparently distribute the cluster configuration to all cluster
39 nodes.
40
41 Grouping nodes into a cluster has the following advantages:
42
43 * Centralized, web based management
44
45 * Multi-master clusters: each node can do all management tasks
46
47 * `pmxcfs`: database-driven file system for storing configuration files,
48 replicated in real-time on all nodes using `corosync`.
49
50 * Easy migration of virtual machines and containers between physical
51 hosts
52
53 * Fast deployment
54
55 * Cluster-wide services like firewall and HA
56
57
58 Requirements
59 ------------
60
61 * All nodes must be able to connect to each other via UDP ports 5404 and 5405
62 for corosync to work.
63
64 * Date and time have to be synchronized.
65
* An SSH tunnel on TCP port 22 between nodes is used.
67
68 * If you are interested in High Availability, you need to have at
69 least three nodes for reliable quorum. All nodes should have the
70 same version.
71
72 * We recommend a dedicated NIC for the cluster traffic, especially if
73 you use shared storage.
74
* The root password of a cluster node is required for adding nodes.
76
77 NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
78 nodes.
79
NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
not supported as a production configuration and should only be used temporarily,
while upgrading the whole cluster from one major version to another.
83
84 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
85 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
86 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
87 upgrade procedure to {pve} 6.0.
88
89
90 Preparing Nodes
91 ---------------
92
93 First, install {PVE} on all nodes. Make sure that each node is
94 installed with the final hostname and IP configuration. Changing the
95 hostname and IP is not possible after cluster creation.
96
While it's common to reference all node names and their IPs in `/etc/hosts` (or
make their names resolvable through other means), this is not necessary for a
cluster to work. It may be useful however, as you can then connect from one node
to another via SSH, using the easier-to-remember node name (see also
xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
recommend referencing nodes by their IP addresses in the cluster configuration.
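
For example, an `/etc/hosts` excerpt for a hypothetical three-node cluster (node
names and addresses are placeholders only) could look like this:

----
192.168.15.91 hp1.example.com hp1
192.168.15.92 hp2.example.com hp2
192.168.15.93 hp3.example.com hp3
----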
103
104
105 [[pvecm_create_cluster]]
106 Create a Cluster
107 ----------------
108
You can either create a cluster on the console (login via `ssh`), or through
the API using the {pve} web interface (__Datacenter -> Cluster__).
111
112 NOTE: Use a unique name for your cluster. This name cannot be changed later.
113 The cluster name follows the same rules as node names.
114
115 [[pvecm_cluster_create_via_gui]]
116 Create via Web GUI
117 ~~~~~~~~~~~~~~~~~~
118
119 [thumbnail="screenshot/gui-cluster-create.png"]
120
121 Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
122 name and select a network connection from the dropdown to serve as the main
123 cluster network (Link 0). It defaults to the IP resolved via the node's
124 hostname.
125
126 To add a second link as fallback, you can select the 'Advanced' checkbox and
127 choose an additional network interface (Link 1, see also
128 xref:pvecm_redundancy[Corosync Redundancy]).
129
130 NOTE: Ensure the network selected for the cluster communication is not used for
131 any high traffic loads like those of (network) storages or live-migration.
132 While the cluster network itself produces small amounts of data, it is very
133 sensitive to latency. Check out full
134 xref:pvecm_cluster_network_requirements[cluster network requirements].
135
136 [[pvecm_cluster_create_via_cli]]
137 Create via Command Line
138 ~~~~~~~~~~~~~~~~~~~~~~~
139
140 Login via `ssh` to the first {pve} node and run the following command:
141
142 ----
143 hp1# pvecm create CLUSTERNAME
144 ----
145
146 To check the state of the new cluster use:
147
148 ----
149 hp1# pvecm status
150 ----
151
152 Multiple Clusters In Same Network
153 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
154
155 It is possible to create multiple clusters in the same physical or logical
156 network. Each such cluster must have a unique name to avoid possible clashes in
157 the cluster communication stack. This also helps avoid human confusion by making
158 clusters clearly distinguishable.
159
While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
factors. Different clusters in the same network can compete with each other for
these resources, so it may still make sense to use separate physical network
infrastructure for bigger clusters.
165
166 [[pvecm_join_node_to_cluster]]
167 Adding Nodes to the Cluster
168 ---------------------------
169
CAUTION: A node that is about to be added to the cluster cannot hold any guests.
All existing configuration in `/etc/pve` is overwritten when joining a cluster,
since guest IDs could conflict. As a workaround, you can create a backup of the
guest (`vzdump`) and restore it under a different ID after the node has been
added to the cluster, as sketched below.
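
A minimal sketch of this workaround for a VM, assuming a hypothetical VMID `100`,
target VMID `200`, and dump directory (adjust these to your setup):

[source,bash]
----
# on the node that still holds the guest, before joining the cluster
vzdump 100 --dumpdir /mnt/backup

# after the node has joined the cluster, restore under an unused VMID
qmrestore /mnt/backup/vzdump-qemu-100-<timestamp>.vma 200
----

For containers, use `pct restore` instead of `qmrestore` in the same way.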
175
176 Join Node to Cluster via GUI
177 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
178
179 [thumbnail="screenshot/gui-cluster-join-information.png"]
180
181 Login to the web interface on an existing cluster node. Under __Datacenter ->
182 Cluster__, click the button *Join Information* at the top. Then, click on the
183 button *Copy Information*. Alternatively, copy the string from the 'Information'
184 field manually.
185
186 [thumbnail="screenshot/gui-cluster-join.png"]
187
188 Next, login to the web interface on the node you want to add.
189 Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
190 'Information' field with the 'Join Information' text you copied earlier.
191 Most settings required for joining the cluster will be filled out
192 automatically. For security reasons, the cluster password has to be entered
193 manually.
194
195 NOTE: To enter all required data manually, you can disable the 'Assisted Join'
196 checkbox.
197
After clicking the *Join* button, the cluster join process will start
immediately. After the node has joined the cluster, its current node certificate
will be replaced by one signed by the cluster certificate authority (CA). This
means that the current session will stop working after a few seconds. You might
then need to force-reload the web interface and log in again with the cluster
credentials.
204
205 Now your node should be visible under __Datacenter -> Cluster__.
206
207 Join Node to Cluster via Command Line
208 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
209
210 Login via `ssh` to the node you want to join into an existing cluster.
211
212 ----
213 hp2# pvecm add IP-ADDRESS-CLUSTER
214 ----
215
216 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
217 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
218
219
220 To check the state of the cluster use:
221
222 ----
223 # pvecm status
224 ----
225
226 .Cluster status after adding 4 nodes
227 ----
228 hp2# pvecm status
229 Quorum information
230 ~~~~~~~~~~~~~~~~~~
231 Date: Mon Apr 20 12:30:13 2015
232 Quorum provider: corosync_votequorum
233 Nodes: 4
234 Node ID: 0x00000001
235 Ring ID: 1/8
236 Quorate: Yes
237
238 Votequorum information
239 ~~~~~~~~~~~~~~~~~~~~~~
240 Expected votes: 4
241 Highest expected: 4
242 Total votes: 4
243 Quorum: 3
244 Flags: Quorate
245
246 Membership information
247 ~~~~~~~~~~~~~~~~~~~~~~
248 Nodeid Votes Name
249 0x00000001 1 192.168.15.91
250 0x00000002 1 192.168.15.92 (local)
251 0x00000003 1 192.168.15.93
252 0x00000004 1 192.168.15.94
253 ----
254
255 If you only want the list of all nodes use:
256
257 ----
258 # pvecm nodes
259 ----
260
261 .List nodes in a cluster
262 ----
263 hp2# pvecm nodes
264
265 Membership information
266 ~~~~~~~~~~~~~~~~~~~~~~
267 Nodeid Votes Name
268 1 1 hp1
269 2 1 hp2 (local)
270 3 1 hp3
271 4 1 hp4
272 ----
273
274 [[pvecm_adding_nodes_with_separated_cluster_network]]
275 Adding Nodes With Separated Cluster Network
276 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
277
When adding a node to a cluster with a separated cluster network, you need to
use the 'link0' parameter to set the node's address on that network:
280
281 [source,bash]
282 ----
283 pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
284 ----
285
286 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
287 kronosnet transport layer, also use the 'link1' parameter.
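
For example, assuming a separated cluster network on 10.10.10.0/25 and an
additional redundant link on 10.20.20.0/24 (all addresses below are
placeholders), the command could look like this:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER --link0 10.10.10.2 --link1 10.20.20.2
----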
288
289 Using the GUI, you can select the correct interface from the corresponding 'Link 0'
290 and 'Link 1' fields in the *Cluster Join* dialog.
291
292 Remove a Cluster Node
293 ---------------------
294
CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.
297
298 Move all virtual machines from the node. Make sure you have no local
299 data or backups you want to keep, or save them accordingly.
300 In the following example we will remove the node hp4 from the cluster.
301
302 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
303 command to identify the node ID to remove:
304
305 ----
306 hp1# pvecm nodes
307
308 Membership information
309 ~~~~~~~~~~~~~~~~~~~~~~
310 Nodeid Votes Name
311 1 1 hp1 (local)
312 2 1 hp2
313 3 1 hp3
314 4 1 hp4
315 ----
316
317
At this point, you must power off hp4 and make sure that it will not power on
again (in the same network) as it is.

IMPORTANT: As mentioned above, it is critical to power off the node *before*
removal, and make sure that it will *never* power on again (in the existing
cluster network) as it is. If you power on the node as it is, the cluster can
end up in a broken state, and it could be difficult to restore a clean cluster
state.
327
328 After powering off the node hp4, we can safely remove it from the cluster.
329
330 ----
331 hp1# pvecm delnode hp4
332 Killing node 4
333 ----
334
335 Use `pvecm nodes` or `pvecm status` to check the node list again. It should
336 look something like:
337
338 ----
339 hp1# pvecm status
340
341 Quorum information
342 ~~~~~~~~~~~~~~~~~~
343 Date: Mon Apr 20 12:44:28 2015
344 Quorum provider: corosync_votequorum
345 Nodes: 3
346 Node ID: 0x00000001
347 Ring ID: 1/8
348 Quorate: Yes
349
350 Votequorum information
351 ~~~~~~~~~~~~~~~~~~~~~~
352 Expected votes: 3
353 Highest expected: 3
354 Total votes: 3
355 Quorum: 2
356 Flags: Quorate
357
358 Membership information
359 ~~~~~~~~~~~~~~~~~~~~~~
360 Nodeid Votes Name
361 0x00000001 1 192.168.15.90 (local)
362 0x00000002 1 192.168.15.91
363 0x00000003 1 192.168.15.92
364 ----
365
366 If, for whatever reason, you want this server to join the same cluster again,
367 you have to
368
369 * reinstall {pve} on it from scratch
370
371 * then join it, as explained in the previous section.
372
373 NOTE: After removal of the node, its SSH fingerprint will still reside in the
374 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
375 a node with the same IP or hostname, run `pvecm updatecerts` once on the
376 re-added node to update its fingerprint cluster wide.
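
For example, on the re-added node:

[source,bash]
----
pvecm updatecerts
----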
377
378 [[pvecm_separate_node_without_reinstall]]
379 Separate A Node Without Reinstalling
380 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
381
CAUTION: This is *not* the recommended method, proceed with caution. Use the
above-mentioned method of reinstalling the node if you're unsure.
384
You can also separate a node from a cluster without reinstalling it from
scratch. However, after removing the node from the cluster, it will still have
access to any shared storage. This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across the cluster
boundary. Furthermore, it may also lead to VMID conflicts.
391
It is suggested that you create a new storage, to which only the node that you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
is not accessed by multiple clusters. After setting up this storage, move all
data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
398
399 WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
400 run into conflicts and problems.
401
402 First, stop the corosync and the pve-cluster services on the node:
403 [source,bash]
404 ----
405 systemctl stop pve-cluster
406 systemctl stop corosync
407 ----
408
409 Start the cluster filesystem again in local mode:
410 [source,bash]
411 ----
412 pmxcfs -l
413 ----
414
415 Delete the corosync configuration files:
416 [source,bash]
417 ----
418 rm /etc/pve/corosync.conf
419 rm -r /etc/corosync/*
420 ----
421
422 You can now start the filesystem again as normal service:
423 [source,bash]
424 ----
425 killall pmxcfs
426 systemctl start pve-cluster
427 ----
428
The node is now separated from the cluster. You can delete it from any remaining
node of the cluster with:
431 [source,bash]
432 ----
433 pvecm delnode oldnode
434 ----
435
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you can set the expected votes to 1 as a
workaround:
438 [source,bash]
439 ----
440 pvecm expected 1
441 ----
442
443 And then repeat the 'pvecm delnode' command.
444
Now switch back to the separated node and delete all the remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.
448
449 [source,bash]
450 ----
451 rm /var/lib/corosync/*
452 ----
453
As the configuration files from the other nodes are still in the cluster
file system, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.
458
CAUTION: The node's SSH keys will remain in the 'authorized_keys' file. This
means that the nodes can still connect to each other with public key
authentication. You should fix this by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
463
464
465 Quorum
466 ------
467
{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
470
471 [quote, from Wikipedia, Quorum (distributed computing)]
472 ____
473 A quorum is the minimum number of votes that a distributed transaction
474 has to obtain in order to be allowed to perform an operation in a
475 distributed system.
476 ____
477
In case of network partitioning, state changes require that a
majority of nodes be online. The cluster switches to read-only mode
if it loses quorum.
481
482 NOTE: {pve} assigns a single vote to each node by default.
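
To quickly check whether the cluster currently has quorum, you can, for example,
filter the `pvecm status` output (the exact output format may vary between
versions):

[source,bash]
----
pvecm status | grep -i quorate
----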
483
484
485 Cluster Network
486 ---------------
487
The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).
493
494 [[pvecm_cluster_network_requirements]]
495 Network Requirements
496 ~~~~~~~~~~~~~~~~~~~~
497 This needs a reliable network with latencies under 2 milliseconds (LAN
498 performance) to work properly. The network should not be used heavily by other
499 members, ideally corosync runs on its own network. Do not use a shared network
500 for corosync and storage (except as a potential low-priority fallback in a
501 xref:pvecm_redundancy[redundant] configuration).
502
503 Before setting up a cluster, it is good practice to check if the network is fit
504 for that purpose. To make sure the nodes can connect to each other on the
505 cluster network, you can test the connectivity between them with the `ping`
506 tool.
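
For example, assuming 192.168.15.92 is the cluster-network address of another
node (the address is a placeholder), you could run the following on each node:

[source,bash]
----
ping -c 3 192.168.15.92
----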
507
508 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
509 be generated - no manual action is required.
510
511 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
512 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
513 communication, which, for now, only supports regular UDP unicast.
514
515 CAUTION: You can still enable Multicast or legacy unicast by setting your
516 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
517 but keep in mind that this will disable all cryptography and redundancy support.
518 This is therefore not recommended.
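
For illustration only, a hypothetical `totem` excerpt with a legacy transport set
(values are placeholders; cryptography must be disabled for these transports,
and, as stated above, this setup is not recommended):

----
totem {
  cluster_name: testcluster
  config_version: 5
  version: 2
  secauth: off
  transport: udp
}
----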
519
520 Separate Cluster Network
521 ~~~~~~~~~~~~~~~~~~~~~~~~
522
When creating a cluster without any parameters, the corosync cluster network is
generally shared with the web UI and the VMs and their traffic. Depending on
your setup, even storage traffic may get sent over the same network. It is
recommended to change that, as corosync is a time-critical, real-time
application.
528
529 Setting Up A New Network
530 ^^^^^^^^^^^^^^^^^^^^^^^^
531
532 First, you have to set up a new network interface. It should be on a physically
533 separate network. Ensure that your network fulfills the
534 xref:pvecm_cluster_network_requirements[cluster network requirements].
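
A hypothetical `/etc/network/interfaces` excerpt for such a dedicated NIC could
look like the following (the interface name is a placeholder; the address matches
the 10.10.10.1/25 example used below):

----
auto eno2
iface eno2 inet static
        address 10.10.10.1
        netmask 255.255.255.128
----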
535
536 Separate On Cluster Creation
537 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
538
539 This is possible via the 'linkX' parameters of the 'pvecm create'
540 command used for creating a new cluster.
541
542 If you have set up an additional NIC with a static address on 10.10.10.1/25,
543 and want to send and receive all cluster communication over this interface,
544 you would execute:
545
546 [source,bash]
547 ----
548 pvecm create test --link0 10.10.10.1
549 ----
550
551 To check if everything is working properly execute:
552 [source,bash]
553 ----
554 systemctl status corosync
555 ----
556
557 Afterwards, proceed as described above to
558 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
559
560 [[pvecm_separate_cluster_net_after_creation]]
561 Separate After Cluster Creation
562 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
563
564 You can do this if you have already created a cluster and want to switch
565 its communication to another network, without rebuilding the whole cluster.
566 This change may lead to short durations of quorum loss in the cluster, as nodes
567 have to restart corosync and come up one after the other on the new network.
568
569 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
570 Then, open it and you should see a file similar to:
571
572 ----
573 logging {
574 debug: off
575 to_syslog: yes
576 }
577
578 nodelist {
579
580 node {
581 name: due
582 nodeid: 2
583 quorum_votes: 1
584 ring0_addr: due
585 }
586
587 node {
588 name: tre
589 nodeid: 3
590 quorum_votes: 1
591 ring0_addr: tre
592 }
593
594 node {
595 name: uno
596 nodeid: 1
597 quorum_votes: 1
598 ring0_addr: uno
599 }
600
601 }
602
603 quorum {
604 provider: corosync_votequorum
605 }
606
607 totem {
608 cluster_name: testcluster
609 config_version: 3
610 ip_version: ipv4-6
611 secauth: on
612 version: 2
613 interface {
614 linknumber: 0
615 }
616
617 }
618 ----
619
620 NOTE: `ringX_addr` actually specifies a corosync *link address*, the name "ring"
621 is a remnant of older corosync versions that is kept for backwards
622 compatibility.
623
624 The first thing you want to do is add the 'name' properties in the node entries
625 if you do not see them already. Those *must* match the node name.
626
Then replace all addresses from the 'ring0_addr' properties of all nodes with
the new addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes (see also
xref:pvecm_corosync_addresses[Link Address Types]).

In this example, we want to switch the cluster communication to the
10.10.10.1/25 network. So we replace all 'ring0_addr' values accordingly.
634
NOTE: The exact same procedure can be used to change other 'ringX_addr' values
as well, although we recommend not changing multiple addresses at once, to make
it easier to recover if something goes wrong.
638
639 After we increase the 'config_version' property, the new configuration file
640 should look like:
641
642 ----
643 logging {
644 debug: off
645 to_syslog: yes
646 }
647
648 nodelist {
649
650 node {
651 name: due
652 nodeid: 2
653 quorum_votes: 1
654 ring0_addr: 10.10.10.2
655 }
656
657 node {
658 name: tre
659 nodeid: 3
660 quorum_votes: 1
661 ring0_addr: 10.10.10.3
662 }
663
664 node {
665 name: uno
666 nodeid: 1
667 quorum_votes: 1
668 ring0_addr: 10.10.10.1
669 }
670
671 }
672
673 quorum {
674 provider: corosync_votequorum
675 }
676
677 totem {
678 cluster_name: testcluster
679 config_version: 4
680 ip_version: ipv4-6
681 secauth: on
682 version: 2
683 interface {
684 linknumber: 0
685 }
686
687 }
688 ----
689
Then, after a final check to see that all changed information is correct, we save
it and once again follow the xref:pvecm_edit_corosync_conf[edit corosync.conf file]
section to bring it into effect.
693
694 The changes will be applied live, so restarting corosync is not strictly
695 necessary. If you changed other settings as well, or notice corosync
696 complaining, you can optionally trigger a restart.
697
698 On a single node execute:
699
700 [source,bash]
701 ----
702 systemctl restart corosync
703 ----
704
705 Now check if everything is fine:
706
707 [source,bash]
708 ----
709 systemctl status corosync
710 ----
711
If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.
714
715 [[pvecm_corosync_addresses]]
716 Corosync addresses
717 ~~~~~~~~~~~~~~~~~~
718
719 A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
720 `corosync.conf`) can be specified in two ways:
721
722 * **IPv4/v6 addresses** will be used directly. They are recommended, since they
723 are static and usually not changed carelessly.
724
725 * **Hostnames** will be resolved using `getaddrinfo`, which means that per
726 default, IPv6 addresses will be used first, if available (see also
727 `man gai.conf`). Keep this in mind, especially when upgrading an existing
728 cluster to IPv6.
729
730 CAUTION: Hostnames should be used with care, since the address they
731 resolve to can be changed without touching corosync or the node it runs on -
732 which may lead to a situation where an address is changed without thinking
733 about implications for corosync.
734
735 A separate, static hostname specifically for corosync is recommended, if
736 hostnames are preferred. Also, make sure that every node in the cluster can
737 resolve all hostnames correctly.
738
739 Since {pve} 5.1, while supported, hostnames will be resolved at the time of
740 entry. Only the resolved IP is then saved to the configuration.
741
742 Nodes that joined the cluster on earlier versions likely still use their
743 unresolved hostname in `corosync.conf`. It might be a good idea to replace
744 them with IPs or a separate hostname, as mentioned above.
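
As an illustration, after replacing the unresolved hostname with a static IP, the
node entry for 'due' from the examples in this chapter would change from
'ring0_addr: due' to:

----
  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }
----

Remember to also increase 'config_version' and follow the
xref:pvecm_edit_corosync_conf[edit corosync.conf file] section when making such a
change.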
745
746
747 [[pvecm_redundancy]]
748 Corosync Redundancy
749 -------------------
750
751 Corosync supports redundant networking via its integrated kronosnet layer by
752 default (it is not supported on the legacy udp/udpu transports). It can be
753 enabled by specifying more than one link address, either via the '--linkX'
754 parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
755 adding a new node) or by specifying more than one 'ringX_addr' in
756 `corosync.conf`.
757
758 NOTE: To provide useful failover, every link should be on its own
759 physical network connection.
760
761 Links are used according to a priority setting. You can configure this priority
762 by setting 'knet_link_priority' in the corresponding interface section in
763 `corosync.conf`, or, preferably, using the 'priority' parameter when creating
764 your cluster with `pvecm`:
765
766 ----
767 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
768 ----
769
770 This would cause 'link1' to be used first, since it has the higher priority.
771
772 If no priorities are configured manually (or two links have the same priority),
773 links will be used in order of their number, with the lower number having higher
774 priority.
775
776 Even if all links are working, only the one with the highest priority will see
777 corosync traffic. Link priorities cannot be mixed, i.e. links with different
778 priorities will not be able to communicate with each other.
779
Since lower priority links will not see traffic unless all higher priorities
have failed, it becomes a useful strategy to also specify networks used for
other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
worst, a higher-latency or more congested connection might be better than no
connection at all.
785
786 Adding Redundant Links To An Existing Cluster
787 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
788
789 To add a new link to a running configuration, first check how to
790 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
791
792 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
793 sure that your 'X' is the same for every node you add it to, and that it is
794 unique for each node.
795
796 Lastly, add a new 'interface', as shown below, to your `totem`
797 section, replacing 'X' with your link number chosen above.
798
799 Assuming you added a link with number 1, the new configuration file could look
800 like this:
801
802 ----
803 logging {
804 debug: off
805 to_syslog: yes
806 }
807
808 nodelist {
809
810 node {
811 name: due
812 nodeid: 2
813 quorum_votes: 1
814 ring0_addr: 10.10.10.2
815 ring1_addr: 10.20.20.2
816 }
817
818 node {
819 name: tre
820 nodeid: 3
821 quorum_votes: 1
822 ring0_addr: 10.10.10.3
823 ring1_addr: 10.20.20.3
824 }
825
826 node {
827 name: uno
828 nodeid: 1
829 quorum_votes: 1
830 ring0_addr: 10.10.10.1
831 ring1_addr: 10.20.20.1
832 }
833
834 }
835
836 quorum {
837 provider: corosync_votequorum
838 }
839
840 totem {
841 cluster_name: testcluster
842 config_version: 4
843 ip_version: ipv4-6
844 secauth: on
845 version: 2
846 interface {
847 linknumber: 0
848 }
849 interface {
850 linknumber: 1
851 }
852 }
853 ----
854
855 The new link will be enabled as soon as you follow the last steps to
856 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
857 be necessary. You can check that corosync loaded the new link using:
858
859 ----
860 journalctl -b -u corosync
861 ----
862
863 It might be a good idea to test the new link by temporarily disconnecting the
864 old link on one node and making sure that its status remains online while
865 disconnected:
866
867 ----
868 pvecm status
869 ----
870
871 If you see a healthy cluster state, it means that your new link is being used.
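
In addition, you can inspect the state of the individual kronosnet links with
corosync's own tooling, for example:

[source,bash]
----
corosync-cfgtool -s
----

See `man corosync-cfgtool` for details on the reported link status.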
872
873
874 Role of SSH in {PVE} Clusters
875 -----------------------------
876
877 {PVE} utilizes SSH tunnels for various features.
878
879 * Proxying console/shell sessions (node and guests)
880 +
When using the shell for node B while being connected to node A, the connection
goes to a terminal proxy on node A, which is in turn connected to the login shell
on node B via a non-interactive SSH tunnel.
884
885 * VM and CT memory and local-storage migration in 'secure' mode.
886 +
887 During the migration one or more SSH tunnel(s) are established between the
888 source and target nodes, in order to exchange migration information and
889 transfer memory and disk contents.
890
891 * Storage replication
892
893 .Pitfalls due to automatic execution of `.bashrc` and siblings
894 [IMPORTANT]
895 ====
In case you have a custom `.bashrc`, or similar files that get executed on
login by the configured shell, `ssh` will automatically run it once the session
is established successfully. This can cause some unexpected behavior, as those
commands may be executed with root permissions on any of the operations
described above. This can cause possible problematic side effects!
901
902 In order to avoid such complications, it's recommended to add a check in
903 `/root/.bashrc` to make sure the session is interactive, and only then run
904 `.bashrc` commands.
905
906 You can add this snippet at the beginning of your `.bashrc` file:
907
908 ----
909 # Early exit if not running interactively to avoid side-effects!
910 case $- in
911 *i*) ;;
912 *) return;;
913 esac
914 ----
915 ====
916
917
918 Corosync External Vote Support
919 ------------------------------
920
921 This section describes a way to deploy an external voter in a {pve} cluster.
922 When configured, the cluster can sustain more node failures without
923 violating safety properties of the cluster communication.
924
For this to work, there are two services involved:

* A so-called QDevice daemon which runs on each {pve} node

* An external vote daemon which runs on an independent server

As a result, you can achieve higher availability, even in smaller setups (for
example 2+1 nodes).
933
934 QDevice Technical Overview
935 ~~~~~~~~~~~~~~~~~~~~~~~~~~
936
The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on the decision of an externally running third-party
arbitrator. Its primary use is to allow a cluster to sustain more node failures
than standard quorum rules allow. This can be done safely, as the external
device can see all nodes and thus choose only one set of nodes to give its vote.
This will only be done if said set of nodes can have quorum (again) after
receiving the third-party vote.
945
Currently, only 'QDevice Net' is supported as a third-party arbitrator. It is
a daemon which provides a vote to a cluster partition, if it can reach the
partition members over the network. It will only give votes to one partition
of a cluster at any time.
It's designed to support multiple clusters and is almost configuration and
state free. New clusters are handled dynamically and no configuration file
is needed on the host running a QDevice.
953
The external host's only requirement is that it needs network access to the
cluster and has a corosync-qnetd package available. We provide a package
for Debian-based hosts; other Linux distributions should also have a package
available through their respective package manager.

NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP. The daemon may even run outside of the cluster's LAN and can have longer
latencies than 2 ms.
962
963 Supported Setups
964 ~~~~~~~~~~~~~~~~
965
We support QDevices for clusters with an even number of nodes and recommend
it for 2-node clusters, if they should provide higher availability.
For clusters with an odd node count, we currently discourage the use of
QDevices. The reason for this is the difference in the votes which the QDevice
provides for each cluster type. Even-numbered clusters get a single additional
vote, which only increases availability, because if the QDevice
itself fails, you are in the same situation as without a QDevice at all.

With an odd-numbered cluster size, on the other hand, the QDevice provides
'(N-1)' votes -- where 'N' corresponds to the cluster node count. This
difference makes sense; if there were only one additional vote, the cluster
could get into a split-brain situation.
This algorithm allows for all nodes but one (and naturally the
QDevice itself) to fail.
There are two drawbacks to this:
981
* If the QNet daemon itself fails, no other node may fail or the cluster
immediately loses quorum. For example, in a cluster with 15 nodes, 7
could fail before the cluster becomes inquorate. But, if a QDevice is
configured here and it then fails itself, **no single node** of
the 15 may fail. The QDevice acts almost as a single point of failure in
this case.

* The fact that all but one node plus QDevice may fail sounds promising at
first, but this may result in a mass recovery of HA services, which could
overload the single remaining node. Furthermore, a Ceph server will stop
providing services if only '((N-1)/2)' nodes or less remain online.
993
If you understand the drawbacks and implications, you can decide yourself
whether you want to use this technology in an odd-numbered cluster setup.
996
997 QDevice-Net Setup
998 ~~~~~~~~~~~~~~~~~
999
We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
configured to do so.
The traffic between the daemon and the cluster must be encrypted to ensure
safe and secure integration of the QDevice in {pve}.
1005
1006 First, install the 'corosync-qnetd' package on your external server
1007
1008 ----
1009 external# apt install corosync-qnetd
1010 ----
1011
1012 and the 'corosync-qdevice' package on all cluster nodes
1013
1014 ----
1015 pve# apt install corosync-qdevice
1016 ----
1017
1018 After that, ensure that all your nodes on the cluster are online.
1019
1020 You can now easily set up your QDevice by running the following command on one
1021 of the {pve} nodes:
1022
1023 ----
1024 pve# pvecm qdevice setup <QDEVICE-IP>
1025 ----
1026
1027 The SSH key from the cluster will be automatically copied to the QDevice.
1028
1029 NOTE: Make sure that the SSH configuration on your external server allows root
1030 login via password, if you are asked for a password during this step.
1031
1032 After you enter the password and all the steps are successfully completed, you
1033 will see "Done". You can check the status now:
1034
1035 ----
1036 pve# pvecm status
1037
1038 ...
1039
1040 Votequorum information
1041 ~~~~~~~~~~~~~~~~~~~~~
1042 Expected votes: 3
1043 Highest expected: 3
1044 Total votes: 3
1045 Quorum: 2
1046 Flags: Quorate Qdevice
1047
1048 Membership information
1049 ~~~~~~~~~~~~~~~~~~~~~~
1050 Nodeid Votes Qdevice Name
1051 0x00000001 1 A,V,NMW 192.168.22.180 (local)
1052 0x00000002 1 A,V,NMW 192.168.22.181
1053 0x00000000 1 Qdevice
1054
1055 ----
1056
1057 which means the QDevice is set up.
1058
1059 Frequently Asked Questions
1060 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1061
1062 Tie Breaking
1063 ^^^^^^^^^^^^
1064
In case of a tie, where two same-sized cluster partitions cannot see each other
but can see the QDevice, the QDevice chooses one of those partitions randomly and
provides a vote to it.
1068
1069 Possible Negative Implications
1070 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1071
1072 For clusters with an even node count there are no negative implications when
1073 setting up a QDevice. If it fails to work, you are as good as without QDevice at
1074 all.
1075
1076 Adding/Deleting Nodes After QDevice Setup
1077 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1078
1079 If you want to add a new node or remove an existing one from a cluster with a
1080 QDevice setup, you need to remove the QDevice first. After that, you can add or
1081 remove nodes normally. Once you have a cluster with an even node count again,
1082 you can set up the QDevice again as described above.
1083
1084 Removing the QDevice
1085 ^^^^^^^^^^^^^^^^^^^^
1086
1087 If you used the official `pvecm` tool to add the QDevice, you can remove it
1088 trivially by running:
1089
1090 ----
1091 pve# pvecm qdevice remove
1092 ----
1093
1094 //Still TODO
1095 //^^^^^^^^^^
1096 //There is still stuff to add here
1097
1098
1099 Corosync Configuration
1100 ----------------------
1101
1102 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
1103 controls the cluster membership and its network.
1104 For further information about it, check the corosync.conf man page:
1105 [source,bash]
1106 ----
1107 man corosync.conf
1108 ----
1109
1110 For node membership you should always use the `pvecm` tool provided by {pve}.
1111 You may have to edit the configuration file manually for other changes.
1112 Here are a few best practice tips for doing this.
1113
1114 [[pvecm_edit_corosync_conf]]
1115 Edit corosync.conf
1116 ~~~~~~~~~~~~~~~~~~
1117
Editing the corosync.conf file is not always very straightforward. There are
two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically, as soon as the file changes.
This means that changes which can be integrated in a running corosync will take
effect immediately. Thus, you should always make a copy and edit that instead,
to avoid triggering unintended changes when saving the file while editing.
1127
1128 [source,bash]
1129 ----
1130 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1131 ----
1132
Then, open the config file with your favorite editor, such as `nano` or
`vim.tiny`, which come preinstalled on every {pve} node.
1135
NOTE: Always increment the 'config_version' number when making configuration
changes; omitting this can lead to problems.
1138
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes other problems.
1142
1143 [source,bash]
1144 ----
1145 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1146 ----
1147
1148 Then move the new configuration file over the old one:
1149 [source,bash]
1150 ----
1151 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1152 ----
1153
You can check whether the changes have been applied automatically, using the
following commands:
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

If the changes could not be applied automatically, you may have to restart the
corosync service via:
1163 [source,bash]
1164 ----
1165 systemctl restart corosync
1166 ----
1167
On errors, check the troubleshooting section below.
1169
1170 Troubleshooting
1171 ~~~~~~~~~~~~~~~
1172
1173 Issue: 'quorum.expected_votes must be configured'
1174 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1175
1176 When corosync starts to fail and you get the following message in the system log:
1177
1178 ----
1179 [...]
1180 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1181 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1182 'configuration error: nodelist or quorum.expected_votes must be configured!'
1183 [...]
1184 ----
1185
1186 It means that the hostname you set for corosync 'ringX_addr' in the
1187 configuration could not be resolved.
1188
1189 Write Configuration When Not Quorate
1190 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1191
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
understand what you are doing, use:
1194 [source,bash]
1195 ----
1196 pvecm expected 1
1197 ----
1198
1199 This sets the expected vote count to 1 and makes the cluster quorate. You can
1200 now fix your configuration, or revert it back to the last working backup.
1201
This is not enough if corosync cannot start anymore. In that case, it is best to
edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on
all nodes, this configuration has the same content, to avoid split-brain
situations. If you are not sure what went wrong, it's best to ask the Proxmox
Community to help you.
1207
1208
1209 [[pvecm_corosync_conf_glossary]]
1210 Corosync Configuration Glossary
1211 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1212
1213 ringX_addr::
1214 This names the different link addresses for the kronosnet connections between
1215 nodes.
1216
1217
1218 Cluster Cold Start
1219 ------------------
1220
1221 It is obvious that a cluster is not quorate when all nodes are
1222 offline. This is a common case after a power failure.
1223
1224 NOTE: It is always a good idea to use an uninterruptible power supply
1225 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1226 you want HA.
1227
1228 On node startup, the `pve-guests` service is started and waits for
1229 quorum. Once quorate, it starts all guests which have the `onboot`
1230 flag set.
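
For example, the `onboot` flag can be set for a VM or container as follows (the
VMID 100 and CTID 101 are placeholders):

[source,bash]
----
qm set 100 --onboot 1
pct set 101 --onboot 1
----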
1231
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes will boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
1235
1236
1237 Guest Migration
1238 ---------------
1239
1240 Migrating virtual guests to other nodes is a useful feature in a
1241 cluster. There are settings to control the behavior of such
1242 migrations. This can be done via the configuration file
1243 `datacenter.cfg` or for a specific migration via API or command line
1244 parameters.
1245
1246 It makes a difference if a Guest is online or offline, or if it has
1247 local resources (like a local disk).
1248
For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].
1254
1255 Migration Type
1256 ~~~~~~~~~~~~~~
1257
The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to `insecure` means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
1264
Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.
1268
1269 NOTE: Storage migration does not follow this setting. Currently, it
1270 always sends the storage content over a secure channel.
1271
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower, because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
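
As a sketch, the migration type can be set for a single migration via the command
line tool (the VMID and target node are placeholders):

[source,bash]
----
qm migrate 100 hp2 --online --migration_type insecure
----

To set a cluster-wide default instead, you can use the `migration` property in
`/etc/pve/datacenter.cfg`, for example `migration: secure` (see also the
migration network example below).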
1277
1278 Migration Network
1279 ~~~~~~~~~~~~~~~~~
1280
1281 By default, {pve} uses the network in which cluster communication
1282 takes place to send the migration traffic. This is not optimal because
1283 sensitive cluster traffic can be disrupted and this network may not
1284 have the best bandwidth available on the node.
1285
1286 Setting the migration network parameter allows the use of a dedicated
1287 network for the entire migration traffic. In addition to the memory,
1288 this also affects the storage traffic for offline migrations.
1289
The migration network is set as a network using CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has exactly
one IP in the respective network.
1296
1297 Example
1298 ^^^^^^^
1299
1300 We assume that we have a three-node setup with three separate
1301 networks. One for public communication with the Internet, one for
1302 cluster communication and a very fast one, which we want to use as a
1303 dedicated network for migration.
1304
1305 A network configuration for such a setup might look as follows:
1306
1307 ----
1308 iface eno1 inet manual
1309
1310 # public network
1311 auto vmbr0
1312 iface vmbr0 inet static
1313 address 192.X.Y.57
netmask 255.255.255.0
1315 gateway 192.X.Y.1
1316 bridge-ports eno1
1317 bridge-stp off
1318 bridge-fd 0
1319
1320 # cluster network
1321 auto eno2
1322 iface eno2 inet static
1323 address 10.1.1.1
1324 netmask 255.255.255.0
1325
1326 # fast network
1327 auto eno3
1328 iface eno3 inet static
1329 address 10.1.2.1
1330 netmask 255.255.255.0
1331 ----
1332
1333 Here, we will use the network 10.1.2.0/24 as a migration network. For
1334 a single migration, you can do this using the `migration_network`
1335 parameter of the command line tool:
1336
1337 ----
1338 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1339 ----
1340
1341 To configure this as the default network for all migrations in the
1342 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1343 file:
1344
1345 ----
1346 # use dedicated migration network
1347 migration: secure,network=10.1.2.0/24
1348 ----
1349
1350 NOTE: The migration type must always be set when the migration network
1351 gets set in `/etc/pve/datacenter.cfg`.
1352
1353
1354 ifdef::manvolnum[]
1355 include::pve-copyright.adoc[]
1356 endif::manvolnum[]