1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {PVE} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication, and such clusters can consist of up to 32 physical nodes
31 (probably more, depending on network latency).
32
33 `pvecm` can be used to create a new cluster, join nodes to a cluster,
34 leave the cluster, get status information and do various other cluster
35 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
36 is used to transparently distribute the cluster configuration to all cluster
37 nodes.
38
39 Grouping nodes into a cluster has the following advantages:
40
41 * Centralized, web based management
42
43 * Multi-master clusters: each node can do all management tasks
44
45 * `pmxcfs`: database-driven file system for storing configuration files,
46 replicated in real-time on all nodes using `corosync`.
47
48 * Easy migration of virtual machines and containers between physical
49 hosts
50
51 * Fast deployment
52
53 * Cluster-wide services like firewall and HA
54
55
56 Requirements
57 ------------
58
59 * All nodes must be able to connect to each other via UDP ports 5404 and 5405
60 for corosync to work.
61
62 * Date and time have to be synchronized.
63
64 * An SSH tunnel on TCP port 22 between nodes is used.
65
66 * If you are interested in High Availability, you need to have at
67 least three nodes for reliable quorum. All nodes should have the
68 same version.
69
70 * We recommend a dedicated NIC for the cluster traffic, especially if
71 you use shared storage.
72
73 * The root password of a cluster node is required for adding nodes.
74
75 NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
76 nodes.
77
78 NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
79 not supported as a production configuration and should only be used temporarily,
80 while upgrading the whole cluster from one major version to another.
81
82 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
83 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
84 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
85 upgrade procedure to {pve} 6.0.
86
87
88 Preparing Nodes
89 ---------------
90
91 First, install {PVE} on all nodes. Make sure that each node is
92 installed with the final hostname and IP configuration. Changing the
93 hostname and IP is not possible after cluster creation.
94
95 While it's common to reference all node names and their IPs in `/etc/hosts` (or
96 make their names resolvable through other means), this is not necessary for a
97 cluster to work. It may be useful, however, as you can then connect from one node
98 to the other via SSH, using the easier to remember node name (see also
99 xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
100 recommend referencing nodes by their IP addresses in the cluster configuration.
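
For example, such `/etc/hosts` entries could look like the following (the node
names and addresses are just placeholders, matching the examples used later in
this chapter):

----
192.168.15.91 hp1.example.org hp1
192.168.15.92 hp2.example.org hp2
192.168.15.93 hp3.example.org hp3
192.168.15.94 hp4.example.org hp4
----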
101
102
103 [[pvecm_create_cluster]]
104 Create a Cluster
105 ----------------
106
107 You can either create a cluster on the console (login via `ssh`), or through
108 the API using the {pve} web interface (__Datacenter -> Cluster__).
109
110 NOTE: Use a unique name for your cluster. This name cannot be changed later.
111 The cluster name follows the same rules as node names.
112
113 [[pvecm_cluster_create_via_gui]]
114 Create via Web GUI
115 ~~~~~~~~~~~~~~~~~~
116
117 [thumbnail="screenshot/gui-cluster-create.png"]
118
119 Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
120 name and select a network connection from the dropdown to serve as the main
121 cluster network (Link 0). It defaults to the IP resolved via the node's
122 hostname.
123
124 To add a second link as fallback, you can select the 'Advanced' checkbox and
125 choose an additional network interface (Link 1, see also
126 xref:pvecm_redundancy[Corosync Redundancy]).
127
128 NOTE: Ensure the network selected for the cluster communication is not used for
129 any high traffic loads like those of (network) storages or live-migration.
130 While the cluster network itself produces small amounts of data, it is very
131 sensitive to latency. Check out full
132 xref:pvecm_cluster_network_requirements[cluster network requirements].
133
134 [[pvecm_cluster_create_via_cli]]
135 Create via Command Line
136 ~~~~~~~~~~~~~~~~~~~~~~~
137
138 Login via `ssh` to the first {pve} node and run the following command:
139
140 ----
141 hp1# pvecm create CLUSTERNAME
142 ----
143
144 To check the state of the new cluster use:
145
146 ----
147 hp1# pvecm status
148 ----
149
150 Multiple Clusters In Same Network
151 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
152
153 It is possible to create multiple clusters in the same physical or logical
154 network. Each such cluster must have a unique name to avoid possible clashes in
155 the cluster communication stack. This also helps avoid human confusion by making
156 clusters clearly distinguishable.
157
158 While the bandwidth requirement of a corosync cluster is relatively low, the
159 latency of packets and the packets per second (PPS) rate are the limiting
160 factors. Different clusters in the same network can compete with each other for
161 these resources, so it may still make sense to use separate physical network
162 infrastructure for bigger clusters.
163
164 [[pvecm_join_node_to_cluster]]
165 Adding Nodes to the Cluster
166 ---------------------------
167
168 CAUTION: A node that is about to be added to the cluster cannot hold any guests.
169 All existing configuration in `/etc/pve` is overwritten when joining a cluster,
170 since guest IDs could conflict. As a workaround, create a backup of the
171 guest (`vzdump`) and restore it with a different ID after the node has been
172 added to the cluster (see the sketch below).
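
A rough sketch of this workaround, using placeholder IDs and storage names (the
actual archive name will contain a timestamp):

[source,bash]
----
# back up the guest on the node before it joins the cluster
vzdump 100 --storage local

# after the join, restore the backup under a new, unused VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<TIMESTAMP>.vma.zst 120
----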
173
174 Join Node to Cluster via GUI
175 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
176
177 [thumbnail="screenshot/gui-cluster-join-information.png"]
178
179 Login to the web interface on an existing cluster node. Under __Datacenter ->
180 Cluster__, click the button *Join Information* at the top. Then, click on the
181 button *Copy Information*. Alternatively, copy the string from the 'Information'
182 field manually.
183
184 [thumbnail="screenshot/gui-cluster-join.png"]
185
186 Next, login to the web interface on the node you want to add.
187 Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
188 'Information' field with the 'Join Information' text you copied earlier.
189 Most settings required for joining the cluster will be filled out
190 automatically. For security reasons, the cluster password has to be entered
191 manually.
192
193 NOTE: To enter all required data manually, you can disable the 'Assisted Join'
194 checkbox.
195
196 After clicking the *Join* button, the cluster join process will start
197 immediately. After the node joined the cluster its current node certificate
198 will be replaced by one signed from the cluster certificate authority (CA),
199 that means the current session will stop to work after a few seconds. You might
200 then need to force-reload the webinterface and re-login with the cluster
201 credentials.
202
203 Now your node should be visible under __Datacenter -> Cluster__.
204
205 Join Node to Cluster via Command Line
206 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
207
208 Login via `ssh` to the node you want to join into an existing cluster.
209
210 ----
211 hp2# pvecm add IP-ADDRESS-CLUSTER
212 ----
213
214 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
215 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
216
217
218 To check the state of the cluster use:
219
220 ----
221 # pvecm status
222 ----
223
224 .Cluster status after adding 4 nodes
225 ----
226 hp2# pvecm status
227 Quorum information
228 ~~~~~~~~~~~~~~~~~~
229 Date: Mon Apr 20 12:30:13 2015
230 Quorum provider: corosync_votequorum
231 Nodes: 4
232 Node ID: 0x00000001
233 Ring ID: 1/8
234 Quorate: Yes
235
236 Votequorum information
237 ~~~~~~~~~~~~~~~~~~~~~~
238 Expected votes: 4
239 Highest expected: 4
240 Total votes: 4
241 Quorum: 3
242 Flags: Quorate
243
244 Membership information
245 ~~~~~~~~~~~~~~~~~~~~~~
246 Nodeid Votes Name
247 0x00000001 1 192.168.15.91
248 0x00000002 1 192.168.15.92 (local)
249 0x00000003 1 192.168.15.93
250 0x00000004 1 192.168.15.94
251 ----
252
253 If you only want the list of all nodes use:
254
255 ----
256 # pvecm nodes
257 ----
258
259 .List nodes in a cluster
260 ----
261 hp2# pvecm nodes
262
263 Membership information
264 ~~~~~~~~~~~~~~~~~~~~~~
265 Nodeid Votes Name
266 1 1 hp1
267 2 1 hp2 (local)
268 3 1 hp3
269 4 1 hp4
270 ----
271
272 [[pvecm_adding_nodes_with_separated_cluster_network]]
273 Adding Nodes With Separated Cluster Network
274 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
275
276 When adding a node to a cluster with a separated cluster network you need to
277 use the 'link0' parameter to set the node's address on that network:
278
279 [source,bash]
280 ----
281 pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0
282 ----
283
284 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
285 kronosnet transport layer, also use the 'link1' parameter.
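
For example, joining with both a cluster link and a fallback link could look
like this (the addresses are placeholders for the new node's own addresses on
the respective networks):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER --link0 10.10.10.2 --link1 10.20.20.2
----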
286
287 Using the GUI, you can select the correct interface from the corresponding 'Link 0'
288 and 'Link 1' fields in the *Cluster Join* dialog.
289
290 Remove a Cluster Node
291 ---------------------
292
293 CAUTION: Read the procedure carefully before proceeding, as it may not
294 be what you want or need.
295
296 Move all virtual machines off the node. Make sure you have no local
297 data or backups you want to keep, or save them accordingly.
298 In the following example we will remove the node hp4 from the cluster.
299
300 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
301 command to identify the node ID to remove:
302
303 ----
304 hp1# pvecm nodes
305
306 Membership information
307 ~~~~~~~~~~~~~~~~~~~~~~
308 Nodeid Votes Name
309 1 1 hp1 (local)
310 2 1 hp2
311 3 1 hp3
312 4 1 hp4
313 ----
314
315
316 At this point you must power off hp4 and
317 make sure that it will not power on again (in the network) as it
318 is.
319
320 IMPORTANT: As said above, it is critical to power off the node
321 *before* removal, and make sure that it will *never* power on again
322 (in the existing cluster network) as it is.
323 If you power on the node as it is, your cluster will be disrupted and
324 it could be difficult to restore a clean cluster state.
325
326 After powering off the node hp4, we can safely remove it from the cluster.
327
328 ----
329 hp1# pvecm delnode hp4
330 Killing node 4
331 ----
332
333 Use `pvecm nodes` or `pvecm status` to check the node list again. It should
334 look something like:
335
336 ----
337 hp1# pvecm status
338
339 Quorum information
340 ~~~~~~~~~~~~~~~~~~
341 Date: Mon Apr 20 12:44:28 2015
342 Quorum provider: corosync_votequorum
343 Nodes: 3
344 Node ID: 0x00000001
345 Ring ID: 1/8
346 Quorate: Yes
347
348 Votequorum information
349 ~~~~~~~~~~~~~~~~~~~~~~
350 Expected votes: 3
351 Highest expected: 3
352 Total votes: 3
353 Quorum: 2
354 Flags: Quorate
355
356 Membership information
357 ~~~~~~~~~~~~~~~~~~~~~~
358 Nodeid Votes Name
359 0x00000001 1 192.168.15.90 (local)
360 0x00000002 1 192.168.15.91
361 0x00000003 1 192.168.15.92
362 ----
363
364 If, for whatever reason, you want this server to join the same cluster again,
365 you have to
366
367 * reinstall {pve} on it from scratch
368
369 * then join it, as explained in the previous section.
370
371 NOTE: After removal of the node, its SSH fingerprint will still reside in the
372 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
373 a node with the same IP or hostname, run `pvecm updatecerts` once on the
374 re-added node to update its fingerprint cluster-wide.
375
376 [[pvecm_separate_node_without_reinstall]]
377 Separate A Node Without Reinstalling
378 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
379
380 CAUTION: This is *not* the recommended method, proceed with caution. Use the
381 method mentioned above if you're unsure.
382
383 You can also separate a node from a cluster without reinstalling it from
384 scratch. But after removing the node from the cluster, it will still have
385 access to the shared storages! This must be resolved before you start removing
386 the node from the cluster. A {pve} cluster cannot share the exact same
387 storage with another cluster, as storage locking doesn't work across cluster
388 boundaries. Furthermore, it may also lead to VMID conflicts.
389
390 It's suggested that you create a new storage, to which only the node which you
391 want to separate has access. This can be a new export on your NFS or a new Ceph
392 pool, to name a few examples. It's just important that the exact same storage
393 is not accessed by multiple clusters. After setting up this storage, move
394 all data from the node and its VMs to it. Then you are ready to separate the
395 node from the cluster.
396
397 WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
398 run into conflicts and problems.
399
400 First, stop the corosync and the pve-cluster services on the node:
401 [source,bash]
402 ----
403 systemctl stop pve-cluster
404 systemctl stop corosync
405 ----
406
407 Start the cluster filesystem again in local mode:
408 [source,bash]
409 ----
410 pmxcfs -l
411 ----
412
413 Delete the corosync configuration files:
414 [source,bash]
415 ----
416 rm /etc/pve/corosync.conf
417 rm -r /etc/corosync/*
418 ----
419
420 You can now start the filesystem again as a normal service:
421 [source,bash]
422 ----
423 killall pmxcfs
424 systemctl start pve-cluster
425 ----
426
427 The node is now separated from the cluster. You can delete it from a remaining
428 node of the cluster with:
429 [source,bash]
430 ----
431 pvecm delnode oldnode
432 ----
433
434 If the command fails because the remaining node in the cluster lost quorum
435 when the now separated node exited, you may set the expected votes to 1 as a workaround:
436 [source,bash]
437 ----
438 pvecm expected 1
439 ----
440
441 And then repeat the 'pvecm delnode' command.
442
443 Now switch back to the separated node and delete all remaining files left
444 over from the old cluster. This ensures that the node can be added to another
445 cluster again without problems.
446
447 [source,bash]
448 ----
449 rm /var/lib/corosync/*
450 ----
451
452 As the configuration files from the other nodes are still in the cluster
453 filesystem, you may want to clean those up too. Simply remove the whole
454 directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
455 you are using the correct one before deleting it.
456
457 CAUTION: The node's SSH keys are still in the 'authorized_keys' file. This means
458 that the nodes can still connect to each other with public key authentication. This
459 should be fixed by removing the respective keys from the
460 '/etc/pve/priv/authorized_keys' file.
461
462
463 Quorum
464 ------
465
466 {pve} uses a quorum-based technique to provide a consistent state among
467 all cluster nodes.
468
469 [quote, from Wikipedia, Quorum (distributed computing)]
470 ____
471 A quorum is the minimum number of votes that a distributed transaction
472 has to obtain in order to be allowed to perform an operation in a
473 distributed system.
474 ____
475
476 In case of network partitioning, state changes require that a
477 majority of nodes are online. The cluster switches to read-only mode
478 if it loses quorum.
479
480 NOTE: {pve} assigns a single vote to each node by default.
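
For example, with one vote per node, a cluster of 4 nodes needs at least 3
votes (more than half) to stay quorate; this matches the `Quorum: 3` value in
the `pvecm status` example shown earlier.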
481
482
483 Cluster Network
484 ---------------
485
486 The cluster network is the core of a cluster. All messages sent over it have to
487 be delivered reliably to all nodes in their respective order. In {pve} this
488 part is done by corosync, an implementation of a high-performance, low-overhead,
489 high-availability development toolkit. It serves our decentralized
490 configuration file system (`pmxcfs`).
491
492 [[pvecm_cluster_network_requirements]]
493 Network Requirements
494 ~~~~~~~~~~~~~~~~~~~~
495 This needs a reliable network with latencies under 2 milliseconds (LAN
496 performance) to work properly. The network should not be used heavily by other
497 members; ideally, corosync runs on its own network. Do not use a shared network
498 for corosync and storage (except as a potential low-priority fallback in a
499 xref:pvecm_redundancy[redundant] configuration).
500
501 Before setting up a cluster, it is good practice to check if the network is fit
502 for that purpose. To make sure the nodes can connect to each other on the
503 cluster network, you can test the connectivity between them with the `ping`
504 tool.
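
For example, a quick check of reachability and round-trip latency from one node
to another (the address is a placeholder for a peer node's cluster address):

----
# ping -c 4 192.168.15.92
----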
505
506 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
507 be generated - no manual action is required.
508
509 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
510 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
511 communication, which, for now, only supports regular UDP unicast.
512
513 CAUTION: You can still enable Multicast or legacy unicast by setting your
514 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
515 but keep in mind that this will disable all cryptography and redundancy support.
516 This is therefore not recommended.
517
518 Separate Cluster Network
519 ~~~~~~~~~~~~~~~~~~~~~~~~
520
521 When creating a cluster without any parameters, the corosync cluster network is
522 generally shared with the web UI and the VMs, and their traffic. Depending on
523 your setup, even storage traffic may get sent over the same network. It's
524 recommended to change that, as corosync is a time-critical, real-time
525 application.
526
527 Setting Up A New Network
528 ^^^^^^^^^^^^^^^^^^^^^^^^
529
530 First, you have to set up a new network interface. It should be on a physically
531 separate network. Ensure that your network fulfills the
532 xref:pvecm_cluster_network_requirements[cluster network requirements].
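
A minimal sketch of such an interface in `/etc/network/interfaces`, assuming a
spare NIC named `eno2` and the 10.10.10.1/25 address used in the example below:

----
auto eno2
iface eno2 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----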
533
534 Separate On Cluster Creation
535 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
536
537 This is possible via the 'linkX' parameters of the 'pvecm create'
538 command used for creating a new cluster.
539
540 If you have set up an additional NIC with a static address on 10.10.10.1/25,
541 and want to send and receive all cluster communication over this interface,
542 you would execute:
543
544 [source,bash]
545 ----
546 pvecm create test --link0 10.10.10.1
547 ----
548
549 To check if everything is working properly execute:
550 [source,bash]
551 ----
552 systemctl status corosync
553 ----
554
555 Afterwards, proceed as described above to
556 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
557
558 [[pvecm_separate_cluster_net_after_creation]]
559 Separate After Cluster Creation
560 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
561
562 You can do this if you have already created a cluster and want to switch
563 its communication to another network, without rebuilding the whole cluster.
564 This change may lead to short durations of quorum loss in the cluster, as nodes
565 have to restart corosync and come up one after the other on the new network.
566
567 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
568 Then, open it and you should see a file similar to:
569
570 ----
571 logging {
572 debug: off
573 to_syslog: yes
574 }
575
576 nodelist {
577
578 node {
579 name: due
580 nodeid: 2
581 quorum_votes: 1
582 ring0_addr: due
583 }
584
585 node {
586 name: tre
587 nodeid: 3
588 quorum_votes: 1
589 ring0_addr: tre
590 }
591
592 node {
593 name: uno
594 nodeid: 1
595 quorum_votes: 1
596 ring0_addr: uno
597 }
598
599 }
600
601 quorum {
602 provider: corosync_votequorum
603 }
604
605 totem {
606 cluster_name: testcluster
607 config_version: 3
608 ip_version: ipv4-6
609 secauth: on
610 version: 2
611 interface {
612 linknumber: 0
613 }
614
615 }
616 ----
617
618 NOTE: `ringX_addr` actually specifies a corosync *link address*. The name "ring"
619 is a remnant of older corosync versions that is kept for backwards
620 compatibility.
621
622 The first thing you want to do is add the 'name' properties in the node entries
623 if you do not see them already. Those *must* match the node name.
624
625 Then replace all addresses from the 'ring0_addr' properties of all nodes with
626 the new addresses. You may use plain IP addresses or hostnames here. If you use
627 hostnames, ensure that they are resolvable from all nodes (see also
628 xref:pvecm_corosync_addresses[Link Address Types]).
629
630 In this example, we want to switch the cluster communication to the
631 10.10.10.1/25 network. So we replace all 'ring0_addr' properties accordingly.
632
633 NOTE: The exact same procedure can be used to change other 'ringX_addr' values
634 as well, although we recommend not changing multiple addresses at once, to make
635 it easier to recover if something goes wrong.
636
637 After we increase the 'config_version' property, the new configuration file
638 should look like:
639
640 ----
641 logging {
642 debug: off
643 to_syslog: yes
644 }
645
646 nodelist {
647
648 node {
649 name: due
650 nodeid: 2
651 quorum_votes: 1
652 ring0_addr: 10.10.10.2
653 }
654
655 node {
656 name: tre
657 nodeid: 3
658 quorum_votes: 1
659 ring0_addr: 10.10.10.3
660 }
661
662 node {
663 name: uno
664 nodeid: 1
665 quorum_votes: 1
666 ring0_addr: 10.10.10.1
667 }
668
669 }
670
671 quorum {
672 provider: corosync_votequorum
673 }
674
675 totem {
676 cluster_name: testcluster
677 config_version: 4
678 ip_version: ipv4-6
679 secauth: on
680 version: 2
681 interface {
682 linknumber: 0
683 }
684
685 }
686 ----
687
688 Then, after a final check that all the changed information is correct, we save it and
689 once again follow the xref:pvecm_edit_corosync_conf[edit corosync.conf file]
690 section to bring it into effect.
691
692 The changes will be applied live, so restarting corosync is not strictly
693 necessary. If you changed other settings as well, or notice corosync
694 complaining, you can optionally trigger a restart.
695
696 On a single node execute:
697
698 [source,bash]
699 ----
700 systemctl restart corosync
701 ----
702
703 Now check if everything is fine:
704
705 [source,bash]
706 ----
707 systemctl status corosync
708 ----
709
710 If corosync is running correctly again, restart it on all other nodes as well.
711 They will then join the cluster membership one by one on the new network.
712
713 [[pvecm_corosync_addresses]]
714 Corosync addresses
715 ~~~~~~~~~~~~~~~~~~
716
717 A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
718 `corosync.conf`) can be specified in two ways:
719
720 * **IPv4/v6 addresses** will be used directly. They are recommended, since they
721 are static and usually not changed carelessly.
722
723 * **Hostnames** will be resolved using `getaddrinfo`, which means that, by
724 default, IPv6 addresses will be used first, if available (see also
725 `man gai.conf`). Keep this in mind, especially when upgrading an existing
726 cluster to IPv6.
727
728 CAUTION: Hostnames should be used with care, since the address they
729 resolve to can be changed without touching corosync or the node it runs on -
730 which may lead to a situation where an address is changed without thinking
731 about implications for corosync.
732
733 A separate, static hostname specifically for corosync is recommended, if
734 hostnames are preferred. Also, make sure that every node in the cluster can
735 resolve all hostnames correctly.
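
For example, each node's `/etc/hosts` could carry dedicated entries like the
following (names and addresses are placeholders only), which would then be used
as the 'ringX_addr' values in `corosync.conf`:

----
10.10.10.1 corosync-uno
10.10.10.2 corosync-due
10.10.10.3 corosync-tre
----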
736
737 Since {pve} 5.1, while supported, hostnames will be resolved at the time of
738 entry. Only the resolved IP is then saved to the configuration.
739
740 Nodes that joined the cluster on earlier versions likely still use their
741 unresolved hostname in `corosync.conf`. It might be a good idea to replace
742 them with IPs or a separate hostname, as mentioned above.
743
744
745 [[pvecm_redundancy]]
746 Corosync Redundancy
747 -------------------
748
749 Corosync supports redundant networking via its integrated kronosnet layer by
750 default (it is not supported on the legacy udp/udpu transports). It can be
751 enabled by specifying more than one link address, either via the '--linkX'
752 parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
753 adding a new node) or by specifying more than one 'ringX_addr' in
754 `corosync.conf`.
755
756 NOTE: To provide useful failover, every link should be on its own
757 physical network connection.
758
759 Links are used according to a priority setting. You can configure this priority
760 by setting 'knet_link_priority' in the corresponding interface section in
761 `corosync.conf`, or, preferably, using the 'priority' parameter when creating
762 your cluster with `pvecm`:
763
764 ----
765 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
766 ----
767
768 This would cause 'link1' to be used first, since it has the higher priority.
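
Alternatively, the same priorities could be set directly in the interface
sections of an existing `corosync.conf` (a sketch only, to be merged into the
existing totem section):

----
totem {
  ...
  interface {
    linknumber: 0
    knet_link_priority: 15
  }
  interface {
    linknumber: 1
    knet_link_priority: 20
  }
}
----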
769
770 If no priorities are configured manually (or two links have the same priority),
771 links will be used in order of their number, with the lower number having higher
772 priority.
773
774 Even if all links are working, only the one with the highest priority will see
775 corosync traffic. Link priorities cannot be mixed, i.e. links with different
776 priorities will not be able to communicate with each other.
777
778 Since lower priority links will not see traffic unless all higher priorities
779 have failed, it becomes a useful strategy to specify even networks used for
780 other tasks (VMs, storage, etc...) as low-priority links. If worst comes to
781 worst, a higher-latency or more congested connection might be better than no
782 connection at all.
783
784 Adding Redundant Links To An Existing Cluster
785 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
786
787 To add a new link to a running configuration, first check how to
788 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
789
790 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
791 sure that your 'X' is the same for every node you add it to, and that it is
792 unique for each node.
793
794 Lastly, add a new 'interface', as shown below, to your `totem`
795 section, replacing 'X' with your link number chosen above.
796
797 Assuming you added a link with number 1, the new configuration file could look
798 like this:
799
800 ----
801 logging {
802 debug: off
803 to_syslog: yes
804 }
805
806 nodelist {
807
808 node {
809 name: due
810 nodeid: 2
811 quorum_votes: 1
812 ring0_addr: 10.10.10.2
813 ring1_addr: 10.20.20.2
814 }
815
816 node {
817 name: tre
818 nodeid: 3
819 quorum_votes: 1
820 ring0_addr: 10.10.10.3
821 ring1_addr: 10.20.20.3
822 }
823
824 node {
825 name: uno
826 nodeid: 1
827 quorum_votes: 1
828 ring0_addr: 10.10.10.1
829 ring1_addr: 10.20.20.1
830 }
831
832 }
833
834 quorum {
835 provider: corosync_votequorum
836 }
837
838 totem {
839 cluster_name: testcluster
840 config_version: 4
841 ip_version: ipv4-6
842 secauth: on
843 version: 2
844 interface {
845 linknumber: 0
846 }
847 interface {
848 linknumber: 1
849 }
850 }
851 ----
852
853 The new link will be enabled as soon as you follow the last steps to
854 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
855 be necessary. You can check that corosync loaded the new link using:
856
857 ----
858 journalctl -b -u corosync
859 ----
860
861 It might be a good idea to test the new link by temporarily disconnecting the
862 old link on one node and making sure that its status remains online while
863 disconnected:
864
865 ----
866 pvecm status
867 ----
868
869 If you see a healthy cluster state, it means that your new link is being used.
870
871
872 Role of SSH in {PVE} Clusters
873 -----------------------------
874
875 {PVE} utilizes SSH tunnels for various operations.
876
877 * Proxying terminal sessions of nodes and containers between nodes
878 +
879 When you connect to another node's shell through the web interface, for example, a
880 non-interactive SSH tunnel is started in order to forward the necessary ports
881 for the VNC connection.
882
883 * VM and CT memory and local-storage migration, if the cluster-wide migration
884 settings are not set to use 'insecure' mode. During a VM migration, an SSH
885 tunnel is established between the target and source nodes.
886
887 * Storage replication
888
889 .Pitfalls due to automatic execution of `.bashrc` and siblings
890 [IMPORTANT]
891 ====
892 In case you have a custom `.bashrc`, or similar files that get executed on
893 login by the configured shell, `ssh` will automatically run it once the session
894 is established successfully. This can cause some unexpected behavior, as those
895 commands may be executed with root permissions during any of the operations
896 described above. This can cause problematic side effects!
897
898 In order to avoid such complications, it's recommended to add a check in
899 `/root/.bashrc` to make sure the session is interactive, and only then run
900 `.bashrc` commands.
901
902 You can add this snippet at the beginning of your `.bashrc` file:
903
904 ----
905 # Early exit if not running interactively to avoid side-effects!
906 case $- in
907 *i*) ;;
908 *) return;;
909 esac
910 ----
911 ====
912
913
914 Corosync External Vote Support
915 ------------------------------
916
917 This section describes a way to deploy an external voter in a {pve} cluster.
918 When configured, the cluster can sustain more node failures without
919 violating safety properties of the cluster communication.
920
921 For this to work there are two services involved:
922
923 * a so-called QDevice daemon, which runs on each {pve} node
924
925 * an external vote daemon which runs on an independent server.
926
927 As a result you can achieve higher availability even in smaller setups (for
928 example 2+1 nodes).
929
930 QDevice Technical Overview
931 ~~~~~~~~~~~~~~~~~~~~~~~~~~
932
933 The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
934 node. It provides a configured number of votes to the cluster's quorum
935 subsystem, based on the decision of an externally running third-party arbitrator.
936 Its primary use is to allow a cluster to sustain more node failures than
937 standard quorum rules allow. This can be done safely as the external device
938 can see all nodes and thus choose only one set of nodes to give its vote.
939 This will only be done if said set of nodes can have quorum (again) when
940 receiving the third-party vote.
941
942 Currently only 'QDevice Net' is supported as a third-party arbitrator. It is
943 a daemon which provides a vote to a cluster partition if it can reach the
944 partition members over the network. It will only give votes to one partition
945 of a cluster at any time.
946 It's designed to support multiple clusters and is almost configuration and
947 state free. New clusters are handled dynamically and no configuration file
948 is needed on the host running a QDevice.
949
950 The only requirement for the external host is that it needs network access to the
951 cluster and has the corosync-qnetd package available. We provide this package
952 for Debian-based hosts; other Linux distributions should also have a package
953 available through their respective package manager.
954
955 NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
956 TCP/IP. The daemon may even run outside of the cluster's LAN and can have
957 latencies longer than 2 ms.
958
959 Supported Setups
960 ~~~~~~~~~~~~~~~~
961
962 We support QDevices for clusters with an even number of nodes and recommend
963 them for 2-node clusters, if they should provide higher availability.
964 For clusters with an odd node count, we currently discourage the use of
965 QDevices. The reason for this is the difference in the votes which the QDevice
966 provides for each cluster type. Even-numbered clusters get a single additional
967 vote, which only increases availability, because if the QDevice
968 itself fails, you are in the same situation as with no QDevice at all.
969
970 Now, with an odd-numbered cluster size, the QDevice provides '(N-1)' votes --
971 where 'N' corresponds to the cluster node count. This difference makes sense:
972 if there were only one additional vote, the cluster could get into a split-brain
973 situation.
974 This algorithm allows for all nodes but one (and naturally the
975 QDevice itself) to fail.
976 There are two drawbacks with this:
977
978 * If the QNet daemon itself fails, no other node may fail or the cluster
979 immediately loses quorum. For example, in a cluster with 15 nodes, 7
980 could fail before the cluster becomes inquorate. But, if a QDevice is
981 configured here and that QDevice itself fails, **no single node** of
982 the 15 may fail. The QDevice acts almost as a single point of failure in
983 this case.
984
985 * The fact that all but one node plus the QDevice may fail sounds promising at
986 first, but this may result in a mass recovery of HA services, which could
987 overload the single remaining node. Furthermore, a Ceph server will stop
988 providing services if only '((N-1)/2)' nodes or less remain online.
989
990 If you understand the drawbacks and implications, you can decide for yourself
991 whether you should use this technology in an odd-numbered cluster setup.
992
993 QDevice-Net Setup
994 ~~~~~~~~~~~~~~~~~
995
996 We recommend running any daemon which provides votes to corosync-qdevice as an
997 unprivileged user. {pve} and Debian provide a package which is already
998 configured to do so.
999 The traffic between the daemon and the cluster must be encrypted to ensure a
1000 safe and secure QDevice integration in {pve}.
1001
1002 First, install the 'corosync-qnetd' package on your external server
1003
1004 ----
1005 external# apt install corosync-qnetd
1006 ----
1007
1008 and the 'corosync-qdevice' package on all cluster nodes
1009
1010 ----
1011 pve# apt install corosync-qdevice
1012 ----
1013
1014 After that, ensure that all your nodes on the cluster are online.
1015
1016 You can now easily set up your QDevice by running the following command on one
1017 of the {pve} nodes:
1018
1019 ----
1020 pve# pvecm qdevice setup <QDEVICE-IP>
1021 ----
1022
1023 The SSH key from the cluster will be automatically copied to the QDevice.
1024
1025 NOTE: Make sure that the SSH configuration on your external server allows root
1026 login via password, if you are asked for a password during this step.
1027
1028 After you enter the password and all the steps are successfully completed, you
1029 will see "Done". You can check the status now:
1030
1031 ----
1032 pve# pvecm status
1033
1034 ...
1035
1036 Votequorum information
1037 ~~~~~~~~~~~~~~~~~~~~~~
1038 Expected votes: 3
1039 Highest expected: 3
1040 Total votes: 3
1041 Quorum: 2
1042 Flags: Quorate Qdevice
1043
1044 Membership information
1045 ~~~~~~~~~~~~~~~~~~~~~~
1046 Nodeid Votes Qdevice Name
1047 0x00000001 1 A,V,NMW 192.168.22.180 (local)
1048 0x00000002 1 A,V,NMW 192.168.22.181
1049 0x00000000 1 Qdevice
1050
1051 ----
1052
1053 which means the QDevice is set up.
1054
1055 Frequently Asked Questions
1056 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1057
1058 Tie Breaking
1059 ^^^^^^^^^^^^
1060
1061 In case of a tie, where two same-sized cluster partitions cannot see each other
1062 but can both see the QDevice, the QDevice randomly chooses one of those
1063 partitions and provides a vote to it.
1064
1065 Possible Negative Implications
1066 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1067
1068 For clusters with an even node count there are no negative implications when
1069 setting up a QDevice. If it fails to work, you are as good as without QDevice at
1070 all.
1071
1072 Adding/Deleting Nodes After QDevice Setup
1073 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1074
1075 If you want to add a new node or remove an existing one from a cluster with a
1076 QDevice setup, you need to remove the QDevice first. After that, you can add or
1077 remove nodes normally. Once you have a cluster with an even node count again,
1078 you can set up the QDevice again as described above.
1079
1080 Removing the QDevice
1081 ^^^^^^^^^^^^^^^^^^^^
1082
1083 If you used the official `pvecm` tool to add the QDevice, you can remove it
1084 trivially by running:
1085
1086 ----
1087 pve# pvecm qdevice remove
1088 ----
1089
1090 //Still TODO
1091 //^^^^^^^^^^
1092 //There is still stuff to add here
1093
1094
1095 Corosync Configuration
1096 ----------------------
1097
1098 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
1099 controls the cluster membership and its network.
1100 For further information about it, check the corosync.conf man page:
1101 [source,bash]
1102 ----
1103 man corosync.conf
1104 ----
1105
1106 For node membership you should always use the `pvecm` tool provided by {pve}.
1107 You may have to edit the configuration file manually for other changes.
1108 Here are a few best practice tips for doing this.
1109
1110 [[pvecm_edit_corosync_conf]]
1111 Edit corosync.conf
1112 ~~~~~~~~~~~~~~~~~~
1113
1114 Editing the corosync.conf file is not always very straightforward. There are
1115 two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
1116 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
1117 propagate the changes to the local one, but not vice versa.
1118
1119 The configuration will get updated automatically as soon as the file changes.
1120 This means changes which can be integrated in a running corosync will take
1121 effect immediately. So you should always make a copy and edit that instead, to
1122 avoid triggering unintended changes from an intermediate save.
1123
1124 [source,bash]
1125 ----
1126 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1127 ----
1128
1129 Then, open the config file with your favorite editor; for example, `nano` and
1130 `vim.tiny` come preinstalled on every {pve} node.
1131
1132 NOTE: Always increment the 'config_version' number when making configuration
1133 changes; omitting this can lead to problems.
1134
1135 After making the necessary changes, create another copy of the current working
1136 configuration file. This serves as a backup if the new configuration fails to
1137 apply or causes other problems.
1138
1139 [source,bash]
1140 ----
1141 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1142 ----
1143
1144 Then move the new configuration file over the old one:
1145 [source,bash]
1146 ----
1147 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1148 ----
1149
1150 You can check whether the change was applied automatically, using the commands:
1151 [source,bash]
1152 ----
1153 systemctl status corosync
1154 journalctl -b -u corosync
1155 ----
1156
1157 If the change could not be applied automatically, you may have to restart the
1158 corosync service via:
1159 [source,bash]
1160 ----
1161 systemctl restart corosync
1162 ----
1163
1164 On errors check the troubleshooting section below.
1165
1166 Troubleshooting
1167 ~~~~~~~~~~~~~~~
1168
1169 Issue: 'quorum.expected_votes must be configured'
1170 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1171
1172 When corosync starts to fail and you get the following message in the system log:
1173
1174 ----
1175 [...]
1176 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1177 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1178 'configuration error: nodelist or quorum.expected_votes must be configured!'
1179 [...]
1180 ----
1181
1182 It means that the hostname you set for corosync 'ringX_addr' in the
1183 configuration could not be resolved.
1184
1185 Write Configuration When Not Quorate
1186 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1187
1188 If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
1189 know what you are doing, use:
1190 [source,bash]
1191 ----
1192 pvecm expected 1
1193 ----
1194
1195 This sets the expected vote count to 1 and makes the cluster quorate. You can
1196 now fix your configuration, or revert it back to the last working backup.
1197
1198 This is not enough if corosync cannot start anymore. Here it is best to edit the
1199 local copy of the corosync configuration in '/etc/corosync/corosync.conf' so
1200 that corosync can start again. Ensure that on all nodes this configuration has
1201 the same content to avoid split-brain situations. If you are not sure what went
1202 wrong, it's best to ask the Proxmox Community to help you.
1203
1204
1205 [[pvecm_corosync_conf_glossary]]
1206 Corosync Configuration Glossary
1207 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1208
1209 ringX_addr::
1210 This names the different link addresses for the kronosnet connections between
1211 nodes.
1212
1213
1214 Cluster Cold Start
1215 ------------------
1216
1217 It is obvious that a cluster is not quorate when all nodes are
1218 offline. This is a common case after a power failure.
1219
1220 NOTE: It is always a good idea to use an uninterruptible power supply
1221 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1222 you want HA.
1223
1224 On node startup, the `pve-guests` service is started and waits for
1225 quorum. Once quorate, it starts all guests which have the `onboot`
1226 flag set.
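
For example, to mark a guest for automatic start once the node is up and
quorate (using the placeholder VMID 106 from the migration example further
below) for a VM and, analogously, for a container:

----
# qm set 106 --onboot 1
# pct set 106 --onboot 1
----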
1227
1228 When you turn on nodes, or when power comes back after power failure,
1229 it is likely that some nodes boot faster than others. Please keep in
1230 mind that guest startup is delayed until you reach quorum.
1231
1232
1233 Guest Migration
1234 ---------------
1235
1236 Migrating virtual guests to other nodes is a useful feature in a
1237 cluster. There are settings to control the behavior of such
1238 migrations. This can be done via the configuration file
1239 `datacenter.cfg` or for a specific migration via API or command line
1240 parameters.
1241
1242 It makes a difference if a Guest is online or offline, or if it has
1243 local resources (like a local disk).
1244
1245 For details about Virtual Machine Migration, see the
1246 xref:qm_migration[QEMU/KVM Migration Chapter].
1247
1248 For details about Container Migration, see the
1249 xref:pct_migration[Container Migration Chapter].
1250
1251 Migration Type
1252 ~~~~~~~~~~~~~~
1253
1254 The migration type defines if the migration data should be sent over an
1255 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
1256 Setting the migration type to insecure means that the RAM content of a
1257 virtual guest is also transferred unencrypted, which can lead to
1258 information disclosure of critical data from inside the guest (for
1259 example passwords or encryption keys).
1260
1261 Therefore, we strongly recommend using the secure channel if you do
1262 not have full control over the network and cannot guarantee that no
1263 one is eavesdropping on it.
1264
1265 NOTE: Storage migration does not follow this setting. Currently, it
1266 always sends the storage content over a secure channel.
1267
1268 Encryption requires a lot of computing power, so this setting is often
1269 changed to 'insecure' to achieve better performance. The impact on
1270 modern systems is lower because they implement AES encryption in
1271 hardware. The performance impact is particularly evident in fast
1272 networks where you can transfer 10 Gbps or more.
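
For example, assuming the node and VMID placeholders used further below, a
single migration could explicitly request the unencrypted channel on a fully
trusted network like this:

----
# qm migrate 106 tre --online --migration_type insecure
----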
1273
1274 Migration Network
1275 ~~~~~~~~~~~~~~~~~
1276
1277 By default, {pve} uses the network in which cluster communication
1278 takes place to send the migration traffic. This is not optimal because
1279 sensitive cluster traffic can be disrupted and this network may not
1280 have the best bandwidth available on the node.
1281
1282 Setting the migration network parameter allows the use of a dedicated
1283 network for the entire migration traffic. In addition to the memory,
1284 this also affects the storage traffic for offline migrations.
1285
1286 The migration network is set as a network in the CIDR notation. This
1287 has the advantage that you do not have to set individual IP addresses
1288 for each node. {pve} can determine the real address on the
1289 destination node from the network specified in the CIDR form. To
1290 enable this, the network must be specified so that each node has exactly one
1291 IP in the respective network.
1292
1293 Example
1294 ^^^^^^^
1295
1296 We assume that we have a three-node setup with three separate
1297 networks. One for public communication with the Internet, one for
1298 cluster communication and a very fast one, which we want to use as a
1299 dedicated network for migration.
1300
1301 A network configuration for such a setup might look as follows:
1302
1303 ----
1304 iface eno1 inet manual
1305
1306 # public network
1307 auto vmbr0
1308 iface vmbr0 inet static
1309 address 192.X.Y.57
1310 netmask 255.255.240.0
1311 gateway 192.X.Y.1
1312 bridge-ports eno1
1313 bridge-stp off
1314 bridge-fd 0
1315
1316 # cluster network
1317 auto eno2
1318 iface eno2 inet static
1319 address 10.1.1.1
1320 netmask 255.255.255.0
1321
1322 # fast network
1323 auto eno3
1324 iface eno3 inet static
1325 address 10.1.2.1
1326 netmask 255.255.255.0
1327 ----
1328
1329 Here, we will use the network 10.1.2.0/24 as a migration network. For
1330 a single migration, you can do this using the `migration_network`
1331 parameter of the command line tool:
1332
1333 ----
1334 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1335 ----
1336
1337 To configure this as the default network for all migrations in the
1338 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1339 file:
1340
1341 ----
1342 # use dedicated migration network
1343 migration: secure,network=10.1.2.0/24
1344 ----
1345
1346 NOTE: The migration type must always be set when the migration network
1347 gets set in `/etc/pve/datacenter.cfg`.
1348
1349
1350 ifdef::manvolnum[]
1351 include::pve-copyright.adoc[]
1352 endif::manvolnum[]