1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {PVE} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).
32
33 `pvecm` can be used to create a new cluster, join nodes to a cluster,
34 leave the cluster, get status information and do various other cluster
35 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
36 is used to transparently distribute the cluster configuration to all cluster
37 nodes.
38
39 Grouping nodes into a cluster has the following advantages:
40
41 * Centralized, web based management
42
43 * Multi-master clusters: each node can do all management tasks
44
45 * `pmxcfs`: database-driven file system for storing configuration files,
46 replicated in real-time on all nodes using `corosync`.
47
48 * Easy migration of virtual machines and containers between physical
49 hosts
50
51 * Fast deployment
52
53 * Cluster-wide services like firewall and HA
54
55
56 Requirements
57 ------------
58
59 * All nodes must be able to connect to each other via UDP ports 5404 and 5405
60 for corosync to work.
61
62 * Date and time have to be synchronized.
63
64 * SSH tunnel on TCP port 22 between nodes is used.
65
66 * If you are interested in High Availability, you need to have at
67 least three nodes for reliable quorum. All nodes should have the
68 same version.
69
70 * We recommend a dedicated NIC for the cluster traffic, especially if
71 you use shared storage.
72
73 * Root password of a cluster node is required for adding nodes.
74
75 NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
76 nodes.
77
NOTE: While it is possible to mix {pve} 4.4 and {pve} 5.0 nodes, this is not supported as a
production configuration and should only be done temporarily, during an upgrade of the
whole cluster from one major version to another.
81
82 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
83 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
84 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
85 upgrade procedure to {pve} 6.0.
86
87
88 Preparing Nodes
89 ---------------
90
91 First, install {PVE} on all nodes. Make sure that each node is
92 installed with the final hostname and IP configuration. Changing the
93 hostname and IP is not possible after cluster creation.
94
95 Currently the cluster creation can either be done on the console (login via
96 `ssh`) or the API, which we have a GUI implementation for (__Datacenter ->
97 Cluster__).
98
While it's common to reference all node names and their IPs in `/etc/hosts` (or
make their names resolvable through other means), this is not necessary for a
cluster to work. It may be useful however, as you can then connect from one node
to the other via SSH, using the easier to remember node name (see also
xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
recommend referencing nodes by their IP addresses in the cluster configuration.
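
For illustration, `/etc/hosts` entries for a small cluster could look like the
following sketch; the names and addresses are placeholders, loosely matching the
examples used later in this chapter:

----
192.168.15.91 hp1.example.com hp1
192.168.15.92 hp2.example.com hp2
192.168.15.93 hp3.example.com hp3
192.168.15.94 hp4.example.com hp4
----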
105
106
107 [[pvecm_create_cluster]]
108 Create the Cluster
109 ------------------
110
111 Use a unique name for your cluster. This name cannot be changed later. The
112 cluster name follows the same rules as node names.
113
114 Create via Web GUI
115 ~~~~~~~~~~~~~~~~~~
116
117 Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
118 name and select a network connection from the dropdown to serve as the main
119 cluster network (Link 0). It defaults to the IP resolved via the node's
120 hostname.
121
122 To add a second link as fallback, you can select the 'Advanced' checkbox and
123 choose an additional network interface (Link 1, see also
124 xref:pvecm_redundancy[Corosync Redundancy]).
125
126 Create via Command Line
127 ~~~~~~~~~~~~~~~~~~~~~~~
128
129 Login via `ssh` to the first {pve} node and run the following command:
130
131 ----
132 hp1# pvecm create CLUSTERNAME
133 ----
134
135 To check the state of the new cluster use:
136
137 ----
138 hp1# pvecm status
139 ----
140
141 Multiple Clusters In Same Network
142 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
143
144 It is possible to create multiple clusters in the same physical or logical
145 network. Each such cluster must have a unique name to avoid possible clashes in
146 the cluster communication stack. This also helps avoid human confusion by making
147 clusters clearly distinguishable.
148
While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
151 factor. Different clusters in the same network can compete with each other for
152 these resources, so it may still make sense to use separate physical network
153 infrastructure for bigger clusters.
154
155 [[pvecm_join_node_to_cluster]]
156 Adding Nodes to the Cluster
157 ---------------------------
158
159 CAUTION: A node that is about to be added to the cluster cannot hold any guests.
160 All existing configuration in `/etc/pve` is overwritten when joining a cluster,
since guest IDs could conflict. As a workaround, create a backup of the
guest (`vzdump`) and restore it with a different ID after the node has been added
to the cluster.
164
165 Add Node via GUI
166 ~~~~~~~~~~~~~~~~
167
168 Login to the web interface on an existing cluster node. Under __Datacenter ->
169 Cluster__, click the button *Join Information* at the top. Then, click on the
170 button *Copy Information*. Alternatively, copy the string from the 'Information'
171 field manually.
172
173 Next, login to the web interface on the node you want to add.
174 Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
175 'Information' field with the text you copied earlier.
176
177 For security reasons, the cluster password has to be entered manually.
178
179 NOTE: To enter all required data manually, you can disable the 'Assisted Join'
180 checkbox.
181
182 After clicking on *Join* the node will immediately be added to the cluster. You
183 might need to reload the web page and re-login with the cluster credentials.
184
185 Confirm that your node is visible under __Datacenter -> Cluster__.
186
187 Add Node via Command Line
188 ~~~~~~~~~~~~~~~~~~~~~~~~~
189
190 Login via `ssh` to the node you want to add.
191
192 ----
193 hp2# pvecm add IP-ADDRESS-CLUSTER
194 ----
195
196 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
197 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
198
199
200 To check the state of the cluster use:
201
202 ----
203 # pvecm status
204 ----
205
206 .Cluster status after adding 4 nodes
207 ----
208 hp2# pvecm status
209 Quorum information
210 ~~~~~~~~~~~~~~~~~~
211 Date: Mon Apr 20 12:30:13 2015
212 Quorum provider: corosync_votequorum
213 Nodes: 4
214 Node ID: 0x00000001
215 Ring ID: 1/8
216 Quorate: Yes
217
218 Votequorum information
219 ~~~~~~~~~~~~~~~~~~~~~~
220 Expected votes: 4
221 Highest expected: 4
222 Total votes: 4
223 Quorum: 3
224 Flags: Quorate
225
226 Membership information
227 ~~~~~~~~~~~~~~~~~~~~~~
228 Nodeid Votes Name
229 0x00000001 1 192.168.15.91
230 0x00000002 1 192.168.15.92 (local)
231 0x00000003 1 192.168.15.93
232 0x00000004 1 192.168.15.94
233 ----
234
235 If you only want the list of all nodes use:
236
237 ----
238 # pvecm nodes
239 ----
240
241 .List nodes in a cluster
242 ----
243 hp2# pvecm nodes
244
245 Membership information
246 ~~~~~~~~~~~~~~~~~~~~~~
247 Nodeid Votes Name
248 1 1 hp1
249 2 1 hp2 (local)
250 3 1 hp3
251 4 1 hp4
252 ----
253
254 [[pvecm_adding_nodes_with_separated_cluster_network]]
255 Adding Nodes With Separated Cluster Network
256 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
257
When adding a node to a cluster with a separated cluster network, you need to
use the 'link0' parameter to set the node's address on that network:
260
261 [source,bash]
262 ----
263 pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
264 ----
265
266 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
267 kronosnet transport layer, also use the 'link1' parameter.
268
269 Using the GUI, you can select the correct interface from the corresponding 'Link 0'
270 and 'Link 1' fields in the *Cluster Join* dialog.
271
272 Remove a Cluster Node
273 ---------------------
274
CAUTION: Read the procedure carefully before proceeding, as it may not be
what you want or need.
277
278 Move all virtual machines from the node. Make sure you have no local
279 data or backups you want to keep, or save them accordingly.
280 In the following example we will remove the node hp4 from the cluster.
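
Guests can be moved off beforehand with the regular migration commands. As a
sketch, assuming a VM with ID 100 and a container with ID 101 still run on hp4,
they could be migrated to hp1 like this:

----
hp4# qm migrate 100 hp1 --online
hp4# pct migrate 101 hp1 --restart
----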
281
282 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
283 command to identify the node ID to remove:
284
285 ----
286 hp1# pvecm nodes
287
288 Membership information
289 ~~~~~~~~~~~~~~~~~~~~~~
290 Nodeid Votes Name
291 1 1 hp1 (local)
292 2 1 hp2
293 3 1 hp3
294 4 1 hp4
295 ----
296
297
298 At this point you must power off hp4 and
299 make sure that it will not power on again (in the network) as it
300 is.
301
302 IMPORTANT: As said above, it is critical to power off the node
303 *before* removal, and make sure that it will *never* power on again
304 (in the existing cluster network) as it is.
305 If you power on the node as it is, your cluster will be screwed up and
306 it could be difficult to restore a clean cluster state.
307
308 After powering off the node hp4, we can safely remove it from the cluster.
309
310 ----
311 hp1# pvecm delnode hp4
312 ----
313
314 If the operation succeeds no output is returned, just check the node
315 list again with `pvecm nodes` or `pvecm status`. You should see
316 something like:
317
318 ----
319 hp1# pvecm status
320
321 Quorum information
322 ~~~~~~~~~~~~~~~~~~
323 Date: Mon Apr 20 12:44:28 2015
324 Quorum provider: corosync_votequorum
325 Nodes: 3
326 Node ID: 0x00000001
327 Ring ID: 1/8
328 Quorate: Yes
329
330 Votequorum information
331 ~~~~~~~~~~~~~~~~~~~~~~
332 Expected votes: 3
333 Highest expected: 3
334 Total votes: 3
335 Quorum: 2
336 Flags: Quorate
337
338 Membership information
339 ~~~~~~~~~~~~~~~~~~~~~~
340 Nodeid Votes Name
341 0x00000001 1 192.168.15.90 (local)
342 0x00000002 1 192.168.15.91
343 0x00000003 1 192.168.15.92
344 ----
345
346 If, for whatever reason, you want this server to join the same cluster again,
347 you have to
348
349 * reinstall {pve} on it from scratch
350
351 * then join it, as explained in the previous section.
352
353 NOTE: After removal of the node, its SSH fingerprint will still reside in the
354 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
355 a node with the same IP or hostname, run `pvecm updatecerts` once on the
356 re-added node to update its fingerprint cluster wide.
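
For example, on the re-added node simply run:

----
# pvecm updatecerts
----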
357
358 [[pvecm_separate_node_without_reinstall]]
359 Separate A Node Without Reinstalling
360 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
361
362 CAUTION: This is *not* the recommended method, proceed with caution. Use the
363 above mentioned method if you're unsure.
364
You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to any shared storage! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across cluster
boundaries. Further, it may also lead to VMID conflicts.
371
It's suggested that you create a new storage, to which only the node which you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
378
379 WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
380 run into conflicts and problems.
381
382 First stop the corosync and the pve-cluster services on the node:
383 [source,bash]
384 ----
385 systemctl stop pve-cluster
386 systemctl stop corosync
387 ----
388
389 Start the cluster filesystem again in local mode:
390 [source,bash]
391 ----
392 pmxcfs -l
393 ----
394
395 Delete the corosync configuration files:
396 [source,bash]
397 ----
398 rm /etc/pve/corosync.conf
399 rm /etc/corosync/*
400 ----
401
402 You can now start the filesystem again as normal service:
403 [source,bash]
404 ----
405 killall pmxcfs
406 systemctl start pve-cluster
407 ----
408
The node is now separated from the cluster. You can now delete it from any remaining
node of the cluster with:
411 [source,bash]
412 ----
413 pvecm delnode oldnode
414 ----
415
If the command fails because the remaining nodes in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
418 [source,bash]
419 ----
420 pvecm expected 1
421 ----
422
423 And then repeat the 'pvecm delnode' command.
424
Now switch back to the separated node and delete all remaining files left
over from the old cluster on it. This ensures that the node can be added to another
cluster again without problems.
428
429 [source,bash]
430 ----
431 rm /var/lib/corosync/*
432 ----
433
As the configuration files from the other nodes are still in the cluster
file system, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you picked the correct one before deleting it.
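
For example, the following would do that (a sketch; replace `NODENAME` with the
name of the separated node and double-check it first):

[source,bash]
----
rm -rf /etc/pve/nodes/NODENAME
----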
438
CAUTION: The node's SSH keys are still in the 'authorized_keys' file; this means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
443
444
445 Quorum
446 ------
447
{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
450
451 [quote, from Wikipedia, Quorum (distributed computing)]
452 ____
453 A quorum is the minimum number of votes that a distributed transaction
454 has to obtain in order to be allowed to perform an operation in a
455 distributed system.
456 ____
457
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.
461
462 NOTE: {pve} assigns a single vote to each node by default.
463
464
465 Cluster Network
466 ---------------
467
The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).
473
474 [[pvecm_cluster_network_requirements]]
475 Network Requirements
476 ~~~~~~~~~~~~~~~~~~~~
477 This needs a reliable network with latencies under 2 milliseconds (LAN
478 performance) to work properly. The network should not be used heavily by other
479 members, ideally corosync runs on its own network. Do not use a shared network
480 for corosync and storage (except as a potential low-priority fallback in a
481 xref:pvecm_redundancy[redundant] configuration).
482
483 Before setting up a cluster, it is good practice to check if the network is fit
484 for that purpose. To make sure the nodes can connect to each other on the
485 cluster network, you can test the connectivity between them with the `ping`
486 tool.
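
For example, you could ping every other node's prospective cluster network
address from each node (the addresses below are just placeholders):

----
# ping 10.10.10.2
# ping 10.10.10.3
----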
487
488 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
489 be generated - no manual action is required.
490
491 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
492 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
493 communication, which, for now, only supports regular UDP unicast.
494
495 CAUTION: You can still enable Multicast or legacy unicast by setting your
496 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
497 but keep in mind that this will disable all cryptography and redundancy support.
498 This is therefore not recommended.
499
500 Separate Cluster Network
501 ~~~~~~~~~~~~~~~~~~~~~~~~
502
When creating a cluster without any parameters, the corosync cluster network is
generally shared with the Web UI and the VMs and their traffic. Depending on
your setup, even storage traffic may get sent over the same network. It's
recommended to change that, as corosync is a time-critical, real-time
application.
508
509 Setting Up A New Network
510 ^^^^^^^^^^^^^^^^^^^^^^^^
511
512 First you have to set up a new network interface. It should be on a physically
513 separate network. Ensure that your network fulfills the
514 xref:pvecm_cluster_network_requirements[cluster network requirements].
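
A static configuration for such a dedicated interface in
`/etc/network/interfaces` might look like the following sketch; the interface
name `eno2` and the 10.10.10.1/25 address are assumptions matching the example
below:

----
auto eno2
iface eno2 inet static
        address 10.10.10.1
        netmask 255.255.255.128
----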
515
516 Separate On Cluster Creation
517 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
518
519 This is possible via the 'linkX' parameters of the 'pvecm create'
520 command used for creating a new cluster.
521
522 If you have set up an additional NIC with a static address on 10.10.10.1/25,
523 and want to send and receive all cluster communication over this interface,
524 you would execute:
525
526 [source,bash]
527 ----
528 pvecm create test --link0 10.10.10.1
529 ----
530
531 To check if everything is working properly execute:
532 [source,bash]
533 ----
534 systemctl status corosync
535 ----
536
537 Afterwards, proceed as described above to
538 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
539
540 [[pvecm_separate_cluster_net_after_creation]]
541 Separate After Cluster Creation
542 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
543
544 You can do this if you have already created a cluster and want to switch
545 its communication to another network, without rebuilding the whole cluster.
546 This change may lead to short durations of quorum loss in the cluster, as nodes
547 have to restart corosync and come up one after the other on the new network.
548
549 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
550 Then, open it and you should see a file similar to:
551
552 ----
553 logging {
554 debug: off
555 to_syslog: yes
556 }
557
558 nodelist {
559
560 node {
561 name: due
562 nodeid: 2
563 quorum_votes: 1
564 ring0_addr: due
565 }
566
567 node {
568 name: tre
569 nodeid: 3
570 quorum_votes: 1
571 ring0_addr: tre
572 }
573
574 node {
575 name: uno
576 nodeid: 1
577 quorum_votes: 1
578 ring0_addr: uno
579 }
580
581 }
582
583 quorum {
584 provider: corosync_votequorum
585 }
586
587 totem {
588 cluster_name: testcluster
589 config_version: 3
590 ip_version: ipv4-6
591 secauth: on
592 version: 2
593 interface {
594 linknumber: 0
595 }
596
597 }
598 ----
599
600 NOTE: `ringX_addr` actually specifies a corosync *link address*, the name "ring"
601 is a remnant of older corosync versions that is kept for backwards
602 compatibility.
603
604 The first thing you want to do is add the 'name' properties in the node entries
605 if you do not see them already. Those *must* match the node name.
606
607 Then replace all addresses from the 'ring0_addr' properties of all nodes with
608 the new addresses. You may use plain IP addresses or hostnames here. If you use
609 hostnames ensure that they are resolvable from all nodes. (see also
610 xref:pvecm_corosync_addresses[Link Address Types])
611
In this example, we want to switch the cluster communication to the
10.10.10.0/25 network. So we replace all 'ring0_addr' properties accordingly.
614
615 NOTE: The exact same procedure can be used to change other 'ringX_addr' values
616 as well, although we recommend to not change multiple addresses at once, to make
617 it easier to recover if something goes wrong.
618
619 After we increase the 'config_version' property, the new configuration file
620 should look like:
621
622 ----
623 logging {
624 debug: off
625 to_syslog: yes
626 }
627
628 nodelist {
629
630 node {
631 name: due
632 nodeid: 2
633 quorum_votes: 1
634 ring0_addr: 10.10.10.2
635 }
636
637 node {
638 name: tre
639 nodeid: 3
640 quorum_votes: 1
641 ring0_addr: 10.10.10.3
642 }
643
644 node {
645 name: uno
646 nodeid: 1
647 quorum_votes: 1
648 ring0_addr: 10.10.10.1
649 }
650
651 }
652
653 quorum {
654 provider: corosync_votequorum
655 }
656
657 totem {
658 cluster_name: testcluster
659 config_version: 4
660 ip_version: ipv4-6
661 secauth: on
662 version: 2
663 interface {
664 linknumber: 0
665 }
666
667 }
668 ----
669
Then, after a final check that all the changed information is correct, we save it and
671 once again follow the xref:pvecm_edit_corosync_conf[edit corosync.conf file]
672 section to bring it into effect.
673
674 The changes will be applied live, so restarting corosync is not strictly
675 necessary. If you changed other settings as well, or notice corosync
676 complaining, you can optionally trigger a restart.
677
678 On a single node execute:
679
680 [source,bash]
681 ----
682 systemctl restart corosync
683 ----
684
685 Now check if everything is fine:
686
687 [source,bash]
688 ----
689 systemctl status corosync
690 ----
691
If corosync runs correctly again, restart it on all other nodes as well.
They will then join the cluster membership one by one on the new network.
694
695 [[pvecm_corosync_addresses]]
696 Corosync addresses
697 ~~~~~~~~~~~~~~~~~~
698
699 A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
700 `corosync.conf`) can be specified in two ways:
701
702 * **IPv4/v6 addresses** will be used directly. They are recommended, since they
703 are static and usually not changed carelessly.
704
705 * **Hostnames** will be resolved using `getaddrinfo`, which means that per
706 default, IPv6 addresses will be used first, if available (see also
707 `man gai.conf`). Keep this in mind, especially when upgrading an existing
708 cluster to IPv6.
709
710 CAUTION: Hostnames should be used with care, since the address they
711 resolve to can be changed without touching corosync or the node it runs on -
712 which may lead to a situation where an address is changed without thinking
713 about implications for corosync.
714
A separate, static hostname specifically for corosync is recommended, if
716 hostnames are preferred. Also, make sure that every node in the cluster can
717 resolve all hostnames correctly.
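
Such dedicated corosync hostnames could, for example, be maintained in
`/etc/hosts` on every node; the names and addresses below are purely
illustrative:

----
10.10.10.1 corosync1.example.com corosync1
10.10.10.2 corosync2.example.com corosync2
10.10.10.3 corosync3.example.com corosync3
----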
718
719 Since {pve} 5.1, while supported, hostnames will be resolved at the time of
720 entry. Only the resolved IP is then saved to the configuration.
721
722 Nodes that joined the cluster on earlier versions likely still use their
723 unresolved hostname in `corosync.conf`. It might be a good idea to replace
them with IPs or a separate hostname, as mentioned above.
725
726
727 [[pvecm_redundancy]]
728 Corosync Redundancy
729 -------------------
730
731 Corosync supports redundant networking via its integrated kronosnet layer by
732 default (it is not supported on the legacy udp/udpu transports). It can be
733 enabled by specifying more than one link address, either via the '--linkX'
734 parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
735 adding a new node) or by specifying more than one 'ringX_addr' in
736 `corosync.conf`.
737
738 NOTE: To provide useful failover, every link should be on its own
739 physical network connection.
740
741 Links are used according to a priority setting. You can configure this priority
742 by setting 'knet_link_priority' in the corresponding interface section in
`corosync.conf`, or, preferably, using the 'priority' parameter when creating
744 your cluster with `pvecm`:
745
746 ----
747 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=20 --link1 10.20.20.1,priority=15
748 ----
749
This would cause 'link0' to be used first, since it has the higher priority.
751
752 If no priorities are configured manually (or two links have the same priority),
753 links will be used in order of their number, with the lower number having higher
754 priority.
755
756 Even if all links are working, only the one with the highest priority will see
757 corosync traffic. Link priorities cannot be mixed, i.e. links with different
758 priorities will not be able to communicate with each other.
759
760 Since lower priority links will not see traffic unless all higher priorities
761 have failed, it becomes a useful strategy to specify even networks used for
762 other tasks (VMs, storage, etc...) as low-priority links. If worst comes to
763 worst, a higher-latency or more congested connection might be better than no
764 connection at all.
765
766 Adding Redundant Links To An Existing Cluster
767 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
768
769 To add a new link to a running configuration, first check how to
770 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
771
772 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
773 sure that your 'X' is the same for every node you add it to, and that it is
774 unique for each node.
775
776 Lastly, add a new 'interface', as shown below, to your `totem`
777 section, replacing 'X' with your link number chosen above.
778
779 Assuming you added a link with number 1, the new configuration file could look
780 like this:
781
782 ----
783 logging {
784 debug: off
785 to_syslog: yes
786 }
787
788 nodelist {
789
790 node {
791 name: due
792 nodeid: 2
793 quorum_votes: 1
794 ring0_addr: 10.10.10.2
795 ring1_addr: 10.20.20.2
796 }
797
798 node {
799 name: tre
800 nodeid: 3
801 quorum_votes: 1
802 ring0_addr: 10.10.10.3
803 ring1_addr: 10.20.20.3
804 }
805
806 node {
807 name: uno
808 nodeid: 1
809 quorum_votes: 1
810 ring0_addr: 10.10.10.1
811 ring1_addr: 10.20.20.1
812 }
813
814 }
815
816 quorum {
817 provider: corosync_votequorum
818 }
819
820 totem {
821 cluster_name: testcluster
822 config_version: 4
823 ip_version: ipv4-6
824 secauth: on
825 version: 2
826 interface {
827 linknumber: 0
828 }
829 interface {
830 linknumber: 1
831 }
832 }
833 ----
834
835 The new link will be enabled as soon as you follow the last steps to
836 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
837 be necessary. You can check that corosync loaded the new link using:
838
839 ----
840 journalctl -b -u corosync
841 ----
842
843 It might be a good idea to test the new link by temporarily disconnecting the
844 old link on one node and making sure that its status remains online while
845 disconnected:
846
847 ----
848 pvecm status
849 ----
850
851 If you see a healthy cluster state, it means that your new link is being used.
852
853
854 Corosync External Vote Support
855 ------------------------------
856
857 This section describes a way to deploy an external voter in a {pve} cluster.
858 When configured, the cluster can sustain more node failures without
859 violating safety properties of the cluster communication.
860
861 For this to work there are two services involved:
862
863 * a so called qdevice daemon which runs on each {pve} node
864
865 * an external vote daemon which runs on an independent server.
866
867 As a result you can achieve higher availability even in smaller setups (for
868 example 2+1 nodes).
869
870 QDevice Technical Overview
871 ~~~~~~~~~~~~~~~~~~~~~~~~~~
872
The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on an externally running third-party arbitrator's decision.
876 Its primary use is to allow a cluster to sustain more node failures than
877 standard quorum rules allow. This can be done safely as the external device
878 can see all nodes and thus choose only one set of nodes to give its vote.
879 This will only be done if said set of nodes can have quorum (again) when
880 receiving the third-party vote.
881
882 Currently only 'QDevice Net' is supported as a third-party arbitrator. It is
883 a daemon which provides a vote to a cluster partition if it can reach the
884 partition members over the network. It will give only votes to one partition
885 of a cluster at any time.
886 It's designed to support multiple clusters and is almost configuration and
887 state free. New clusters are handled dynamically and no configuration file
888 is needed on the host running a QDevice.
889
The only requirements for the external host are network access to the
cluster and an available corosync-qnetd package. We provide such a package
for Debian-based hosts; other Linux distributions should also have a package
available through their respective package manager.
894
NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP. The daemon may even run outside of the cluster's LAN and can have
latencies higher than 2 ms.
898
899 Supported Setups
900 ~~~~~~~~~~~~~~~~
901
We support QDevices for clusters with an even number of nodes and recommend
it for 2-node clusters, if they should provide higher availability.
For clusters with an odd node count, we currently discourage the use of
QDevices. The reason for this is the difference in the number of votes the QDevice
provides for each cluster type. Even-numbered clusters get a single additional
vote, which only increases availability, i.e. if the QDevice
itself fails, we are in the same situation as with no QDevice at all.
909
Now, with an odd-numbered cluster size, the QDevice provides '(N-1)' votes --
where 'N' corresponds to the cluster node count. This difference makes
sense: if we had only one additional vote, the cluster could get into a split-brain
situation.
This algorithm allows all nodes but one (and naturally the
QDevice itself) to fail.
There are two drawbacks to this:
917
918 * If the QNet daemon itself fails, no other node may fail or the cluster
919 immediately loses quorum. For example, in a cluster with 15 nodes 7
920 could fail before the cluster becomes inquorate. But, if a QDevice is
921 configured here and said QDevice fails itself **no single node** of
922 the 15 may fail. The QDevice acts almost as a single point of failure in
923 this case.
924
* The fact that all but one node plus the QDevice may fail sounds promising at
first, but this may result in a mass recovery of HA services that would
overload the single remaining node. Also, a Ceph server will stop providing
services if only '((N-1)/2)' nodes or less remain online.
929
930 If you understand the drawbacks and implications you can decide yourself if
931 you should use this technology in an odd numbered cluster setup.
932
933 QDevice-Net Setup
934 ~~~~~~~~~~~~~~~~~
935
We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
configured to do so.
939 The traffic between the daemon and the cluster must be encrypted to ensure a
940 safe and secure QDevice integration in {pve}.
941
942 First install the 'corosync-qnetd' package on your external server and
943 the 'corosync-qdevice' package on all cluster nodes.
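
For example, using `apt` (the prompts only indicate where each command is run):

----
external# apt install corosync-qnetd
pve# apt install corosync-qdevice
----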
944
945 After that, ensure that all your nodes on the cluster are online.
946
947 You can now easily set up your QDevice by running the following command on one
948 of the {pve} nodes:
949
950 ----
951 pve# pvecm qdevice setup <QDEVICE-IP>
952 ----
953
954 The SSH key from the cluster will be automatically copied to the QDevice. You
955 might need to enter an SSH password during this step.
956
957 After you enter the password and all the steps are successfully completed, you
958 will see "Done". You can check the status now:
959
960 ----
961 pve# pvecm status
962
963 ...
964
965 Votequorum information
966 ~~~~~~~~~~~~~~~~~~~~~
967 Expected votes: 3
968 Highest expected: 3
969 Total votes: 3
970 Quorum: 2
971 Flags: Quorate Qdevice
972
973 Membership information
974 ~~~~~~~~~~~~~~~~~~~~~~
975 Nodeid Votes Qdevice Name
976 0x00000001 1 A,V,NMW 192.168.22.180 (local)
977 0x00000002 1 A,V,NMW 192.168.22.181
978 0x00000000 1 Qdevice
979
980 ----
981
982 which means the QDevice is set up.
983
984 Frequently Asked Questions
985 ~~~~~~~~~~~~~~~~~~~~~~~~~~
986
987 Tie Breaking
988 ^^^^^^^^^^^^
989
In case of a tie, where two same-sized cluster partitions cannot see each other
but can both see the QDevice, the QDevice chooses one of those partitions randomly
and provides a vote to it.
993
994 Possible Negative Implications
995 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
996
997 For clusters with an even node count there are no negative implications when
998 setting up a QDevice. If it fails to work, you are as good as without QDevice at
999 all.
1000
1001 Adding/Deleting Nodes After QDevice Setup
1002 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1003
1004 If you want to add a new node or remove an existing one from a cluster with a
1005 QDevice setup, you need to remove the QDevice first. After that, you can add or
1006 remove nodes normally. Once you have a cluster with an even node count again,
1007 you can set up the QDevice again as described above.
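
In terms of commands, this means running the following before and after the
node changes, respectively (with `<QDEVICE-IP>` as in the setup section above):

----
pve# pvecm qdevice remove
pve# pvecm qdevice setup <QDEVICE-IP>
----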
1008
1009 Removing the QDevice
1010 ^^^^^^^^^^^^^^^^^^^^
1011
1012 If you used the official `pvecm` tool to add the QDevice, you can remove it
1013 trivially by running:
1014
1015 ----
1016 pve# pvecm qdevice remove
1017 ----
1018
1019 //Still TODO
1020 //^^^^^^^^^^
1021 //There is still stuff to add here
1022
1023
1024 Corosync Configuration
1025 ----------------------
1026
1027 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
1028 controls the cluster membership and its network.
1029 For further information about it, check the corosync.conf man page:
1030 [source,bash]
1031 ----
1032 man corosync.conf
1033 ----
1034
1035 For node membership you should always use the `pvecm` tool provided by {pve}.
1036 You may have to edit the configuration file manually for other changes.
1037 Here are a few best practice tips for doing this.
1038
1039 [[pvecm_edit_corosync_conf]]
1040 Edit corosync.conf
1041 ~~~~~~~~~~~~~~~~~~
1042
1043 Editing the corosync.conf file is not always very straightforward. There are
1044 two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
1045 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
1046 propagate the changes to the local one, but not vice versa.
1047
1048 The configuration will get updated automatically as soon as the file changes.
1049 This means changes which can be integrated in a running corosync will take
1050 effect immediately. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an in-between save.
1052
1053 [source,bash]
1054 ----
1055 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1056 ----
1057
Then open the config file with your favorite editor; for example, `nano` and
`vim.tiny` are preinstalled on every {pve} node.
1060
NOTE: Always increment the 'config_version' number when changing the configuration;
omitting this can lead to problems.
1063
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes other problems.
1067
1068 [source,bash]
1069 ----
1070 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1071 ----
1072
1073 Then move the new configuration file over the old one:
1074 [source,bash]
1075 ----
1076 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1077 ----
1078
You can check whether the change has been applied automatically, using the commands:
1080 [source,bash]
1081 ----
1082 systemctl status corosync
1083 journalctl -b -u corosync
1084 ----
1085
If the change could not be applied automatically, you may have to restart the
corosync service via:
1088 [source,bash]
1089 ----
1090 systemctl restart corosync
1091 ----
1092
1093 On errors check the troubleshooting section below.
1094
1095 Troubleshooting
1096 ~~~~~~~~~~~~~~~
1097
1098 Issue: 'quorum.expected_votes must be configured'
1099 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1100
1101 When corosync starts to fail and you get the following message in the system log:
1102
1103 ----
1104 [...]
1105 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1106 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1107 'configuration error: nodelist or quorum.expected_votes must be configured!'
1108 [...]
1109 ----
1110
1111 It means that the hostname you set for corosync 'ringX_addr' in the
1112 configuration could not be resolved.
1113
1114 Write Configuration When Not Quorate
1115 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1116
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
1119 [source,bash]
1120 ----
1121 pvecm expected 1
1122 ----
1123
1124 This sets the expected vote count to 1 and makes the cluster quorate. You can
1125 now fix your configuration, or revert it back to the last working backup.
1126
This is not enough if corosync cannot start anymore. In that case, it is best to edit the
local copy of the corosync configuration in '/etc/corosync/corosync.conf', so
that corosync can start again. Ensure that on all nodes, this configuration has
the same content to avoid split-brain situations. If you are not sure what went wrong,
it's best to ask the Proxmox Community to help you.
1132
1133
1134 [[pvecm_corosync_conf_glossary]]
1135 Corosync Configuration Glossary
1136 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1137
1138 ringX_addr::
1139 This names the different link addresses for the kronosnet connections between
1140 nodes.
1141
1142
1143 Cluster Cold Start
1144 ------------------
1145
1146 It is obvious that a cluster is not quorate when all nodes are
1147 offline. This is a common case after a power failure.
1148
1149 NOTE: It is always a good idea to use an uninterruptible power supply
1150 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1151 you want HA.
1152
1153 On node startup, the `pve-guests` service is started and waits for
1154 quorum. Once quorate, it starts all guests which have the `onboot`
1155 flag set.
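
The `onboot` flag can be set per guest, for example (the IDs are placeholders):

----
# qm set 100 --onboot 1
# pct set 101 --onboot 1
----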
1156
When you turn on nodes, or when power comes back after a power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
1160
1161
1162 Guest Migration
1163 ---------------
1164
1165 Migrating virtual guests to other nodes is a useful feature in a
1166 cluster. There are settings to control the behavior of such
1167 migrations. This can be done via the configuration file
1168 `datacenter.cfg` or for a specific migration via API or command line
1169 parameters.
1170
1171 It makes a difference if a Guest is online or offline, or if it has
1172 local resources (like a local disk).
1173
1174 For Details about Virtual Machine Migration see the
1175 xref:qm_migration[QEMU/KVM Migration Chapter].
1176
1177 For Details about Container Migration see the
1178 xref:pct_migration[Container Migration Chapter].
1179
1180 Migration Type
1181 ~~~~~~~~~~~~~~
1182
1183 The migration type defines if the migration data should be sent over an
1184 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
1189
1190 Therefore, we strongly recommend using the secure channel if you do
1191 not have full control over the network and can not guarantee that no
1192 one is eavesdropping on it.
1193
1194 NOTE: Storage migration does not follow this setting. Currently, it
1195 always sends the storage content over a secure channel.
1196
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
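
For a single migration, the type can also be overridden on the command line.
As a sketch, reusing the VMID and target node from the example in the next
section:

----
# qm migrate 106 tre --online --migration_type insecure
----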
1202
1203 Migration Network
1204 ~~~~~~~~~~~~~~~~~
1205
1206 By default, {pve} uses the network in which cluster communication
1207 takes place to send the migration traffic. This is not optimal because
1208 sensitive cluster traffic can be disrupted and this network may not
1209 have the best bandwidth available on the node.
1210
1211 Setting the migration network parameter allows the use of a dedicated
1212 network for the entire migration traffic. In addition to the memory,
1213 this also affects the storage traffic for offline migrations.
1214
1215 The migration network is set as a network in the CIDR notation. This
1216 has the advantage that you do not have to set individual IP addresses
1217 for each node. {pve} can determine the real address on the
1218 destination node from the network specified in the CIDR form. To
1219 enable this, the network must be specified so that each node has one,
1220 but only one IP in the respective network.
1221
1222 Example
1223 ^^^^^^^
1224
1225 We assume that we have a three-node setup with three separate
1226 networks. One for public communication with the Internet, one for
1227 cluster communication and a very fast one, which we want to use as a
1228 dedicated network for migration.
1229
1230 A network configuration for such a setup might look as follows:
1231
1232 ----
1233 iface eno1 inet manual
1234
1235 # public network
1236 auto vmbr0
1237 iface vmbr0 inet static
1238 address 192.X.Y.57
        netmask 255.255.240.0
1240 gateway 192.X.Y.1
1241 bridge_ports eno1
1242 bridge_stp off
1243 bridge_fd 0
1244
1245 # cluster network
1246 auto eno2
1247 iface eno2 inet static
1248 address 10.1.1.1
1249 netmask 255.255.255.0
1250
1251 # fast network
1252 auto eno3
1253 iface eno3 inet static
1254 address 10.1.2.1
1255 netmask 255.255.255.0
1256 ----
1257
1258 Here, we will use the network 10.1.2.0/24 as a migration network. For
1259 a single migration, you can do this using the `migration_network`
1260 parameter of the command line tool:
1261
1262 ----
1263 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1264 ----
1265
1266 To configure this as the default network for all migrations in the
1267 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1268 file:
1269
1270 ----
1271 # use dedicated migration network
1272 migration: secure,network=10.1.2.0/24
1273 ----
1274
1275 NOTE: The migration type must always be set when the migration network
1276 gets set in `/etc/pve/datacenter.cfg`.
1277
1278
1279 ifdef::manvolnum[]
1280 include::pve-copyright.adoc[]
1281 endif::manvolnum[]