1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).
32
33 `pvecm` can be used to create a new cluster, join nodes to a cluster,
34 leave the cluster, get status information and do various other cluster
35 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
36 is used to transparently distribute the cluster configuration to all cluster
37 nodes.
38
39 Grouping nodes into a cluster has the following advantages:
40
* Centralized, web-based management
42
43 * Multi-master clusters: each node can do all management tasks
44
45 * `pmxcfs`: database-driven file system for storing configuration files,
46 replicated in real-time on all nodes using `corosync`.
47
48 * Easy migration of virtual machines and containers between physical
49 hosts
50
51 * Fast deployment
52
53 * Cluster-wide services like firewall and HA
54
55
56 Requirements
57 ------------
58
59 * All nodes must be able to connect to each other via UDP ports 5404 and 5405
60 for corosync to work.
61
62 * Date and time have to be synchronized.
63
64 * SSH tunnel on TCP port 22 between nodes is used.
65
66 * If you are interested in High Availability, you need to have at
67 least three nodes for reliable quorum. All nodes should have the
68 same version.
69
70 * We recommend a dedicated NIC for the cluster traffic, especially if
71 you use shared storage.
72
73 * Root password of a cluster node is required for adding nodes.
74
75 NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
76 nodes.
77
NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
not supported as a production configuration and should only be used temporarily,
while upgrading the whole cluster from one major version to another.
81
82 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
83 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
84 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
85 upgrade procedure to {pve} 6.0.
86
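To verify the version requirement, you can, for example, compare the output of
`pveversion` on each node before creating or joining a cluster:

[source,bash]
----
# run on every node; the reported versions should match
pveversion
----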
87
88 Preparing Nodes
89 ---------------
90
91 First, install {PVE} on all nodes. Make sure that each node is
92 installed with the final hostname and IP configuration. Changing the
93 hostname and IP is not possible after cluster creation.
94
While it's common to reference all node names and their IPs in `/etc/hosts` (or
make their names resolvable through other means), this is not necessary for a
cluster to work. It may be useful however, as you can then connect from one node
to the other via SSH, using the easier-to-remember node name (see also
xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
recommend referencing nodes by their IP addresses in the cluster configuration.
101
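If you do keep name resolution in `/etc/hosts`, the entries could look like the
following sketch (the host names and addresses are examples only, matching the
node names used later in this chapter):

----
192.168.15.91 hp1.example.com hp1
192.168.15.92 hp2.example.com hp2
192.168.15.93 hp3.example.com hp3
192.168.15.94 hp4.example.com hp4
----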
102
103 [[pvecm_create_cluster]]
104 Create a Cluster
105 ----------------
106
You can either create a cluster on the console (login via `ssh`), or through
the API using the {pve} web interface (__Datacenter -> Cluster__).
109
110 NOTE: Use a unique name for your cluster. This name cannot be changed later.
111 The cluster name follows the same rules as node names.
112
113 [[pvecm_cluster_create_via_gui]]
114 Create via Web GUI
115 ~~~~~~~~~~~~~~~~~~
116
117 [thumbnail="screenshot/gui-cluster-create.png"]
118
119 Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
120 name and select a network connection from the dropdown to serve as the main
121 cluster network (Link 0). It defaults to the IP resolved via the node's
122 hostname.
123
124 To add a second link as fallback, you can select the 'Advanced' checkbox and
125 choose an additional network interface (Link 1, see also
126 xref:pvecm_redundancy[Corosync Redundancy]).
127
NOTE: Ensure that the network selected for cluster communication is not used for
any high-traffic loads, like those of (network) storage or live-migration.
While the cluster network itself produces small amounts of data, it is very
sensitive to latency. Check out the full
xref:pvecm_cluster_network_requirements[cluster network requirements].
133
134 [[pvecm_cluster_create_via_cli]]
135 Create via Command Line
136 ~~~~~~~~~~~~~~~~~~~~~~~
137
138 Login via `ssh` to the first {pve} node and run the following command:
139
140 ----
141 hp1# pvecm create CLUSTERNAME
142 ----
143
144 To check the state of the new cluster use:
145
146 ----
147 hp1# pvecm status
148 ----
149
150 Multiple Clusters In Same Network
151 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
152
153 It is possible to create multiple clusters in the same physical or logical
154 network. Each such cluster must have a unique name to avoid possible clashes in
155 the cluster communication stack. This also helps avoid human confusion by making
156 clusters clearly distinguishable.
157
While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
factors. Different clusters in the same network can compete with each other for
these resources, so it may still make sense to use separate physical network
infrastructure for bigger clusters.
163
164 [[pvecm_join_node_to_cluster]]
165 Adding Nodes to the Cluster
166 ---------------------------
167
CAUTION: A node that is about to be added to the cluster cannot hold any guests.
All existing configuration in `/etc/pve` is overwritten when joining a cluster,
since guest IDs could conflict. As a workaround, create a backup of the
guest (`vzdump`) and restore it under a different ID after the node has been added
to the cluster.
173
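A rough sketch of this workaround could look as follows; the guest ID `100`, the
new ID `120`, the `local` storage, and the archive path are placeholders only:

[source,bash]
----
# on the node that is about to join: back up guest 100
vzdump 100 --storage local --mode stop

# after the node has joined the cluster: restore the backup under a new, free ID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma 120 --storage local
----

For containers, `pct restore` is the counterpart to `qmrestore`.
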
174 Join Node to Cluster via GUI
175 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
176
177 [thumbnail="screenshot/gui-cluster-join-information.png"]
178
179 Login to the web interface on an existing cluster node. Under __Datacenter ->
180 Cluster__, click the button *Join Information* at the top. Then, click on the
181 button *Copy Information*. Alternatively, copy the string from the 'Information'
182 field manually.
183
184 [thumbnail="screenshot/gui-cluster-join.png"]
185
186 Next, login to the web interface on the node you want to add.
187 Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
188 'Information' field with the 'Join Information' text you copied earlier.
189 Most settings required for joining the cluster will be filled out
190 automatically. For security reasons, the cluster password has to be entered
191 manually.
192
193 NOTE: To enter all required data manually, you can disable the 'Assisted Join'
194 checkbox.
195
After clicking the *Join* button, the cluster join process will start
immediately. After the node has joined the cluster, its current node certificate
will be replaced by one signed by the cluster certificate authority (CA),
which means that the current session will stop working after a few seconds. You
might then need to force-reload the web interface and log in again with the
cluster credentials.
202
203 Now your node should be visible under __Datacenter -> Cluster__.
204
205 Join Node to Cluster via Command Line
206 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
207
208 Login via `ssh` to the node you want to join into an existing cluster.
209
210 ----
211 hp2# pvecm add IP-ADDRESS-CLUSTER
212 ----
213
214 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
215 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
216
217
218 To check the state of the cluster use:
219
220 ----
221 # pvecm status
222 ----
223
224 .Cluster status after adding 4 nodes
225 ----
226 hp2# pvecm status
227 Quorum information
228 ~~~~~~~~~~~~~~~~~~
229 Date: Mon Apr 20 12:30:13 2015
230 Quorum provider: corosync_votequorum
231 Nodes: 4
232 Node ID: 0x00000001
233 Ring ID: 1/8
234 Quorate: Yes
235
236 Votequorum information
237 ~~~~~~~~~~~~~~~~~~~~~~
238 Expected votes: 4
239 Highest expected: 4
240 Total votes: 4
241 Quorum: 3
242 Flags: Quorate
243
244 Membership information
245 ~~~~~~~~~~~~~~~~~~~~~~
246 Nodeid Votes Name
247 0x00000001 1 192.168.15.91
248 0x00000002 1 192.168.15.92 (local)
249 0x00000003 1 192.168.15.93
250 0x00000004 1 192.168.15.94
251 ----
252
253 If you only want the list of all nodes use:
254
255 ----
256 # pvecm nodes
257 ----
258
259 .List nodes in a cluster
260 ----
261 hp2# pvecm nodes
262
263 Membership information
264 ~~~~~~~~~~~~~~~~~~~~~~
265 Nodeid Votes Name
266 1 1 hp1
267 2 1 hp2 (local)
268 3 1 hp3
269 4 1 hp4
270 ----
271
272 [[pvecm_adding_nodes_with_separated_cluster_network]]
273 Adding Nodes With Separated Cluster Network
274 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
275
When adding a node to a cluster with a separated cluster network, you need to
use the 'link0' parameter to set the node's address on that network:
278
279 [source,bash]
280 ----
281 pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
282 ----
283
284 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
285 kronosnet transport layer, also use the 'link1' parameter.
286
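For example, a join over two separated links could look like this (the
addresses are placeholders, as above):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0 --link1 LOCAL-IP-ADDRESS-LINK1
----
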
287 Using the GUI, you can select the correct interface from the corresponding 'Link 0'
288 and 'Link 1' fields in the *Cluster Join* dialog.
289
290 Remove a Cluster Node
291 ---------------------
292
CAUTION: Read the procedure carefully before proceeding, as it may not be
what you want or need.
295
296 Move all virtual machines from the node. Make sure you have no local
297 data or backups you want to keep, or save them accordingly.
298 In the following example we will remove the node hp4 from the cluster.
299
300 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
301 command to identify the node ID to remove:
302
303 ----
304 hp1# pvecm nodes
305
306 Membership information
307 ~~~~~~~~~~~~~~~~~~~~~~
308 Nodeid Votes Name
309 1 1 hp1 (local)
310 2 1 hp2
311 3 1 hp3
312 4 1 hp4
313 ----
314
315
316 At this point you must power off hp4 and
317 make sure that it will not power on again (in the network) as it
318 is.
319
320 IMPORTANT: As said above, it is critical to power off the node
321 *before* removal, and make sure that it will *never* power on again
322 (in the existing cluster network) as it is.
323 If you power on the node as it is, your cluster will be screwed up and
324 it could be difficult to restore a clean cluster state.
325
326 After powering off the node hp4, we can safely remove it from the cluster.
327
328 ----
329 hp1# pvecm delnode hp4
330 ----
331
If the operation succeeds, no output is returned. Just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:
335
336 ----
337 hp1# pvecm status
338
339 Quorum information
340 ~~~~~~~~~~~~~~~~~~
341 Date: Mon Apr 20 12:44:28 2015
342 Quorum provider: corosync_votequorum
343 Nodes: 3
344 Node ID: 0x00000001
345 Ring ID: 1/8
346 Quorate: Yes
347
348 Votequorum information
349 ~~~~~~~~~~~~~~~~~~~~~~
350 Expected votes: 3
351 Highest expected: 3
352 Total votes: 3
353 Quorum: 2
354 Flags: Quorate
355
356 Membership information
357 ~~~~~~~~~~~~~~~~~~~~~~
358 Nodeid Votes Name
359 0x00000001 1 192.168.15.90 (local)
360 0x00000002 1 192.168.15.91
361 0x00000003 1 192.168.15.92
362 ----
363
364 If, for whatever reason, you want this server to join the same cluster again,
365 you have to
366
367 * reinstall {pve} on it from scratch
368
369 * then join it, as explained in the previous section.
370
NOTE: After removal of the node, its SSH fingerprint will still reside in the
'known_hosts' of the other nodes. If you receive an SSH error after rejoining
a node with the same IP or hostname, run `pvecm updatecerts` once on the
re-added node to update its fingerprint cluster-wide.
375
376 [[pvecm_separate_node_without_reinstall]]
377 Separate A Node Without Reinstalling
378 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
379
CAUTION: This is *not* the recommended method, proceed with caution. Use the
method mentioned above if you're unsure.
382
You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to the shared storage! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across cluster
boundaries. Furthermore, it may also lead to VMID conflicts.
389
It is suggested that you create a new storage, to which only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
396
397 WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
398 run into conflicts and problems.
399
400 First stop the corosync and the pve-cluster services on the node:
401 [source,bash]
402 ----
403 systemctl stop pve-cluster
404 systemctl stop corosync
405 ----
406
407 Start the cluster filesystem again in local mode:
408 [source,bash]
409 ----
410 pmxcfs -l
411 ----
412
413 Delete the corosync configuration files:
414 [source,bash]
415 ----
416 rm /etc/pve/corosync.conf
417 rm /etc/corosync/*
418 ----
419
You can now start the filesystem again as a normal service:
421 [source,bash]
422 ----
423 killall pmxcfs
424 systemctl start pve-cluster
425 ----
426
The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
429 [source,bash]
430 ----
431 pvecm delnode oldnode
432 ----
433
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
436 [source,bash]
437 ----
438 pvecm expected 1
439 ----
440
441 And then repeat the 'pvecm delnode' command.
442
Now switch back to the separated node and delete all remaining cluster files
on it. This ensures that the node can be added to another cluster again
without problems.
446
447 [source,bash]
448 ----
449 rm /var/lib/corosync/*
450 ----
451
As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
'/etc/pve/nodes/NODENAME' directory recursively, but check three times that
you used the correct one before deleting it.
456
CAUTION: The node's SSH keys are still in the 'authorized_key' file. This means
that the nodes can still connect to each other with public key authentication.
This should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
461
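Assuming the key comments contain the old node name (here the 'oldnode' from the
example above), you can list the entries in question before removing them with
an editor of your choice:

[source,bash]
----
# show which keys still reference the separated node
grep oldnode /etc/pve/priv/authorized_keys
----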
462
463 Quorum
464 ------
465
{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
468
469 [quote, from Wikipedia, Quorum (distributed computing)]
470 ____
471 A quorum is the minimum number of votes that a distributed transaction
472 has to obtain in order to be allowed to perform an operation in a
473 distributed system.
474 ____
475
In case of network partitioning, state changes require that a
majority of nodes (more than half of all votes) is online. For example, a
five-node cluster with one vote per node stays writable as long as at least
three nodes can communicate. The cluster switches to read-only mode
if it loses quorum.
479
480 NOTE: {pve} assigns a single vote to each node by default.
481
482
483 Cluster Network
484 ---------------
485
The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).
491
492 [[pvecm_cluster_network_requirements]]
493 Network Requirements
494 ~~~~~~~~~~~~~~~~~~~~
Corosync needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. The network should not be used heavily by other
members; ideally, corosync runs on its own network. Do not use a shared network
for corosync and storage (except as a potential low-priority fallback in a
xref:pvecm_redundancy[redundant] configuration).
500
501 Before setting up a cluster, it is good practice to check if the network is fit
502 for that purpose. To make sure the nodes can connect to each other on the
503 cluster network, you can test the connectivity between them with the `ping`
504 tool.
505
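For example, to get a rough idea of reachability and latency between two
prospective cluster nodes (the address is a placeholder for the other node's
cluster network IP):

[source,bash]
----
# send ten echo requests and check the round-trip times
ping -c 10 10.10.10.2
----
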
506 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
507 be generated - no manual action is required.
508
509 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
510 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
511 communication, which, for now, only supports regular UDP unicast.
512
513 CAUTION: You can still enable Multicast or legacy unicast by setting your
514 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
515 but keep in mind that this will disable all cryptography and redundancy support.
516 This is therefore not recommended.
517
518 Separate Cluster Network
519 ~~~~~~~~~~~~~~~~~~~~~~~~
520
When creating a cluster without any parameters, the corosync cluster network is
generally shared with the web UI and the VMs and their traffic. Depending on
your setup, even storage traffic may get sent over the same network. It's
recommended to change that, as corosync is a time-critical, real-time
application.
526
527 Setting Up A New Network
528 ^^^^^^^^^^^^^^^^^^^^^^^^
529
530 First you have to set up a new network interface. It should be on a physically
531 separate network. Ensure that your network fulfills the
532 xref:pvecm_cluster_network_requirements[cluster network requirements].
533
534 Separate On Cluster Creation
535 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
536
537 This is possible via the 'linkX' parameters of the 'pvecm create'
538 command used for creating a new cluster.
539
540 If you have set up an additional NIC with a static address on 10.10.10.1/25,
541 and want to send and receive all cluster communication over this interface,
542 you would execute:
543
544 [source,bash]
545 ----
546 pvecm create test --link0 10.10.10.1
547 ----
548
549 To check if everything is working properly execute:
550 [source,bash]
551 ----
552 systemctl status corosync
553 ----
554
555 Afterwards, proceed as described above to
556 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
557
558 [[pvecm_separate_cluster_net_after_creation]]
559 Separate After Cluster Creation
560 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
561
562 You can do this if you have already created a cluster and want to switch
563 its communication to another network, without rebuilding the whole cluster.
564 This change may lead to short durations of quorum loss in the cluster, as nodes
565 have to restart corosync and come up one after the other on the new network.
566
567 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
568 Then, open it and you should see a file similar to:
569
570 ----
571 logging {
572 debug: off
573 to_syslog: yes
574 }
575
576 nodelist {
577
578 node {
579 name: due
580 nodeid: 2
581 quorum_votes: 1
582 ring0_addr: due
583 }
584
585 node {
586 name: tre
587 nodeid: 3
588 quorum_votes: 1
589 ring0_addr: tre
590 }
591
592 node {
593 name: uno
594 nodeid: 1
595 quorum_votes: 1
596 ring0_addr: uno
597 }
598
599 }
600
601 quorum {
602 provider: corosync_votequorum
603 }
604
605 totem {
606 cluster_name: testcluster
607 config_version: 3
608 ip_version: ipv4-6
609 secauth: on
610 version: 2
611 interface {
612 linknumber: 0
613 }
614
615 }
616 ----
617
618 NOTE: `ringX_addr` actually specifies a corosync *link address*, the name "ring"
619 is a remnant of older corosync versions that is kept for backwards
620 compatibility.
621
622 The first thing you want to do is add the 'name' properties in the node entries
623 if you do not see them already. Those *must* match the node name.
624
Then replace all addresses from the 'ring0_addr' properties of all nodes with
the new addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes (see also
xref:pvecm_corosync_addresses[Link Address Types]).
629
In this example, we want to switch the cluster communication to the
10.10.10.1/25 network, so we replace all 'ring0_addr' values accordingly.
632
NOTE: The exact same procedure can be used to change other 'ringX_addr' values
as well. However, we recommend not changing multiple addresses at once, to make
it easier to recover if something goes wrong.
636
637 After we increase the 'config_version' property, the new configuration file
638 should look like:
639
640 ----
641 logging {
642 debug: off
643 to_syslog: yes
644 }
645
646 nodelist {
647
648 node {
649 name: due
650 nodeid: 2
651 quorum_votes: 1
652 ring0_addr: 10.10.10.2
653 }
654
655 node {
656 name: tre
657 nodeid: 3
658 quorum_votes: 1
659 ring0_addr: 10.10.10.3
660 }
661
662 node {
663 name: uno
664 nodeid: 1
665 quorum_votes: 1
666 ring0_addr: 10.10.10.1
667 }
668
669 }
670
671 quorum {
672 provider: corosync_votequorum
673 }
674
675 totem {
676 cluster_name: testcluster
677 config_version: 4
678 ip_version: ipv4-6
679 secauth: on
680 version: 2
681 interface {
682 linknumber: 0
683 }
684
685 }
686 ----
687
Then, after a final check that all the changed information is correct, we save it
and once again follow the xref:pvecm_edit_corosync_conf[edit corosync.conf file]
section, to bring it into effect.
691
692 The changes will be applied live, so restarting corosync is not strictly
693 necessary. If you changed other settings as well, or notice corosync
694 complaining, you can optionally trigger a restart.
695
696 On a single node execute:
697
698 [source,bash]
699 ----
700 systemctl restart corosync
701 ----
702
703 Now check if everything is fine:
704
705 [source,bash]
706 ----
707 systemctl status corosync
708 ----
709
If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.
712
713 [[pvecm_corosync_addresses]]
714 Corosync addresses
715 ~~~~~~~~~~~~~~~~~~
716
717 A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
718 `corosync.conf`) can be specified in two ways:
719
720 * **IPv4/v6 addresses** will be used directly. They are recommended, since they
721 are static and usually not changed carelessly.
722
* **Hostnames** will be resolved using `getaddrinfo`, which means that, by
default, IPv6 addresses will be used first, if available (see also
`man gai.conf`). Keep this in mind, especially when upgrading an existing
cluster to IPv6.
727
728 CAUTION: Hostnames should be used with care, since the address they
729 resolve to can be changed without touching corosync or the node it runs on -
730 which may lead to a situation where an address is changed without thinking
731 about implications for corosync.
732
733 A separate, static hostname specifically for corosync is recommended, if
734 hostnames are preferred. Also, make sure that every node in the cluster can
735 resolve all hostnames correctly.
736
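To see how a hostname resolves on a given node, and in which order `getaddrinfo`
returns the address families, `getent` can serve as a quick check (the hostname
is a placeholder):

[source,bash]
----
getent ahosts corosync1.example.com
----
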
737 Since {pve} 5.1, while supported, hostnames will be resolved at the time of
738 entry. Only the resolved IP is then saved to the configuration.
739
740 Nodes that joined the cluster on earlier versions likely still use their
741 unresolved hostname in `corosync.conf`. It might be a good idea to replace
742 them with IPs or a separate hostname, as mentioned above.
743
744
745 [[pvecm_redundancy]]
746 Corosync Redundancy
747 -------------------
748
749 Corosync supports redundant networking via its integrated kronosnet layer by
750 default (it is not supported on the legacy udp/udpu transports). It can be
751 enabled by specifying more than one link address, either via the '--linkX'
752 parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
753 adding a new node) or by specifying more than one 'ringX_addr' in
754 `corosync.conf`.
755
756 NOTE: To provide useful failover, every link should be on its own
757 physical network connection.
758
759 Links are used according to a priority setting. You can configure this priority
760 by setting 'knet_link_priority' in the corresponding interface section in
761 `corosync.conf`, or, preferably, using the 'priority' parameter when creating
762 your cluster with `pvecm`:
763
764 ----
765 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
766 ----
767
768 This would cause 'link1' to be used first, since it has the higher priority.
769
770 If no priorities are configured manually (or two links have the same priority),
771 links will be used in order of their number, with the lower number having higher
772 priority.
773
774 Even if all links are working, only the one with the highest priority will see
775 corosync traffic. Link priorities cannot be mixed, i.e. links with different
776 priorities will not be able to communicate with each other.
777
778 Since lower priority links will not see traffic unless all higher priorities
779 have failed, it becomes a useful strategy to specify even networks used for
780 other tasks (VMs, storage, etc...) as low-priority links. If worst comes to
781 worst, a higher-latency or more congested connection might be better than no
782 connection at all.
783
784 Adding Redundant Links To An Existing Cluster
785 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
786
787 To add a new link to a running configuration, first check how to
788 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
789
790 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
791 sure that your 'X' is the same for every node you add it to, and that it is
792 unique for each node.
793
794 Lastly, add a new 'interface', as shown below, to your `totem`
795 section, replacing 'X' with your link number chosen above.
796
797 Assuming you added a link with number 1, the new configuration file could look
798 like this:
799
800 ----
801 logging {
802 debug: off
803 to_syslog: yes
804 }
805
806 nodelist {
807
808 node {
809 name: due
810 nodeid: 2
811 quorum_votes: 1
812 ring0_addr: 10.10.10.2
813 ring1_addr: 10.20.20.2
814 }
815
816 node {
817 name: tre
818 nodeid: 3
819 quorum_votes: 1
820 ring0_addr: 10.10.10.3
821 ring1_addr: 10.20.20.3
822 }
823
824 node {
825 name: uno
826 nodeid: 1
827 quorum_votes: 1
828 ring0_addr: 10.10.10.1
829 ring1_addr: 10.20.20.1
830 }
831
832 }
833
834 quorum {
835 provider: corosync_votequorum
836 }
837
838 totem {
839 cluster_name: testcluster
840 config_version: 4
841 ip_version: ipv4-6
842 secauth: on
843 version: 2
844 interface {
845 linknumber: 0
846 }
847 interface {
848 linknumber: 1
849 }
850 }
851 ----
852
853 The new link will be enabled as soon as you follow the last steps to
854 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
855 be necessary. You can check that corosync loaded the new link using:
856
857 ----
858 journalctl -b -u corosync
859 ----
860
861 It might be a good idea to test the new link by temporarily disconnecting the
862 old link on one node and making sure that its status remains online while
863 disconnected:
864
865 ----
866 pvecm status
867 ----
868
869 If you see a healthy cluster state, it means that your new link is being used.
870
871
872 Corosync External Vote Support
873 ------------------------------
874
875 This section describes a way to deploy an external voter in a {pve} cluster.
876 When configured, the cluster can sustain more node failures without
877 violating safety properties of the cluster communication.
878
For this to work, there are two services involved:

* A so-called QDevice daemon, which runs on each {pve} node.

* An external vote daemon, which runs on an independent server.

As a result, you can achieve higher availability, even in smaller setups (for
example 2+1 nodes).
887
888 QDevice Technical Overview
889 ~~~~~~~~~~~~~~~~~~~~~~~~~~
890
The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on the decision of an externally running third-party arbitrator.
Its primary use is to allow a cluster to sustain more node failures than
standard quorum rules allow. This can be done safely as the external device
can see all nodes and thus choose only one set of nodes to give its vote.
This will only be done if said set of nodes can have quorum (again) after
receiving the third-party vote.
899
Currently, only 'QDevice Net' is supported as a third-party arbitrator. It is
a daemon which provides a vote to a cluster partition, if it can reach the
partition members over the network. It will only give votes to one partition
of a cluster at any time.
It's designed to support multiple clusters and is almost configuration and
state free. New clusters are handled dynamically and no configuration file
is needed on the host running a QDevice.
907
The only requirements for the external host are that it needs network access to
the cluster and has a corosync-qnetd package available. We provide a package
for Debian-based hosts; other Linux distributions should also have a package
available through their respective package manager.
912
NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP. The daemon may even run outside of the cluster's LAN and can have longer
latencies than 2 ms.
916
917 Supported Setups
918 ~~~~~~~~~~~~~~~~
919
We support QDevices for clusters with an even number of nodes and recommend
them for 2-node clusters, if they should provide higher availability.
For clusters with an odd node count, we currently discourage the use of
QDevices. The reason for this is the difference in the votes which the QDevice
provides for each cluster type. Even-numbered clusters get a single additional
vote, which only increases availability, because if the QDevice
itself fails, you are in the same position as with no QDevice at all.

With an odd-numbered cluster size, on the other hand, the QDevice provides
'(N-1)' votes -- where 'N' corresponds to the cluster node count. This
alternative behavior makes sense; if it provided only one additional vote, the
cluster could get into a split-brain situation.
This algorithm would allow all nodes but one (and naturally the
QDevice itself) to fail.
934 There are two drawbacks with this:
935
* If the QNet daemon itself fails, no other node may fail or the cluster
immediately loses quorum. For example, in a cluster with 15 nodes, 7
could fail before the cluster becomes inquorate. But, if a QDevice is
configured here and it then fails itself, **no single node** of
the 15 may fail. The QDevice acts almost as a single point of failure in
this case.

* The fact that all but one node plus the QDevice may fail sounds promising at
first, but this may result in a mass recovery of HA services, which could
overload the single remaining node. Furthermore, a Ceph server will stop
providing services if only '((N-1)/2)' nodes or less remain online.
947
If you understand the drawbacks and implications, you can decide yourself whether
you should use this technology in an odd-numbered cluster setup.
950
951 QDevice-Net Setup
952 ~~~~~~~~~~~~~~~~~
953
We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
configured to do so.
The traffic between the daemon and the cluster must be encrypted to ensure a
safe and secure integration of the QDevice into {pve}.
959
960 First install the 'corosync-qnetd' package on your external server and
961 the 'corosync-qdevice' package on all cluster nodes.
962
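On Debian-based systems, installing the two packages mentioned above could look
like this:

[source,bash]
----
# on the external server
apt install corosync-qnetd

# on all Proxmox VE cluster nodes
apt install corosync-qdevice
----
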
963 After that, ensure that all your nodes on the cluster are online.
964
965 You can now easily set up your QDevice by running the following command on one
966 of the {pve} nodes:
967
968 ----
969 pve# pvecm qdevice setup <QDEVICE-IP>
970 ----
971
972 The SSH key from the cluster will be automatically copied to the QDevice. You
973 might need to enter an SSH password during this step.
974
975 After you enter the password and all the steps are successfully completed, you
976 will see "Done". You can check the status now:
977
978 ----
979 pve# pvecm status
980
981 ...
982
983 Votequorum information
984 ~~~~~~~~~~~~~~~~~~~~~
985 Expected votes: 3
986 Highest expected: 3
987 Total votes: 3
988 Quorum: 2
989 Flags: Quorate Qdevice
990
991 Membership information
992 ~~~~~~~~~~~~~~~~~~~~~~
993 Nodeid Votes Qdevice Name
994 0x00000001 1 A,V,NMW 192.168.22.180 (local)
995 0x00000002 1 A,V,NMW 192.168.22.181
996 0x00000000 1 Qdevice
997
998 ----
999
1000 which means the QDevice is set up.
1001
1002 Frequently Asked Questions
1003 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1004
1005 Tie Breaking
1006 ^^^^^^^^^^^^
1007
In case of a tie, where two same-sized cluster partitions cannot see each other
but can see the QDevice, the QDevice chooses one of those partitions randomly
and provides a vote to it.
1011
1012 Possible Negative Implications
1013 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1014
1015 For clusters with an even node count there are no negative implications when
1016 setting up a QDevice. If it fails to work, you are as good as without QDevice at
1017 all.
1018
1019 Adding/Deleting Nodes After QDevice Setup
1020 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1021
1022 If you want to add a new node or remove an existing one from a cluster with a
1023 QDevice setup, you need to remove the QDevice first. After that, you can add or
1024 remove nodes normally. Once you have a cluster with an even node count again,
1025 you can set up the QDevice again as described above.
1026
1027 Removing the QDevice
1028 ^^^^^^^^^^^^^^^^^^^^
1029
1030 If you used the official `pvecm` tool to add the QDevice, you can remove it
1031 trivially by running:
1032
1033 ----
1034 pve# pvecm qdevice remove
1035 ----
1036
1037 //Still TODO
1038 //^^^^^^^^^^
1039 //There is still stuff to add here
1040
1041
1042 Corosync Configuration
1043 ----------------------
1044
1045 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
1046 controls the cluster membership and its network.
1047 For further information about it, check the corosync.conf man page:
1048 [source,bash]
1049 ----
1050 man corosync.conf
1051 ----
1052
1053 For node membership you should always use the `pvecm` tool provided by {pve}.
1054 You may have to edit the configuration file manually for other changes.
1055 Here are a few best practice tips for doing this.
1056
1057 [[pvecm_edit_corosync_conf]]
1058 Edit corosync.conf
1059 ~~~~~~~~~~~~~~~~~~
1060
1061 Editing the corosync.conf file is not always very straightforward. There are
1062 two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
1063 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
1064 propagate the changes to the local one, but not vice versa.
1065
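To see whether the cluster-wide and the locally active configuration currently
differ on a node, a simple comparison can be used:

[source,bash]
----
diff /etc/pve/corosync.conf /etc/corosync/corosync.conf
----
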
The configuration will get updated automatically, as soon as the file changes.
This means that changes which can be integrated in a running corosync will take
effect immediately. Therefore, you should always make a copy and edit that
instead, to avoid triggering unintended changes when saving the file while
still editing it.
1070
1071 [source,bash]
1072 ----
1073 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1074 ----
1075
Then open the config file with your favorite editor; for example, `nano` and
`vim.tiny` come preinstalled on every {pve} node.
1078
NOTE: Always increment the 'config_version' number after configuration changes;
omitting this can lead to problems.
1081
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes other problems.
1085
1086 [source,bash]
1087 ----
1088 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1089 ----
1090
1091 Then move the new configuration file over the old one:
1092 [source,bash]
1093 ----
1094 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1095 ----
1096
You can check with the following commands whether the change was applied
automatically:
1098 [source,bash]
1099 ----
1100 systemctl status corosync
1101 journalctl -b -u corosync
1102 ----
1103
If the change could not be applied automatically, you may have to restart the
corosync service via:
1106 [source,bash]
1107 ----
1108 systemctl restart corosync
1109 ----
1110
1111 On errors check the troubleshooting section below.
1112
1113 Troubleshooting
1114 ~~~~~~~~~~~~~~~
1115
1116 Issue: 'quorum.expected_votes must be configured'
1117 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1118
1119 When corosync starts to fail and you get the following message in the system log:
1120
1121 ----
1122 [...]
1123 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1124 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1125 'configuration error: nodelist or quorum.expected_votes must be configured!'
1126 [...]
1127 ----
1128
1129 It means that the hostname you set for corosync 'ringX_addr' in the
1130 configuration could not be resolved.
1131
1132 Write Configuration When Not Quorate
1133 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1134
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
understand what you are doing, use:
1137 [source,bash]
1138 ----
1139 pvecm expected 1
1140 ----
1141
1142 This sets the expected vote count to 1 and makes the cluster quorate. You can
1143 now fix your configuration, or revert it back to the last working backup.
1144
This is not enough if corosync cannot start anymore. In that case, it is best to
edit the local copy of the corosync configuration in '/etc/corosync/corosync.conf',
so that corosync can start again. Ensure that this configuration has the same
content on all nodes, to avoid split-brain situations. If you are not sure what
went wrong, it's best to ask the Proxmox community for help.
1150
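One way to verify that the local configuration is identical on all nodes is to
compare checksums; a sketch assuming root SSH access and the example node names
used earlier in this chapter:

[source,bash]
----
# run from one node; all checksums should match
for node in uno due tre; do
    ssh root@$node sha256sum /etc/corosync/corosync.conf
done
----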
1151
1152 [[pvecm_corosync_conf_glossary]]
1153 Corosync Configuration Glossary
1154 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1155
1156 ringX_addr::
1157 This names the different link addresses for the kronosnet connections between
1158 nodes.
1159
1160
1161 Cluster Cold Start
1162 ------------------
1163
1164 It is obvious that a cluster is not quorate when all nodes are
1165 offline. This is a common case after a power failure.
1166
1167 NOTE: It is always a good idea to use an uninterruptible power supply
1168 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1169 you want HA.
1170
1171 On node startup, the `pve-guests` service is started and waits for
1172 quorum. Once quorate, it starts all guests which have the `onboot`
1173 flag set.
1174
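For example, to mark a virtual machine and a container for automatic start once
quorum is reached (the guest IDs are placeholders):

[source,bash]
----
qm set 100 --onboot 1
pct set 101 --onboot 1
----
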
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
1178
1179
1180 Guest Migration
1181 ---------------
1182
1183 Migrating virtual guests to other nodes is a useful feature in a
1184 cluster. There are settings to control the behavior of such
1185 migrations. This can be done via the configuration file
1186 `datacenter.cfg` or for a specific migration via API or command line
1187 parameters.
1188
It makes a difference whether a guest is online or offline, and whether it has
local resources (like a local disk).
1191
For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].
1197
1198 Migration Type
1199 ~~~~~~~~~~~~~~
1200
The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to `insecure` means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
1207
1208 Therefore, we strongly recommend using the secure channel if you do
1209 not have full control over the network and can not guarantee that no
1210 one is eavesdropping on it.
1211
1212 NOTE: Storage migration does not follow this setting. Currently, it
1213 always sends the storage content over a secure channel.
1214
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower, because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
1220
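As a sketch, the type can be overridden for a single migration (assuming the
`--migration_type` option of `qm migrate`, the counterpart to the
`--migration_network` option used further below; the VMID and node name are
placeholders), or set cluster-wide in `/etc/pve/datacenter.cfg`, as shown at the
end of this chapter:

[source,bash]
----
# allow an unencrypted channel for this one migration only
qm migrate 106 tre --online --migration_type insecure
----
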
1221 Migration Network
1222 ~~~~~~~~~~~~~~~~~
1223
1224 By default, {pve} uses the network in which cluster communication
1225 takes place to send the migration traffic. This is not optimal because
1226 sensitive cluster traffic can be disrupted and this network may not
1227 have the best bandwidth available on the node.
1228
1229 Setting the migration network parameter allows the use of a dedicated
1230 network for the entire migration traffic. In addition to the memory,
1231 this also affects the storage traffic for offline migrations.
1232
The migration network is set as a network using CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in CIDR form. To
enable this, the network must be specified so that each node has exactly
one IP in the respective network.
1239
1240 Example
1241 ^^^^^^^
1242
1243 We assume that we have a three-node setup with three separate
1244 networks. One for public communication with the Internet, one for
1245 cluster communication and a very fast one, which we want to use as a
1246 dedicated network for migration.
1247
1248 A network configuration for such a setup might look as follows:
1249
1250 ----
1251 iface eno1 inet manual
1252
1253 # public network
1254 auto vmbr0
1255 iface vmbr0 inet static
1256 address 192.X.Y.57
        netmask 255.255.255.0
1258 gateway 192.X.Y.1
1259 bridge_ports eno1
1260 bridge_stp off
1261 bridge_fd 0
1262
1263 # cluster network
1264 auto eno2
1265 iface eno2 inet static
1266 address 10.1.1.1
1267 netmask 255.255.255.0
1268
1269 # fast network
1270 auto eno3
1271 iface eno3 inet static
1272 address 10.1.2.1
1273 netmask 255.255.255.0
1274 ----
1275
1276 Here, we will use the network 10.1.2.0/24 as a migration network. For
1277 a single migration, you can do this using the `migration_network`
1278 parameter of the command line tool:
1279
1280 ----
1281 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1282 ----
1283
1284 To configure this as the default network for all migrations in the
1285 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1286 file:
1287
1288 ----
1289 # use dedicated migration network
1290 migration: secure,network=10.1.2.0/24
1291 ----
1292
1293 NOTE: The migration type must always be set when the migration network
1294 gets set in `/etc/pve/datacenter.cfg`.
1295
1296
1297 ifdef::manvolnum[]
1298 include::pve-copyright.adoc[]
1299 endif::manvolnum[]