1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {PVE} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication, and such clusters can consist of up to 32 physical nodes
31 (probably more, dependent on network latency).
32
33 `pvecm` can be used to create a new cluster, join nodes to a cluster,
34 leave the cluster, get status information and do various other cluster
35 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
36 is used to transparently distribute the cluster configuration to all cluster
37 nodes.
38
39 Grouping nodes into a cluster has the following advantages:
40
41 * Centralized, web based management
42
* Multi-master clusters: each node can do all management tasks
44
45 * `pmxcfs`: database-driven file system for storing configuration files,
46 replicated in real-time on all nodes using `corosync`.
47
48 * Easy migration of virtual machines and containers between physical
49 hosts
50
51 * Fast deployment
52
53 * Cluster-wide services like firewall and HA
54
55
56 Requirements
57 ------------
58
59 * All nodes must be able to connect to each other via UDP ports 5404 and 5405
60 for corosync to work.
61
62 * Date and time have to be synchronized.
63
64 * SSH tunnel on TCP port 22 between nodes is used.
65
66 * If you are interested in High Availability, you need to have at
67 least three nodes for reliable quorum. All nodes should have the
68 same version.
69
70 * We recommend a dedicated NIC for the cluster traffic, especially if
71 you use shared storage.
72
73 * Root password of a cluster node is required for adding nodes.
74
75 NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
76 nodes.
77
NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, this is not
supported as a production configuration and should only be done temporarily,
while upgrading the whole cluster from one major version to another.
81
82 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
83 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
84 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
85 upgrade procedure to {pve} 6.0.
86
87
88 Preparing Nodes
89 ---------------
90
91 First, install {PVE} on all nodes. Make sure that each node is
92 installed with the final hostname and IP configuration. Changing the
93 hostname and IP is not possible after cluster creation.
94
Currently, cluster creation can either be done on the console (login via
`ssh`) or through the API, for which we have a GUI implementation
(__Datacenter -> Cluster__).
98
While it's common to reference all node names and their IPs in `/etc/hosts` (or
make their names resolvable through other means), this is not necessary for a
cluster to work. It may be useful however, as you can then connect from one node
to the other with SSH via the easier to remember node name (see also
xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
recommend referencing nodes by their IP addresses in the cluster configuration.
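
For example (the node names and addresses are placeholders, adjust them to your
environment), corresponding `/etc/hosts` entries could look like:

----
192.168.15.91 hp1.example.com hp1
192.168.15.92 hp2.example.com hp2
192.168.15.93 hp3.example.com hp3
----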
105
106
107 [[pvecm_create_cluster]]
108 Create the Cluster
109 ------------------
110
111 Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
112 This name cannot be changed later. The cluster name follows the same rules as
113 node names.
114
115 ----
116 hp1# pvecm create CLUSTERNAME
117 ----
118
119 NOTE: It is possible to create multiple clusters in the same physical or logical
120 network. Use unique cluster names if you do so. To avoid human confusion, it is
121 also recommended to choose different names even if clusters do not share the
122 cluster network.
123
124 To check the state of your cluster use:
125
126 ----
127 hp1# pvecm status
128 ----
129
130 Multiple Clusters In Same Network
131 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
132
It is possible to create multiple clusters in the same physical or logical
network. Each such cluster must have a unique name. This not only helps admins
distinguish which cluster they are currently operating on, it is also required
to avoid possible clashes in the cluster communication stack.
137
While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
140 factor. Different clusters in the same network can compete with each other for
141 these resources, so it may still make sense to use separate physical network
142 infrastructure for bigger clusters.
143
144 [[pvecm_join_node_to_cluster]]
145 Adding Nodes to the Cluster
146 ---------------------------
147
148 Login via `ssh` to the node you want to add.
149
150 ----
151 hp2# pvecm add IP-ADDRESS-CLUSTER
152 ----
153
154 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
155 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
156
CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore the guests to a different VMID
after adding the node to the cluster.
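
A rough sketch of this workaround for a single VM (the VMIDs 100 and 120, as
well as the backup path, are only examples; containers would use `pct restore`
instead of `qmrestore`):

[source,bash]
----
# on the node that is going to join, before joining: back up the guest
vzdump 100 --dumpdir /mnt/backup --mode stop

# after the node has joined the cluster: restore under a new, unused VMID
qmrestore /mnt/backup/vzdump-qemu-100-*.vma 120
----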
162
163 To check the state of the cluster use:
164
165 ----
166 # pvecm status
167 ----
168
169 .Cluster status after adding 4 nodes
170 ----
171 hp2# pvecm status
172 Quorum information
173 ~~~~~~~~~~~~~~~~~~
174 Date: Mon Apr 20 12:30:13 2015
175 Quorum provider: corosync_votequorum
176 Nodes: 4
177 Node ID: 0x00000001
178 Ring ID: 1/8
179 Quorate: Yes
180
181 Votequorum information
182 ~~~~~~~~~~~~~~~~~~~~~~
183 Expected votes: 4
184 Highest expected: 4
185 Total votes: 4
186 Quorum: 3
187 Flags: Quorate
188
189 Membership information
190 ~~~~~~~~~~~~~~~~~~~~~~
191 Nodeid Votes Name
192 0x00000001 1 192.168.15.91
193 0x00000002 1 192.168.15.92 (local)
194 0x00000003 1 192.168.15.93
195 0x00000004 1 192.168.15.94
196 ----
197
198 If you only want the list of all nodes use:
199
200 ----
201 # pvecm nodes
202 ----
203
204 .List nodes in a cluster
205 ----
206 hp2# pvecm nodes
207
208 Membership information
209 ~~~~~~~~~~~~~~~~~~~~~~
210 Nodeid Votes Name
211 1 1 hp1
212 2 1 hp2 (local)
213 3 1 hp3
214 4 1 hp4
215 ----
216
217 [[pvecm_adding_nodes_with_separated_cluster_network]]
218 Adding Nodes With Separated Cluster Network
219 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
220
When adding a node to a cluster with a separated cluster network, you need to
use the 'link0' parameter to set the node's address on that network:
223
224 [source,bash]
225 ----
pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0
227 ----
228
229 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
230 kronosnet transport layer, also use the 'link1' parameter.
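
For example, to join a node using two separated networks for cluster
communication (the addresses are placeholders for the joining node's own
addresses on those networks):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER --link0 10.10.10.2 --link1 10.20.20.2
----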
231
232
233 Remove a Cluster Node
234 ---------------------
235
CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.
238
239 Move all virtual machines from the node. Make sure you have no local
240 data or backups you want to keep, or save them accordingly.
241 In the following example we will remove the node hp4 from the cluster.
242
243 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
244 command to identify the node ID to remove:
245
246 ----
247 hp1# pvecm nodes
248
249 Membership information
250 ~~~~~~~~~~~~~~~~~~~~~~
251 Nodeid Votes Name
252 1 1 hp1 (local)
253 2 1 hp2
254 3 1 hp3
255 4 1 hp4
256 ----
257
258
259 At this point you must power off hp4 and
260 make sure that it will not power on again (in the network) as it
261 is.
262
263 IMPORTANT: As said above, it is critical to power off the node
264 *before* removal, and make sure that it will *never* power on again
265 (in the existing cluster network) as it is.
If you power on the node as it is, the cluster could end up in a broken state,
and it may be difficult to restore a clean cluster state.
268
269 After powering off the node hp4, we can safely remove it from the cluster.
270
271 ----
272 hp1# pvecm delnode hp4
273 ----
274
If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:
278
279 ----
280 hp1# pvecm status
281
282 Quorum information
283 ~~~~~~~~~~~~~~~~~~
284 Date: Mon Apr 20 12:44:28 2015
285 Quorum provider: corosync_votequorum
286 Nodes: 3
287 Node ID: 0x00000001
288 Ring ID: 1/8
289 Quorate: Yes
290
291 Votequorum information
292 ~~~~~~~~~~~~~~~~~~~~~~
293 Expected votes: 3
294 Highest expected: 3
295 Total votes: 3
296 Quorum: 2
297 Flags: Quorate
298
299 Membership information
300 ~~~~~~~~~~~~~~~~~~~~~~
301 Nodeid Votes Name
302 0x00000001 1 192.168.15.90 (local)
303 0x00000002 1 192.168.15.91
304 0x00000003 1 192.168.15.92
305 ----
306
307 If, for whatever reason, you want this server to join the same cluster again,
308 you have to
309
310 * reinstall {pve} on it from scratch
311
312 * then join it, as explained in the previous section.
313
314 NOTE: After removal of the node, its SSH fingerprint will still reside in the
315 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
316 a node with the same IP or hostname, run `pvecm updatecerts` once on the
317 re-added node to update its fingerprint cluster wide.
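
For example, on the re-added node:

[source,bash]
----
pvecm updatecerts
----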
318
319 [[pvecm_separate_node_without_reinstall]]
320 Separate A Node Without Reinstalling
321 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
322
CAUTION: This is *not* the recommended method, proceed with caution. Use the
above-mentioned method if you're unsure.
325
You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.
332
It's suggested that you create a new storage, to which only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
339
340 WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
341 run into conflicts and problems.
342
343 First stop the corosync and the pve-cluster services on the node:
344 [source,bash]
345 ----
346 systemctl stop pve-cluster
347 systemctl stop corosync
348 ----
349
350 Start the cluster filesystem again in local mode:
351 [source,bash]
352 ----
353 pmxcfs -l
354 ----
355
356 Delete the corosync configuration files:
357 [source,bash]
358 ----
359 rm /etc/pve/corosync.conf
360 rm /etc/corosync/*
361 ----
362
You can now start the filesystem again as a normal service:
364 [source,bash]
365 ----
366 killall pmxcfs
367 systemctl start pve-cluster
368 ----
369
The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
372 [source,bash]
373 ----
374 pvecm delnode oldnode
375 ----
376
If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
379 [source,bash]
380 ----
381 pvecm expected 1
382 ----
383
384 And then repeat the 'pvecm delnode' command.
385
Now switch back to the separated node and delete all remaining files left over
from the old cluster there. This ensures that the node can be added to another
cluster again without problems.
389
390 [source,bash]
391 ----
392 rm /var/lib/corosync/*
393 ----
394
As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.
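
For example, if the separated node was called 'hp4':

[source,bash]
----
rm -rf /etc/pve/nodes/hp4
----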
399
CAUTION: The node's SSH keys are still in the 'authorized_keys' file. This
means that the nodes can still connect to each other with public key
authentication. You should fix this by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
404
405
406 Quorum
407 ------
408
{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
411
412 [quote, from Wikipedia, Quorum (distributed computing)]
413 ____
414 A quorum is the minimum number of votes that a distributed transaction
415 has to obtain in order to be allowed to perform an operation in a
416 distributed system.
417 ____
418
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.
422
423 NOTE: {pve} assigns a single vote to each node by default.
424
425
426 Cluster Network
427 ---------------
428
429 The cluster network is the core of a cluster. All messages sent over it have to
430 be delivered reliably to all nodes in their respective order. In {pve} this
431 part is done by corosync, an implementation of a high performance, low overhead
432 high availability development toolkit. It serves our decentralized
433 configuration file system (`pmxcfs`).
434
435 [[pvecm_cluster_network_requirements]]
436 Network Requirements
437 ~~~~~~~~~~~~~~~~~~~~
438 This needs a reliable network with latencies under 2 milliseconds (LAN
439 performance) to work properly. The network should not be used heavily by other
440 members, ideally corosync runs on its own network. Do not use a shared network
441 for corosync and storage (except as a potential low-priority fallback in a
442 xref:pvecm_redundancy[redundant] configuration).
443
444 Before setting up a cluster, it is good practice to check if the network is fit
445 for that purpose. To make sure the nodes can connect to each other on the
446 cluster network, you can test the connectivity between them with the `ping`
447 tool.
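
For example, assuming another cluster node is reachable at 10.10.10.2, a quick
check of connectivity and round-trip times (which should stay well below the
2 milliseconds mentioned above) could look like:

[source,bash]
----
ping -c 10 10.10.10.2
----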
448
449 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
450 be generated - no manual action is required.
451
452 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
453 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
454 communication, which, for now, only supports regular UDP unicast.
455
456 CAUTION: You can still enable Multicast or legacy unicast by setting your
457 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
458 but keep in mind that this will disable all cryptography and redundancy support.
459 This is therefore not recommended.
460
461 Separate Cluster Network
462 ~~~~~~~~~~~~~~~~~~~~~~~~
463
When creating a cluster without any parameters, the corosync cluster network is
generally shared with the Web UI and the VMs and their traffic. Depending on
your setup, even storage traffic may get sent over the same network. It's
recommended to change that, as corosync is a time-critical, real-time
application.
469
470 Setting Up A New Network
471 ^^^^^^^^^^^^^^^^^^^^^^^^
472
473 First you have to set up a new network interface. It should be on a physically
474 separate network. Ensure that your network fulfills the
475 xref:pvecm_cluster_network_requirements[cluster network requirements].
476
477 Separate On Cluster Creation
478 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
479
480 This is possible via the 'linkX' parameters of the 'pvecm create'
481 command used for creating a new cluster.
482
483 If you have set up an additional NIC with a static address on 10.10.10.1/25,
484 and want to send and receive all cluster communication over this interface,
485 you would execute:
486
487 [source,bash]
488 ----
489 pvecm create test --link0 10.10.10.1
490 ----
491
492 To check if everything is working properly execute:
493 [source,bash]
494 ----
495 systemctl status corosync
496 ----
497
498 Afterwards, proceed as described above to
499 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
500
501 [[pvecm_separate_cluster_net_after_creation]]
502 Separate After Cluster Creation
503 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
504
505 You can do this if you have already created a cluster and want to switch
506 its communication to another network, without rebuilding the whole cluster.
507 This change may lead to short durations of quorum loss in the cluster, as nodes
508 have to restart corosync and come up one after the other on the new network.
509
510 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
511 Then, open it and you should see a file similar to:
512
513 ----
514 logging {
515 debug: off
516 to_syslog: yes
517 }
518
519 nodelist {
520
521 node {
522 name: due
523 nodeid: 2
524 quorum_votes: 1
525 ring0_addr: due
526 }
527
528 node {
529 name: tre
530 nodeid: 3
531 quorum_votes: 1
532 ring0_addr: tre
533 }
534
535 node {
536 name: uno
537 nodeid: 1
538 quorum_votes: 1
539 ring0_addr: uno
540 }
541
542 }
543
544 quorum {
545 provider: corosync_votequorum
546 }
547
548 totem {
549 cluster_name: testcluster
550 config_version: 3
551 ip_version: ipv4-6
552 secauth: on
553 version: 2
554 interface {
555 linknumber: 0
556 }
557
558 }
559 ----
560
561 NOTE: `ringX_addr` actually specifies a corosync *link address*, the name "ring"
562 is a remnant of older corosync versions that is kept for backwards
563 compatibility.
564
565 The first thing you want to do is add the 'name' properties in the node entries
566 if you do not see them already. Those *must* match the node name.
567
568 Then replace all addresses from the 'ring0_addr' properties of all nodes with
569 the new addresses. You may use plain IP addresses or hostnames here. If you use
570 hostnames ensure that they are resolvable from all nodes. (see also
571 xref:pvecm_corosync_addresses[Link Address Types])
572
573 In this example, we want to switch the cluster communication to the
10.10.10.1/25 network. So we replace all 'ring0_addr' values accordingly.
575
576 NOTE: The exact same procedure can be used to change other 'ringX_addr' values
577 as well, although we recommend to not change multiple addresses at once, to make
578 it easier to recover if something goes wrong.
579
580 After we increase the 'config_version' property, the new configuration file
581 should look like:
582
583 ----
584 logging {
585 debug: off
586 to_syslog: yes
587 }
588
589 nodelist {
590
591 node {
592 name: due
593 nodeid: 2
594 quorum_votes: 1
595 ring0_addr: 10.10.10.2
596 }
597
598 node {
599 name: tre
600 nodeid: 3
601 quorum_votes: 1
602 ring0_addr: 10.10.10.3
603 }
604
605 node {
606 name: uno
607 nodeid: 1
608 quorum_votes: 1
609 ring0_addr: 10.10.10.1
610 }
611
612 }
613
614 quorum {
615 provider: corosync_votequorum
616 }
617
618 totem {
619 cluster_name: testcluster
620 config_version: 4
621 ip_version: ipv4-6
622 secauth: on
623 version: 2
624 interface {
625 linknumber: 0
626 }
627
628 }
629 ----
630
631 Then, after a final check if all changed information is correct, we save it and
632 once again follow the xref:pvecm_edit_corosync_conf[edit corosync.conf file]
633 section to bring it into effect.
634
635 The changes will be applied live, so restarting corosync is not strictly
636 necessary. If you changed other settings as well, or notice corosync
637 complaining, you can optionally trigger a restart.
638
639 On a single node execute:
640
641 [source,bash]
642 ----
643 systemctl restart corosync
644 ----
645
646 Now check if everything is fine:
647
648 [source,bash]
649 ----
650 systemctl status corosync
651 ----
652
If corosync runs correctly again, restart it on all other nodes as well.
They will then join the cluster membership one by one on the new network.
655
656 [[pvecm_corosync_addresses]]
657 Corosync addresses
658 ~~~~~~~~~~~~~~~~~~
659
660 A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
661 `corosync.conf`) can be specified in two ways:
662
663 * **IPv4/v6 addresses** will be used directly. They are recommended, since they
664 are static and usually not changed carelessly.
665
666 * **Hostnames** will be resolved using `getaddrinfo`, which means that per
667 default, IPv6 addresses will be used first, if available (see also
668 `man gai.conf`). Keep this in mind, especially when upgrading an existing
669 cluster to IPv6.
670
671 CAUTION: Hostnames should be used with care, since the address they
672 resolve to can be changed without touching corosync or the node it runs on -
673 which may lead to a situation where an address is changed without thinking
674 about implications for corosync.
675
A separate, static hostname specifically for corosync is recommended, if
677 hostnames are preferred. Also, make sure that every node in the cluster can
678 resolve all hostnames correctly.
679
680 Since {pve} 5.1, while supported, hostnames will be resolved at the time of
681 entry. Only the resolved IP is then saved to the configuration.
682
683 Nodes that joined the cluster on earlier versions likely still use their
684 unresolved hostname in `corosync.conf`. It might be a good idea to replace
them with IPs or a separate hostname, as mentioned above.
686
687
688 [[pvecm_redundancy]]
689 Corosync Redundancy
690 -------------------
691
692 Corosync supports redundant networking via its integrated kronosnet layer by
693 default (it is not supported on the legacy udp/udpu transports). It can be
694 enabled by specifying more than one link address, either via the '--linkX'
695 parameters of `pvecm` (while creating a cluster or adding a new node) or by
696 specifying more than one 'ringX_addr' in `corosync.conf`.
697
698 NOTE: To provide useful failover, every link should be on its own
699 physical network connection.
700
701 Links are used according to a priority setting. You can configure this priority
702 by setting 'knet_link_priority' in the corresponding interface section in
`corosync.conf`, or, preferably, using the 'priority' parameter when creating
704 your cluster with `pvecm`:
705
706 ----
707 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=20 --link1 10.20.20.1,priority=15
708 ----
709
This would cause 'link0' to be used first, since it has the higher priority.
711
712 If no priorities are configured manually (or two links have the same priority),
713 links will be used in order of their number, with the lower number having higher
714 priority.
715
716 Even if all links are working, only the one with the highest priority will see
717 corosync traffic. Link priorities cannot be mixed, i.e. links with different
718 priorities will not be able to communicate with each other.
719
720 Since lower priority links will not see traffic unless all higher priorities
721 have failed, it becomes a useful strategy to specify even networks used for
722 other tasks (VMs, storage, etc...) as low-priority links. If worst comes to
723 worst, a higher-latency or more congested connection might be better than no
724 connection at all.
725
726 Adding Redundant Links To An Existing Cluster
727 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
728
729 To add a new link to a running configuration, first check how to
730 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
731
732 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
733 sure that your 'X' is the same for every node you add it to, and that it is
734 unique for each node.
735
736 Lastly, add a new 'interface', as shown below, to your `totem`
737 section, replacing 'X' with your link number chosen above.
738
739 Assuming you added a link with number 1, the new configuration file could look
740 like this:
741
742 ----
743 logging {
744 debug: off
745 to_syslog: yes
746 }
747
748 nodelist {
749
750 node {
751 name: due
752 nodeid: 2
753 quorum_votes: 1
754 ring0_addr: 10.10.10.2
755 ring1_addr: 10.20.20.2
756 }
757
758 node {
759 name: tre
760 nodeid: 3
761 quorum_votes: 1
762 ring0_addr: 10.10.10.3
763 ring1_addr: 10.20.20.3
764 }
765
766 node {
767 name: uno
768 nodeid: 1
769 quorum_votes: 1
770 ring0_addr: 10.10.10.1
771 ring1_addr: 10.20.20.1
772 }
773
774 }
775
776 quorum {
777 provider: corosync_votequorum
778 }
779
780 totem {
781 cluster_name: testcluster
782 config_version: 4
783 ip_version: ipv4-6
784 secauth: on
785 version: 2
786 interface {
787 linknumber: 0
788 }
789 interface {
790 linknumber: 1
791 }
792 }
793 ----
794
795 The new link will be enabled as soon as you follow the last steps to
796 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
797 be necessary. You can check that corosync loaded the new link using:
798
799 ----
800 journalctl -b -u corosync
801 ----
802
803 It might be a good idea to test the new link by temporarily disconnecting the
804 old link on one node and making sure that its status remains online while
805 disconnected:
806
807 ----
808 pvecm status
809 ----
810
811 If you see a healthy cluster state, it means that your new link is being used.
812
813
814 Corosync External Vote Support
815 ------------------------------
816
817 This section describes a way to deploy an external voter in a {pve} cluster.
818 When configured, the cluster can sustain more node failures without
819 violating safety properties of the cluster communication.
820
821 For this to work there are two services involved:
822
823 * a so called qdevice daemon which runs on each {pve} node
824
825 * an external vote daemon which runs on an independent server.
826
827 As a result you can achieve higher availability even in smaller setups (for
828 example 2+1 nodes).
829
830 QDevice Technical Overview
831 ~~~~~~~~~~~~~~~~~~~~~~~~~~
832
The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on an externally running third-party arbitrator's decision.
836 Its primary use is to allow a cluster to sustain more node failures than
837 standard quorum rules allow. This can be done safely as the external device
838 can see all nodes and thus choose only one set of nodes to give its vote.
839 This will only be done if said set of nodes can have quorum (again) when
840 receiving the third-party vote.
841
Currently, only 'QDevice Net' is supported as a third-party arbitrator. It is
a daemon which provides a vote to a cluster partition, if it can reach the
partition members over the network. It will only give votes to one partition
of a cluster at any time.
846 It's designed to support multiple clusters and is almost configuration and
847 state free. New clusters are handled dynamically and no configuration file
848 is needed on the host running a QDevice.
849
The only requirement for the external host is that it needs network access to
the cluster and has the corosync-qnetd package available. We provide such a
package for Debian based hosts; other Linux distributions should also have a
package available through their respective package manager.
854
NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP. The daemon may even run outside of the cluster's LAN and can have
longer latencies than 2 ms.
858
859 Supported Setups
860 ~~~~~~~~~~~~~~~~
861
We support QDevices for clusters with an even number of nodes and recommend
it for 2 node clusters, if they should provide higher availability.
For clusters with an odd node count, we currently discourage the use of
QDevices. The reason for this is the difference in the number of votes the
QDevice provides for each cluster type. Clusters with an even node count get a
single additional vote; this can only increase availability, because if the
QDevice itself fails, you are in the same situation as with no QDevice at all.
869
Now, with an odd numbered cluster size, the QDevice provides '(N-1)' votes --
where 'N' corresponds to the cluster node count. This difference makes
sense: if it provided only one additional vote, the cluster could get into a
split-brain situation.
This algorithm allows all nodes but one (and naturally the
QDevice itself) to fail.
There are two drawbacks with this:
877
* If the QNet daemon itself fails, no other node may fail or the cluster
  immediately loses quorum. For example, in a cluster with 15 nodes, 7
  could fail before the cluster becomes inquorate. But, if a QDevice is
  configured here and that QDevice itself fails, **no single node** of
  the 15 may fail. The QDevice acts almost as a single point of failure in
  this case.
884
* The fact that all but one node plus the QDevice may fail sounds promising at
  first, but this may result in a mass recovery of HA services, which could
  overload the single remaining node. Furthermore, a Ceph server will stop
  providing services if only '((N-1)/2)' nodes or less remain online.
889
890 If you understand the drawbacks and implications you can decide yourself if
891 you should use this technology in an odd numbered cluster setup.
892
893 QDevice-Net Setup
894 ~~~~~~~~~~~~~~~~~
895
We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
898 configured to do so.
899 The traffic between the daemon and the cluster must be encrypted to ensure a
900 safe and secure QDevice integration in {pve}.
901
902 First install the 'corosync-qnetd' package on your external server and
903 the 'corosync-qdevice' package on all cluster nodes.
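
For example, on Debian based hosts this could be done with `apt`:

[source,bash]
----
# on the external server
apt install corosync-qnetd

# on all cluster nodes
apt install corosync-qdevice
----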
904
905 After that, ensure that all your nodes on the cluster are online.
906
907 You can now easily set up your QDevice by running the following command on one
908 of the {pve} nodes:
909
910 ----
911 pve# pvecm qdevice setup <QDEVICE-IP>
912 ----
913
914 The SSH key from the cluster will be automatically copied to the QDevice. You
915 might need to enter an SSH password during this step.
916
917 After you enter the password and all the steps are successfully completed, you
918 will see "Done". You can check the status now:
919
920 ----
921 pve# pvecm status
922
923 ...
924
925 Votequorum information
926 ~~~~~~~~~~~~~~~~~~~~~
927 Expected votes: 3
928 Highest expected: 3
929 Total votes: 3
930 Quorum: 2
931 Flags: Quorate Qdevice
932
933 Membership information
934 ~~~~~~~~~~~~~~~~~~~~~~
935 Nodeid Votes Qdevice Name
936 0x00000001 1 A,V,NMW 192.168.22.180 (local)
937 0x00000002 1 A,V,NMW 192.168.22.181
938 0x00000000 1 Qdevice
939
940 ----
941
942 which means the QDevice is set up.
943
944 Frequently Asked Questions
945 ~~~~~~~~~~~~~~~~~~~~~~~~~~
946
947 Tie Breaking
948 ^^^^^^^^^^^^
949
In case of a tie, where two same-sized cluster partitions cannot see each other
but can both see the QDevice, the QDevice chooses one of those partitions
randomly and provides a vote to it.
953
954 Possible Negative Implications
955 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
956
957 For clusters with an even node count there are no negative implications when
958 setting up a QDevice. If it fails to work, you are as good as without QDevice at
959 all.
960
961 Adding/Deleting Nodes After QDevice Setup
962 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
963
964 If you want to add a new node or remove an existing one from a cluster with a
965 QDevice setup, you need to remove the QDevice first. After that, you can add or
966 remove nodes normally. Once you have a cluster with an even node count again,
967 you can set up the QDevice again as described above.
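
A sketch of this sequence, using the commands described in this chapter (the
QDevice address is a placeholder):

[source,bash]
----
# on an existing cluster node: remove the QDevice first
pvecm qdevice remove

# now add or remove nodes as usual, for example with
# 'pvecm add' (run on the new node) or 'pvecm delnode'

# once the cluster has an even node count again: re-add the QDevice
pvecm qdevice setup <QDEVICE-IP>
----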
968
969 Removing the QDevice
970 ^^^^^^^^^^^^^^^^^^^^
971
972 If you used the official `pvecm` tool to add the QDevice, you can remove it
973 trivially by running:
974
975 ----
976 pve# pvecm qdevice remove
977 ----
978
979 //Still TODO
980 //^^^^^^^^^^
981 //There is still stuff to add here
982
983
984 Corosync Configuration
985 ----------------------
986
987 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
988 controls the cluster membership and its network.
989 For further information about it, check the corosync.conf man page:
990 [source,bash]
991 ----
992 man corosync.conf
993 ----
994
995 For node membership you should always use the `pvecm` tool provided by {pve}.
996 You may have to edit the configuration file manually for other changes.
997 Here are a few best practice tips for doing this.
998
999 [[pvecm_edit_corosync_conf]]
1000 Edit corosync.conf
1001 ~~~~~~~~~~~~~~~~~~
1002
1003 Editing the corosync.conf file is not always very straightforward. There are
1004 two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
1005 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
1006 propagate the changes to the local one, but not vice versa.
1007
The configuration will get updated automatically, as soon as the file changes.
This means that changes which can be integrated in a running corosync will take
effect immediately. Thus, you should always make a copy and edit that instead,
to avoid triggering unintended changes when saving the file while editing.
1012
1013 [source,bash]
1014 ----
1015 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1016 ----
1017
Then, open the config file with your favorite editor; for example, `nano` and
`vim.tiny` are preinstalled on every {pve} node.
1020
1021 NOTE: Always increment the 'config_version' number on configuration changes,
1022 omitting this can lead to problems.
1023
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.
1027
1028 [source,bash]
1029 ----
1030 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1031 ----
1032
1033 Then move the new configuration file over the old one:
1034 [source,bash]
1035 ----
1036 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1037 ----
1038
You can check whether the change was applied automatically, using the following commands:
1040 [source,bash]
1041 ----
1042 systemctl status corosync
1043 journalctl -b -u corosync
1044 ----
1045
If the change could not be applied automatically, you may have to restart the
corosync service via:
1048 [source,bash]
1049 ----
1050 systemctl restart corosync
1051 ----
1052
On errors, check the troubleshooting section below.
1054
1055 Troubleshooting
1056 ~~~~~~~~~~~~~~~
1057
1058 Issue: 'quorum.expected_votes must be configured'
1059 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1060
1061 When corosync starts to fail and you get the following message in the system log:
1062
1063 ----
1064 [...]
1065 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1066 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1067 'configuration error: nodelist or quorum.expected_votes must be configured!'
1068 [...]
1069 ----
1070
1071 It means that the hostname you set for corosync 'ringX_addr' in the
1072 configuration could not be resolved.
1073
1074 Write Configuration When Not Quorate
1075 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1076
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
understand what you are doing, use:
1079 [source,bash]
1080 ----
1081 pvecm expected 1
1082 ----
1083
1084 This sets the expected vote count to 1 and makes the cluster quorate. You can
1085 now fix your configuration, or revert it back to the last working backup.
1086
This is not enough if corosync cannot start anymore. In that case, it is best to
edit the local copy of the corosync configuration in '/etc/corosync/corosync.conf',
so that corosync can start again. Ensure that on all nodes this configuration has
the same content, to avoid split-brain situations. If you are not sure what went
wrong, it's best to ask the Proxmox Community to help you.
1092
1093
1094 [[pvecm_corosync_conf_glossary]]
1095 Corosync Configuration Glossary
1096 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1097
1098 ringX_addr::
1099 This names the different link addresses for the kronosnet connections between
1100 nodes.
1101
1102
1103 Cluster Cold Start
1104 ------------------
1105
1106 It is obvious that a cluster is not quorate when all nodes are
1107 offline. This is a common case after a power failure.
1108
1109 NOTE: It is always a good idea to use an uninterruptible power supply
1110 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1111 you want HA.
1112
1113 On node startup, the `pve-guests` service is started and waits for
1114 quorum. Once quorate, it starts all guests which have the `onboot`
1115 flag set.
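
For example, to mark a VM or a container (the VMIDs are placeholders) to be
started on boot:

[source,bash]
----
qm set 100 --onboot 1
pct set 101 --onboot 1
----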
1116
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes will boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
1120
1121
1122 Guest Migration
1123 ---------------
1124
1125 Migrating virtual guests to other nodes is a useful feature in a
1126 cluster. There are settings to control the behavior of such
1127 migrations. This can be done via the configuration file
1128 `datacenter.cfg` or for a specific migration via API or command line
1129 parameters.
1130
1131 It makes a difference if a Guest is online or offline, or if it has
1132 local resources (like a local disk).
1133
1134 For Details about Virtual Machine Migration see the
1135 xref:qm_migration[QEMU/KVM Migration Chapter].
1136
1137 For Details about Container Migration see the
1138 xref:pct_migration[Container Migration Chapter].
1139
1140 Migration Type
1141 ~~~~~~~~~~~~~~
1142
1143 The migration type defines if the migration data should be sent over an
1144 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
1149
Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and can not guarantee that no
one is eavesdropping on it.
1153
1154 NOTE: Storage migration does not follow this setting. Currently, it
1155 always sends the storage content over a secure channel.
1156
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower, because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
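
For example, to set `insecure` as the cluster-wide default, you could set the
`migration` property in `/etc/pve/datacenter.cfg` accordingly (a sketch; only
do this on networks you fully control):

----
migration: insecure
----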
1162
1163 Migration Network
1164 ~~~~~~~~~~~~~~~~~
1165
1166 By default, {pve} uses the network in which cluster communication
1167 takes place to send the migration traffic. This is not optimal because
1168 sensitive cluster traffic can be disrupted and this network may not
1169 have the best bandwidth available on the node.
1170
1171 Setting the migration network parameter allows the use of a dedicated
1172 network for the entire migration traffic. In addition to the memory,
1173 this also affects the storage traffic for offline migrations.
1174
1175 The migration network is set as a network in the CIDR notation. This
1176 has the advantage that you do not have to set individual IP addresses
1177 for each node. {pve} can determine the real address on the
1178 destination node from the network specified in the CIDR form. To
1179 enable this, the network must be specified so that each node has one,
1180 but only one IP in the respective network.
1181
1182 Example
1183 ^^^^^^^
1184
1185 We assume that we have a three-node setup with three separate
1186 networks. One for public communication with the Internet, one for
1187 cluster communication and a very fast one, which we want to use as a
1188 dedicated network for migration.
1189
1190 A network configuration for such a setup might look as follows:
1191
1192 ----
1193 iface eno1 inet manual
1194
1195 # public network
1196 auto vmbr0
1197 iface vmbr0 inet static
1198 address 192.X.Y.57
        netmask 255.255.240.0
1200 gateway 192.X.Y.1
1201 bridge_ports eno1
1202 bridge_stp off
1203 bridge_fd 0
1204
1205 # cluster network
1206 auto eno2
1207 iface eno2 inet static
1208 address 10.1.1.1
1209 netmask 255.255.255.0
1210
1211 # fast network
1212 auto eno3
1213 iface eno3 inet static
1214 address 10.1.2.1
1215 netmask 255.255.255.0
1216 ----
1217
1218 Here, we will use the network 10.1.2.0/24 as a migration network. For
1219 a single migration, you can do this using the `migration_network`
1220 parameter of the command line tool:
1221
1222 ----
1223 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1224 ----
1225
1226 To configure this as the default network for all migrations in the
1227 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1228 file:
1229
1230 ----
1231 # use dedicated migration network
1232 migration: secure,network=10.1.2.0/24
1233 ----
1234
1235 NOTE: The migration type must always be set when the migration network
1236 gets set in `/etc/pve/datacenter.cfg`.
1237
1238
1239 ifdef::manvolnum[]
1240 include::pve-copyright.adoc[]
1241 endif::manvolnum[]