1 [[chapter_pvecm]]
2 ifdef::manvolnum[]
3 pvecm(1)
4 ========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pvecm - Proxmox VE Cluster Manager
11
12 SYNOPSIS
13 --------
14
15 include::pvecm.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20
21 ifndef::manvolnum[]
22 Cluster Manager
23 ===============
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 The {PVE} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).
32
33 `pvecm` can be used to create a new cluster, join nodes to a cluster,
34 leave the cluster, get status information and do various other cluster
35 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
36 is used to transparently distribute the cluster configuration to all cluster
37 nodes.
38
39 Grouping nodes into a cluster has the following advantages:
40
41 * Centralized, web based management
42
* Multi-master clusters: each node can do all management tasks
44
45 * `pmxcfs`: database-driven file system for storing configuration files,
46 replicated in real-time on all nodes using `corosync`.
47
48 * Easy migration of virtual machines and containers between physical
49 hosts
50
51 * Fast deployment
52
53 * Cluster-wide services like firewall and HA
54
55
56 Requirements
57 ------------
58
59 * All nodes must be able to connect to each other via UDP ports 5404 and 5405
60 for corosync to work.
61
62 * Date and time have to be synchronized.
63
* An SSH tunnel on TCP port 22 between nodes is used.
65
66 * If you are interested in High Availability, you need to have at
67 least three nodes for reliable quorum. All nodes should have the
68 same version.
69
70 * We recommend a dedicated NIC for the cluster traffic, especially if
71 you use shared storage.
72
* The root password of a cluster node is required for adding nodes.
74
75 NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
76 nodes.
77
NOTE: While mixing {pve} 4.4 and {pve} 5.0 nodes is possible, it is not supported as a
production configuration and should only be done temporarily, while upgrading the
whole cluster from one major version to another.
81
82 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
83 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
84 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
85 upgrade procedure to {pve} 6.0.
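
As a quick, optional sanity check for some of the requirements above, you can, for
example, verify that the system time is being synchronized on each node. This is only a
sketch using standard Debian tooling; the exact output depends on the time
synchronization daemon in use:

[source,bash]
----
# prints "System clock synchronized: yes" if NTP synchronization is active
timedatectl | grep -i 'synchronized'
----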
86
87
88 Preparing Nodes
89 ---------------
90
91 First, install {PVE} on all nodes. Make sure that each node is
92 installed with the final hostname and IP configuration. Changing the
93 hostname and IP is not possible after cluster creation.
94
Currently, cluster creation can either be done on the console (login via
`ssh`) or through the API, for which we have a GUI implementation (__Datacenter ->
Cluster__).
98
While it's common to reference all node names and their IPs in `/etc/hosts` (or
make their names resolvable through other means), this is not necessary for a
cluster to work. It may be useful however, as you can then connect from one node
to the other via SSH using the easier-to-remember node name (see also
xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
recommend referencing nodes by their IP addresses in the cluster configuration.
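
If you do choose to maintain `/etc/hosts` entries, they could look like the
following sketch (the host names and addresses are purely illustrative):

----
# /etc/hosts
10.10.10.1 hp1.example.local hp1
10.10.10.2 hp2.example.local hp2
10.10.10.3 hp3.example.local hp3
----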
105
106
107 [[pvecm_create_cluster]]
108 Create the Cluster
109 ------------------
110
111 Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
112 This name cannot be changed later. The cluster name follows the same rules as
113 node names.
114
115 ----
116 hp1# pvecm create CLUSTERNAME
117 ----
118
119 NOTE: It is possible to create multiple clusters in the same physical or logical
120 network. Use unique cluster names if you do so. To avoid human confusion, it is
121 also recommended to choose different names even if clusters do not share the
122 cluster network.
123
124 To check the state of your cluster use:
125
126 ----
127 hp1# pvecm status
128 ----
129
130
131 [[pvecm_join_node_to_cluster]]
132 Adding Nodes to the Cluster
133 ---------------------------
134
135 Login via `ssh` to the node you want to add.
136
137 ----
138 hp2# pvecm add IP-ADDRESS-CLUSTER
139 ----
140
141 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
142 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
143
CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore each VM to a different VMID after
adding the node to the cluster.
149
150 To check the state of the cluster use:
151
152 ----
153 # pvecm status
154 ----
155
156 .Cluster status after adding 4 nodes
157 ----
158 hp2# pvecm status
159 Quorum information
160 ~~~~~~~~~~~~~~~~~~
161 Date: Mon Apr 20 12:30:13 2015
162 Quorum provider: corosync_votequorum
163 Nodes: 4
164 Node ID: 0x00000001
165 Ring ID: 1/8
166 Quorate: Yes
167
168 Votequorum information
169 ~~~~~~~~~~~~~~~~~~~~~~
170 Expected votes: 4
171 Highest expected: 4
172 Total votes: 4
173 Quorum: 3
174 Flags: Quorate
175
176 Membership information
177 ~~~~~~~~~~~~~~~~~~~~~~
178 Nodeid Votes Name
179 0x00000001 1 192.168.15.91
180 0x00000002 1 192.168.15.92 (local)
181 0x00000003 1 192.168.15.93
182 0x00000004 1 192.168.15.94
183 ----
184
185 If you only want the list of all nodes use:
186
187 ----
188 # pvecm nodes
189 ----
190
191 .List nodes in a cluster
192 ----
193 hp2# pvecm nodes
194
195 Membership information
196 ~~~~~~~~~~~~~~~~~~~~~~
197 Nodeid Votes Name
198 1 1 hp1
199 2 1 hp2 (local)
200 3 1 hp3
201 4 1 hp4
202 ----
203
204 [[pvecm_adding_nodes_with_separated_cluster_network]]
205 Adding Nodes With Separated Cluster Network
206 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
207
When adding a node to a cluster with a separated cluster network, you need to
use the 'link0' parameter to set the node's address on that network:
210
211 [source,bash]
212 ----
213 pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
214 ----
215
216 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
217 kronosnet transport layer, also use the 'link1' parameter.
218
219
220 Remove a Cluster Node
221 ---------------------
222
CAUTION: Read the procedure carefully before proceeding, as it may not be
what you want or need.
225
226 Move all virtual machines from the node. Make sure you have no local
227 data or backups you want to keep, or save them accordingly.
228 In the following example we will remove the node hp4 from the cluster.
229
230 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
231 command to identify the node ID to remove:
232
233 ----
234 hp1# pvecm nodes
235
236 Membership information
237 ~~~~~~~~~~~~~~~~~~~~~~
238 Nodeid Votes Name
239 1 1 hp1 (local)
240 2 1 hp2
241 3 1 hp3
242 4 1 hp4
243 ----
244
245
246 At this point you must power off hp4 and
247 make sure that it will not power on again (in the network) as it
248 is.
249
250 IMPORTANT: As said above, it is critical to power off the node
251 *before* removal, and make sure that it will *never* power on again
252 (in the existing cluster network) as it is.
253 If you power on the node as it is, your cluster will be screwed up and
254 it could be difficult to restore a clean cluster state.
255
256 After powering off the node hp4, we can safely remove it from the cluster.
257
258 ----
259 hp1# pvecm delnode hp4
260 ----
261
If the operation succeeds, no output is returned. Just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:
265
266 ----
267 hp1# pvecm status
268
269 Quorum information
270 ~~~~~~~~~~~~~~~~~~
271 Date: Mon Apr 20 12:44:28 2015
272 Quorum provider: corosync_votequorum
273 Nodes: 3
274 Node ID: 0x00000001
275 Ring ID: 1/8
276 Quorate: Yes
277
278 Votequorum information
279 ~~~~~~~~~~~~~~~~~~~~~~
280 Expected votes: 3
281 Highest expected: 3
282 Total votes: 3
283 Quorum: 2
284 Flags: Quorate
285
286 Membership information
287 ~~~~~~~~~~~~~~~~~~~~~~
288 Nodeid Votes Name
289 0x00000001 1 192.168.15.90 (local)
290 0x00000002 1 192.168.15.91
291 0x00000003 1 192.168.15.92
292 ----
293
294 If, for whatever reason, you want this server to join the same cluster again,
295 you have to
296
297 * reinstall {pve} on it from scratch
298
299 * then join it, as explained in the previous section.
300
301 NOTE: After removal of the node, its SSH fingerprint will still reside in the
302 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
303 a node with the same IP or hostname, run `pvecm updatecerts` once on the
304 re-added node to update its fingerprint cluster wide.
305
306 [[pvecm_separate_node_without_reinstall]]
307 Separate A Node Without Reinstalling
308 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
309
CAUTION: This is *not* the recommended method, proceed with caution. Use the
method described above if you're unsure.
312
You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Furthermore, it may also lead to VMID conflicts.
319
It's suggested that you create a new storage, to which only the node which you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
is not accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
326
327 WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
328 run into conflicts and problems.
329
330 First stop the corosync and the pve-cluster services on the node:
331 [source,bash]
332 ----
333 systemctl stop pve-cluster
334 systemctl stop corosync
335 ----
336
337 Start the cluster filesystem again in local mode:
338 [source,bash]
339 ----
340 pmxcfs -l
341 ----
342
343 Delete the corosync configuration files:
344 [source,bash]
345 ----
346 rm /etc/pve/corosync.conf
347 rm /etc/corosync/*
348 ----
349
350 You can now start the filesystem again as normal service:
351 [source,bash]
352 ----
353 killall pmxcfs
354 systemctl start pve-cluster
355 ----
356
The node is now separated from the cluster. You can now delete it from any remaining
node of the cluster with:
359 [source,bash]
360 ----
361 pvecm delnode oldnode
362 ----
363
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
366 [source,bash]
367 ----
368 pvecm expected 1
369 ----
370
371 And then repeat the 'pvecm delnode' command.
372
Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.
376
377 [source,bash]
378 ----
379 rm /var/lib/corosync/*
380 ----
381
As the configuration files from the other cluster nodes are still in the cluster
file system, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you are using the correct one before deleting it.
386
CAUTION: The node's SSH keys are still in the 'authorized_keys' file. This means
that the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
391
392
393 Quorum
394 ------
395
{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
398
399 [quote, from Wikipedia, Quorum (distributed computing)]
400 ____
401 A quorum is the minimum number of votes that a distributed transaction
402 has to obtain in order to be allowed to perform an operation in a
403 distributed system.
404 ____
405
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.
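
For example, in a cluster of five nodes (one vote each), at least three nodes must
be online and able to communicate for the cluster to remain quorate; a partition
containing only two nodes loses quorum and switches to read-only mode.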
409
410 NOTE: {pve} assigns a single vote to each node by default.
411
412
413 Cluster Network
414 ---------------
415
The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).
421
422 [[pvecm_cluster_network_requirements]]
423 Network Requirements
424 ~~~~~~~~~~~~~~~~~~~~
425 This needs a reliable network with latencies under 2 milliseconds (LAN
426 performance) to work properly. The network should not be used heavily by other
427 members, ideally corosync runs on its own network. Do not use a shared network
428 for corosync and storage (except as a potential low-priority fallback in a
429 xref:pvecm_redundancy[redundant] configuration).
430
431 Before setting up a cluster, it is good practice to check if the network is fit
432 for that purpose. To make sure the nodes can connect to each other on the
433 cluster network, you can test the connectivity between them with the `ping`
434 tool.
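
A minimal sketch of such a check, run from one node against the cluster-network
address of another node (the address is illustrative):

[source,bash]
----
# send 10 packets and report packet loss and round-trip latency
ping -c 10 10.10.10.2
----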
435
436 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
437 be generated - no manual action is required.
438
439 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
440 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
441 communication, which, for now, only supports regular UDP unicast.
442
443 CAUTION: You can still enable Multicast or legacy unicast by setting your
444 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
445 but keep in mind that this will disable all cryptography and redundancy support.
446 This is therefore not recommended.
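
For reference, such a legacy transport would be set in the `totem` section of
xref:pvecm_edit_corosync_conf[corosync.conf], roughly as in this sketch (again, not
recommended):

----
totem {
  transport: udp
}
----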
447
448 Separate Cluster Network
449 ~~~~~~~~~~~~~~~~~~~~~~~~
450
When creating a cluster without any parameters, the corosync cluster network is
generally shared with the Web UI and the VMs and their traffic. Depending on
your setup, even storage traffic may get sent over the same network. It's
recommended to change that, as corosync is a time-critical, real-time
application.
456
457 Setting Up A New Network
458 ^^^^^^^^^^^^^^^^^^^^^^^^
459
460 First you have to set up a new network interface. It should be on a physically
461 separate network. Ensure that your network fulfills the
462 xref:pvecm_cluster_network_requirements[cluster network requirements].
463
464 Separate On Cluster Creation
465 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
466
467 This is possible via the 'linkX' parameters of the 'pvecm create'
468 command used for creating a new cluster.
469
470 If you have set up an additional NIC with a static address on 10.10.10.1/25,
471 and want to send and receive all cluster communication over this interface,
472 you would execute:
473
474 [source,bash]
475 ----
476 pvecm create test --link0 10.10.10.1
477 ----
478
479 To check if everything is working properly execute:
480 [source,bash]
481 ----
482 systemctl status corosync
483 ----
484
485 Afterwards, proceed as described above to
486 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
487
488 [[pvecm_separate_cluster_net_after_creation]]
489 Separate After Cluster Creation
490 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
491
492 You can do this if you have already created a cluster and want to switch
493 its communication to another network, without rebuilding the whole cluster.
494 This change may lead to short durations of quorum loss in the cluster, as nodes
495 have to restart corosync and come up one after the other on the new network.
496
497 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
498 Then, open it and you should see a file similar to:
499
500 ----
501 logging {
502 debug: off
503 to_syslog: yes
504 }
505
506 nodelist {
507
508 node {
509 name: due
510 nodeid: 2
511 quorum_votes: 1
512 ring0_addr: due
513 }
514
515 node {
516 name: tre
517 nodeid: 3
518 quorum_votes: 1
519 ring0_addr: tre
520 }
521
522 node {
523 name: uno
524 nodeid: 1
525 quorum_votes: 1
526 ring0_addr: uno
527 }
528
529 }
530
531 quorum {
532 provider: corosync_votequorum
533 }
534
535 totem {
536 cluster_name: testcluster
537 config_version: 3
538 ip_version: ipv4-6
539 secauth: on
540 version: 2
541 interface {
542 linknumber: 0
543 }
544
545 }
546 ----
547
548 NOTE: `ringX_addr` actually specifies a corosync *link address*, the name "ring"
549 is a remnant of older corosync versions that is kept for backwards
550 compatibility.
551
552 The first thing you want to do is add the 'name' properties in the node entries
553 if you do not see them already. Those *must* match the node name.
554
555 Then replace all addresses from the 'ring0_addr' properties of all nodes with
556 the new addresses. You may use plain IP addresses or hostnames here. If you use
557 hostnames ensure that they are resolvable from all nodes. (see also
558 xref:pvecm_corosync_addresses[Link Address Types])
559
In this example, we want to switch the cluster communication to the
10.10.10.0/25 network, so we replace all 'ring0_addr' properties accordingly.
562
NOTE: The exact same procedure can be used to change other 'ringX_addr' values
as well, although we recommend not changing multiple addresses at once, to make
it easier to recover if something goes wrong.
566
567 After we increase the 'config_version' property, the new configuration file
568 should look like:
569
570 ----
571 logging {
572 debug: off
573 to_syslog: yes
574 }
575
576 nodelist {
577
578 node {
579 name: due
580 nodeid: 2
581 quorum_votes: 1
582 ring0_addr: 10.10.10.2
583 }
584
585 node {
586 name: tre
587 nodeid: 3
588 quorum_votes: 1
589 ring0_addr: 10.10.10.3
590 }
591
592 node {
593 name: uno
594 nodeid: 1
595 quorum_votes: 1
596 ring0_addr: 10.10.10.1
597 }
598
599 }
600
601 quorum {
602 provider: corosync_votequorum
603 }
604
605 totem {
606 cluster_name: testcluster
607 config_version: 4
608 ip_version: ipv4-6
609 secauth: on
610 version: 2
611 interface {
612 linknumber: 0
613 }
614
615 }
616 ----
617
618 Then, after a final check if all changed information is correct, we save it and
619 once again follow the xref:pvecm_edit_corosync_conf[edit corosync.conf file]
620 section to bring it into effect.
621
622 The changes will be applied live, so restarting corosync is not strictly
623 necessary. If you changed other settings as well, or notice corosync
624 complaining, you can optionally trigger a restart.
625
626 On a single node execute:
627
628 [source,bash]
629 ----
630 systemctl restart corosync
631 ----
632
633 Now check if everything is fine:
634
635 [source,bash]
636 ----
637 systemctl status corosync
638 ----
639
If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.
642
643 [[pvecm_corosync_addresses]]
644 Corosync addresses
645 ~~~~~~~~~~~~~~~~~~
646
647 A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
648 `corosync.conf`) can be specified in two ways:
649
650 * **IPv4/v6 addresses** will be used directly. They are recommended, since they
651 are static and usually not changed carelessly.
652
653 * **Hostnames** will be resolved using `getaddrinfo`, which means that per
654 default, IPv6 addresses will be used first, if available (see also
655 `man gai.conf`). Keep this in mind, especially when upgrading an existing
656 cluster to IPv6.
657
658 CAUTION: Hostnames should be used with care, since the address they
659 resolve to can be changed without touching corosync or the node it runs on -
660 which may lead to a situation where an address is changed without thinking
661 about implications for corosync.
662
A separate, static hostname specifically for corosync is recommended, if
hostnames are preferred. Also, make sure that every node in the cluster can
resolve all hostnames correctly.
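
To see which addresses a hostname actually resolves to via `getaddrinfo` on a given
node, you could, for instance, use `getent` (the hostname is illustrative):

[source,bash]
----
# lists the addresses getaddrinfo returns for the name
getent ahosts corosync-node1
----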
666
667 Since {pve} 5.1, while supported, hostnames will be resolved at the time of
668 entry. Only the resolved IP is then saved to the configuration.
669
Nodes that joined the cluster on earlier versions likely still use their
unresolved hostname in `corosync.conf`. It might be a good idea to replace
them with IPs or a separate hostname, as mentioned above.
673
674
675 [[pvecm_redundancy]]
676 Corosync Redundancy
677 -------------------
678
679 Corosync supports redundant networking via its integrated kronosnet layer by
680 default (it is not supported on the legacy udp/udpu transports). It can be
681 enabled by specifying more than one link address, either via the '--linkX'
682 parameters of `pvecm` (while creating a cluster or adding a new node) or by
683 specifying more than one 'ringX_addr' in `corosync.conf`.
684
685 NOTE: To provide useful failover, every link should be on its own
686 physical network connection.
687
Links are used according to a priority setting. You can configure this priority
by setting 'knet_link_priority' in the corresponding interface section in
`corosync.conf`, or, preferably, using the 'priority' parameter when creating
your cluster with `pvecm`:
692
693 ----
694 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=20 --link1 10.20.20.1,priority=15
695 ----
696
This would cause 'link0' to be used first, since it has the higher priority.
698
699 If no priorities are configured manually (or two links have the same priority),
700 links will be used in order of their number, with the lower number having higher
701 priority.
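
For reference, setting the priorities directly in the `totem` section of
`corosync.conf` could look roughly like this sketch (the values are illustrative and
match the `pvecm` example above):

----
totem {
  interface {
    linknumber: 0
    knet_link_priority: 20
  }
  interface {
    linknumber: 1
    knet_link_priority: 15
  }
}
----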
702
703 Even if all links are working, only the one with the highest priority will see
704 corosync traffic. Link priorities cannot be mixed, i.e. links with different
705 priorities will not be able to communicate with each other.
706
707 Since lower priority links will not see traffic unless all higher priorities
708 have failed, it becomes a useful strategy to specify even networks used for
709 other tasks (VMs, storage, etc...) as low-priority links. If worst comes to
710 worst, a higher-latency or more congested connection might be better than no
711 connection at all.
712
713 Adding Redundant Links To An Existing Cluster
714 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
715
716 To add a new link to a running configuration, first check how to
717 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
718
719 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
720 sure that your 'X' is the same for every node you add it to, and that it is
721 unique for each node.
722
723 Lastly, add a new 'interface', as shown below, to your `totem`
724 section, replacing 'X' with your link number chosen above.
725
726 Assuming you added a link with number 1, the new configuration file could look
727 like this:
728
729 ----
730 logging {
731 debug: off
732 to_syslog: yes
733 }
734
735 nodelist {
736
737 node {
738 name: due
739 nodeid: 2
740 quorum_votes: 1
741 ring0_addr: 10.10.10.2
742 ring1_addr: 10.20.20.2
743 }
744
745 node {
746 name: tre
747 nodeid: 3
748 quorum_votes: 1
749 ring0_addr: 10.10.10.3
750 ring1_addr: 10.20.20.3
751 }
752
753 node {
754 name: uno
755 nodeid: 1
756 quorum_votes: 1
757 ring0_addr: 10.10.10.1
758 ring1_addr: 10.20.20.1
759 }
760
761 }
762
763 quorum {
764 provider: corosync_votequorum
765 }
766
767 totem {
768 cluster_name: testcluster
769 config_version: 4
770 ip_version: ipv4-6
771 secauth: on
772 version: 2
773 interface {
774 linknumber: 0
775 }
776 interface {
777 linknumber: 1
778 }
779 }
780 ----
781
782 The new link will be enabled as soon as you follow the last steps to
783 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
784 be necessary. You can check that corosync loaded the new link using:
785
786 ----
787 journalctl -b -u corosync
788 ----
789
790 It might be a good idea to test the new link by temporarily disconnecting the
791 old link on one node and making sure that its status remains online while
792 disconnected:
793
794 ----
795 pvecm status
796 ----
797
798 If you see a healthy cluster state, it means that your new link is being used.
799
800
801 Corosync External Vote Support
802 ------------------------------
803
804 This section describes a way to deploy an external voter in a {pve} cluster.
805 When configured, the cluster can sustain more node failures without
806 violating safety properties of the cluster communication.
807
808 For this to work there are two services involved:
809
810 * a so called qdevice daemon which runs on each {pve} node
811
812 * an external vote daemon which runs on an independent server.
813
814 As a result you can achieve higher availability even in smaller setups (for
815 example 2+1 nodes).
816
817 QDevice Technical Overview
818 ~~~~~~~~~~~~~~~~~~~~~~~~~~
819
The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on an externally running third-party arbitrator's decision.
823 Its primary use is to allow a cluster to sustain more node failures than
824 standard quorum rules allow. This can be done safely as the external device
825 can see all nodes and thus choose only one set of nodes to give its vote.
826 This will only be done if said set of nodes can have quorum (again) when
827 receiving the third-party vote.
828
829 Currently only 'QDevice Net' is supported as a third-party arbitrator. It is
830 a daemon which provides a vote to a cluster partition if it can reach the
831 partition members over the network. It will give only votes to one partition
832 of a cluster at any time.
833 It's designed to support multiple clusters and is almost configuration and
834 state free. New clusters are handled dynamically and no configuration file
835 is needed on the host running a QDevice.
836
The only requirements for the external host are that it needs network access to
the cluster and has a corosync-qnetd package available. We provide a package
for Debian based hosts; other Linux distributions should also have a package
available through their respective package manager.
841
NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP. The daemon may even run outside of the cluster's LAN and can have longer
latencies than 2 ms.
845
846 Supported Setups
847 ~~~~~~~~~~~~~~~~
848
We support QDevices for clusters with an even number of nodes and recommend
it for 2-node clusters, if they should provide higher availability.
For clusters with an odd node count, we currently discourage the use of
QDevices. The reason for this is the difference in the votes which the QDevice
provides for each cluster type. Even-numbered clusters get a single additional
vote, which only increases availability, because if the QDevice
itself fails, you are in the same position as with no QDevice at all.
856
Now, with an odd-numbered cluster size, the QDevice provides '(N-1)' votes --
where 'N' corresponds to the cluster node count. This difference makes
sense: if it provided only one additional vote, the cluster could get into a
split-brain situation.
This algorithm allows for all nodes but one (and naturally the
QDevice itself) to fail.
There are two drawbacks to this:
864
* If the QNet daemon itself fails, no other node may fail or the cluster
immediately loses quorum. For example, in a cluster with 15 nodes, 7
could fail before the cluster becomes inquorate. But, if a QDevice is
configured here and it fails itself, **no single node** of
the 15 may fail. The QDevice acts almost as a single point of failure in
this case.
871
* The fact that all but one node plus the QDevice may fail sounds promising at
first, but this may result in a mass recovery of HA services, which could
overload the single remaining node. Furthermore, a Ceph server will stop
providing services if only '((N-1)/2)' nodes or less remain online.
876
877 If you understand the drawbacks and implications you can decide yourself if
878 you should use this technology in an odd numbered cluster setup.
879
880 QDevice-Net Setup
881 ~~~~~~~~~~~~~~~~~
882
We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
configured to do so.
886 The traffic between the daemon and the cluster must be encrypted to ensure a
887 safe and secure QDevice integration in {pve}.
888
889 First install the 'corosync-qnetd' package on your external server and
890 the 'corosync-qdevice' package on all cluster nodes.
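
On Debian-based hosts this could, for example, be done with `apt` (a sketch; run the
first command on the external server and the second on every cluster node):

[source,bash]
----
# on the external voter
apt install corosync-qnetd

# on each cluster node
apt install corosync-qdevice
----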
891
892 After that, ensure that all your nodes on the cluster are online.
893
894 You can now easily set up your QDevice by running the following command on one
895 of the {pve} nodes:
896
897 ----
898 pve# pvecm qdevice setup <QDEVICE-IP>
899 ----
900
901 The SSH key from the cluster will be automatically copied to the QDevice. You
902 might need to enter an SSH password during this step.
903
904 After you enter the password and all the steps are successfully completed, you
905 will see "Done". You can check the status now:
906
907 ----
908 pve# pvecm status
909
910 ...
911
912 Votequorum information
913 ~~~~~~~~~~~~~~~~~~~~~
914 Expected votes: 3
915 Highest expected: 3
916 Total votes: 3
917 Quorum: 2
918 Flags: Quorate Qdevice
919
920 Membership information
921 ~~~~~~~~~~~~~~~~~~~~~~
922 Nodeid Votes Qdevice Name
923 0x00000001 1 A,V,NMW 192.168.22.180 (local)
924 0x00000002 1 A,V,NMW 192.168.22.181
925 0x00000000 1 Qdevice
926
927 ----
928
929 which means the QDevice is set up.
930
931 Frequently Asked Questions
932 ~~~~~~~~~~~~~~~~~~~~~~~~~~
933
934 Tie Breaking
935 ^^^^^^^^^^^^
936
In case of a tie, where two same-sized cluster partitions cannot see each other
but can see the QDevice, the QDevice chooses one of those partitions randomly and
provides a vote to it.
940
941 Possible Negative Implications
942 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
943
For clusters with an even node count, there are no negative implications when
setting up a QDevice. If it fails to work, you are in the same position as without
a QDevice at all.
947
948 Adding/Deleting Nodes After QDevice Setup
949 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
950
951 If you want to add a new node or remove an existing one from a cluster with a
952 QDevice setup, you need to remove the QDevice first. After that, you can add or
953 remove nodes normally. Once you have a cluster with an even node count again,
954 you can set up the QDevice again as described above.
955
956 Removing the QDevice
957 ^^^^^^^^^^^^^^^^^^^^
958
959 If you used the official `pvecm` tool to add the QDevice, you can remove it
960 trivially by running:
961
962 ----
963 pve# pvecm qdevice remove
964 ----
965
966 //Still TODO
967 //^^^^^^^^^^
968 //There is still stuff to add here
969
970
971 Corosync Configuration
972 ----------------------
973
974 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
975 controls the cluster membership and its network.
976 For further information about it, check the corosync.conf man page:
977 [source,bash]
978 ----
979 man corosync.conf
980 ----
981
982 For node membership you should always use the `pvecm` tool provided by {pve}.
983 You may have to edit the configuration file manually for other changes.
984 Here are a few best practice tips for doing this.
985
986 [[pvecm_edit_corosync_conf]]
987 Edit corosync.conf
988 ~~~~~~~~~~~~~~~~~~
989
990 Editing the corosync.conf file is not always very straightforward. There are
991 two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
992 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
993 propagate the changes to the local one, but not vice versa.
994
The configuration will get updated automatically, as soon as the file changes.
This means that changes which can be integrated in a running corosync will take
effect immediately. So you should always make a copy and edit that instead, to
avoid triggering unintended changes when saving the file while editing.
999
1000 [source,bash]
1001 ----
1002 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1003 ----
1004
Then open the config file with your favorite editor; for example, `nano` and
`vim.tiny` are preinstalled on every {pve} node.
1007
NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.
1010
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.
1014
1015 [source,bash]
1016 ----
1017 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1018 ----
1019
1020 Then move the new configuration file over the old one:
1021 [source,bash]
1022 ----
1023 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1024 ----
1025
You may check with the following commands whether the change was applied automatically:
1027 [source,bash]
1028 ----
1029 systemctl status corosync
1030 journalctl -b -u corosync
1031 ----
1032
If the change could not be applied automatically, you may have to restart the
corosync service via:
1035 [source,bash]
1036 ----
1037 systemctl restart corosync
1038 ----
1039
1040 On errors check the troubleshooting section below.
1041
1042 Troubleshooting
1043 ~~~~~~~~~~~~~~~
1044
1045 Issue: 'quorum.expected_votes must be configured'
1046 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1047
1048 When corosync starts to fail and you get the following message in the system log:
1049
1050 ----
1051 [...]
1052 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1053 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1054 'configuration error: nodelist or quorum.expected_votes must be configured!'
1055 [...]
1056 ----
1057
1058 It means that the hostname you set for corosync 'ringX_addr' in the
1059 configuration could not be resolved.
1060
1061 Write Configuration When Not Quorate
1062 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1063
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
1066 [source,bash]
1067 ----
1068 pvecm expected 1
1069 ----
1070
1071 This sets the expected vote count to 1 and makes the cluster quorate. You can
1072 now fix your configuration, or revert it back to the last working backup.
1073
This is not enough if corosync cannot start anymore. In that case, it is best to edit
the local copy of the corosync configuration in '/etc/corosync/corosync.conf', so
that corosync can start again. Ensure that on all nodes, this configuration has
the same content to avoid split-brain situations. If you are not sure what went wrong,
it's best to ask the Proxmox Community to help you.
1079
1080
1081 [[pvecm_corosync_conf_glossary]]
1082 Corosync Configuration Glossary
1083 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1084
1085 ringX_addr::
1086 This names the different link addresses for the kronosnet connections between
1087 nodes.
1088
1089
1090 Cluster Cold Start
1091 ------------------
1092
1093 It is obvious that a cluster is not quorate when all nodes are
1094 offline. This is a common case after a power failure.
1095
1096 NOTE: It is always a good idea to use an uninterruptible power supply
1097 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1098 you want HA.
1099
1100 On node startup, the `pve-guests` service is started and waits for
1101 quorum. Once quorate, it starts all guests which have the `onboot`
1102 flag set.
1103
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
1107
1108
1109 Guest Migration
1110 ---------------
1111
1112 Migrating virtual guests to other nodes is a useful feature in a
1113 cluster. There are settings to control the behavior of such
1114 migrations. This can be done via the configuration file
1115 `datacenter.cfg` or for a specific migration via API or command line
1116 parameters.
1117
1118 It makes a difference if a Guest is online or offline, or if it has
1119 local resources (like a local disk).
1120
For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].
1126
1127 Migration Type
1128 ~~~~~~~~~~~~~~
1129
1130 The migration type defines if the migration data should be sent over an
1131 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
1136
Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.
1140
1141 NOTE: Storage migration does not follow this setting. Currently, it
1142 always sends the storage content over a secure channel.
1143
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
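
As a sketch, selecting the unencrypted channel cluster-wide could be done with a
single line in `/etc/pve/datacenter.cfg` (see also the migration network example
further below, which uses the same property):

----
migration: insecure
----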
1149
1150 Migration Network
1151 ~~~~~~~~~~~~~~~~~
1152
1153 By default, {pve} uses the network in which cluster communication
1154 takes place to send the migration traffic. This is not optimal because
1155 sensitive cluster traffic can be disrupted and this network may not
1156 have the best bandwidth available on the node.
1157
1158 Setting the migration network parameter allows the use of a dedicated
1159 network for the entire migration traffic. In addition to the memory,
1160 this also affects the storage traffic for offline migrations.
1161
The migration network is set as a network using CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has exactly
one IP in the respective network.
1168
1169 Example
1170 ^^^^^^^
1171
1172 We assume that we have a three-node setup with three separate
1173 networks. One for public communication with the Internet, one for
1174 cluster communication and a very fast one, which we want to use as a
1175 dedicated network for migration.
1176
1177 A network configuration for such a setup might look as follows:
1178
1179 ----
1180 iface eno1 inet manual
1181
1182 # public network
1183 auto vmbr0
1184 iface vmbr0 inet static
1185 address 192.X.Y.57
netmask 255.255.255.0
1187 gateway 192.X.Y.1
1188 bridge_ports eno1
1189 bridge_stp off
1190 bridge_fd 0
1191
1192 # cluster network
1193 auto eno2
1194 iface eno2 inet static
1195 address 10.1.1.1
1196 netmask 255.255.255.0
1197
1198 # fast network
1199 auto eno3
1200 iface eno3 inet static
1201 address 10.1.2.1
1202 netmask 255.255.255.0
1203 ----
1204
1205 Here, we will use the network 10.1.2.0/24 as a migration network. For
1206 a single migration, you can do this using the `migration_network`
1207 parameter of the command line tool:
1208
1209 ----
1210 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1211 ----
1212
1213 To configure this as the default network for all migrations in the
1214 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1215 file:
1216
1217 ----
1218 # use dedicated migration network
1219 migration: secure,network=10.1.2.0/24
1220 ----
1221
1222 NOTE: The migration type must always be set when the migration network
1223 gets set in `/etc/pve/datacenter.cfg`.
1224
1225
1226 ifdef::manvolnum[]
1227 include::pve-copyright.adoc[]
1228 endif::manvolnum[]