[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information, and perform various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
 replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
 hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be able to connect to each other via UDP ports 5404 and 5405
 for corosync to work.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
 least three nodes for reliable quorum. All nodes should have the
 same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
 you use shared storage.

* The root password of a cluster node is required for adding nodes.

NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.x cluster
nodes.

NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is not
supported as a production configuration and should only be done temporarily
while upgrading the whole cluster from one major version to another.

NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
cluster protocol (corosync) between {pve} 6.x and earlier versions changed
fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
upgrade procedure to {pve} 6.0.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently, the cluster creation can either be done on the console (login via
`ssh`) or through the API, for which we have a GUI implementation
(__Datacenter -> Cluster__).

While it's common to reference all node names and their IPs in `/etc/hosts` (or
make their names resolvable through other means), this is not necessary for a
cluster to work. It may be useful however, as you can then connect from one node
to the other via SSH, using the easier to remember node name (see also
xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
recommend referencing nodes by their IP addresses in the cluster configuration.

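For example, entries in `/etc/hosts` for the nodes used in the examples below
could look like this (a sketch only; the domain is a placeholder and the
addresses must match your actual network):

----
# cluster nodes (example addresses and names)
192.168.15.91 hp1.yourdomain.example hp1
192.168.15.92 hp2.yourdomain.example hp2
192.168.15.93 hp3.yourdomain.example hp3
192.168.15.94 hp4.yourdomain.example hp4
----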

[[pvecm_create_cluster]]
Create the Cluster
------------------

Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later. The cluster name follows the same rules as
node names.

----
 hp1# pvecm create CLUSTERNAME
----

NOTE: It is possible to create multiple clusters in the same physical or logical
network. Use unique cluster names if you do so. To avoid human confusion, it is
also recommended to choose different names even if clusters do not share the
cluster network.

To check the state of your cluster use:

----
 hp1# pvecm status
----

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each such cluster must have a unique name; this not only helps
admins to distinguish which cluster they are currently operating on, it is also
required to avoid possible clashes in the cluster communication stack.

While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
factors. Different clusters in the same network can compete with each other for
these resources, so it may still make sense to use separate physical network
infrastructure for bigger clusters.

[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------

Login via `ssh` to the node you want to add.

----
 hp2# pvecm add IP-ADDRESS-CLUSTER
----

For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up the guests and restore them with
different VMIDs after adding the node to the cluster.

To check the state of the cluster use:

----
 # pvecm status
----

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1/8
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

----
 # pvecm nodes
----

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[pvecm_adding_nodes_with_separated_cluster_network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'link0' parameter to set the node's address on that network:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
----

If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
kronosnet transport layer, also use the 'link1' parameter.


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not be
what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----


At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be damaged and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

----
 hp1# pvecm delnode hp4
----

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1/8
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same cluster again,
you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

NOTE: After removal of the node, its SSH fingerprint will still reside in the
'known_hosts' of the other nodes. If you receive an SSH error after rejoining
a node with the same IP or hostname, run `pvecm updatecerts` once on the
re-added node to update its fingerprint cluster wide.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above-mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to any shared storage! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.

It's suggested that you create a new storage, to which only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting this storage up, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_key' file; this means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.


Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

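For example, in a five-node cluster where each node has a single vote, the
quorum is 3 votes (half of 5, rounded down, plus one). Such a cluster stays
writable as long as at least three nodes can still communicate with each
other; a partition containing only two nodes loses quorum and becomes
read-only.
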

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[pvecm_cluster_network_requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. The network should not be used heavily by other
members; ideally corosync runs on its own network. Do not use a shared network
for corosync and storage (except as a potential low-priority fallback in a
xref:pvecm_redundancy[redundant] configuration).

Before setting up a cluster, it is good practice to check if the network is fit
for that purpose. To make sure the nodes can connect to each other on the
cluster network, you can test the connectivity between them with the `ping`
tool.
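
For example, to get a rough idea of the round-trip latency between two nodes
(the target address below is just a placeholder for a peer's cluster network IP):

[source,bash]
----
# send 10 ICMP echo requests and report the round-trip times
ping -c 10 192.168.15.92
----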

If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
be generated - no manual action is required.

NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
communication, which, for now, only supports regular UDP unicast.

CAUTION: You can still enable Multicast or legacy unicast by setting your
transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
but keep in mind that this will disable all cryptography and redundancy support.
This is therefore not recommended.

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the corosync cluster network is
generally shared with the Web UI and the VMs and their traffic. Depending on
your setup, even storage traffic may get sent over the same network. It's
recommended to change that, as corosync is a time-critical, real-time
application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
xref:pvecm_cluster_network_requirements[cluster network requirements].
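
If you configure the new interface statically in `/etc/network/interfaces`, the
stanza could look like the following sketch (the interface name `eno4` and the
10.10.10.1/25 address are placeholders and must match your actual hardware and
addressing plan):

----
# dedicated corosync network (example values)
auto eno4
iface eno4 inet static
        address 10.10.10.1
        netmask 255.255.255.128
----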

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible via the 'linkX' parameters of the 'pvecm create'
command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25,
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --link0 10.10.10.1
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described above to
xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].

[[pvecm_separate_cluster_net_after_creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short durations of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
Then, open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: testcluster
  config_version: 3
  ip_version: ipv4-6
  secauth: on
  version: 2
  interface {
    linknumber: 0
  }

}
----

NOTE: `ringX_addr` actually specifies a corosync *link address*; the name "ring"
is a remnant of older corosync versions that is kept for backwards
compatibility.

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace all addresses from the 'ring0_addr' properties of all nodes with
the new addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes (see also
xref:pvecm_corosync_addresses[Link Address Types]).

In this example, we want to switch the cluster communication to the
10.10.10.1/25 network. So we replace all 'ring0_addr' values accordingly.

NOTE: The exact same procedure can be used to change other 'ringX_addr' values
as well, although we recommend not changing multiple addresses at once, to make
it easier to recover if something goes wrong.

After we increase the 'config_version' property, the new configuration file
should look like:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: testcluster
  config_version: 4
  ip_version: ipv4-6
  secauth: on
  version: 2
  interface {
    linknumber: 0
  }

}
----

Then, after a final check that all the changed information is correct, we save
it and once again follow the
xref:pvecm_edit_corosync_conf[edit corosync.conf file] section to bring it into
effect.

The changes will be applied live, so restarting corosync is not strictly
necessary. If you changed other settings as well, or notice corosync
complaining, you can optionally trigger a restart.

On a single node execute:

[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync starts correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

[[pvecm_corosync_addresses]]
Corosync addresses
~~~~~~~~~~~~~~~~~~

A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
`corosync.conf`) can be specified in two ways:

* **IPv4/v6 addresses** will be used directly. They are recommended, since they
are static and usually not changed carelessly.

* **Hostnames** will be resolved using `getaddrinfo`, which means that by
default, IPv6 addresses will be used first, if available (see also
`man gai.conf`). Keep this in mind, especially when upgrading an existing
cluster to IPv6.

CAUTION: Hostnames should be used with care, since the address they
resolve to can be changed without touching corosync or the node it runs on,
which may lead to a situation where an address is changed without thinking
about the implications for corosync.

A separate, static hostname specifically for corosync is recommended, if
hostnames are preferred. Also, make sure that every node in the cluster can
resolve all hostnames correctly.

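To check how a name will actually be resolved on a particular node, you can use
`getent ahosts`, which performs the same kind of `getaddrinfo` lookup (replace
`due`, one of the node names from the examples above, with your own):

[source,bash]
----
# show the addresses this node resolves 'due' to, in resolution order
getent ahosts due
----
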
Since {pve} 5.1, while supported, hostnames will be resolved at the time of
entry. Only the resolved IP is then saved to the configuration.

Nodes that joined the cluster on earlier versions likely still use their
unresolved hostname in `corosync.conf`. It might be a good idea to replace
them with IPs or a separate hostname, as mentioned above.


[[pvecm_redundancy]]
Corosync Redundancy
-------------------

Corosync supports redundant networking via its integrated kronosnet layer by
default (it is not supported on the legacy udp/udpu transports). It can be
enabled by specifying more than one link address, either via the '--linkX'
parameters of `pvecm` (while creating a cluster or adding a new node) or by
specifying more than one 'ringX_addr' in `corosync.conf`.

NOTE: To provide useful failover, every link should be on its own
physical network connection.

Links are used according to a priority setting. You can configure this priority
by setting 'knet_link_priority' in the corresponding interface section in
`corosync.conf`, or, preferably, using the 'priority' parameter when creating
your cluster with `pvecm`:

----
 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=20 --link1 10.20.20.1,priority=15
----

This would cause 'link0' to be used first, since it has the higher priority.

If no priorities are configured manually (or two links have the same priority),
links will be used in order of their number, with the lower number having higher
priority.

Even if all links are working, only the one with the highest priority will see
corosync traffic. Link priorities cannot be mixed, i.e. links with different
priorities will not be able to communicate with each other.

Since lower priority links will not see traffic unless all higher priorities
have failed, it becomes a useful strategy to specify even networks used for
other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
worst, a higher-latency or more congested connection might be better than no
connection at all.

Adding Redundant Links To An Existing Cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To add a new link to a running configuration, first check how to
xref:pvecm_edit_corosync_conf[edit the corosync.conf file].

Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
sure that your 'X' is the same for every node you add it to, and that the
address is unique for each node.

Lastly, add a new 'interface', as shown below, to your `totem`
section, replacing 'X' with the link number chosen above.

Assuming you added a link with number 1, the new configuration file could look
like this:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.20.20.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
    ring1_addr: 10.20.20.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.20.20.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: testcluster
  config_version: 4
  ip_version: ipv4-6
  secauth: on
  version: 2
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
}
----

The new link will be enabled as soon as you follow the last steps to
xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
be necessary. You can check that corosync loaded the new link using:

----
journalctl -b -u corosync
----

It might be a good idea to test the new link by temporarily disconnecting the
old link on one node and making sure that its status remains online while
disconnected:

----
pvecm status
----

If you see a healthy cluster state, it means that your new link is being used.


Corosync External Vote Support
------------------------------

This section describes a way to deploy an external voter in a {pve} cluster.
When configured, the cluster can sustain more node failures without
violating safety properties of the cluster communication.

For this to work there are two services involved:

* a so-called QDevice daemon which runs on each {pve} node

* an external vote daemon which runs on an independent server

As a result you can achieve higher availability even in smaller setups (for
example 2+1 nodes).

QDevice Technical Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~

The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on the decision of an externally running third-party
arbitrator. Its primary use is to allow a cluster to sustain more node
failures than standard quorum rules allow. This can be done safely as the
external device can see all nodes and thus choose only one set of nodes to
give its vote. This will only be done if said set of nodes can have quorum
(again) when receiving the third-party vote.

Currently only 'QDevice Net' is supported as a third-party arbitrator. It is
a daemon which provides a vote to a cluster partition if it can reach the
partition members over the network. It will only give votes to one partition
of a cluster at any time.
It's designed to support multiple clusters and is almost configuration and
state free. New clusters are handled dynamically and no configuration file
is needed on the host running a QDevice.

The only requirements for the external host are that it needs network access
to the cluster and has a corosync-qnetd package available. We provide such a
package for Debian-based hosts; other Linux distributions should also have a
package available through their respective package manager.

NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP. The daemon may even run outside of the cluster's LAN and can have
longer latencies than 2 ms.

Supported Setups
~~~~~~~~~~~~~~~~

We support QDevices for clusters with an even number of nodes and recommend
it for 2-node clusters, if they should provide higher availability.
For clusters with an odd node count we discourage the use of QDevices
currently. The reason for this is the difference in the votes which the
QDevice provides for each cluster type. Even numbered clusters get a single
additional vote, which only increases availability, because if the QDevice
itself fails, you are in the same position as with no QDevice at all.

With an odd numbered cluster size, the QDevice provides '(N-1)' votes, where
'N' corresponds to the cluster node count. This difference makes sense: if we
had only one additional vote, the cluster could get into a split-brain
situation.
This algorithm allows for all nodes but one (and naturally the
QDevice itself) to fail.
There are two drawbacks with this:

* If the QNet daemon itself fails, no other node may fail or the cluster
 immediately loses quorum. For example, in a cluster with 15 nodes, 7
 could fail before the cluster becomes inquorate. But, if a QDevice is
 configured here and it fails itself, **no single node** of
 the 15 may fail. The QDevice acts almost as a single point of failure in
 this case.

* The fact that all but one node plus QDevice may fail sounds promising at
 first, but this may result in a mass recovery of HA services, which could
 overload the single remaining node. Furthermore, a Ceph server will stop
 providing services if only '((N-1)/2)' nodes or fewer remain online.

If you understand the drawbacks and implications, you can decide yourself
whether you should use this technology in an odd numbered cluster setup.

QDevice-Net Setup
~~~~~~~~~~~~~~~~~

We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
configured to do so.
The traffic between the daemon and the cluster must be encrypted to ensure a
safe and secure QDevice integration in {pve}.

First install the 'corosync-qnetd' package on your external server and
the 'corosync-qdevice' package on all cluster nodes.
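
For example, on a Debian-based external server and on the {pve} nodes this can
be done with `apt` (a sketch; the package names are the ones mentioned above):

----
# on the external server
apt install corosync-qnetd

# on all cluster nodes
apt install corosync-qdevice
----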

After that, ensure that all your nodes on the cluster are online.

You can now easily set up your QDevice by running the following command on one
of the {pve} nodes:

----
pve# pvecm qdevice setup <QDEVICE-IP>
----

The SSH key from the cluster will be automatically copied to the QDevice. You
might need to enter an SSH password during this step.

After you enter the password and all the steps are successfully completed, you
will see "Done". You can check the status now:

----
pve# pvecm status

...

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,V,NMW 192.168.22.180 (local)
0x00000002          1    A,V,NMW 192.168.22.181
0x00000000          1            Qdevice

----

which means the QDevice is set up.

Frequently Asked Questions
~~~~~~~~~~~~~~~~~~~~~~~~~~

Tie Breaking
^^^^^^^^^^^^

In case of a tie, where two same-sized cluster partitions cannot see each other
but can see the QDevice, the QDevice randomly chooses one of those partitions
and provides a vote to it.

Possible Negative Implications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For clusters with an even node count there are no negative implications when
setting up a QDevice. If it fails to work, you are as good as without a QDevice
at all.

Adding/Deleting Nodes After QDevice Setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you want to add a new node or remove an existing one from a cluster with a
QDevice setup, you need to remove the QDevice first. After that, you can add or
remove nodes normally. Once you have a cluster with an even node count again,
you can set up the QDevice again as described above.

Removing the QDevice
^^^^^^^^^^^^^^^^^^^^

If you used the official `pvecm` tool to add the QDevice, you can remove it
trivially by running:

----
pve# pvecm qdevice remove
----

//Still TODO
//^^^^^^^^^^
//There is still stuff to add here


Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
For further information about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[pvecm_edit_corosync_conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always very straightforward. There are
two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect immediately. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on every {pve} node, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

if the change could be applied automatically. If not, you may have to restart the
corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors, check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
 'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.

Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. Here it's best to edit the
local copy of the corosync configuration in '/etc/corosync/corosync.conf' so
that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split-brain situations. If you are not sure what went
wrong, it's best to ask the Proxmox Community to help you.


[[pvecm_corosync_conf_glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different link addresses for the kronosnet connections between
nodes.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
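
The `onboot` flag can be set per guest, for example for a VM (the VMID 101
below is just a placeholder):

[source,bash]
----
# start this VM automatically once the node boots and the cluster is quorate
qm set 101 --onboot 1
----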

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about Virtual Machine Migration see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about Container Migration see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.
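
For a single migration, the type can also be overridden on the command line,
similar to the network example further below (a sketch; the guest ID and target
node are placeholders):

----
# qm migrate 106 tre --online --migration_type insecure
----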

Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has one,
but only one IP in the respective network.

Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
        address 192.X.Y.57
        netmask 255.255.255.0
        gateway 192.X.Y.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
        address 10.1.1.1
        netmask 255.255.255.0

# fast network
auto eno3
iface eno3 inet static
        address 10.1.2.1
        netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]