[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {pve} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication. There's no explicit limit for the number of nodes in a cluster.
In practice, the actual possible node count may be limited by the host and
network performance. Currently (2021), there are reports of clusters (using
high-end enterprise hardware) with over 50 nodes in production.

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information, and do various other cluster-related
tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* Use of `pmxcfs`, a database-driven file system, for storing configuration
  files, replicated in real-time on all nodes using `corosync`

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be able to connect to each other via UDP ports 5404 and 5405
  for corosync to work.

* Date and time must be synchronized (see the check sketch following this list).

* An SSH tunnel on TCP port 22 between nodes is required.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

* The root password of a cluster node is required for adding nodes.

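
The following is a minimal sketch of how you might verify some of these
requirements from one node before creating a cluster. The address
`192.0.2.12` stands in for another prospective cluster node; adapt it to your
environment:

[source,bash]
----
# verify that the system clock is NTP-synchronized
timedatectl status | grep -i synchronized

# check the latency to the other node (should be low, single-digit milliseconds)
ping -c 3 192.0.2.12

# confirm that SSH on TCP port 22 is reachable
ssh root@192.0.2.12 true && echo "SSH OK"
----
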
NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.x cluster
nodes.

NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
not supported as a production configuration and should only be done temporarily,
during an upgrade of the whole cluster from one major version to another.

NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
cluster protocol (corosync) between {pve} 6.x and earlier versions changed
fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
upgrade procedure to {pve} 6.0.


Preparing Nodes
---------------

First, install {pve} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

While it's common to reference all node names and their IPs in `/etc/hosts` (or
make their names resolvable through other means), this is not necessary for a
cluster to work. It may be useful however, as you can then connect from one node
to another via SSH, using the easier-to-remember node name (see also
xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
recommend referencing nodes by their IP addresses in the cluster configuration.

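
For example, a hypothetical three-node setup could add entries like the
following to `/etc/hosts` on every node (the names and the 192.0.2.0/24
addresses are placeholders; adapt them to your network):

----
# /etc/hosts -- convenience name resolution, not required for clustering
192.0.2.11  hp1.example.com  hp1
192.0.2.12  hp2.example.com  hp2
192.0.2.13  hp3.example.com  hp3
----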

[[pvecm_create_cluster]]
Create a Cluster
----------------

You can either create a cluster on the console (login via `ssh`), or through
the API using the {pve} web interface (__Datacenter -> Cluster__).

NOTE: Use a unique name for your cluster. This name cannot be changed later.
The cluster name follows the same rules as node names.

[[pvecm_cluster_create_via_gui]]
Create via Web GUI
~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-cluster-create.png"]

Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
name and select a network connection from the drop-down list to serve as the
main cluster network (Link 0). It defaults to the IP resolved via the node's
hostname.

As of {pve} 6.2, up to 8 fallback links can be added to a cluster. To add a
redundant link, click the 'Add' button and select a link number and IP address
from the respective fields. Prior to {pve} 6.2, to add a second link as
fallback, you can select the 'Advanced' checkbox and choose an additional
network interface (Link 1, see also xref:pvecm_redundancy[Corosync Redundancy]).

NOTE: Ensure that the network selected for cluster communication is not used for
any high traffic purposes, like network storage or live-migration.
While the cluster network itself produces small amounts of data, it is very
sensitive to latency. Check out the full
xref:pvecm_cluster_network_requirements[cluster network requirements].

[[pvecm_cluster_create_via_cli]]
Create via the Command Line
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Log in via `ssh` to the first {pve} node and run the following command:

----
 hp1# pvecm create CLUSTERNAME
----

To check the state of the new cluster, use:

----
 hp1# pvecm status
----

Multiple Clusters in the Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. In this case, each cluster must have a unique name to avoid possible
clashes in the cluster communication stack. Furthermore, this helps avoid human
confusion by making clusters clearly distinguishable.

While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
factors. Different clusters in the same network can compete with each other for
these resources, so it may still make sense to use separate physical network
infrastructure for bigger clusters.

[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------

CAUTION: A node that is about to be added to the cluster cannot hold any guests.
All existing configuration in `/etc/pve` is overwritten when joining a cluster,
since guest IDs could otherwise conflict. As a workaround, you can create a
backup of the guest (`vzdump`) and restore it under a different ID, after the
node has been added to the cluster.

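
As a rough sketch of that workaround, assuming a VM with ID 100 on the joining
node and a hypothetical dump file name, you could back the guest up before the
join and restore it under a free ID afterwards:

[source,bash]
----
# before joining: back up the guest to a dump file
vzdump 100 --mode stop --compress zstd --dumpdir /var/lib/vz/dump

# after joining: restore it under a new, cluster-wide unique ID (e.g. 120)
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2021_09_14-12_00_00.vma.zst 120
----

For containers, `pct restore` is used instead of `qmrestore`.
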
Join Node to Cluster via GUI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-cluster-join-information.png"]

Log in to the web interface on an existing cluster node. Under __Datacenter ->
Cluster__, click the *Join Information* button at the top. Then, click on the
button *Copy Information*. Alternatively, copy the string from the 'Information'
field manually.

[thumbnail="screenshot/gui-cluster-join.png"]

Next, log in to the web interface on the node you want to add.
Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
'Information' field with the 'Join Information' text you copied earlier.
Most settings required for joining the cluster will be filled out
automatically. For security reasons, the cluster password has to be entered
manually.

NOTE: To enter all required data manually, you can disable the 'Assisted Join'
checkbox.

After clicking the *Join* button, the cluster join process will start
immediately. After the node has joined the cluster, its current node certificate
will be replaced by one signed by the cluster certificate authority (CA).
This means that the current session will stop working after a few seconds. You
then might need to force-reload the web interface and log in again with the
cluster credentials.

Now your node should be visible under __Datacenter -> Cluster__.

Join Node to Cluster via Command Line
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Log in to the node you want to join into an existing cluster via `ssh`.

----
 # pvecm add IP-ADDRESS-CLUSTER
----

For `IP-ADDRESS-CLUSTER`, use the IP or hostname of an existing cluster node.
An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).


To check the state of the cluster, use:

----
 # pvecm status
----

.Cluster status after adding 4 nodes
----
 # pvecm status
Cluster information
~~~~~~~~~~~~~~~~~~~
Name:             prod-central
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Tue Sep 14 11:06:47 2021
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1.1a8
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want a list of all nodes, use:

----
 # pvecm nodes
----

.List nodes in a cluster
----
 # pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[pvecm_adding_nodes_with_separated_cluster_network]]
Adding Nodes with Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network, you need to
use the 'link0' parameter to set the node's address on that network:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0
----

If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
Kronosnet transport layer, also use the 'link1' parameter.

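
For example (with placeholder addresses), a node whose address on the separated
cluster network is 10.10.10.2, and which should additionally use 10.20.20.2 as a
redundant link, could be joined like this:

[source,bash]
----
pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 10.20.20.2
----

Here, 10.10.10.1 is an existing cluster member reachable from the new node.
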
Using the GUI, you can select the correct interface from the corresponding
'Link X' fields in the *Cluster Join* dialog.

Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may
not be what you want or need.

Move all virtual machines from the node. Ensure that you have made copies of any
local data or backups that you want to keep. In addition, make sure to remove
any scheduled replication jobs to the node to be removed.

CAUTION: Failure to remove replication jobs to a node before removing said node
will result in the replication jobs becoming irremovable. Note especially that
replication automatically switches direction if a replicated VM is migrated, so
if you migrate a replicated VM away from a node that is to be deleted,
replication jobs to that node will be set up automatically.

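
To find and remove such jobs before deleting the node, you can use the storage
replication CLI. A brief sketch, assuming a job with the ID `100-0` that targets
the node to be removed:

[source,bash]
----
# list all configured replication jobs and their target nodes
pvesr list

# remove the job that replicates to the node which will be deleted
pvesr delete 100-0
----
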
In the following example, we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
 hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----


At this point, you must power off hp4 and ensure that it will not power on
again (in the network) with its current configuration.

IMPORTANT: As mentioned above, it is critical to power off the node
*before* removal, and make sure that it will *not* power on again
(in the existing cluster network) with its current configuration.
If you power on the node as it is, the cluster could end up broken,
and it could be difficult to restore it to a functioning state.

After powering off the node hp4, we can safely remove it from the cluster.

----
 hp1# pvecm delnode hp4
 Killing node 4
----

NOTE: At this point, it is possible that you will receive an error message
stating `Could not kill node (error = CS_ERR_NOT_EXIST)`. This does not
signify an actual failure in the deletion of the node, but rather a failure in
corosync trying to kill an offline node. Thus, it can be safely ignored.

Use `pvecm nodes` or `pvecm status` to check the node list again. It should
look something like:

----
hp1# pvecm status

...

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same cluster again,
you have to:

* do a fresh install of {pve} on it,

* then join it, as explained in the previous section.

NOTE: After removal of the node, its SSH fingerprint will still reside in the
'known_hosts' of the other nodes. If you receive an SSH error after rejoining
a node with the same IP or hostname, run `pvecm updatecerts` once on the
re-added node to update its fingerprint cluster wide.

[[pvecm_separate_node_without_reinstall]]
Separate a Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
previous method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to any shared storage. This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Furthermore, it may also lead to VMID conflicts.

It's suggested that you create a new storage, to which only the node that you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data and VMs from the node to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure that all shared resources are cleanly separated! Otherwise you
will run into conflicts and problems.

First, stop the corosync and pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster file system again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
----

You can now start the file system again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from any
remaining node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails due to a loss of quorum in the remaining nodes, you can set
the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all the remaining cluster
files on it. This ensures that the node can be added to another cluster again
without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
file system, you may want to clean those up too. After making absolutely sure
that you have the correct node name, you can simply remove the entire
directory '/etc/pve/nodes/NODENAME' recursively.

CAUTION: The node's SSH keys will remain in the 'authorized_keys' file. This
means that the nodes can still connect to each other with public key
authentication. You should fix this by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.


Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum. For example, in a five-node cluster, at least three nodes
must be online and able to reach each other for the cluster to stay quorate.

NOTE: {pve} assigns a single vote to each node by default.


Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead,
high availability development toolkit. It serves our decentralized configuration
file system (`pmxcfs`).

[[pvecm_cluster_network_requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
Corosync needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. The network should not be used heavily by other
members; ideally corosync runs on its own network. Do not use a shared network
for corosync and storage (except as a potential low-priority fallback in a
xref:pvecm_redundancy[redundant] configuration).

Before setting up a cluster, it is good practice to check if the network is fit
for that purpose. To ensure that the nodes can connect to each other on the
cluster network, you can test the connectivity between them with the `ping`
tool.

If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
be generated - no manual action is required.

NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
communication, which, for now, only supports regular UDP unicast.

CAUTION: You can still enable Multicast or legacy unicast by setting your
transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
but keep in mind that this will disable all cryptography and redundancy support.
This is therefore not recommended.

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters, the corosync cluster network is
generally shared with the web interface and the VMs' network. Depending on
your setup, even storage traffic may get sent over the same network. It's
recommended to change that, as corosync is a time-critical, real-time
application.

Setting Up a New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First, you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
xref:pvecm_cluster_network_requirements[cluster network requirements].

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible via the 'linkX' parameters of the 'pvecm create'
command, used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25,
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --link0 10.10.10.1
----

To check if everything is working properly, execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described above to
xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].

[[pvecm_separate_cluster_net_after_creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
Then, open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: testcluster
  config_version: 3
  ip_version: ipv4-6
  secauth: on
  version: 2
  interface {
    linknumber: 0
  }

}
----

NOTE: `ringX_addr` actually specifies a corosync *link address*. The name "ring"
is a remnant of older corosync versions that is kept for backwards
compatibility.

The first thing you want to do is add the 'name' properties in the node entries,
if you do not see them already. Those *must* match the node name.

Then replace all addresses from the 'ring0_addr' properties of all nodes with
the new addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes (see also
xref:pvecm_corosync_addresses[Link Address Types]).

In this example, we want to switch cluster communication to the
10.10.10.1/25 network, so we change the 'ring0_addr' of each node respectively.

NOTE: The exact same procedure can be used to change other 'ringX_addr' values
as well. However, we recommend only changing one link address at a time, so
that it's easier to recover if something goes wrong.

After we increase the 'config_version' property, the new configuration file
should look like:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: testcluster
  config_version: 4
  ip_version: ipv4-6
  secauth: on
  version: 2
  interface {
    linknumber: 0
  }

}
----

Then, after a final check to see that all changed information is correct, we
save it and once again follow the
xref:pvecm_edit_corosync_conf[edit corosync.conf file] section to bring it into
effect.

The changes will be applied live, so restarting corosync is not strictly
necessary. If you changed other settings as well, or notice corosync
complaining, you can optionally trigger a restart.

On a single node execute:

[source,bash]
----
systemctl restart corosync
----

Now check if everything is okay:

[source,bash]
----
systemctl status corosync
----

If corosync begins to work again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

[[pvecm_corosync_addresses]]
Corosync Addresses
~~~~~~~~~~~~~~~~~~

A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
`corosync.conf`) can be specified in two ways:

* **IPv4/v6 addresses** can be used directly. They are recommended, since they
are static and usually not changed carelessly.

* **Hostnames** will be resolved using `getaddrinfo`, which means that by
default, IPv6 addresses will be used first, if available (see also
`man gai.conf`). Keep this in mind, especially when upgrading an existing
cluster to IPv6.

CAUTION: Hostnames should be used with care, since the addresses they
resolve to can be changed without touching corosync or the node it runs on -
which may lead to a situation where an address is changed without thinking
about implications for corosync.

A separate, static hostname specifically for corosync is recommended, if
hostnames are preferred. Also, make sure that every node in the cluster can
resolve all hostnames correctly.

Since {pve} 5.1, hostnames, while still supported, are resolved at the time of
entry. Only the resolved IP is saved to the configuration.

Nodes that joined the cluster on earlier versions likely still use their
unresolved hostname in `corosync.conf`. It might be a good idea to replace
them with IPs or a separate hostname, as mentioned above.


[[pvecm_redundancy]]
Corosync Redundancy
-------------------

Corosync supports redundant networking via its integrated Kronosnet layer by
default (it is not supported on the legacy udp/udpu transports). It can be
enabled by specifying more than one link address, either via the '--linkX'
parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
adding a new node) or by specifying more than one 'ringX_addr' in
`corosync.conf`.

NOTE: To provide useful failover, every link should be on its own
physical network connection.

Links are used according to a priority setting. You can configure this priority
by setting 'knet_link_priority' in the corresponding interface section in
`corosync.conf`, or, preferably, using the 'priority' parameter when creating
your cluster with `pvecm`:

----
 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
----

This would cause 'link1' to be used first, since it has the higher priority.

If no priorities are configured manually (or two links have the same priority),
links will be used in order of their number, with the lower number having higher
priority.

Even if all links are working, only the one with the highest priority will see
corosync traffic. Link priorities cannot be mixed, meaning that links with
different priorities will not be able to communicate with each other.

Since lower priority links will not see traffic unless all higher priorities
have failed, it becomes a useful strategy to specify networks used for
other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
worst, a higher latency or more congested connection might be better than no
connection at all.

Adding Redundant Links To An Existing Cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To add a new link to a running configuration, first check how to
xref:pvecm_edit_corosync_conf[edit the corosync.conf file].

Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
sure that your 'X' is the same for every node you add it to, and that it is
unique for each node.

Lastly, add a new 'interface', as shown below, to your `totem`
section, replacing 'X' with the link number chosen above.

Assuming you added a link with number 1, the new configuration file could look
like this:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.20.20.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
    ring1_addr: 10.20.20.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.20.20.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: testcluster
  config_version: 4
  ip_version: ipv4-6
  secauth: on
  version: 2
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
}
----

The new link will be enabled as soon as you follow the last steps to
xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
be necessary. You can check that corosync loaded the new link using:

----
journalctl -b -u corosync
----

It might be a good idea to test the new link by temporarily disconnecting the
old link on one node and making sure that its status remains online while
disconnected:

----
pvecm status
----

If you see a healthy cluster state, it means that your new link is being used.


Role of SSH in {pve} Clusters
-----------------------------

{pve} utilizes SSH tunnels for various features.

* Proxying console/shell sessions (node and guests)
+
When using the shell for node B while being connected to node A, the connection
is made to a terminal proxy on node A, which is in turn connected to the login
shell on node B via a non-interactive SSH tunnel.

* VM and CT memory and local-storage migration in 'secure' mode.
+
During the migration, one or more SSH tunnel(s) are established between the
source and target nodes, in order to exchange migration information and
transfer memory and disk contents.

* Storage replication

.Pitfalls due to automatic execution of `.bashrc` and siblings
[IMPORTANT]
====
In case you have a custom `.bashrc`, or similar files that get executed on
login by the configured shell, `ssh` will automatically run it once the session
is established successfully. This can cause some unexpected behavior, as those
commands may be executed with root permissions on any of the operations
described above, with possibly problematic side effects!

In order to avoid such complications, it's recommended to add a check in
`/root/.bashrc` to make sure the session is interactive, and only then run
`.bashrc` commands.

You can add this snippet at the beginning of your `.bashrc` file:

----
# Early exit if not running interactively to avoid side-effects!
case $- in
    *i*) ;;
      *) return;;
esac
----
====


Corosync External Vote Support
------------------------------

This section describes a way to deploy an external voter in a {pve} cluster.
When configured, the cluster can sustain more node failures without
violating safety properties of the cluster communication.

For this to work, there are two services involved:

* A QDevice daemon which runs on each {pve} node

* An external vote daemon which runs on an independent server

As a result, you can achieve higher availability, even in smaller setups (for
example 2+1 nodes).

QDevice Technical Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~

The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on an externally running third-party arbitrator's decision.
Its primary use is to allow a cluster to sustain more node failures than
standard quorum rules allow. This can be done safely as the external device
can see all nodes and thus choose only one set of nodes to give its vote.
This will only be done if said set of nodes can have quorum (again) after
receiving the third-party vote.

Currently, only 'QDevice Net' is supported as a third-party arbitrator. This is
a daemon which provides a vote to a cluster partition, if it can reach the
partition members over the network. It will only give votes to one partition
of a cluster at any time.
It's designed to support multiple clusters and is almost configuration and
state free. New clusters are handled dynamically and no configuration file
is needed on the host running a QDevice.

The only requirements for the external host are network access to the cluster
and an available corosync-qnetd package. We provide a package
for Debian based hosts, and other Linux distributions should also have a package
available through their respective package manager.

NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP. The daemon may even run outside of the cluster's LAN and can have longer
latencies than 2 ms.

Supported Setups
~~~~~~~~~~~~~~~~

We support QDevices for clusters with an even number of nodes and recommend
them for 2-node clusters, if higher availability is desired.
For clusters with an odd node count, we currently discourage the use of
QDevices. The reason for this is the difference in the votes which the QDevice
provides for each cluster type. Even-numbered clusters get a single additional
vote, which only increases availability, because if the QDevice
itself fails, you are in the same position as with no QDevice at all.

On the other hand, with an odd-numbered cluster size, the QDevice provides
'(N-1)' votes -- where 'N' corresponds to the cluster node count. This
alternative behavior makes sense; if it had only one additional vote, the
cluster could get into a split-brain situation. This algorithm allows for all
nodes but one (and naturally the QDevice itself) to fail. However, there are two
drawbacks to this:

* If the QNet daemon itself fails, no other node may fail or the cluster
  immediately loses quorum. For example, in a cluster with 15 nodes, 7
  could fail before the cluster becomes inquorate. But, if a QDevice is
  configured here and it itself fails, **no single node** of the 15 may fail.
  The QDevice acts almost as a single point of failure in this case.

* The fact that all but one node plus QDevice may fail sounds promising at
  first, but this may result in a mass recovery of HA services, which could
  overload the single remaining node. Furthermore, a Ceph server will stop
  providing services if only '((N-1)/2)' nodes or fewer remain online.

If you understand the drawbacks and implications, you can decide yourself if
you want to use this technology in an odd-numbered cluster setup.

QDevice-Net Setup
~~~~~~~~~~~~~~~~~

We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
configured to do so.
The traffic between the daemon and the cluster must be encrypted to ensure a
safe and secure integration of the QDevice in {pve}.

First, install the 'corosync-qnetd' package on your external server

----
external# apt install corosync-qnetd
----

and the 'corosync-qdevice' package on all cluster nodes

----
pve# apt install corosync-qdevice
----

After doing this, ensure that all the nodes in the cluster are online.

You can now set up your QDevice by running the following command on one
of the {pve} nodes:

----
pve# pvecm qdevice setup <QDEVICE-IP>
----

The SSH key from the cluster will be automatically copied to the QDevice.

NOTE: Make sure that the SSH configuration on your external server allows root
login via password, if you are asked for a password during this step.

After you enter the password and all the steps have successfully completed, you
will see "Done". You can verify that the QDevice has been set up with:

----
pve# pvecm status

...

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,V,NMW 192.168.22.180 (local)
0x00000002          1    A,V,NMW 192.168.22.181
0x00000000          1            Qdevice

----


Frequently Asked Questions
~~~~~~~~~~~~~~~~~~~~~~~~~~

Tie Breaking
^^^^^^^^^^^^

In case of a tie, where two same-sized cluster partitions cannot see each other
but can see the QDevice, the QDevice chooses one of those partitions randomly
and provides a vote to it.

Possible Negative Implications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For clusters with an even node count, there are no negative implications when
using a QDevice. If it fails to work, it is the same as not having a QDevice
at all.

Adding/Deleting Nodes After QDevice Setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you want to add a new node or remove an existing one from a cluster with a
QDevice setup, you need to remove the QDevice first. After that, you can add or
remove nodes normally. Once you have a cluster with an even node count again,
you can set up the QDevice again as described previously.

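
As a short sketch of that workflow (the removal command is described in the next
subsection; the node name and QDevice address are placeholders):

[source,bash]
----
# 1. temporarily remove the QDevice from the cluster
pvecm qdevice remove

# 2. add or remove nodes as usual, for example:
pvecm delnode hp4

# 3. once the node count is even again, re-add the QDevice
pvecm qdevice setup <QDEVICE-IP>
----
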
Removing the QDevice
^^^^^^^^^^^^^^^^^^^^

If you used the official `pvecm` tool to add the QDevice, you can remove it
by running:

----
pve# pvecm qdevice remove
----

//Still TODO
//^^^^^^^^^^
//There is still stuff to add here


Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
For further information about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership, you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[pvecm_edit_corosync_conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always very straightforward. There are
two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically, as soon as the file changes.
This means that changes which can be integrated in a running corosync will take
effect immediately. Thus, you should always make a copy and edit that instead,
to avoid triggering unintended changes when saving the file while editing.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then, open the config file with your favorite editor, such as `nano` or
`vim.tiny`, which come pre-installed on every {pve} node.

NOTE: Always increment the 'config_version' number after configuration changes;
omitting this can lead to problems.

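
For example, you can quickly check which version is currently active, and
confirm that your edited copy carries a higher number:

[source,bash]
----
grep config_version /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----
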
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes other issues.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then replace the old configuration file with the new one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You can check if the changes could be applied automatically, using the following
commands:
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

If the changes could not be applied automatically, you may have to restart the
corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors, check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.

Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
understand what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
then fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case, it is best to
edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on
all nodes, this configuration has the same content to avoid split-brain
situations.


[[pvecm_corosync_conf_glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different link addresses for the Kronosnet connections between
nodes.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

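
The `onboot` flag is a per-guest option. As a small sketch, assuming a VM with
ID 100 and a container with ID 101:

[source,bash]
----
# start this VM automatically once the node is up and the cluster is quorate
qm set 100 --onboot 1

# the same for a container
pct set 101 --onboot 1
----
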
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes will boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.

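
For example, the cluster-wide default can be set via the `migration` property in
`/etc/pve/datacenter.cfg` (a sketch; combine it with the `network` sub-option
described in the next section as needed):

----
# keep the encrypted channel (default) ...
migration: secure
# ... or, only on a fully isolated and trusted network:
#migration: insecure
----
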
Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal, both because
sensitive cluster traffic can be disrupted and because this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for all migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network using CIDR notation. This
has the advantage that you don't have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has exactly one
IP in the respective network.

Example
^^^^^^^

We assume that we have a three-node setup, with three separate
networks. One for public communication with the Internet, one for
cluster communication, and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
        address 192.X.Y.57/24
        gateway 192.X.Y.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# cluster network
auto eno2
iface eno2 inet static
        address 10.1.1.1/24

# fast network
auto eno3
iface eno3 inet static
        address 10.1.2.1/24
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
is set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]