1[[chapter_pvecm]]
2ifdef::manvolnum[]
3pvecm(1)
4========
5:pve-toplevel:
6
7NAME
8----
9
10pvecm - Proxmox VE Cluster Manager
11
12SYNOPSIS
13--------
14
15include::pvecm.1-synopsis.adoc[]
16
17DESCRIPTION
18-----------
19endif::manvolnum[]
20
21ifndef::manvolnum[]
22Cluster Manager
23===============
24:pve-toplevel:
25endif::manvolnum[]
26
27The {PVE} cluster manager `pvecm` is a tool to create a group of
28physical servers. Such a group is called a *cluster*. We use the
29http://www.corosync.org[Corosync Cluster Engine] for reliable group
30communication. There's no explicit limit for the number of nodes in a cluster.
31In practice, the actual possible node count may be limited by the host and
32network performance. Currently (2021), there are reports of clusters (using
33high-end enterprise hardware) with over 50 nodes in production.
34
`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information, and do various other cluster-related
tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.
40
41Grouping nodes into a cluster has the following advantages:
42
* Centralized, web-based management
44
45* Multi-master clusters: each node can do all management tasks
46
47* `pmxcfs`: database-driven file system for storing configuration files,
48 replicated in real-time on all nodes using `corosync`.
49
50* Easy migration of virtual machines and containers between physical
51 hosts
52
53* Fast deployment
54
55* Cluster-wide services like firewall and HA
56
57
58Requirements
59------------
60
61* All nodes must be able to connect to each other via UDP ports 5404 and 5405
62 for corosync to work.
63
* Date and time have to be synchronized (see the check after this list).
65
* An SSH tunnel on TCP port 22 between nodes is used.
67
* If you are interested in High Availability, you need to have at
 least three nodes for reliable quorum. All nodes should have the
 same {pve} version.
71
72* We recommend a dedicated NIC for the cluster traffic, especially if
73 you use shared storage.
74
* The root password of a cluster node is required for adding nodes.
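
As a minimal sketch for checking time synchronization and basic reachability
before creating or joining a cluster (the address 192.168.1.2 is only a
placeholder for another prospective cluster node):

[source,bash]
----
# verify that the system clock is NTP-synchronized on this node
timedatectl status

# verify that the prospective peer is reachable on the planned cluster network
ping -c 3 192.168.1.2
----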
76
NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.x cluster
nodes.
79
NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, doing so is
not supported as a production configuration and should only be used temporarily,
while upgrading the whole cluster from one major version to another.
83
84NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
85cluster protocol (corosync) between {pve} 6.x and earlier versions changed
86fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
87upgrade procedure to {pve} 6.0.
88
89
90Preparing Nodes
91---------------
92
93First, install {PVE} on all nodes. Make sure that each node is
94installed with the final hostname and IP configuration. Changing the
95hostname and IP is not possible after cluster creation.
96
While it's common to reference all node names and their IPs in `/etc/hosts` (or
make their names resolvable through other means), this is not necessary for a
cluster to work. It may be useful however, as you can then connect from one node
to the other with SSH via the easier-to-remember node name (see also
xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
recommend referencing nodes by their IP addresses in the cluster configuration.
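
If you do want name resolution via `/etc/hosts`, a minimal sketch could look
like the following (node names and addresses are placeholders for your own
setup):

----
# /etc/hosts (excerpt)
192.168.1.11 pve-node1.example.local pve-node1
192.168.1.12 pve-node2.example.local pve-node2
192.168.1.13 pve-node3.example.local pve-node3
----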
103
104
105[[pvecm_create_cluster]]
106Create a Cluster
107----------------
108
You can either create a cluster on the console (log in via `ssh`), or through
the API using the {pve} web interface (__Datacenter -> Cluster__).
111
112NOTE: Use a unique name for your cluster. This name cannot be changed later.
113The cluster name follows the same rules as node names.
114
115[[pvecm_cluster_create_via_gui]]
116Create via Web GUI
117~~~~~~~~~~~~~~~~~~
118
119[thumbnail="screenshot/gui-cluster-create.png"]
120
121Under __Datacenter -> Cluster__, click on *Create Cluster*. Enter the cluster
122name and select a network connection from the dropdown to serve as the main
123cluster network (Link 0). It defaults to the IP resolved via the node's
124hostname.
125
126To add a second link as fallback, you can select the 'Advanced' checkbox and
127choose an additional network interface (Link 1, see also
128xref:pvecm_redundancy[Corosync Redundancy]).
129
NOTE: Ensure that the network selected for cluster communication is not used for
any high-traffic loads, like (network) storage or live migration.
While the cluster network itself produces small amounts of data, it is very
sensitive to latency. Check out the full
xref:pvecm_cluster_network_requirements[cluster network requirements].
135
136[[pvecm_cluster_create_via_cli]]
137Create via Command Line
138~~~~~~~~~~~~~~~~~~~~~~~
139
Log in via `ssh` to the first {pve} node and run the following command:
141
142----
143 hp1# pvecm create CLUSTERNAME
144----
145
146To check the state of the new cluster use:
147
148----
149 hp1# pvecm status
150----
151
152Multiple Clusters In Same Network
153~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
154
155It is possible to create multiple clusters in the same physical or logical
156network. Each such cluster must have a unique name to avoid possible clashes in
157the cluster communication stack. This also helps avoid human confusion by making
158clusters clearly distinguishable.
159
While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate are the limiting
factors. Different clusters in the same network can compete with each other for
these resources, so it may still make sense to use separate physical network
infrastructure for bigger clusters.
165
166[[pvecm_join_node_to_cluster]]
167Adding Nodes to the Cluster
168---------------------------
169
CAUTION: A node that is about to be added to the cluster cannot hold any guests.
All existing configuration in `/etc/pve` is overwritten when joining a cluster,
since guest IDs could conflict. As a workaround, create a backup of the guest
(`vzdump`) and restore it under a different ID after the node has been added
to the cluster, as sketched below.
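
A rough sketch of this workaround, assuming a VM with ID 100 on the node that
is about to join (the VMIDs, storage name and archive name are placeholders):

[source,bash]
----
# on the node that will join, before joining the cluster:
vzdump 100 --storage local --mode stop

# ... join the cluster ...

# afterwards, restore the backup under a new, unused VMID:
qmrestore /var/lib/vz/dump/vzdump-qemu-100-TIMESTAMP.vma.zst 120

# containers would use 'pct restore NEW-VMID ARCHIVE' instead
----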
175
176Join Node to Cluster via GUI
177~~~~~~~~~~~~~~~~~~~~~~~~~~~~
178
179[thumbnail="screenshot/gui-cluster-join-information.png"]
180
Log in to the web interface on an existing cluster node. Under __Datacenter ->
Cluster__, click the *Join Information* button at the top. Then, click on the
button *Copy Information*. Alternatively, copy the string from the 'Information'
field manually.
185
186[thumbnail="screenshot/gui-cluster-join.png"]
187
Next, log in to the web interface on the node you want to add.
189Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
190'Information' field with the 'Join Information' text you copied earlier.
191Most settings required for joining the cluster will be filled out
192automatically. For security reasons, the cluster password has to be entered
193manually.
194
195NOTE: To enter all required data manually, you can disable the 'Assisted Join'
196checkbox.
197
After clicking the *Join* button, the cluster join process will start
immediately. After the node has joined the cluster, its current node certificate
will be replaced by one signed by the cluster certificate authority (CA), which
means that the current session will stop working after a few seconds. You may
then need to force-reload the web interface and log in again with the cluster
credentials.
204
205Now your node should be visible under __Datacenter -> Cluster__.
206
207Join Node to Cluster via Command Line
208~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
209
Log in via `ssh` to the node that you want to add to an existing cluster.
211
212----
213 hp2# pvecm add IP-ADDRESS-CLUSTER
214----
215
216For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
217An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
218
219
220To check the state of the cluster use:
221
222----
223 # pvecm status
224----
225
226.Cluster status after adding 4 nodes
227----
228hp2# pvecm status
229Quorum information
230~~~~~~~~~~~~~~~~~~
231Date: Mon Apr 20 12:30:13 2015
232Quorum provider: corosync_votequorum
233Nodes: 4
234Node ID: 0x00000001
235Ring ID: 1/8
236Quorate: Yes
237
238Votequorum information
239~~~~~~~~~~~~~~~~~~~~~~
240Expected votes: 4
241Highest expected: 4
242Total votes: 4
243Quorum: 3
244Flags: Quorate
245
246Membership information
247~~~~~~~~~~~~~~~~~~~~~~
248 Nodeid Votes Name
2490x00000001 1 192.168.15.91
2500x00000002 1 192.168.15.92 (local)
2510x00000003 1 192.168.15.93
2520x00000004 1 192.168.15.94
253----
254
255If you only want the list of all nodes use:
256
257----
258 # pvecm nodes
259----
260
261.List nodes in a cluster
262----
263hp2# pvecm nodes
264
265Membership information
266~~~~~~~~~~~~~~~~~~~~~~
267 Nodeid Votes Name
268 1 1 hp1
269 2 1 hp2 (local)
270 3 1 hp3
271 4 1 hp4
272----
273
274[[pvecm_adding_nodes_with_separated_cluster_network]]
275Adding Nodes With Separated Cluster Network
276~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
277
When adding a node to a cluster with a separated cluster network, you need to
use the 'link0' parameter to set the node's address on that network:
280
281[source,bash]
282----
pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0
284----
285
If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
kronosnet transport layer, also use the 'link1' parameter, for example:
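
[source,bash]
----
# sketch; replace the placeholder addresses with the node's own addresses
# on the respective cluster networks
pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0 --link1 LOCAL-IP-ADDRESS-LINK1
----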
288
289Using the GUI, you can select the correct interface from the corresponding 'Link 0'
290and 'Link 1' fields in the *Cluster Join* dialog.
291
292Remove a Cluster Node
293---------------------
294
CAUTION: Read the procedure carefully before proceeding, as it may not be
what you want or need.
297
Move all virtual machines off the node. Make sure that you have no local
data or backups that you want to keep, or save them accordingly.
In the following example, we will remove the node hp4 from the cluster.
301
302Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
303command to identify the node ID to remove:
304
305----
306hp1# pvecm nodes
307
308Membership information
309~~~~~~~~~~~~~~~~~~~~~~
310 Nodeid Votes Name
311 1 1 hp1 (local)
312 2 1 hp2
313 3 1 hp3
314 4 1 hp4
315----
316
317
318At this point you must power off hp4 and
319make sure that it will not power on again (in the network) as it
320is.
321
IMPORTANT: As mentioned above, it is critical to power off the node
*before* removal, and to make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, the cluster will end up in an inconsistent
state, and it could be difficult to restore a clean cluster state.
327
328After powering off the node hp4, we can safely remove it from the cluster.
329
330----
331 hp1# pvecm delnode hp4
332 Killing node 4
333----
334
335Use `pvecm nodes` or `pvecm status` to check the node list again. It should
336look something like:
337
338----
339hp1# pvecm status
340
341Quorum information
342~~~~~~~~~~~~~~~~~~
343Date: Mon Apr 20 12:44:28 2015
344Quorum provider: corosync_votequorum
345Nodes: 3
346Node ID: 0x00000001
347Ring ID: 1/8
348Quorate: Yes
349
350Votequorum information
351~~~~~~~~~~~~~~~~~~~~~~
352Expected votes: 3
353Highest expected: 3
354Total votes: 3
355Quorum: 2
356Flags: Quorate
357
358Membership information
359~~~~~~~~~~~~~~~~~~~~~~
360 Nodeid Votes Name
3610x00000001 1 192.168.15.90 (local)
3620x00000002 1 192.168.15.91
3630x00000003 1 192.168.15.92
364----
365
366If, for whatever reason, you want this server to join the same cluster again,
367you have to
368
369* reinstall {pve} on it from scratch
370
371* then join it, as explained in the previous section.
372
NOTE: After removal of the node, its SSH fingerprint will still reside in the
'known_hosts' files of the other nodes. If you receive an SSH error after
rejoining a node with the same IP or hostname, run `pvecm updatecerts` once on
the re-added node to update its fingerprint cluster-wide.
377
378[[pvecm_separate_node_without_reinstall]]
379Separate A Node Without Reinstalling
380~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
381
CAUTION: This is *not* the recommended method, proceed with caution. Use the
method described above if you're unsure.
384
You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to any shared storage! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across cluster
boundaries. Furthermore, it may also lead to VMID conflicts.
391
It's suggested that you create a new storage, to which only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
398
399WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
400run into conflicts and problems.
401
402First, stop the corosync and the pve-cluster services on the node:
403[source,bash]
404----
405systemctl stop pve-cluster
406systemctl stop corosync
407----
408
409Start the cluster filesystem again in local mode:
410[source,bash]
411----
412pmxcfs -l
413----
414
415Delete the corosync configuration files:
416[source,bash]
417----
418rm /etc/pve/corosync.conf
419rm -r /etc/corosync/*
420----
421
You can now start the file system again as a normal service:
423[source,bash]
424----
425killall pmxcfs
426systemctl start pve-cluster
427----
428
The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
431[source,bash]
432----
433pvecm delnode oldnode
434----
435
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
438[source,bash]
439----
440pvecm expected 1
441----
442
443And then repeat the 'pvecm delnode' command.
444
Now switch back to the separated node and delete all remaining files left over
from the old cluster there. This ensures that the node can be added to another
cluster again without problems.
448
449[source,bash]
450----
451rm /var/lib/corosync/*
452----
453
As the configuration files from the other nodes are still in the cluster
file system, you may want to clean those up too. Simply remove the whole
directory recursively under '/etc/pve/nodes/NODENAME', but check three times
that you used the correct one before deleting it.
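
A sketch of that cleanup, assuming the remaining old cluster node was called
'hp1' (the node name is a placeholder):

[source,bash]
----
# remove the stale per-node configuration directory of the former cluster node
rm -rf /etc/pve/nodes/hp1
----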
458
CAUTION: The node's SSH keys are still in the 'authorized_keys' file. This means
that the nodes can still connect to each other with public key authentication.
This should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
463
464
465Quorum
466------
467
{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
470
471[quote, from Wikipedia, Quorum (distributed computing)]
472____
473A quorum is the minimum number of votes that a distributed transaction
474has to obtain in order to be allowed to perform an operation in a
475distributed system.
476____
477
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.
481
482NOTE: {pve} assigns a single vote to each node by default.
483
484
485Cluster Network
486---------------
487
The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).
493
494[[pvecm_cluster_network_requirements]]
495Network Requirements
496~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. The network should not be used heavily by other
members; ideally, corosync runs on its own network. Do not use a shared network
for corosync and storage (except as a potential low-priority fallback in a
xref:pvecm_redundancy[redundant] configuration).
502
503Before setting up a cluster, it is good practice to check if the network is fit
504for that purpose. To make sure the nodes can connect to each other on the
505cluster network, you can test the connectivity between them with the `ping`
506tool.
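
A simple way to get a first impression of the latency between two prospective
cluster nodes (the address is a placeholder):

[source,bash]
----
# round-trip times should stay well below 2 ms on a suitable cluster network
ping -c 10 10.10.10.2
----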
507
508If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
509be generated - no manual action is required.
510
511NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
512Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
513communication, which, for now, only supports regular UDP unicast.
514
515CAUTION: You can still enable Multicast or legacy unicast by setting your
516transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
517but keep in mind that this will disable all cryptography and redundancy support.
518This is therefore not recommended.
519
520Separate Cluster Network
521~~~~~~~~~~~~~~~~~~~~~~~~
522
When creating a cluster without any parameters, the corosync cluster network is
generally shared with the web UI and the VMs, along with their traffic. Depending
on your setup, even storage traffic may get sent over the same network. It's
recommended to change that, as corosync is a time-critical, real-time
application.
528
529Setting Up A New Network
530^^^^^^^^^^^^^^^^^^^^^^^^
531
532First, you have to set up a new network interface. It should be on a physically
533separate network. Ensure that your network fulfills the
534xref:pvecm_cluster_network_requirements[cluster network requirements].
535
536Separate On Cluster Creation
537^^^^^^^^^^^^^^^^^^^^^^^^^^^^
538
539This is possible via the 'linkX' parameters of the 'pvecm create'
540command used for creating a new cluster.
541
542If you have set up an additional NIC with a static address on 10.10.10.1/25,
543and want to send and receive all cluster communication over this interface,
544you would execute:
545
546[source,bash]
547----
548pvecm create test --link0 10.10.10.1
549----
550
551To check if everything is working properly execute:
552[source,bash]
553----
554systemctl status corosync
555----
556
557Afterwards, proceed as described above to
558xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
559
560[[pvecm_separate_cluster_net_after_creation]]
561Separate After Cluster Creation
562^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
563
564You can do this if you have already created a cluster and want to switch
565its communication to another network, without rebuilding the whole cluster.
566This change may lead to short durations of quorum loss in the cluster, as nodes
567have to restart corosync and come up one after the other on the new network.
568
569Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
570Then, open it and you should see a file similar to:
571
572----
573logging {
574 debug: off
575 to_syslog: yes
576}
577
578nodelist {
579
580 node {
581 name: due
582 nodeid: 2
583 quorum_votes: 1
584 ring0_addr: due
585 }
586
587 node {
588 name: tre
589 nodeid: 3
590 quorum_votes: 1
591 ring0_addr: tre
592 }
593
594 node {
595 name: uno
596 nodeid: 1
597 quorum_votes: 1
598 ring0_addr: uno
599 }
600
601}
602
603quorum {
604 provider: corosync_votequorum
605}
606
607totem {
608 cluster_name: testcluster
609 config_version: 3
610 ip_version: ipv4-6
611 secauth: on
612 version: 2
613 interface {
614 linknumber: 0
615 }
616
617}
618----
619
NOTE: `ringX_addr` actually specifies a corosync *link address*; the name "ring"
is a remnant of older corosync versions that is kept for backwards
compatibility.
623
624The first thing you want to do is add the 'name' properties in the node entries
625if you do not see them already. Those *must* match the node name.
626
Then replace all addresses from the 'ring0_addr' properties of all nodes with
the new addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes (see also
xref:pvecm_corosync_addresses[Link Address Types]).
631
In this example, we want to switch the cluster communication to the
10.10.10.1/25 network, so we replace all 'ring0_addr' values accordingly.
634
NOTE: The exact same procedure can be used to change other 'ringX_addr' values
as well, although we recommend not changing multiple addresses at once, to make
it easier to recover if something goes wrong.
638
639After we increase the 'config_version' property, the new configuration file
640should look like:
641
642----
643logging {
644 debug: off
645 to_syslog: yes
646}
647
648nodelist {
649
650 node {
651 name: due
652 nodeid: 2
653 quorum_votes: 1
654 ring0_addr: 10.10.10.2
655 }
656
657 node {
658 name: tre
659 nodeid: 3
660 quorum_votes: 1
661 ring0_addr: 10.10.10.3
662 }
663
664 node {
665 name: uno
666 nodeid: 1
667 quorum_votes: 1
668 ring0_addr: 10.10.10.1
669 }
670
671}
672
673quorum {
674 provider: corosync_votequorum
675}
676
677totem {
678 cluster_name: testcluster
679 config_version: 4
680 ip_version: ipv4-6
681 secauth: on
682 version: 2
683 interface {
684 linknumber: 0
685 }
686
687}
688----
689
Then, after a final check that all the changed information is correct, we save it
and once again follow the xref:pvecm_edit_corosync_conf[edit corosync.conf file]
section to bring the changes into effect.
693
694The changes will be applied live, so restarting corosync is not strictly
695necessary. If you changed other settings as well, or notice corosync
696complaining, you can optionally trigger a restart.
697
698On a single node execute:
699
700[source,bash]
701----
702systemctl restart corosync
703----
704
705Now check if everything is fine:
706
707[source,bash]
708----
709systemctl status corosync
710----
711
If corosync is running correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.
714
715[[pvecm_corosync_addresses]]
716Corosync addresses
717~~~~~~~~~~~~~~~~~~
718
719A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
720`corosync.conf`) can be specified in two ways:
721
722* **IPv4/v6 addresses** will be used directly. They are recommended, since they
723are static and usually not changed carelessly.
724
* **Hostnames** will be resolved using `getaddrinfo`, which means that by
default, IPv6 addresses will be used first, if available (see also
`man gai.conf`). Keep this in mind, especially when upgrading an existing
cluster to IPv6.
729
730CAUTION: Hostnames should be used with care, since the address they
731resolve to can be changed without touching corosync or the node it runs on -
732which may lead to a situation where an address is changed without thinking
733about implications for corosync.
734
735A separate, static hostname specifically for corosync is recommended, if
736hostnames are preferred. Also, make sure that every node in the cluster can
737resolve all hostnames correctly.
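
As a quick way to check which addresses a hostname resolves to (and whether an
IPv6 address would be preferred), you can query the resolver the same way
`getaddrinfo` does; the node name below is a placeholder:

[source,bash]
----
getent ahosts pve-node2
----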
738
739Since {pve} 5.1, while supported, hostnames will be resolved at the time of
740entry. Only the resolved IP is then saved to the configuration.
741
742Nodes that joined the cluster on earlier versions likely still use their
743unresolved hostname in `corosync.conf`. It might be a good idea to replace
744them with IPs or a separate hostname, as mentioned above.
745
746
747[[pvecm_redundancy]]
748Corosync Redundancy
749-------------------
750
751Corosync supports redundant networking via its integrated kronosnet layer by
752default (it is not supported on the legacy udp/udpu transports). It can be
753enabled by specifying more than one link address, either via the '--linkX'
754parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
755adding a new node) or by specifying more than one 'ringX_addr' in
756`corosync.conf`.
757
758NOTE: To provide useful failover, every link should be on its own
759physical network connection.
760
761Links are used according to a priority setting. You can configure this priority
762by setting 'knet_link_priority' in the corresponding interface section in
763`corosync.conf`, or, preferably, using the 'priority' parameter when creating
764your cluster with `pvecm`:
765
766----
767 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
768----
769
770This would cause 'link1' to be used first, since it has the higher priority.
771
772If no priorities are configured manually (or two links have the same priority),
773links will be used in order of their number, with the lower number having higher
774priority.
775
776Even if all links are working, only the one with the highest priority will see
777corosync traffic. Link priorities cannot be mixed, i.e. links with different
778priorities will not be able to communicate with each other.
779
Since lower priority links will not see traffic unless all higher priorities
have failed, it becomes a useful strategy to specify even networks used for
other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
worst, a higher-latency or more congested connection might be better than no
connection at all.
785
786Adding Redundant Links To An Existing Cluster
787~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
788
789To add a new link to a running configuration, first check how to
790xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
791
792Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
793sure that your 'X' is the same for every node you add it to, and that it is
794unique for each node.
795
796Lastly, add a new 'interface', as shown below, to your `totem`
797section, replacing 'X' with your link number chosen above.
798
799Assuming you added a link with number 1, the new configuration file could look
800like this:
801
802----
803logging {
804 debug: off
805 to_syslog: yes
806}
807
808nodelist {
809
810 node {
811 name: due
812 nodeid: 2
813 quorum_votes: 1
814 ring0_addr: 10.10.10.2
815 ring1_addr: 10.20.20.2
816 }
817
818 node {
819 name: tre
820 nodeid: 3
821 quorum_votes: 1
822 ring0_addr: 10.10.10.3
823 ring1_addr: 10.20.20.3
824 }
825
826 node {
827 name: uno
828 nodeid: 1
829 quorum_votes: 1
830 ring0_addr: 10.10.10.1
831 ring1_addr: 10.20.20.1
832 }
833
834}
835
836quorum {
837 provider: corosync_votequorum
838}
839
840totem {
841 cluster_name: testcluster
842 config_version: 4
843 ip_version: ipv4-6
844 secauth: on
845 version: 2
846 interface {
847 linknumber: 0
848 }
849 interface {
850 linknumber: 1
851 }
852}
853----
854
855The new link will be enabled as soon as you follow the last steps to
856xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
857be necessary. You can check that corosync loaded the new link using:
858
859----
860journalctl -b -u corosync
861----
862
863It might be a good idea to test the new link by temporarily disconnecting the
864old link on one node and making sure that its status remains online while
865disconnected:
866
867----
868pvecm status
869----
870
871If you see a healthy cluster state, it means that your new link is being used.
872
873
874Role of SSH in {PVE} Clusters
875-----------------------------
876
877{PVE} utilizes SSH tunnels for various features.
878
879* Proxying console/shell sessions (node and guests)
880+
When using the shell for node B while being connected to node A, this connects
to a terminal proxy on node A, which is in turn connected to the login shell on
node B via a non-interactive SSH tunnel.
884
885* VM and CT memory and local-storage migration in 'secure' mode.
886+
887During the migration one or more SSH tunnel(s) are established between the
888source and target nodes, in order to exchange migration information and
889transfer memory and disk contents.
890
891* Storage replication
892
893.Pitfalls due to automatic execution of `.bashrc` and siblings
894[IMPORTANT]
895====
In case you have a custom `.bashrc`, or similar files that get executed on
login by the configured shell, `ssh` will automatically run it once the session
is established successfully. This can cause some unexpected behavior, as those
commands may be executed with root permissions on any of the operations
described above, which can have problematic side effects!
901
902In order to avoid such complications, it's recommended to add a check in
903`/root/.bashrc` to make sure the session is interactive, and only then run
904`.bashrc` commands.
905
906You can add this snippet at the beginning of your `.bashrc` file:
907
908----
909# Early exit if not running interactively to avoid side-effects!
910case $- in
911 *i*) ;;
912 *) return;;
913esac
914----
915====
916
917
918Corosync External Vote Support
919------------------------------
920
921This section describes a way to deploy an external voter in a {pve} cluster.
922When configured, the cluster can sustain more node failures without
923violating safety properties of the cluster communication.
924
925For this to work there are two services involved:
926
* a so-called QDevice daemon which runs on each {pve} node
928
929* an external vote daemon which runs on an independent server.
930
931As a result you can achieve higher availability even in smaller setups (for
932example 2+1 nodes).
933
934QDevice Technical Overview
935~~~~~~~~~~~~~~~~~~~~~~~~~~
936
The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on an externally running third-party arbitrator's decision.
Its primary use is to allow a cluster to sustain more node failures than
standard quorum rules allow. This can be done safely as the external device
can see all nodes and thus choose only one set of nodes to give its vote.
This will only be done if said set of nodes can have quorum (again) when
receiving the third-party vote.
945
Currently, only 'QDevice Net' is supported as a third-party arbitrator. It is
a daemon which provides a vote to a cluster partition, if it can reach the
partition members over the network. It will only give votes to one partition
of a cluster at any time.
It's designed to support multiple clusters and is almost configuration and
state free. New clusters are handled dynamically and no configuration file
is needed on the host running a QDevice.
953
The only requirement for the external host is that it needs network access to
the cluster and to have a corosync-qnetd package available. We provide a package
for Debian-based hosts; other Linux distributions should also have a package
available through their respective package manager.
958
NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP. The daemon may even run outside of the cluster's LAN and can have longer
latencies than 2 ms.
962
963Supported Setups
964~~~~~~~~~~~~~~~~
965
We support QDevices for clusters with an even number of nodes and recommend
it for 2-node clusters, if they should provide higher availability.
For clusters with an odd node count, we currently discourage the use of
QDevices. The reason for this is the difference in the votes which the QDevice
provides for each cluster type. Even-numbered clusters get a single additional
vote, which only increases availability, because if the QDevice
itself fails, you are in the same situation as with no QDevice at all.
973
Now, with an odd-numbered cluster size, the QDevice provides '(N-1)' votes,
where 'N' corresponds to the cluster node count. This difference makes
sense: if we had only one additional vote, the cluster could get into a
split-brain situation.
This algorithm allows all nodes but one (and naturally the
QDevice itself) to fail.
There are two drawbacks to this:
981
* If the QNet daemon itself fails, no other node may fail or the cluster
 immediately loses quorum. For example, in a cluster with 15 nodes, 7
 could fail before the cluster becomes inquorate. But if a QDevice is
 configured here and that QDevice itself fails, **no single node** of
 the 15 may fail. The QDevice acts almost as a single point of failure in
 this case.
988
* The fact that all but one node plus QDevice may fail sounds promising at
 first, but this may result in a mass recovery of HA services, which could
 overload the single remaining node. Also, a Ceph server will stop providing
 services if only '((N-1)/2)' nodes or less remain online.
993
994If you understand the drawbacks and implications you can decide yourself if
995you should use this technology in an odd numbered cluster setup.
996
997QDevice-Net Setup
998~~~~~~~~~~~~~~~~~
999
We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
configured to do so.
1003The traffic between the daemon and the cluster must be encrypted to ensure a
1004safe and secure QDevice integration in {pve}.
1005
1006First, install the 'corosync-qnetd' package on your external server
1007
1008----
1009external# apt install corosync-qnetd
1010----
1011
1012and the 'corosync-qdevice' package on all cluster nodes
1013
1014----
1015pve# apt install corosync-qdevice
1016----
1017
1018After that, ensure that all your nodes on the cluster are online.
1019
1020You can now easily set up your QDevice by running the following command on one
1021of the {pve} nodes:
1022
1023----
1024pve# pvecm qdevice setup <QDEVICE-IP>
1025----
1026
1027The SSH key from the cluster will be automatically copied to the QDevice.
1028
1029NOTE: Make sure that the SSH configuration on your external server allows root
1030login via password, if you are asked for a password during this step.
1031
1032After you enter the password and all the steps are successfully completed, you
1033will see "Done". You can check the status now:
1034
1035----
1036pve# pvecm status
1037
1038...
1039
1040Votequorum information
1041~~~~~~~~~~~~~~~~~~~~~
1042Expected votes: 3
1043Highest expected: 3
1044Total votes: 3
1045Quorum: 2
1046Flags: Quorate Qdevice
1047
1048Membership information
1049~~~~~~~~~~~~~~~~~~~~~~
1050 Nodeid Votes Qdevice Name
1051 0x00000001 1 A,V,NMW 192.168.22.180 (local)
1052 0x00000002 1 A,V,NMW 192.168.22.181
1053 0x00000000 1 Qdevice
1054
1055----
1056
1057which means the QDevice is set up.
1058
1059Frequently Asked Questions
1060~~~~~~~~~~~~~~~~~~~~~~~~~~
1061
1062Tie Breaking
1063^^^^^^^^^^^^
1064
In case of a tie, where two same-sized cluster partitions cannot see each other
but can see the QDevice, the QDevice chooses one of those partitions randomly
and provides a vote to it.
1068
1069Possible Negative Implications
1070^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1071
For clusters with an even node count, there are no negative implications when
setting up a QDevice. If it fails to work, you are as good as without a QDevice
at all.
1075
1076Adding/Deleting Nodes After QDevice Setup
1077^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1078
1079If you want to add a new node or remove an existing one from a cluster with a
1080QDevice setup, you need to remove the QDevice first. After that, you can add or
1081remove nodes normally. Once you have a cluster with an even node count again,
1082you can set up the QDevice again as described above.
1083
1084Removing the QDevice
1085^^^^^^^^^^^^^^^^^^^^
1086
1087If you used the official `pvecm` tool to add the QDevice, you can remove it
1088trivially by running:
1089
1090----
1091pve# pvecm qdevice remove
1092----
1093
1094//Still TODO
1095//^^^^^^^^^^
1096//There is still stuff to add here
1097
1098
1099Corosync Configuration
1100----------------------
1101
1102The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
1103controls the cluster membership and its network.
1104For further information about it, check the corosync.conf man page:
1105[source,bash]
1106----
1107man corosync.conf
1108----
1109
1110For node membership you should always use the `pvecm` tool provided by {pve}.
1111You may have to edit the configuration file manually for other changes.
1112Here are a few best practice tips for doing this.
1113
1114[[pvecm_edit_corosync_conf]]
1115Edit corosync.conf
1116~~~~~~~~~~~~~~~~~~
1117
1118Editing the corosync.conf file is not always very straightforward. There are
1119two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
1120`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
1121propagate the changes to the local one, but not vice versa.
1122
The configuration will get updated automatically, as soon as the file changes.
This means that changes which can be integrated into a running corosync will take
effect immediately. Therefore, you should always make a copy and edit that
instead, to avoid triggering unintended changes from an intermediate save.
1127
1128[source,bash]
1129----
1130cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
1131----
1132
Then, open the config file with your favorite editor, such as `nano` or
`vim.tiny`, which come preinstalled on every {pve} node.
1135
1136NOTE: Always increment the 'config_version' number on configuration changes,
1137omitting this can lead to problems.
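
As a small illustration (the surrounding values are placeholders): if the copy
you are editing currently contains `config_version: 4` in its `totem` section,
increase it like this before saving:

----
totem {
  cluster_name: testcluster
  config_version: 5
  # other totem settings stay unchanged
}
----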
1138
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply, or causes other problems.
1142
1143[source,bash]
1144----
1145cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1146----
1147
1148Then move the new configuration file over the old one:
1149[source,bash]
1150----
1151mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
1152----
1153
You can check with the following commands whether the change was applied automatically:
1155[source,bash]
1156----
1157systemctl status corosync
1158journalctl -b -u corosync
1159----
1160
If not, you may have to restart the
corosync service via:
1163[source,bash]
1164----
1165systemctl restart corosync
1166----
1167
1168On errors check the troubleshooting section below.
1169
1170Troubleshooting
1171~~~~~~~~~~~~~~~
1172
1173Issue: 'quorum.expected_votes must be configured'
1174^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1175
1176When corosync starts to fail and you get the following message in the system log:
1177
1178----
1179[...]
1180corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1181corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1182 'configuration error: nodelist or quorum.expected_votes must be configured!'
1183[...]
1184----
1185
1186It means that the hostname you set for corosync 'ringX_addr' in the
1187configuration could not be resolved.
1188
1189Write Configuration When Not Quorate
1190^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1191
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
understand what you are doing, use:
1194[source,bash]
1195----
1196pvecm expected 1
1197----
1198
1199This sets the expected vote count to 1 and makes the cluster quorate. You can
1200now fix your configuration, or revert it back to the last working backup.
1201
This is not enough if corosync cannot start anymore. In that case, it is best to
edit the local copy of the corosync configuration in '/etc/corosync/corosync.conf',
so that corosync can start again. Ensure that this configuration has the same
content on all nodes, to avoid split-brain situations. If you are not sure what
went wrong, it's best to ask the Proxmox Community to help you.
1207
1208
1209[[pvecm_corosync_conf_glossary]]
1210Corosync Configuration Glossary
1211~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1212
1213ringX_addr::
1214This names the different link addresses for the kronosnet connections between
1215nodes.
1216
1217
1218Cluster Cold Start
1219------------------
1220
1221It is obvious that a cluster is not quorate when all nodes are
1222offline. This is a common case after a power failure.
1223
1224NOTE: It is always a good idea to use an uninterruptible power supply
1225(``UPS'', also called ``battery backup'') to avoid this state, especially if
1226you want HA.
1227
1228On node startup, the `pve-guests` service is started and waits for
1229quorum. Once quorate, it starts all guests which have the `onboot`
1230flag set.
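
As a brief example, the flag can be set per guest (the VMID 100 is a
placeholder):

[source,bash]
----
# start this VM automatically once the node has booted and quorum is reached
qm set 100 --onboot 1
----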
1231
When you turn on nodes, or when power comes back after a power failure,
it is likely that some nodes will boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
1235
1236
1237Guest Migration
1238---------------
1239
1240Migrating virtual guests to other nodes is a useful feature in a
1241cluster. There are settings to control the behavior of such
1242migrations. This can be done via the configuration file
1243`datacenter.cfg` or for a specific migration via API or command line
1244parameters.
1245
It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).
1248
For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].
1254
1255Migration Type
1256~~~~~~~~~~~~~~
1257
The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
1264
Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.
1268
1269NOTE: Storage migration does not follow this setting. Currently, it
1270always sends the storage content over a secure channel.
1271
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower, because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
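
As a sketch, the type can also be overridden for a single migration on the
command line (the VMID and target node name are placeholders):

[source,bash]
----
# use the unencrypted channel for this one migration only
qm migrate 106 tre --online --migration_type insecure
----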
1277
1278Migration Network
1279~~~~~~~~~~~~~~~~~
1280
1281By default, {pve} uses the network in which cluster communication
1282takes place to send the migration traffic. This is not optimal because
1283sensitive cluster traffic can be disrupted and this network may not
1284have the best bandwidth available on the node.
1285
1286Setting the migration network parameter allows the use of a dedicated
1287network for the entire migration traffic. In addition to the memory,
1288this also affects the storage traffic for offline migrations.
1289
The migration network is set as a network using CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has exactly
one IP in the respective network.
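
One rough way to verify this on each node (using the 10.1.2.0/24 network from
the example below as a placeholder):

[source,bash]
----
# should print exactly one IPv4 address within the chosen migration network
ip -o -4 addr show | grep ' 10\.1\.2\.'
----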
1296
1297Example
1298^^^^^^^
1299
1300We assume that we have a three-node setup with three separate
1301networks. One for public communication with the Internet, one for
1302cluster communication and a very fast one, which we want to use as a
1303dedicated network for migration.
1304
1305A network configuration for such a setup might look as follows:
1306
1307----
1308iface eno1 inet manual
1309
1310# public network
1311auto vmbr0
1312iface vmbr0 inet static
1313 address 192.X.Y.57
        netmask 255.255.255.0
1315 gateway 192.X.Y.1
1316 bridge-ports eno1
1317 bridge-stp off
1318 bridge-fd 0
1319
1320# cluster network
1321auto eno2
1322iface eno2 inet static
1323 address 10.1.1.1
1324 netmask 255.255.255.0
1325
1326# fast network
1327auto eno3
1328iface eno3 inet static
1329 address 10.1.2.1
1330 netmask 255.255.255.0
1331----
1332
1333Here, we will use the network 10.1.2.0/24 as a migration network. For
1334a single migration, you can do this using the `migration_network`
1335parameter of the command line tool:
1336
1337----
1338# qm migrate 106 tre --online --migration_network 10.1.2.0/24
1339----
1340
1341To configure this as the default network for all migrations in the
1342cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1343file:
1344
1345----
1346# use dedicated migration network
1347migration: secure,network=10.1.2.0/24
1348----
1349
1350NOTE: The migration type must always be set when the migration network
1351gets set in `/etc/pve/datacenter.cfg`.
1352
1353
1354ifdef::manvolnum[]
1355include::pve-copyright.adoc[]
1356endif::manvolnum[]