[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and perform various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
 replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
 hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP multicast
 to communicate between nodes (also see
 http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
 ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and must be
manually enabled first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
 least three nodes for reliable quorum. All nodes should have the
 same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
 you use shared storage.

* The root password of a cluster node is required for adding nodes.

NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.x cluster
nodes.

NOTE: While mixing {pve} 4.4 and {pve} 5.0 nodes is possible, it is not
supported as a production configuration and should only be done temporarily,
while upgrading the whole cluster from one major version to another.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation can either be done on the console (login via
`ssh`) or through the API, for which we have a GUI implementation (__Datacenter ->
Cluster__).

While it is common practice to reference all other node names in `/etc/hosts`
with their IP, this is not strictly necessary for a cluster, which normally
uses multicast, to work. It may be useful nonetheless, as you can then connect
from one node to another via SSH using the easier to remember node name.

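If you choose to do so, a minimal sketch of such `/etc/hosts` entries could
look like the following (node names and addresses are the same placeholders
used in the examples below, the domain is purely hypothetical):

----
192.168.15.91 hp1.example.org hp1
192.168.15.92 hp2.example.org hp2
192.168.15.93 hp3.example.org hp3
----
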
[[pvecm_create_cluster]]
Create the Cluster
------------------

Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later. The cluster name follows the same rules as
node names.

----
 hp1# pvecm create CLUSTERNAME
----

CAUTION: The cluster name is used to compute the default multicast address.
Please use unique cluster names if you run more than one cluster inside your
network. To avoid human confusion, it is also recommended to choose different
names even if the clusters do not share the cluster network.

To check the state of your cluster use:

----
 hp1# pvecm status
----

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network it may be beneficial to set up
an IGMP querier and enable IGMP snooping in said network. This may reduce the
load of the network significantly because multicast packets are only delivered
to the endpoints of the respective member nodes.

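How to do this depends on your switches. As a rough sketch, on a plain Linux
bridge (here the hypothetical `vmbr0`) snooping and a querier can be toggled
through the standard bridge sysfs attributes; managed switches have their own,
vendor-specific settings:

[source,bash]
----
# enable IGMP snooping and an IGMP querier on bridge vmbr0 (placeholder name)
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----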

[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------

Login via `ssh` to the node you want to add.

----
 hp2# pvecm add IP-ADDRESS-CLUSTER
----

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. To
work around this, use `vzdump` to back up and restore to a different VMID after
adding the node to the cluster.

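A rough sketch of this workaround, assuming a single VM with the placeholder
ID 100 on the joining node and ID 200 being free in the cluster (adjust the
archive name to the file `vzdump` actually created):

[source,bash]
----
# on the node, before joining the cluster: back up the local VM
vzdump 100 --dumpdir /var/lib/vz/dump

# after joining the cluster: restore the backup under a free VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma 200
----
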
To check the state of the cluster:

----
 # pvecm status
----

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date: Mon Apr 20 12:30:13 2015
Quorum provider: corosync_votequorum
Nodes: 4
Node ID: 0x00000001
Ring ID: 1928
Quorate: Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes: 4
Highest expected: 4
Total votes: 4
Quorum: 2
Flags: Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
 Nodeid Votes Name
0x00000001 1 192.168.15.91
0x00000002 1 192.168.15.92 (local)
0x00000003 1 192.168.15.93
0x00000004 1 192.168.15.94
----

If you only want the list of all nodes use:

----
 # pvecm nodes
----

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
 Nodeid Votes Name
 1 1 hp1
 2 1 hp2 (local)
 3 1 hp3
 4 1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter.

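For example, a join with both rings configured could look like this (all
addresses are placeholders):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----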

Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may
not be what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
 Nodeid Votes Name
 1 1 hp1 (local)
 2 1 hp2
 3 1 hp3
 4 1 hp4
----


At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

----
 hp1# pvecm delnode hp4
----

If the operation succeeds no output is returned. Just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date: Mon Apr 20 12:44:28 2015
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000001
Ring ID: 1992
Quorate: Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 3
Flags: Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
 Nodeid Votes Name
0x00000001 1 192.168.15.90 (local)
0x00000002 1 192.168.15.91
0x00000003 1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.

It is suggested that you create a new storage to which only the node that you
want to separate has access. This can be a new export on your NFS server or a
new Ceph pool, to name a few examples. It is just important that the exact same
storage does not get accessed by multiple clusters. After setting up this
storage, move all data from the node and its VMs to it. Then you are ready to
separate the node from the cluster.

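As a sketch, such a node-restricted storage could be an additional NFS export
added only for the node to separate (storage name, server and export path are
placeholders; the `--nodes` option limits which nodes may use the storage):

[source,bash]
----
pvesm add nfs separate-nfs --server 192.168.15.200 --export /export/separate --nodes hp4
----
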
WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory recursively under '/etc/pve/nodes/NODENAME', but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_key' file, which means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members,
ideally corosync runs on its own network. *Never* share it with the network
where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
 network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
 enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
 done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
 This uncovers problems where IGMP snooping is activated on the network but
 no multicast querier is active. This test has a duration of around 10
 minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work.

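One way to do that, as a rough sketch (assuming corosync's `udpu` transport,
see `man corosync.conf` for details), is to set the transport in the `totem`
section when <<edit-corosync-conf,editing the corosync.conf file>>:

----
totem {
  ...
  transport: udpu
}
----
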
Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

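As a sketch, the corresponding entry in `/etc/network/interfaces` could look
like this (the interface name `eno4` is a placeholder, the address matches the
10.10.10.1/25 example used below):

----
# dedicated cluster (corosync) network
auto eno4
iface eno4 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----
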
Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible via the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' entries respectively. I also set the
bindnetaddr in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you have increased the 'config_version' property the new configuration
file should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, we save
it and refer again to the <<edit-corosync-conf,edit corosync.conf file>>
section to learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

[[pvecm_rrp]]
Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network bonding.

Corosync itself also offers a possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The single difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further set the `rrp_mode` to `passive`, this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure that no High Availability
services are configured and then stop the corosync service on all nodes. After
corosync is stopped on all nodes, start it again one after the other.

Corosync External Vote Support
------------------------------

This section describes a way to deploy an external voter in a {pve} cluster.
When configured, the cluster can sustain more node failures without
violating safety properties of the cluster communication.

For this to work there are two services involved:

* a so called qdevice daemon which runs on each {pve} node

* an external vote daemon which runs on an independent server.

As a result you can achieve higher availability even in smaller setups (for
example 2+1 nodes).

QDevice Technical Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~

The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on the decision of an externally running third-party
arbitrator. Its primary use is to allow a cluster to sustain more node failures
than standard quorum rules allow. This can be done safely as the external
device can see all nodes and thus choose only one set of nodes to give its
vote. This will only be done if said set of nodes can have quorum (again) when
receiving the third-party vote.

Currently only 'QDevice Net' is supported as a third-party arbitrator. It is
a daemon which provides a vote to a cluster partition if it can reach the
partition members over the network. It will give only votes to one partition
of a cluster at any time.
It's designed to support multiple clusters and is almost configuration and
state free. New clusters are handled dynamically and no configuration file
is needed on the host running a QDevice.

The only requirements for the external host are network access to the cluster
and the availability of a corosync-qnetd package. We provide such a package
for Debian based hosts, other Linux distributions should also have a package
available through their respective package manager.

NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP and thus does not need a multicast capable network between itself and
the cluster. In fact the daemon may run outside of the LAN and can have
longer latencies than 2 ms.


Supported Setups
~~~~~~~~~~~~~~~~

We support QDevices for clusters with an even number of nodes and recommend
it for 2 node clusters, if they should provide higher availability.
For clusters with an odd node count we currently discourage the use of
QDevices. The reason for this is the difference in the votes the QDevice
provides for each cluster type. Even numbered clusters get a single additional
vote, which can only increase availability, because if the QDevice
itself fails we are in the same situation as with no QDevice at all.

Now, with an odd numbered cluster size the QDevice provides '(N-1)' votes --
where 'N' corresponds to the cluster node count. This difference makes
sense: if we had only one additional vote the cluster could get into a split
brain situation.
This algorithm would allow that all nodes but one (and naturally the
QDevice itself) could fail.
There are two drawbacks with this:

* If the QNet daemon itself fails, no other node may fail or the cluster
 immediately loses quorum. For example, in a cluster with 15 nodes 7
 could fail before the cluster becomes inquorate. But, if a QDevice is
 configured here and said QDevice fails itself **no single node** of
 the 15 may fail. The QDevice acts almost as a single point of failure in
 this case.

* The fact that all but one node plus QDevice may fail sounds promising at
 first, but this may result in a mass recovery of HA services that would
 overload the single node left. Also a Ceph server will stop providing
 services after only '((N-1)/2)' nodes are online.

If you understand the drawbacks and implications you can decide yourself if
you should use this technology in an odd numbered cluster setup.


QDevice-Net Setup
~~~~~~~~~~~~~~~~~

We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian Stretch provide a package which is
already configured to do so.
The traffic between the daemon and the cluster must be encrypted to ensure a
safe and secure QDevice integration in {pve}.

First install the 'corosync-qnetd' package on your external server and
the 'corosync-qdevice' package on all cluster nodes.

After that, ensure that all the nodes in the cluster are online.

You can now easily set up your QDevice by running the following command on one
of the {pve} nodes:

----
pve# pvecm qdevice setup <QDEVICE-IP>
----

The SSH key from the cluster will be automatically copied to the QDevice. You
might need to enter an SSH password during this step.

After you enter the password and all the steps are successfully completed, you
will see "Done". You can check the status now:

----
pve# pvecm status

...

Votequorum information
~~~~~~~~~~~~~~~~~~~~~
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate Qdevice

Membership information
~~~~~~~~~~~~~~~~~~~~~~
 Nodeid Votes Qdevice Name
 0x00000001 1 A,V,NMW 192.168.22.180 (local)
 0x00000002 1 A,V,NMW 192.168.22.181
 0x00000000 1 Qdevice

----

which means the QDevice is set up.


Frequently Asked Questions
~~~~~~~~~~~~~~~~~~~~~~~~~~

Tie Breaking
^^^^^^^^^^^^

In case of a tie, where two same-sized cluster partitions cannot see each
other but can see the QDevice, the QDevice chooses one of those partitions
randomly and provides a vote to it.

//Still TODO
//^^^^^^^^^^
//There is still stuff to add here


Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
For more information about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes through an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor, `nano` and `vim.tiny` are
preinstalled on {pve} for example.

NOTE: Always increment the 'config_version' number on configuration changes,
omitting this can lead to problems.

After making the necessary changes create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

if the change could be applied automatically. If not you may have to restart the
corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
 'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

it means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.

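A quick way to verify this is to check name resolution on every node with
`getent` (replace `NODENAME` with the hostname used in 'ringX_addr'):

[source,bash]
----
getent hosts NODENAME
----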

Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in '/etc/corosync/corosync.conf'
so that corosync can start again. Ensure that on all nodes this configuration
has the same content to avoid split brains. If you are not sure what went wrong
it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines the interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that the use of active is highly experimental and not officially
supported. Passive is the preferred mode, it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.

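As a sketch, the `onboot` flag can be set per guest, here for a hypothetical
VM 100 and container 101:

[source,bash]
----
qm set 100 --onboot 1
pct set 101 --onboot 1
----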

Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest also gets transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and can not guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to "insecure" to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.

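For a single migration, the type can also be selected on the command line, as
in this sketch (assuming the `migration_type` parameter of `qm migrate`; the
VMID and node name are the same placeholders used in the example below):

----
# qm migrate 106 tre --online --migration_type insecure
----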

Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has one,
but only one IP in the respective network.

Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address 192.X.Y.57
    netmask 255.255.250.0
    gateway 192.X.Y.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
    address 10.1.1.1
    netmask 255.255.255.0

# fast network
auto eno3
iface eno3 inet static
    address 10.1.2.1
    netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]