[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and it must be
enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

* The root password of a cluster node is required for adding nodes.

NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
nodes.

NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, this is not
supported as a production configuration and should only be used temporarily,
while upgrading the whole cluster from one major version to another.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently, cluster creation can be done either on the console (login via
`ssh`) or through the API, for which we have a GUI implementation
(__Datacenter -> Cluster__).

While it is common practice to reference all other node names in `/etc/hosts`
with their IP, this is not strictly necessary for a cluster, which normally uses
multicast, to work. It may still be useful, as you can then connect from one
node to the other via SSH using the easier to remember node name.
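
For example, on a three-node cluster the relevant `/etc/hosts` entries could
look like the following sketch (node names, domain and addresses are
placeholders only, adjust them to your setup):

----
192.168.15.91 hp1.example.local hp1
192.168.15.92 hp2.example.local hp2
192.168.15.93 hp3.example.local hp3
----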

[[pvecm_create_cluster]]
Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later. The cluster name follows the same rules as
node names.

----
 hp1# pvecm create CLUSTERNAME
----

CAUTION: The cluster name is used to compute the default multicast address.
Please use unique cluster names if you run more than one cluster inside your
network. To avoid human confusion, it is also recommended to choose different
names even if clusters do not share the cluster network.

To check the state of your cluster use:

----
 hp1# pvecm status
----

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network, it may be beneficial to set up
an IGMP querier and enable IGMP snooping in that network. This can reduce the
load on the network significantly, because multicast packets are then only
delivered to the endpoints of the respective member nodes.
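
On a plain Linux bridge this can be toggled via sysfs, for example as in the
following sketch (it assumes the bridge is called `vmbr0` and that the bridge
itself should act as the IGMP querier; add the settings to your network
configuration to make them persistent across reboots):

[source,bash]
----
# enable IGMP snooping on the bridge and let it act as IGMP querier
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----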

[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

----
 hp2# pvecm add IP-ADDRESS-CLUSTER
----

For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
An IP address is recommended (see <<corosync-addresses,Ring Address Types>>).

CAUTION: A new node cannot hold any VMs, because you would get
conflicts due to identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore to a different VMID after
adding the node to the cluster.
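
A hypothetical backup/restore cycle for a VM with ID 100 could look like the
sketch below (the dump directory, the archive name and the new VMID 200 are
placeholders; the archive extension depends on the chosen compression):

[source,bash]
----
# on the joining node, before the join: back up the guest
vzdump 100 --dumpdir /var/lib/vz/dump

# after the node has joined the cluster: restore it under a free VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma 200
----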

To check the state of the cluster:

----
 # pvecm status
----

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want a list of all nodes, use:

----
 # pvecm nodes
----

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network, you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter.
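
For instance, a join over two separated cluster networks might look like the
following sketch (the ring addresses are placeholders for the joining node's
addresses on the respective networks):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr 10.10.10.2 -ring1_addr 10.10.20.2
----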


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----


At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

----
 hp1# pvecm delnode hp4
----

If the operation succeeds, no output is returned. Just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.

It is suggested that you create a new storage to which only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
over from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory recursively under '/etc/pve/nodes/NODENAME', but check three times
that you used the correct one before deleting it.
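
For example, assuming the separated node was called hp4, a sketch of that
cleanup would be (double-check the node name before running it):

[source,bash]
----
# remove the old node's directory from the cluster filesystem
rm -rf /etc/pve/nodes/hp4
----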

CAUTION: The node's SSH keys are still in the 'authorized_key' file, which means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.
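
Besides `pvecm status`, you can also query the votequorum state directly with
corosync's own tool, for example (a quick check; `corosync-quorumtool` ships
with the corosync package present on every {pve} node):

[source,bash]
----
# show expected votes, total votes and whether this node is part of a quorate partition
corosync-quorumtool -s
----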

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network where
storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot get
multicast to work.

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters, the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible via the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes (see also
<<corosync-addresses,Ring Address Types>>).

In this example we want to switch the cluster communication to the 10.10.10.1/25
network, so we replace all 'ring0_addr' entries accordingly. We also set the
bindnetaddr in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, we save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes as well.
They will then join the cluster membership one by one on the new network.

[[corosync-addresses]]
Corosync Addresses
~~~~~~~~~~~~~~~~~~

A corosync link or ring address can be specified in two ways:

* **IPv4/v6 addresses** will be used directly. They are recommended, since they
are static and usually not changed carelessly.

* **Hostnames** will be resolved using `getaddrinfo`, which means that by
default, IPv6 addresses will be used first, if available (see also
`man gai.conf`). Keep this in mind, especially when upgrading an existing
cluster to IPv6.

CAUTION: Hostnames should be used with care, since the address they
resolve to can be changed without touching corosync or the node it runs on,
which may lead to a situation where an address is changed without thinking
about the implications for corosync.

A separate, static hostname specifically for corosync is recommended, if
hostnames are preferred. Also, make sure that every node in the cluster can
resolve all hostnames correctly.
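
To check how a given hostname would actually resolve on a node (and whether an
IPv6 address would be preferred), `getent` can be used, since it also resolves
names via `getaddrinfo`; a quick sketch with a hypothetical node name:

[source,bash]
----
# addresses are printed in getaddrinfo order (IPv6 first by default, see gai.conf)
getent ahosts pve-node1
----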

Since {pve} 5.1, while supported, hostnames will be resolved at the time of
entry. Only the resolved IP is then saved to the configuration.

Nodes that joined the cluster on earlier versions likely still use their
unresolved hostname in `corosync.conf`. It might be a good idea to replace
them with IPs or a separate hostname, as mentioned above.

[[pvecm_rrp]]
Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network bonding.

Corosync itself also offers a possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The single difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure that no High Availability
services are configured, then stop the corosync service on all nodes. After
corosync is stopped on all nodes, start it again one after the other.

Corosync External Vote Support
------------------------------

This section describes a way to deploy an external voter in a {pve} cluster.
When configured, the cluster can sustain more node failures without
violating safety properties of the cluster communication.

For this to work there are two services involved:

* a so-called qdevice daemon which runs on each {pve} node

* an external vote daemon which runs on an independent server.

As a result you can achieve higher availability even in smaller setups (for
example 2+1 nodes).

QDevice Technical Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~

The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on the decision of an externally running third-party
arbitrator. Its primary use is to allow a cluster to sustain more node
failures than standard quorum rules allow. This can be done safely as the
external device can see all nodes and thus choose only one set of nodes to
give its vote.
This will only be done if said set of nodes can have quorum (again) when
receiving the third-party vote.

Currently only 'QDevice Net' is supported as a third-party arbitrator. It is
a daemon which provides a vote to a cluster partition if it can reach the
partition members over the network. It will only give votes to one partition
of a cluster at any time.
It's designed to support multiple clusters and is almost configuration and
state free. New clusters are handled dynamically and no configuration file
is needed on the host running a QDevice.

The only requirements for the external host are that it needs network access
to the cluster and has a corosync-qnetd package available. We provide such a
package for Debian-based hosts; other Linux distributions should also have a
package available through their respective package manager.

NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP and thus does not need a multicast capable network between itself and
the cluster. In fact, the daemon may run outside of the LAN and can have
longer latencies than 2 ms.


Supported Setups
~~~~~~~~~~~~~~~~

We support QDevices for clusters with an even number of nodes and recommend
it for 2 node clusters, if they should provide higher availability.
For clusters with an odd node count we currently discourage the use of
QDevices. The reason for this is the difference in the votes the QDevice
provides for each cluster type. Even numbered clusters get a single additional
vote, with which we can only increase availability, since if the QDevice
itself fails we are in the same situation as with no QDevice at all.

Now, with an odd numbered cluster size the QDevice provides '(N-1)' votes --
where 'N' corresponds to the cluster node count. This difference makes
sense; if we had only one additional vote, the cluster could get into a split
brain situation.
This algorithm allows all nodes but one (and naturally the
QDevice itself) to fail.
There are two drawbacks with this:

* If the QNet daemon itself fails, no other node may fail or the cluster
  immediately loses quorum. For example, in a cluster with 15 nodes, 7
  could fail before the cluster becomes inquorate. But, if a QDevice is
  configured here and said QDevice fails itself, **no single node** of
  the 15 may fail. The QDevice acts almost as a single point of failure in
  this case.

* The fact that all but one node plus QDevice may fail sounds promising at
  first, but this may result in a mass recovery of HA services that would
  overload the single node left. Also, a Ceph server will stop providing
  services after only '((N-1)/2)' nodes are online.

If you understand the drawbacks and implications, you can decide yourself if
you should use this technology in an odd numbered cluster setup.


QDevice-Net Setup
~~~~~~~~~~~~~~~~~

We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
configured to do so.
The traffic between the daemon and the cluster must be encrypted to ensure a
safe and secure QDevice integration in {pve}.

First install the 'corosync-qnetd' package on your external server and
the 'corosync-qdevice' package on all cluster nodes.
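
On Debian-based systems this could be done with `apt`, for example (the
prompts are illustrative, package names as mentioned above):

----
# on the external server
external# apt install corosync-qnetd

# on all {pve} cluster nodes
pve# apt install corosync-qdevice
----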

After that, ensure that all the nodes in the cluster are online.

You can now easily set up your QDevice by running the following command on one
of the {pve} nodes:

----
pve# pvecm qdevice setup <QDEVICE-IP>
----

The SSH key from the cluster will be automatically copied to the QDevice. You
might need to enter an SSH password during this step.

After you enter the password and all the steps are successfully completed, you
will see "Done". You can check the status now:

----
pve# pvecm status

...

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,V,NMW 192.168.22.180 (local)
0x00000002          1    A,V,NMW 192.168.22.181
0x00000000          1            Qdevice

----

which means the QDevice is set up.


Frequently Asked Questions
~~~~~~~~~~~~~~~~~~~~~~~~~~

Tie Breaking
^^^^^^^^^^^^

In case of a tie, where two same-sized cluster partitions cannot see each other
but can see the QDevice, the QDevice randomly chooses one of those partitions
and provides a vote to it.

Possible Negative Implications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For clusters with an even node count there are no negative implications when
setting up a QDevice. If it fails to work, you are no worse off than without a
QDevice at all.

Adding/Deleting Nodes After QDevice Setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you want to add a new node or remove an existing one from a cluster with a
QDevice setup, you need to remove the QDevice first. After that, you can add or
remove nodes normally. Once you have a cluster with an even node count again,
you can set up the QDevice again as described above.

Removing the QDevice
^^^^^^^^^^^^^^^^^^^^

If you used the official `pvecm` tool to add the QDevice, you can remove it
trivially by running:

----
pve# pvecm qdevice remove
----

//Still TODO
//^^^^^^^^^^
//There is still stuff to add here


Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unintended changes with an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes,
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors, check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in '/etc/corosync/corosync.conf'
so that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split-brain situations. If you are not sure what went
wrong, it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines the interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
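
The `onboot` flag is a per-guest option; for example, it could be set like this
(the VMIDs 100 and 101 are placeholders):

[source,bash]
----
# start VM 100 and container 101 automatically after boot (once quorum is reached)
qm set 100 --onboot 1
pct set 101 --onboot 1
----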

When you turn on nodes, or when power comes back after a power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that nobody
is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.
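
For a single migration, the type can also be chosen on the command line; a
sketch (the VMID 106 and target node 'tre' are placeholders):

----
# qm migrate 106 tre --online --migration_type insecure
----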


Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has
exactly one IP in the respective network.

Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
        address 192.X.Y.57
        netmask 255.255.255.0
        gateway 192.X.Y.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
        address 10.1.1.1
        netmask 255.255.255.0

# fast network
auto eno3
iface eno3 inet static
        address 10.1.2.1
        netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]