[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and must be
manually enabled first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

* The root password of a cluster node is required for adding nodes.

NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.X cluster
nodes.

NOTE: While mixing {pve} 4.4 and {pve} 5.0 nodes is possible, this is not
supported as a production configuration and should only be used temporarily,
while upgrading the whole cluster from one major version to another.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently, cluster creation can be done either on the console (login via
`ssh`) or through the API, for which we have a GUI implementation
(__Datacenter -> Cluster__).

While it is common practice to list all other node names with their IPs in
`/etc/hosts`, this is not strictly necessary for a cluster, which normally uses
multicast, to work. It may still be useful, as you can then connect from one
node to the other via SSH using the easier to remember node name.
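
For example, the `/etc/hosts` entries for a three node cluster could look like
the following minimal sketch (host names and addresses are placeholders, not
taken from any example in this chapter):

----
10.10.10.1 node1.example.com node1
10.10.10.2 node2.example.com node2
10.10.10.3 node3.example.com node3
----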

[[pvecm_create_cluster]]
Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later. The cluster name follows the same rules as
node names.

----
 hp1# pvecm create CLUSTERNAME
----

CAUTION: The cluster name is used to compute the default multicast address.
Please use unique cluster names if you run more than one cluster inside your
network. To avoid human confusion, it is also recommended to choose different
names even if clusters do not share the cluster network.

To check the state of your cluster use:

----
 hp1# pvecm status
----

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network it may be beneficial to set up
an IGMP querier and enable IGMP Snooping in said network. This may reduce the
load of the network significantly because multicast packets are only delivered
to the endpoints of the respective member nodes.
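
On a plain Linux bridge, for example, IGMP snooping and the querier can be
toggled via sysfs. This is only a rough sketch and assumes a bridge named
`vmbr0`; managed switches have their own, vendor-specific configuration for
this.

[source,bash]
----
# enable IGMP snooping on the (assumed) bridge vmbr0
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
# let the bridge act as IGMP querier on that segment
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----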


[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

----
 hp2# pvecm add IP-ADDRESS-CLUSTER
----

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. To
work around this, use `vzdump` to backup and restore to a different VMID after
adding the node to the cluster.

To check the state of the cluster:

----
 # pvecm status
----

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

----
 # pvecm nodes
----

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter, for example as in the sketch below.
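
A join command for a node with two separated cluster networks could then look
roughly like this (a sketch; the ring addresses are placeholders for the
joining node's addresses on the respective networks):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----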


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may
not be what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----


At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

----
 hp1# pvecm delnode hp4
----

If the operation succeeds no output is returned, just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across cluster
boundaries. Further, it may also lead to VMID conflicts.

It is suggested that you create a new storage to which only the node you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.
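
For example (a sketch, using the NODENAME placeholder from above; triple-check
the name before running it):

[source,bash]
----
# on the separated node: recursively remove a stale node directory
rm -rf /etc/pve/nodes/NODENAME
----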

CAUTION: The node's SSH keys are still in the 'authorized_key' file, which means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. For example, in a cluster with five votes in
total, at least three votes must be reachable to stay quorate. The cluster
switches to read-only mode if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead,
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with the network
where your storage communicates.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work, for example with a corosync configuration like the
sketch below.
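
Switching corosync to UDP unicast (UDPU) is done in the `totem` section of
`corosync.conf`. The following is only a rough sketch of the relevant property,
based on the corosync documentation; see the
<<edit-corosync-conf,edit corosync.conf>> section for how to apply such a
change safely:

----
totem {
  # ... existing properties like cluster_name and config_version ...
  transport: udpu
}
----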

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time critical real time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In this example we want to switch the cluster communication to the 10.10.10.1/25
network, so we replace all 'ring0_addr' entries respectively. We also set the
bindnetaddr in the totem section of the config to an address of the new network.
It can be any address from the subnet configured on the new network interface.

After you increased the 'config_version' property the new configuration file
should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all changed information is correct, we save it
and see again the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it also on all other nodes.
They will then join the cluster membership one by one on the new network.

[[pvecm_rrp]]
Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement counter measures.
This can be done on the hardware and operating system level through network
bonding.

Corosync itself also offers a possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The single difference is, that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further set the `rrp_mode` to `passive`, this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it again one after the other, as sketched below.
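
A rough sketch of this procedure, run manually on every node:

[source,bash]
----
# first, on each node, one after the other:
systemctl stop corosync

# once corosync is stopped everywhere, again on each node:
systemctl start corosync
----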

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes through an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor, `nano` and `vim.tiny` are
preinstalled on {pve} for example.

NOTE: Always increment the 'config_version' number on configuration changes,
omitting this can lead to problems.

After making the necessary changes create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

if the change could be applied automatically. If not, you may have to restart
the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.
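
You can check name resolution of the configured ring addresses on each node,
for example with `getent` (a sketch; `due` stands for one of the 'ringX_addr'
hostnames from your configuration):

[source,bash]
----
getent hosts due
----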


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.
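
Reverting can be as simple as copying the backup created earlier back into
place (a sketch, assuming you created `corosync.conf.bak` as described above):

[source,bash]
----
cp /etc/pve/corosync.conf.bak /etc/pve/corosync.conf
----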

This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf' so that corosync can start again. Ensure that on
all nodes this configuration has the same content to avoid split brain
situations. If you are not sure what went wrong it's best to ask the Proxmox
Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines to which interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode, it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set (see the sketch below).
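
The `onboot` flag can, for example, be enabled for a VM with `qm` (a sketch;
`100` is just a placeholder VM ID):

[source,bash]
----
qm set 100 --onboot 1
----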

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about Virtual Machine Migration see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about Container Migration see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest also gets transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and can not guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.
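
As a sketch, the cluster-wide default could be switched to the unencrypted
channel by setting the `migration` property in `/etc/pve/datacenter.cfg`
(shown here without a dedicated migration network):

----
migration: insecure
----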


Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in the CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has one,
but only one IP in the respective network.


Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address 192.X.Y.57
    netmask 255.255.255.0
    gateway 192.X.Y.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
    address 10.1.1.1
    netmask 255.255.255.0

# fast network
auto eno3
iface eno3 inet static
    address 10.1.2.1
    netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]