ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, dependent on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and it must be
enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network it may be beneficial to set up
an IGMP querier and enable IGMP snooping in said network. This may reduce the
network load significantly, because multicast packets are only delivered to the
endpoints of the respective member nodes.

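On hosts where the cluster traffic passes over a Linux bridge, one way to get a
querier is to let the bridge itself act as one. The following is only a minimal
sketch, assuming a bridge named `vmbr0`; whether the bridge is the right place
for the querier depends on your switches and network layout.

[source,bash]
----
# enable IGMP snooping and an IGMP querier on the (assumed) bridge vmbr0
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----

These settings are not persistent; to keep them across reboots they could, for
example, be added as `post-up` commands to the bridge stanza in
`/etc/network/interfaces`.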

Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. To
work around this, use `vzdump` to back up and restore to a different VMID
after adding the node to the cluster, as sketched below.

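A minimal sketch of that workaround, assuming a conflicting VM 100 on the
joining node, a local scratch directory `/mnt/backup`, and VMID 200 being free
in the cluster (adjust paths and IDs to your setup):

[source,bash]
----
# on the node that is going to join: back up the guest first
vzdump 100 --dumpdir /mnt/backup

# join the cluster; this overwrites /etc/pve on this node
pvecm add IP-ADDRESS-CLUSTER

# restore the backup under a VMID that is unused cluster-wide
# (the exact archive name depends on the chosen compression)
qmrestore /mnt/backup/vzdump-qemu-100-*.vma 200
----

For containers the equivalent restore command would be `pct restore`.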

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter, as in the sketch below.

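For example, a join over two separated cluster networks could look like the
following; both addresses are placeholders for the node's own addresses on the
respective ring networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----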

Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may not be
what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----


At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.

It is suggested that you create a new storage to which only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting this storage up, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_key' file, which means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

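You can inspect the vote counts and the quorum state on any node with
`pvecm status`. In a default five-node cluster, for example, each node has one
vote, so three votes are required for quorum, and the cluster stays writable as
long as at least three nodes can still see each other:

[source,bash]
----
# show only the quorum-related lines of the status output
pvecm status | grep -E 'Expected votes|Total votes|Quorum|Quorate'
----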

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network.
*Never* share it with the network where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces, using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work; see the sketch below.

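Corosync supports unicast via its `udpu` transport. A minimal sketch of the
relevant `totem` setting follows; edit the configuration as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section and remember to
increase 'config_version':

----
totem {
  [...] # existing totem settings stay unchanged
  transport: udpu
}
----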

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

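For example, a dedicated NIC could be configured statically in
`/etc/network/interfaces`; the interface name `eno2` and the 10.10.10.1/25
address are just assumptions matching the example in the next section:

----
# dedicated corosync network
auto eno2
iface eno2 inet static
    address  10.10.10.1
    netmask  255.255.255.128
----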

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or also hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' respectively. I also set the bindnetaddr
in the totem section of the config to an address of the new network. It can be
any address from the subnet configured on the new network interface.

After you increased the 'config_version' property, the new configuration file
should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now after a final check that all changed information is correct, we save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.

As our change cannot be enforced live from corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart corosync also on all other nodes.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network
bonding, for example as sketched below.

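A bonded cluster NIC could, for instance, be sketched in
`/etc/network/interfaces` as follows; the interface names, the bond mode and
the address are assumptions that have to be adapted to your hardware:

----
# bond two physical NICs for the cluster network
auto bond0
iface bond0 inet static
    address  10.10.10.1
    netmask  255.255.255.0
    bond-slaves eno2 eno3
    bond-miimon 100
    bond-mode active-backup
----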

Corosync itself also offers the possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The only difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. Recommended is a restart of the whole cluster.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it again one node after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering some unwanted changes by an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor, `nano` and `vim.tiny` are
preinstalled on {pve} for example.

NOTE: Always increment the 'config_version' number on configuration changes,
omitting this can lead to problems.

After making the necessary changes create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.

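A quick way to verify this is to look up every name used as a 'ringX_addr' on
every node, for example with `getent`; the node names below are just the ones
from the example configuration above:

[source,bash]
----
# each name should resolve to an address reachable on the cluster network
getent hosts due tre uno
----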

Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. Here it is best to edit the
local copy of the corosync configuration in '/etc/corosync/corosync.conf' so
that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split brains. If you are not sure what went wrong
it's best to ask the Proxmox Community to help you.

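One way to make sure the manually fixed local configuration really is identical
on all nodes is to copy it from the repaired node to the others; the node names
below are placeholders:

[source,bash]
----
scp /etc/corosync/corosync.conf root@node2:/etc/corosync/corosync.conf
scp /etc/corosync/corosync.conf root@node3:/etc/corosync/corosync.conf
----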

[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines to which interface the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increase availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.

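The `onboot` flag can be set through the web interface or on the command line,
for example for a VM with ID 100 and a container with ID 101 (the IDs are just
examples):

[source,bash]
----
qm set 100 --onboot 1    # start this VM automatically once the node is quorate
pct set 101 --onboot 1   # the same for a container
----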

Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest also gets transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.

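The type can be chosen per migration on the command line or cluster-wide in
`datacenter.cfg`. A sketch, reusing VM 106 and the target node `tre` from the
example further below:

[source,bash]
----
# force an unencrypted (insecure) migration for this one VM only
qm migrate 106 tre --online --migration_type insecure
----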

Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has exactly
one IP in the respective network.


Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address 192.X.Y.57
    netmask 255.255.250.0
    gateway 192.X.Y.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
    address  10.1.1.1
    netmask  255.255.255.0

# fast network
auto eno3
iface eno3 inet static
    address  10.1.2.1
    netmask  255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]