[[chapter_pvecm]]
ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information, and perform various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and must be
manually enabled first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

* The root password of a cluster node is required for adding nodes.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.

Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

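Since the hostname and IP cannot be changed afterwards, it can help to verify
both before creating the cluster. A minimal check, relying only on standard
Debian tooling, could look like this:

[source,bash]
----
hostname                  # should already print the final node name
getent hosts $(hostname)  # should resolve to the IP you intend to keep
----
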
Currently the cluster creation can either be done on the console (login via
`ssh`) or through the GUI.

[[pvecm_create_cluster]]
Create the Cluster
------------------

Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later. The cluster name follows the same rules as
node names.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network it may be beneficial to set up
an IGMP querier and enable IGMP snooping in said network. This may reduce the
load of the network significantly because multicast packets are only delivered
to the endpoints of the respective member nodes.

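How to enable an IGMP querier depends on the switch or router in use. As a
minimal sketch, assuming the nodes are attached to a plain Linux bridge named
`vmbr0` (the bridge name is just an example), snooping and a querier can be
switched on for that bridge:

[source,bash]
----
# enable IGMP snooping and let the bridge act as IGMP querier
echo 1 > /sys/class/net/vmbr0/bridge/multicast_snooping
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
----

On managed switches the equivalent settings are vendor-specific, so consult
the switch documentation.
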

[[pvecm_join_node_to_cluster]]
Adding Nodes to the Cluster
---------------------------

Login via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. To
work around this, use `vzdump` to back up and restore to a different VMID
after adding the node to the cluster.

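As an illustration of that workaround (the VMIDs and the dump directory are
made-up example values):

[source,bash]
----
# on the joining node, before running 'pvecm add': back up guest 100
vzdump 100 --dumpdir /var/lib/vz/dump

# after joining the cluster: restore it under a VMID that is free cluster-wide
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma 120
----
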
To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

[[adding-nodes-with-separated-cluster-network]]
Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol you will also want to pass the
'ring1_addr' parameter.

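In that case the command could look like this, with `IP-ADDRESS-RING1` standing
for the node's address on the second ring network:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----
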

Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may
not be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

At this point you must power off hp4 and make sure that it will not
power on again (in the network) as it is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above-mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.

It is suggested that you create a new storage to which only the node you want
to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a
workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left over
from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory recursively under '/etc/pve/nodes/NODENAME', but check three times
that you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_key' file; this means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a majority
of nodes is online. The cluster switches to read-only mode if it loses
quorum. For example, a five-node cluster with one vote per node remains
quorate as long as at least three nodes are online.

NOTE: {pve} assigns a single vote to each node by default.

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network
where storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fail. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot
get multicast to work.

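For reference, a rough sketch of how that is usually done with corosync 2.x
(this is an assumption about the common corosync approach, not an official
{pve} recommendation): set the `transport` option to `udpu` in the `totem`
section of `corosync.conf`, following the procedure from the
<<edit-corosync-conf,edit the corosync.conf file>> section:

----
totem {
  cluster_name: mycluster   # example name
  config_version: 4
  ip_version: ipv4
  secauth: on
  transport: udpu           # use UDP unicast instead of multicast
  version: 2
  [...]
}
----
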
Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

Afterwards, proceed as described in the section on how to
<<adding-nodes-with-separated-cluster-network,add nodes with a separated cluster network>>.

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short durations of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties to the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses in the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' entries accordingly. I also set the
bindnetaddr in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, save the
file and see the <<edit-corosync-conf,edit corosync.conf file>> section again
to learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes as well.
They will then join the cluster membership one by one on the new network.

[[pvecm_rrp]]
Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network
bonding.

Corosync itself also offers the possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second
totem ring on another network; this network should be physically separated
from the other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The only difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure no High Availability services
are configured and then stop the corosync service on all nodes. After corosync
is stopped on all nodes, start it again one node after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are two on
each cluster node, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes through an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors, check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]:  [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for reason
    'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

It means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.

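A quick way to fix this is to make sure every 'ringX_addr' hostname resolves on
every node, either via DNS or with entries in `/etc/hosts`. Using the hostnames
from the example configuration earlier in this chapter (the addresses are
illustrative):

----
# /etc/hosts on every cluster node
10.10.10.1 uno
10.10.10.2 due
10.10.10.3 tre
----
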
Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. Here it is best to edit the
local copy of the corosync configuration in '/etc/corosync/corosync.conf' so
that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split-brain situations. If you are not sure what went
wrong, it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines which interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address the node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.

Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.

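For reference, the `onboot` flag can be set per guest from the command line;
the VMID and CTID below are arbitrary example values:

[source,bash]
----
qm set 100 --onboot 1    # start VM 100 automatically once the node is quorate
pct set 101 --onboot 1   # the same for container 101
----
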

Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference whether a guest is online or offline, or whether it has
local resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines whether the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.

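For example, to use the unencrypted channel for a single migration only, the
type can be passed on the command line; the VMID `106` and target node `tre`
are illustrative values, and the `migration_type` parameter name is an
assumption modeled on the `migration_network` parameter shown further below:

----
# qm migrate 106 tre --online --migration_type insecure
----

To make this the cluster-wide default instead, the `migration` property in
`/etc/pve/datacenter.cfg` could be set accordingly:

----
migration: insecure
----
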

Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has one,
but only one IP in the respective network.

Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address 192.X.Y.57
    netmask 255.255.250.0
    gateway 192.X.Y.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
    address 10.1.1.1
    netmask 255.255.255.0

# fast network
auto eno3
iface eno3 inet static
    address 10.1.2.1
    netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]