ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other cluster
related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and it must be
enabled manually first.

* Date and time have to be synchronized.

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.


Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently the cluster creation has to be done on the console, so you
need to log in via `ssh`.

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status


Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. As a
workaround, use `vzdump` to back up and restore the guests to different VMIDs
after adding the node to the cluster.
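
For example, the workaround could look like this (a rough sketch with a
hypothetical VMID `100`, target VMID `120` and dump file name; containers would
use `pct restore` instead of `qmrestore`):

[source,bash]
----
# on the joining node, before running 'pvecm add'
vzdump 100

# after the join, restore the backup under a new, unused VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<TIMESTAMP>.vma 120
----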

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date: Mon Apr 20 12:30:13 2015
Quorum provider: corosync_votequorum
Nodes: 4
Node ID: 0x00000001
Ring ID: 1928
Quorate: Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes: 4
Highest expected: 4
Total votes: 4
Quorum: 2
Flags: Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
 Nodeid Votes Name
0x00000001 1 192.168.15.91
0x00000002 1 192.168.15.92 (local)
0x00000003 1 192.168.15.93
0x00000004 1 192.168.15.94
----

If you only want the list of all nodes, use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
 Nodeid Votes Name
 1 1 hp1
 2 1 hp2 (local)
 3 1 hp3
 4 1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network, you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol, you will also want to pass the
'ring1_addr' parameter.
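
For example, with the Redundant Ring Protocol the command could look like this
(a sketch following the same placeholder scheme, with an additional address for
the second ring):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----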


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may
not be what you want or need.

Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example, we will remove the node hp4 from the cluster.
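
For example, a single guest could be moved away with a live migration (a sketch
with a hypothetical VM ID):

----
hp4# qm migrate 100 hp1 --online
----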

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
 Nodeid Votes Name
 1 1 hp1 (local)
 2 1 hp2
 3 1 hp3
 4 1 hp4
----


At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date: Mon Apr 20 12:44:28 2015
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000001
Ring ID: 1992
Quorate: Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 3
Flags: Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
 Nodeid Votes Name
0x00000001 1 192.168.15.90 (local)
0x00000002 1 192.168.15.91
0x00000003 1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to:

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster it will still have
access to the shared storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Further, it may also lead to VMID conflicts.

It is suggested that you create a new storage which only the node that you want
to separate has access to. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left over
from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.

CAUTION: The node's SSH keys are still in the 'authorized_keys' file, which means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
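
A minimal sketch of how that could be done (assuming the key comments end with
the old node's name, here the hypothetical 'oldnode'; verify the file before and
after editing):

[source,bash]
----
cp /etc/pve/priv/authorized_keys /root/authorized_keys.bak
sed -i '/oldnode$/d' /etc/pve/priv/authorized_keys
----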

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.
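
As a simple illustration (assuming the default of one vote per node): a cluster
of four nodes has four votes in total, so a majority of 4/2 + 1 = 3 votes is
required to stay quorate. If the cluster splits into two partitions of two nodes
each, neither side reaches three votes, both sides lose quorum, and the cluster
becomes read-only until connectivity is restored.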

Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high performance, low overhead,
high availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with a network where
storage communicates too.

Before setting up a cluster it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Especially switches are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot get
multicast to work.
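
A minimal sketch of what that could look like (unicast is selected through the
`transport` property in the `totem` section of corosync.conf; see the
<<edit-corosync-conf,edit corosync.conf>> section for how to apply such a change
safely):

----
totem {
  transport: udpu
  [...] # rest of the existing totem section stays unchanged
}
----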

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters, the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly execute:
[source,bash]
----
systemctl status corosync
----

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can do this also if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses from the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' values respectively. I also set the
'bindnetaddr' in the totem section of the config to an address of the new
network. It can be any address from the subnet configured on the new network
interface.

After you have increased the 'config_version' property, the new configuration
file should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, we save it
and once more refer to the <<edit-corosync-conf,edit corosync.conf file>> section
to learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement countermeasures.
This can be done on the hardware and operating system level through network bonding.

Corosync itself also offers a possibility to add redundancy through the so
called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The single difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure that no High Availability services
are configured, and then stop the corosync service on all nodes. After corosync is
stopped on all nodes, start it again one after the other.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
For more information about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes by an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You can check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to
restart the corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors, check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
 'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

it means that the hostname you set for corosync 'ringX_addr' in the
configuration could not be resolved.
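
A quick way to verify this is to check name resolution on every node (a sketch
with a hypothetical node name):

[source,bash]
----
getent hosts prox-node1
----

If the lookup returns nothing, fix the entry in '/etc/hosts' or your DNS and
restart corosync.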


Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on
all nodes this configuration has the same content to avoid split brains. If you
are not sure what went wrong, it's best to ask the Proxmox Community to help you.


[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines to which interface the ring should bind. It may be any address of
the subnet configured on the interface we want to use. In general it is
recommended to just use an address a node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.


Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the service `pve-manager` is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.

When you turn on nodes, or when power comes back after a power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
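
For example, the `onboot` flag of a virtual machine can be set on the command
line (a sketch with a hypothetical VM ID):

[source,bash]
----
qm set 100 --onboot 1
----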


Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.
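
For example, the unencrypted channel can be requested for a single migration on
the command line (a sketch reusing the VM ID and target node from the example
below; the same choice can also be set cluster-wide via the `migration` property
in `datacenter.cfg`):

----
# qm migrate 106 tre --online --migration_type insecure
----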


Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has one,
but only one IP in the respective network.


Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eth0 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address 192.X.Y.57
    netmask 255.255.250.0
    gateway 192.X.Y.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# cluster network
auto eth1
iface eth1 inet static
    address 10.1.1.1
    netmask 255.255.255.0

# fast network
auto eth2
iface eth2 inet static
    address 10.1.2.1
    netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
is set in `/etc/pve/datacenter.cfg`.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]