ifdef::manvolnum[]
pvecm(1)
========
:pve-toplevel:

NAME
----

pvecm - Proxmox VE Cluster Manager

SYNOPSIS
--------

include::pvecm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Cluster Manager
===============
:pve-toplevel:
endif::manvolnum[]

The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).

`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information and do various other
cluster-related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.

Grouping nodes into a cluster has the following advantages:

* Centralized, web-based management

* Multi-master clusters: each node can do all management tasks

* `pmxcfs`: database-driven file system for storing configuration files,
  replicated in real-time on all nodes using `corosync`.

* Easy migration of virtual machines and containers between physical
  hosts

* Fast deployment

* Cluster-wide services like firewall and HA


Requirements
------------

* All nodes must be in the same network, as `corosync` uses IP multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
+
NOTE: Some switches do not support IP multicast by default and it must be
enabled manually first.

* Date and time have to be synchronized (see the quick check below).

* An SSH tunnel on TCP port 22 between nodes is used.

* If you are interested in High Availability, you need to have at
  least three nodes for reliable quorum. All nodes should have the
  same version.

* We recommend a dedicated NIC for the cluster traffic, especially if
  you use shared storage.

NOTE: It is not possible to mix Proxmox VE 3.x and earlier with
Proxmox VE 4.0 cluster nodes.

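For example, whether a node's clock is being synchronized can be checked
quickly like this (a sketch; `timedatectl` is part of systemd and available on
{pve} nodes):

[source,bash]
----
# the reported synchronization state should be "yes" on every node
timedatectl status | grep -i synchronized
----
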
Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
installed with the final hostname and IP configuration. Changing the
hostname and IP is not possible after cluster creation.

Currently, cluster creation has to be done on the console, so you
need to log in via `ssh`.

Create the Cluster
------------------

Log in via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later.

 hp1# pvecm create YOUR-CLUSTER-NAME

CAUTION: The cluster name is used to compute the default multicast
address. Please use unique cluster names if you run more than one
cluster inside your network.

To check the state of your cluster use:

 hp1# pvecm status

Multiple Clusters In Same Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create multiple clusters in the same physical or logical
network. Each cluster must have a unique name, which is used to generate the
cluster's multicast group address. As long as no duplicate cluster names are
configured in one network segment, the different clusters won't interfere with
each other.

If multiple clusters operate in a single network, it may be beneficial to set up
an IGMP querier and enable IGMP snooping in that network. This can reduce the
network load significantly, because multicast packets are only delivered
to the endpoints of the respective member nodes.

Adding Nodes to the Cluster
---------------------------

Log in via `ssh` to the node you want to add.

 hp2# pvecm add IP-ADDRESS-CLUSTER

For `IP-ADDRESS-CLUSTER` use the IP of an existing cluster node.

CAUTION: A new node cannot hold any VMs, because you would get
conflicts with identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. To
work around this, use `vzdump` to back up the guests and restore them
under a different VMID after adding the node to the cluster (see the
sketch below).

To check the state of the cluster:

 # pvecm status

.Cluster status after adding 4 nodes
----
hp2# pvecm status
Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:30:13 2015
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000001
Ring ID:          1928
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           2
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.91
0x00000002          1 192.168.15.92 (local)
0x00000003          1 192.168.15.93
0x00000004          1 192.168.15.94
----

If you only want the list of all nodes, use:

 # pvecm nodes

.List nodes in a cluster
----
hp2# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1
         2          1 hp2 (local)
         3          1 hp3
         4          1 hp4
----

Adding Nodes With Separated Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When adding a node to a cluster with a separated cluster network, you need to
use the 'ringX_addr' parameters to set the node's address on those networks:

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
----

If you want to use the Redundant Ring Protocol, you will also want to pass the
'ring1_addr' parameter.
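
For example, a join that sets addresses for both rings might look like this
(all addresses are placeholders):

[source,bash]
----
pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0 -ring1_addr IP-ADDRESS-RING1
----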


Remove a Cluster Node
---------------------

CAUTION: Read the procedure carefully before proceeding, as it may
not be what you want or need.

Move all virtual machines off the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:

----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

At this point you must power off hp4 and
make sure that it will not power on again (in the network) as it
is.

IMPORTANT: As said above, it is critical to power off the node
*before* removal, and make sure that it will *never* power on again
(in the existing cluster network) as it is.
If you power on the node as it is, your cluster will be screwed up and
it could be difficult to restore a clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

 hp1# pvecm delnode hp4

If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
something like:

----
hp1# pvecm status

Quorum information
~~~~~~~~~~~~~~~~~~
Date:             Mon Apr 20 12:44:28 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1992
Quorate:          Yes

Votequorum information
~~~~~~~~~~~~~~~~~~~~~~
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           3
Flags:            Quorate

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same
cluster again, you have to

* reinstall {pve} on it from scratch

* then join it, as explained in the previous section.

[[pvecm_separate_node_without_reinstall]]
Separate A Node Without Reinstalling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CAUTION: This is *not* the recommended method, proceed with caution. Use the
above mentioned method if you're unsure.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to the shared storage! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work across cluster
boundaries. Furthermore, it may also lead to VMID conflicts.

It's suggested that you create a new storage to which only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It's just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.

WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
run into conflicts and problems.

First, stop the corosync and the pve-cluster services on the node:
[source,bash]
----
systemctl stop pve-cluster
systemctl stop corosync
----

Start the cluster filesystem again in local mode:
[source,bash]
----
pmxcfs -l
----

Delete the corosync configuration files:
[source,bash]
----
rm /etc/pve/corosync.conf
rm /etc/corosync/*
----

You can now start the filesystem again as a normal service:
[source,bash]
----
killall pmxcfs
systemctl start pve-cluster
----

The node is now separated from the cluster. You can delete it from a remaining
node of the cluster with:
[source,bash]
----
pvecm delnode oldnode
----

If the command fails because the remaining node in the cluster lost quorum
when the now separate node exited, you may set the expected votes to 1 as a workaround:
[source,bash]
----
pvecm expected 1
----

And then repeat the 'pvecm delnode' command.

Now switch back to the separated node and delete all remaining files left
from the old cluster. This ensures that the node can be added to another
cluster again without problems.

[source,bash]
----
rm /var/lib/corosync/*
----

As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.
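
A minimal sketch of that cleanup, with `NODENAME` as a placeholder for the
separated node's name:

[source,bash]
----
# remove the stale node directory from the cluster file system
rm -rf /etc/pve/nodes/NODENAME
----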

CAUTION: The node's SSH keys are still in the 'authorized_key' file. This means
the nodes can still connect to each other with public key authentication. This
should be fixed by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.

Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.

[quote, from Wikipedia, Quorum (distributed computing)]
____
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
____

In case of network partitioning, state changes require that a
majority of nodes be online. The cluster switches to read-only mode
if it loses quorum.

NOTE: {pve} assigns a single vote to each node by default.

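As a rough illustration of what "majority" means in terms of votes (a sketch,
not a {pve} command; assuming the default of one vote per node):

[source,bash]
----
# votes required for quorum with N total votes: floor(N/2) + 1
N=5
echo $(( N / 2 + 1 ))   # prints 3: a 5-node cluster stays quorate while 3 nodes are online
----
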
Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).

[[cluster-network-requirements]]
Network Requirements
~~~~~~~~~~~~~~~~~~~~
This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. While corosync can also use unicast for
communication between nodes, it is **highly recommended** to have a multicast
capable network. The network should not be used heavily by other members;
ideally corosync runs on its own network. *Never* share it with the network
where your storage communicates.

Before setting up a cluster, it is good practice to check if the network is fit
for that purpose.

* Ensure that all nodes are in the same subnet. This must only be true for the
  network interfaces used for cluster communication (corosync).

* Ensure all nodes can reach each other over those interfaces; using `ping` is
  enough for a basic test.

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

Your network is not ready for clustering if any of these tests fails. Recheck
your network configuration. Switches in particular are notorious for having
multicast disabled by default or IGMP snooping enabled with no IGMP querier
active.

In smaller clusters it is also an option to use unicast if you really cannot get
multicast to work.
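
One way to do this (an assumption, not covered further in this section) is to
add the `transport` option to the `totem` section of `corosync.conf`, following
the <<edit-corosync-conf,edit the corosync.conf file>> workflow described below:

----
totem {
  # keep your existing totem settings and only add this line for unicast (UDPU)
  transport: udpu
}
----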

Separate Cluster Network
~~~~~~~~~~~~~~~~~~~~~~~~

When creating a cluster without any parameters, the cluster network is generally
shared with the Web UI and the VMs and their traffic. Depending on your setup,
even storage traffic may get sent over the same network. It is recommended to
change that, as corosync is a time-critical real-time application.

Setting Up A New Network
^^^^^^^^^^^^^^^^^^^^^^^^

First you have to set up a new network interface. It should be on a physically
separate network. Ensure that your network fulfills the
<<cluster-network-requirements,cluster network requirements>>.
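
For example, a static interface definition in `/etc/network/interfaces` for
such a dedicated cluster network might look as follows (the interface name is
an assumption; the address matches the 10.10.10.1/25 example used below):

----
# dedicated corosync/cluster network
auto eno2
iface eno2 inet static
    address 10.10.10.1
    netmask 255.255.255.128
----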

Separate On Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is possible through the 'ring0_addr' and 'bindnet0_addr' parameters of
the 'pvecm create' command used for creating a new cluster.

If you have set up an additional NIC with a static address on 10.10.10.1/25
and want to send and receive all cluster communication over this interface,
you would execute:

[source,bash]
----
pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
----

To check if everything is working properly, execute:
[source,bash]
----
systemctl status corosync
----

[[separate-cluster-net-after-creation]]
Separate After Cluster Creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also do this if you have already created a cluster and want to switch
its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
have to restart corosync and come up one after the other on the new network.

Check how to <<edit-corosync-conf,edit the corosync.conf file>> first.
Then open it and you should see a file similar to:

----
logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: due
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: tre
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: uno
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 3
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.30.50
    ringnumber: 0
  }

}
----

The first thing you want to do is add the 'name' properties in the node entries
if you do not see them already. Those *must* match the node name.

Then replace the addresses of the 'ring0_addr' properties with the new
addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes.

In my example I want to switch my cluster communication to the 10.10.10.1/25
network. So I replace all 'ring0_addr' respectively. I also set the bindnetaddr
in the totem section of the config to an address of the new network. It can be
any address from the subnet configured on the new network interface.

After you increased the 'config_version' property, the new configuration file
should look like:

----

logging {
  debug: off
  to_syslog: yes
}

nodelist {

  node {
    name: due
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }

  node {
    name: tre
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }

  node {
    name: uno
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: thomas-testcluster
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }

}
----

Now, after a final check that all the changed information is correct, we save it
and refer again to the <<edit-corosync-conf,edit corosync.conf file>> section to
learn how to bring it into effect.

As our change cannot be applied live by corosync, we have to do a restart.

On a single node execute:
[source,bash]
----
systemctl restart corosync
----

Now check if everything is fine:

[source,bash]
----
systemctl status corosync
----

If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.

Redundant Ring Protocol
~~~~~~~~~~~~~~~~~~~~~~~
To avoid a single point of failure you should implement counter measures.
This can be done on the hardware and operating system level through network bonding.

Corosync itself also offers a possibility to add redundancy through the
so-called 'Redundant Ring Protocol'. This protocol allows running a second totem
ring on another network. This network should be physically separated from the
other ring's network to actually increase availability.

RRP On Cluster Creation
~~~~~~~~~~~~~~~~~~~~~~~

The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
'ringX_addr' and 'rrp_mode', which can be used for RRP configuration.

NOTE: See the <<corosync-conf-glossary,glossary>> if you do not know what each parameter means.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The single difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
Further, set the `rrp_mode` to `passive`; this is the only stable mode.

Then add to each node entry in the `nodelist` section its new `ring1_addr`
property with the node's additional ring address.

So if you have two networks, one on the 10.10.10.1/24 and the other on the
10.10.20.1/24 subnet, the final configuration file should look like:

----
totem {
  cluster_name: tweak
  config_version: 9
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
}

nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}

[...] # other remaining config sections here

----

Bring it into effect as described in the
<<edit-corosync-conf,edit the corosync.conf file>> section.

This is a change which cannot take effect live and needs at least a restart
of corosync. A restart of the whole cluster is recommended.

If you cannot reboot the whole cluster, ensure that no High Availability services
are configured and then stop the corosync service on all nodes. After corosync is
stopped on all nodes, start it one after the other again.

Corosync Configuration
----------------------

The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
To read more about it, check the corosync.conf man page:
[source,bash]
----
man corosync.conf
----

For node membership you should always use the `pvecm` tool provided by {pve}.
You may have to edit the configuration file manually for other changes.
Here are a few best practice tips for doing this.

[[edit-corosync-conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always straightforward. There are
two on each cluster, one in `/etc/pve/corosync.conf` and the other in
`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
propagate the changes to the local one, but not vice versa.

The configuration will get updated automatically as soon as the file changes.
This means changes which can be integrated in a running corosync will take
effect instantly. So you should always make a copy and edit that instead, to
avoid triggering unwanted changes from an intermediate save.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
----

Then open the config file with your favorite editor; `nano` and `vim.tiny` are
preinstalled on {pve}, for example.

NOTE: Always increment the 'config_version' number on configuration changes;
omitting this can lead to problems.

After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes problems in other ways.

[source,bash]
----
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
----

Then move the new configuration file over the old one:
[source,bash]
----
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
----

You may check with the commands
[source,bash]
----
systemctl status corosync
journalctl -b -u corosync
----

whether the change could be applied automatically. If not, you may have to restart the
corosync service via:
[source,bash]
----
systemctl restart corosync
----

On errors, check the troubleshooting section below.

Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When corosync starts to fail and you get the following message in the system log:

----
[...]
corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
 'configuration error: nodelist or quorum.expected_votes must be configured!'
[...]
----

it means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.
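
A quick way to verify this on the affected node is to test name resolution for
the configured name (a sketch; `NODENAME` is a placeholder for the value used
as 'ringX_addr'):

[source,bash]
----
# the name used as ringX_addr must resolve on every node
getent hosts NODENAME
----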

Write Configuration When Not Quorate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
know what you are doing, use:
[source,bash]
----
pvecm expected 1
----

This sets the expected vote count to 1 and makes the cluster quorate. You can
now fix your configuration, or revert it back to the last working backup.

This is not enough if corosync cannot start anymore. In that case it is best to
edit the local copy of the corosync configuration in '/etc/corosync/corosync.conf'
so that corosync can start again. Ensure that on all nodes this configuration has
the same content to avoid split-brain situations. If you are not sure what went
wrong, it's best to ask the Proxmox Community to help you.

[[corosync-conf-glossary]]
Corosync Configuration Glossary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ringX_addr::
This names the different ring addresses for the corosync totem rings used for
the cluster communication.

bindnetaddr::
Defines the interface the ring should bind to. It may be any address of
the subnet configured on the interface we want to use. In general, it is
recommended to just use an address the node uses on this interface.

rrp_mode::
Specifies the mode of the redundant ring protocol and may be passive, active or
none. Note that use of active is highly experimental and not officially
supported. Passive is the preferred mode; it may double the cluster
communication throughput and increases availability.

Cluster Cold Start
------------------

It is obvious that a cluster is not quorate when all nodes are
offline. This is a common case after a power failure.

NOTE: It is always a good idea to use an uninterruptible power supply
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.

On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
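
For example, the `onboot` flag can be set for a VM from the command line like
this (VMID 100 is a placeholder; `pct set` offers the same option for containers):

[source,bash]
----
# start this guest automatically once the node is up and quorate
qm set 100 --onboot 1
----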

When you turn on nodes, or when power comes back after a power failure,
it is likely that some nodes boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.

Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.

It makes a difference if a guest is online or offline, or if it has
local resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].

Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).

Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no
one is eavesdropping on it.

NOTE: Storage migration does not follow this setting. Currently, it
always sends the storage content over a secure channel.

Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks where you can transfer 10 Gbps or more.
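
For instance, if you decide to use the unencrypted channel cluster-wide, the
`migration` property in `/etc/pve/datacenter.cfg` could be set as follows (a
sketch; see the Migration Network example below for the full syntax including
a dedicated network):

----
# send migration traffic over the unencrypted channel
migration: insecure
----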

Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
takes place to send the migration traffic. This is not optimal because
sensitive cluster traffic can be disrupted and this network may not
have the best bandwidth available on the node.

Setting the migration network parameter allows the use of a dedicated
network for the entire migration traffic. In addition to the memory,
this also affects the storage traffic for offline migrations.

The migration network is set as a network in the CIDR notation. This
has the advantage that you do not have to set individual IP addresses
for each node. {pve} can determine the real address on the
destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has one,
but only one IP in the respective network.


Example
^^^^^^^

We assume that we have a three-node setup with three separate
networks. One for public communication with the Internet, one for
cluster communication and a very fast one, which we want to use as a
dedicated network for migration.

A network configuration for such a setup might look as follows:

----
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
    address 192.X.Y.57
    netmask 255.255.250.0
    gateway 192.X.Y.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

# cluster network
auto eno2
iface eno2 inet static
    address 10.1.1.1
    netmask 255.255.255.0

# fast network
auto eno3
iface eno3 inet static
    address 10.1.2.1
    netmask 255.255.255.0
----

Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
parameter of the command line tool:

----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
----

To configure this as the default network for all migrations in the
cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
file:

----
# use dedicated migration network
migration: secure,network=10.1.2.0/24
----

NOTE: The migration type must always be set when the migration network
gets set in `/etc/pve/datacenter.cfg`.

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]