10 pvecm - Proxmox VE Cluster Manager
15 include::pvecm.1-synopsis.adoc[]
27 The {PVE} cluster manager `pvecm` is a tool to create a group of
28 physical servers. Such a group is called a *cluster*. We use the
29 http://www.corosync.org[Corosync Cluster Engine] for reliable group
30 communication, and such clusters can consist of up to 32 physical nodes
(probably more, depending on network latency).
33 `pvecm` can be used to create a new cluster, join nodes to a cluster,
34 leave the cluster, get status information and do various other cluster
35 related tasks. The **P**rox**m**o**x** **C**luster **F**ile **S**ystem (``pmxcfs'')
is used to transparently distribute the cluster configuration to all cluster
nodes.
39 Grouping nodes into a cluster has the following advantages:
41 * Centralized, web based management
43 * Multi-master clusters: each node can do all management tasks
45 * `pmxcfs`: database-driven file system for storing configuration files,
46 replicated in real-time on all nodes using `corosync`.
* Easy migration of virtual machines and containers between physical
hosts
53 * Cluster-wide services like firewall and HA
Requirements
------------

* All nodes must be able to connect to each other via UDP ports 5404 and 5405
62 * Date and time have to be synchronized.
64 * SSH tunnel on TCP port 22 between nodes is used.
66 * If you are interested in High Availability, you need to have at
least three nodes for reliable quorum. All nodes should have the
same version.
70 * We recommend a dedicated NIC for the cluster traffic, especially if
71 you use shared storage.
73 * Root password of a cluster node is required for adding nodes.
NOTE: It is not possible to mix {pve} 3.x and earlier with {pve} 4.x cluster
nodes.
NOTE: While it's possible to mix {pve} 4.4 and {pve} 5.0 nodes, this is not
supported as a production configuration and should only be used temporarily,
while upgrading the whole cluster from one major version to another.
82 NOTE: Running a cluster of {pve} 6.x with earlier versions is not possible. The
83 cluster protocol (corosync) between {pve} 6.x and earlier versions changed
84 fundamentally. The corosync 3 packages for {pve} 5.4 are only intended for the
85 upgrade procedure to {pve} 6.0.
Preparing Nodes
---------------

First, install {PVE} on all nodes. Make sure that each node is
92 installed with the final hostname and IP configuration. Changing the
93 hostname and IP is not possible after cluster creation.
Currently, cluster creation can either be done on the console (login via
`ssh`) or through the API, for which we have a GUI implementation (__Datacenter ->
While it's common to reference all node names and their IPs in `/etc/hosts` (or
100 make their names resolvable through other means), this is not necessary for a
101 cluster to work. It may be useful however, as you can then connect from one node
102 to the other with SSH via the easier to remember node name (see also
xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
recommend referencing nodes by their IP addresses in the cluster configuration.
107 [[pvecm_create_cluster]]
Create the Cluster
------------------

Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
This name cannot be changed later. The cluster name follows the same rules as
node names.
116 hp1# pvecm create CLUSTERNAME
119 NOTE: It is possible to create multiple clusters in the same physical or logical
120 network. Use unique cluster names if you do so. To avoid human confusion, it is
also recommended to choose different names even if clusters do not share the
cluster network.
124 To check the state of your cluster use:
130 Multiple Clusters In Same Network
131 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
133 It is possible to create multiple clusters in the same physical or logical
network. Each such cluster must have a unique name; this not only helps admins
distinguish which cluster they are currently operating on, it is also required
to avoid possible clashes in the cluster communication stack.
138 While the bandwidth requirement of a corosync cluster is relatively low, the
latency of packets and the packets per second (PPS) rate is the limiting
140 factor. Different clusters in the same network can compete with each other for
141 these resources, so it may still make sense to use separate physical network
142 infrastructure for bigger clusters.
144 [[pvecm_join_node_to_cluster]]
145 Adding Nodes to the Cluster
146 ---------------------------
148 Login via `ssh` to the node you want to add.
151 hp2# pvecm add IP-ADDRESS-CLUSTER
154 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
155 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
CAUTION: A new node cannot hold any VMs, because you would get
conflicts about identical VM IDs. Also, all existing configuration in
`/etc/pve` is overwritten when you join a new node to the cluster. To
work around this, use `vzdump` to back up and restore to a different VMID
after adding the node to the cluster.
163 To check the state of the cluster use:
169 .Cluster status after adding 4 nodes
174 Date: Mon Apr 20 12:30:13 2015
175 Quorum provider: corosync_votequorum
181 Votequorum information
182 ~~~~~~~~~~~~~~~~~~~~~~
189 Membership information
190 ~~~~~~~~~~~~~~~~~~~~~~
192 0x00000001 1 192.168.15.91
193 0x00000002 1 192.168.15.92 (local)
194 0x00000003 1 192.168.15.93
195 0x00000004 1 192.168.15.94
If you only want a list of all nodes, use:
204 .List nodes in a cluster
208 Membership information
209 ~~~~~~~~~~~~~~~~~~~~~~
217 [[pvecm_adding_nodes_with_separated_cluster_network]]
218 Adding Nodes With Separated Cluster Network
219 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
221 When adding a node to a cluster with a separated cluster network you need to
use the 'link0' parameter to set the node's address on that network:
pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0
229 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
230 kronosnet transport layer, also use the 'link1' parameter.
233 Remove a Cluster Node
234 ---------------------
CAUTION: Read the procedure carefully before proceeding, as it may not
be what you want or need.
239 Move all virtual machines from the node. Make sure you have no local
240 data or backups you want to keep, or save them accordingly.
241 In the following example we will remove the node hp4 from the cluster.
243 Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
244 command to identify the node ID to remove:
249 Membership information
250 ~~~~~~~~~~~~~~~~~~~~~~
At this point, you must power off hp4 and
make sure that it will not power on again (in the network) as it is.
263 IMPORTANT: As said above, it is critical to power off the node
264 *before* removal, and make sure that it will *never* power on again
265 (in the existing cluster network) as it is.
If you power on the node as it is, the cluster will be messed up, and
it could be difficult to restore a clean cluster state.
269 After powering off the node hp4, we can safely remove it from the cluster.
272 hp1# pvecm delnode hp4
If the operation succeeds, no output is returned; just check the node
list again with `pvecm nodes` or `pvecm status`. You should see
284 Date: Mon Apr 20 12:44:28 2015
285 Quorum provider: corosync_votequorum
291 Votequorum information
292 ~~~~~~~~~~~~~~~~~~~~~~
299 Membership information
300 ~~~~~~~~~~~~~~~~~~~~~~
302 0x00000001 1 192.168.15.90 (local)
303 0x00000002 1 192.168.15.91
304 0x00000003 1 192.168.15.92
If, for whatever reason, you want this server to join the same cluster again, you have to:
310 * reinstall {pve} on it from scratch
312 * then join it, as explained in the previous section.
314 NOTE: After removal of the node, its SSH fingerprint will still reside in the
315 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
316 a node with the same IP or hostname, run `pvecm updatecerts` once on the
re-added node to update its fingerprint cluster-wide.
319 [[pvecm_separate_node_without_reinstall]]
320 Separate A Node Without Reinstalling
321 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CAUTION: This is *not* the recommended method, proceed with caution. Use the
method mentioned above if you're unsure.
You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to any shared storage! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the cluster
boundary. Furthermore, it may also lead to VMID conflicts.
It is suggested that you create a new storage, where only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
pool, to name a few examples. It is just important that the exact same storage
does not get accessed by multiple clusters. After setting up this storage, move
all data from the node and its VMs to it. Then you are ready to separate the
node from the cluster.
340 WARNING: Ensure all shared resources are cleanly separated! Otherwise you will
341 run into conflicts and problems.
343 First stop the corosync and the pve-cluster services on the node:
346 systemctl stop pve-cluster
347 systemctl stop corosync
350 Start the cluster filesystem again in local mode:
356 Delete the corosync configuration files:
359 rm /etc/pve/corosync.conf
You can now start the filesystem again as a normal service:
367 systemctl start pve-cluster
The node is now separated from the cluster. You can delete it from any
remaining node of the cluster with:
374 pvecm delnode oldnode
If the command fails because the remaining node in the cluster lost quorum
when the now separated node exited, you may set the expected votes to 1 as a workaround:
384 And then repeat the 'pvecm delnode' command.
Now switch back to the separated node and delete all the remaining cluster
files on it. This ensures that the node can be added to another
cluster again without problems.
392 rm /var/lib/corosync/*
As the configuration files from the other nodes are still in the cluster
filesystem, you may want to clean those up too. Simply remove the whole
directory '/etc/pve/nodes/NODENAME' recursively, but check three times that
you used the correct one before deleting it.
CAUTION: The node's SSH keys will remain in the 'authorized_keys' file. This
means that the nodes can still connect to each other with public key
authentication. You should fix this by removing the respective keys from the
'/etc/pve/priv/authorized_keys' file.
Quorum
------

{pve} uses a quorum-based technique to provide a consistent state among
all cluster nodes.
412 [quote, from Wikipedia, Quorum (distributed computing)]
A quorum is the minimum number of votes that a distributed transaction
has to obtain in order to be allowed to perform an operation in a
distributed system.
In case of network partitioning, state changes require that a
majority of nodes are online. The cluster switches to read-only mode
if it loses quorum.
423 NOTE: {pve} assigns a single vote to each node by default.
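With one vote per node, the majority needed for quorum is simply '(N/2)+1' using integer division. A small illustrative sketch (the `quorum` helper is ours, not a `pvecm` command):

```shell
# Majority of N single-vote nodes: floor(N/2) + 1 votes are needed.
quorum() { echo $(( $1 / 2 + 1 )); }

for n in 2 3 4 5; do
  echo "$n nodes: $(quorum "$n") votes needed for quorum"
done
```

Note how a 2-node cluster needs both votes, which is why a failed node there immediately costs quorum.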
Cluster Network
---------------

The cluster network is the core of a cluster. All messages sent over it have to
be delivered reliably to all nodes in their respective order. In {pve} this
part is done by corosync, an implementation of a high-performance, low-overhead,
high-availability development toolkit. It serves our decentralized
configuration file system (`pmxcfs`).
435 [[pvecm_cluster_network_requirements]]
Requirements
~~~~~~~~~~~~

This needs a reliable network with latencies under 2 milliseconds (LAN
performance) to work properly. The network should not be used heavily by other
members; ideally corosync runs on its own network. Do not use a shared network
441 for corosync and storage (except as a potential low-priority fallback in a
442 xref:pvecm_redundancy[redundant] configuration).
444 Before setting up a cluster, it is good practice to check if the network is fit
445 for that purpose. To make sure the nodes can connect to each other on the
cluster network, you can test the connectivity between them with the `ping`
tool.
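If you want to script such a check, the latency part can be automated. The following is only an illustrative sketch: the `avg_rtt` helper is not a {pve} tool, and the canned summary line stands in for real `ping` output (iputils format assumed).

```shell
# avg_rtt: extract the average round-trip time in ms from the summary
# line printed by iputils `ping` ("rtt min/avg/max/mdev = ..." format).
avg_rtt() {
  awk -F'/' '/min\/avg\/max/ { print $5 }'
}

# Demo on a canned summary line; on a real node you would run e.g.
#   ping -c 4 -q OTHER-NODE-IP | avg_rtt
# and verify the result stays well below 2 ms.
echo 'rtt min/avg/max/mdev = 0.045/0.058/0.078/0.012 ms' | avg_rtt
```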
449 If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
450 be generated - no manual action is required.
452 NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
453 Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
454 communication, which, for now, only supports regular UDP unicast.
456 CAUTION: You can still enable Multicast or legacy unicast by setting your
457 transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
458 but keep in mind that this will disable all cryptography and redundancy support.
459 This is therefore not recommended.
461 Separate Cluster Network
462 ~~~~~~~~~~~~~~~~~~~~~~~~
When creating a cluster without any parameters, the corosync cluster network is
generally shared with the web UI and the VMs and their traffic. Depending on
your setup, even storage traffic may get sent over the same network. It's
recommended to change that, as corosync is a time-critical, real-time
application.
470 Setting Up A New Network
471 ^^^^^^^^^^^^^^^^^^^^^^^^
473 First you have to set up a new network interface. It should be on a physically
474 separate network. Ensure that your network fulfills the
475 xref:pvecm_cluster_network_requirements[cluster network requirements].
477 Separate On Cluster Creation
478 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
480 This is possible via the 'linkX' parameters of the 'pvecm create'
481 command used for creating a new cluster.
483 If you have set up an additional NIC with a static address on 10.10.10.1/25,
and want to send and receive all cluster communication over this interface,
you would execute:
489 pvecm create test --link0 10.10.10.1
492 To check if everything is working properly execute:
495 systemctl status corosync
498 Afterwards, proceed as described above to
499 xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
501 [[pvecm_separate_cluster_net_after_creation]]
502 Separate After Cluster Creation
503 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
505 You can do this if you have already created a cluster and want to switch
506 its communication to another network, without rebuilding the whole cluster.
This change may lead to short periods of quorum loss in the cluster, as nodes
508 have to restart corosync and come up one after the other on the new network.
510 Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
511 Then, open it and you should see a file similar to:
545 provider: corosync_votequorum
549 cluster_name: testcluster
NOTE: `ringX_addr` actually specifies a corosync *link address*. The name
"ring" is a remnant of older corosync versions that is kept for backwards
compatibility.
565 The first thing you want to do is add the 'name' properties in the node entries
566 if you do not see them already. Those *must* match the node name.
Then replace all addresses from the 'ring0_addr' properties of all nodes with
the new addresses. You may use plain IP addresses or hostnames here. If you use
hostnames, ensure that they are resolvable from all nodes (see also
xref:pvecm_corosync_addresses[Link Address Types]).
In this example, we want to switch the cluster communication to the
10.10.10.0/25 network, so we change the 'ring0_addr' of each node respectively.
NOTE: The exact same procedure can be used to change other 'ringX_addr' values
as well. However, we recommend only changing one link address at a time, to make
it easier to recover if something goes wrong.
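If you prefer scripting the address swap over hand-editing, something like the following could rewrite one node's 'ring0_addr' in a working copy of the file. This is only a sketch: the `set_ring0_addr` helper is ours, and it relies on the `name:`/`ring0_addr:` layout shown in this chapter, with `name:` appearing first in each node block.

```shell
# set_ring0_addr NODENAME NEW_ADDR, reading a corosync.conf copy on
# stdin and writing the modified file to stdout. Four-space indentation
# inside node blocks is assumed, as in the examples in this chapter.
set_ring0_addr() {
  awk -v name="$1" -v addr="$2" '
    $1 == "name:"                       { cur = $2 }
    $1 == "ring0_addr:" && cur == name  { $0 = "    ring0_addr: " addr }
    { print }
  '
}

# Demo on a minimal node block:
printf 'node {\n    name: due\n    ring0_addr: 10.10.1.2\n}\n' \
  | set_ring0_addr due 10.10.10.2
```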
After we increase the 'config_version' property, the new configuration file
should look like:
595 ring0_addr: 10.10.10.2
602 ring0_addr: 10.10.10.3
609 ring0_addr: 10.10.10.1
615 provider: corosync_votequorum
619 cluster_name: testcluster
631 Then, after a final check if all changed information is correct, we save it and
632 once again follow the xref:pvecm_edit_corosync_conf[edit corosync.conf file]
633 section to bring it into effect.
635 The changes will be applied live, so restarting corosync is not strictly
636 necessary. If you changed other settings as well, or notice corosync
637 complaining, you can optionally trigger a restart.
639 On a single node execute:
643 systemctl restart corosync
646 Now check if everything is fine:
650 systemctl status corosync
If corosync runs correctly again, restart it on all other nodes too.
They will then join the cluster membership one by one on the new network.
656 [[pvecm_corosync_addresses]]
Corosync Addresses
~~~~~~~~~~~~~~~~~~

A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
661 `corosync.conf`) can be specified in two ways:
663 * **IPv4/v6 addresses** will be used directly. They are recommended, since they
664 are static and usually not changed carelessly.
* **Hostnames** will be resolved using `getaddrinfo`, which means that by
default, IPv6 addresses will be used first, if available (see also
`man gai.conf`). Keep this in mind, especially when upgrading an existing
cluster to IPv6.
CAUTION: Hostnames should be used with care, since the address they
resolve to can be changed without touching corosync or the node it runs on,
which may lead to a situation where an address is changed without thinking
about implications for corosync.
A separate, static hostname specifically for corosync is recommended, if
hostnames are preferred. Also, make sure that every node in the cluster can
resolve all hostnames correctly.
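For example, such dedicated corosync names could be maintained in `/etc/hosts` on every node; the addresses and names below are purely illustrative:

```text
# /etc/hosts (same entries on every node)
10.10.10.1 corosync1
10.10.10.2 corosync2
10.10.10.3 corosync3
```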
680 Since {pve} 5.1, while supported, hostnames will be resolved at the time of
681 entry. Only the resolved IP is then saved to the configuration.
Nodes that joined the cluster on earlier versions likely still use their
unresolved hostname in `corosync.conf`. It might be a good idea to replace
them with IPs or a separate hostname, as mentioned above.
[[pvecm_redundancy]]
Corosync Redundancy
-------------------

Corosync supports redundant networking via its integrated kronosnet layer by
693 default (it is not supported on the legacy udp/udpu transports). It can be
694 enabled by specifying more than one link address, either via the '--linkX'
695 parameters of `pvecm` (while creating a cluster or adding a new node) or by
696 specifying more than one 'ringX_addr' in `corosync.conf`.
698 NOTE: To provide useful failover, every link should be on its own
699 physical network connection.
701 Links are used according to a priority setting. You can configure this priority
702 by setting 'knet_link_priority' in the corresponding interface section in
`corosync.conf`, or, preferably, using the 'priority' parameter when creating
704 your cluster with `pvecm`:
707 # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=20 --link1 10.20.20.1,priority=15
This would cause 'link0' to be used first, since it has the higher priority.
712 If no priorities are configured manually (or two links have the same priority),
links will be used in order of their number, with the lower number having
higher priority.
716 Even if all links are working, only the one with the highest priority will see
717 corosync traffic. Link priorities cannot be mixed, i.e. links with different
718 priorities will not be able to communicate with each other.
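The same priorities as in the `pvecm` example above could also be expressed directly in the `totem` section of `corosync.conf`; the following sketch shows only the relevant `interface` sub-sections (all other totem settings omitted):

```text
totem {
  interface {
    linknumber: 0
    knet_link_priority: 20
  }
  interface {
    linknumber: 1
    knet_link_priority: 15
  }
}
```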
Since lower priority links will not see traffic unless all higher priorities
have failed, it becomes a useful strategy to specify networks used for
other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
worst, a higher latency or more congested connection might be better than no
connection at all.
726 Adding Redundant Links To An Existing Cluster
727 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
729 To add a new link to a running configuration, first check how to
730 xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
732 Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
733 sure that your 'X' is the same for every node you add it to, and that it is
734 unique for each node.
736 Lastly, add a new 'interface', as shown below, to your `totem`
737 section, replacing 'X' with your link number chosen above.
Assuming you added a link with number 1, the new configuration file could look
like this:
754 ring0_addr: 10.10.10.2
755 ring1_addr: 10.20.20.2
762 ring0_addr: 10.10.10.3
763 ring1_addr: 10.20.20.3
770 ring0_addr: 10.10.10.1
771 ring1_addr: 10.20.20.1
777 provider: corosync_votequorum
781 cluster_name: testcluster
795 The new link will be enabled as soon as you follow the last steps to
796 xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
797 be necessary. You can check that corosync loaded the new link using:
800 journalctl -b -u corosync
It might be a good idea to test the new link by temporarily disconnecting the
old link on one node and making sure that its status remains online while
disconnected.
811 If you see a healthy cluster state, it means that your new link is being used.
814 Corosync External Vote Support
815 ------------------------------
817 This section describes a way to deploy an external voter in a {pve} cluster.
818 When configured, the cluster can sustain more node failures without
819 violating safety properties of the cluster communication.
821 For this to work there are two services involved:
* a so-called QDevice daemon which runs on each {pve} node
825 * an external vote daemon which runs on an independent server.
As a result, you can achieve higher availability, even in smaller setups (for
example 2+1 nodes).
830 QDevice Technical Overview
831 ~~~~~~~~~~~~~~~~~~~~~~~~~~
The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on an externally running third-party arbitrator's decision.
Its primary use is to allow a cluster to sustain more node failures than
standard quorum rules allow. This can be done safely as the external device
can see all nodes and thus choose only one set of nodes to give its vote.
This will only be done if said set of nodes can have quorum (again) after
receiving the third-party vote.
Currently, only 'QDevice Net' is supported as a third-party arbitrator. It is
a daemon which provides a vote to a cluster partition, if it can reach the
partition members over the network. It will only give votes to one partition
of a cluster at any time.
It's designed to support multiple clusters and is almost configuration and
state free. New clusters are handled dynamically and no configuration file
is needed on the host running a QDevice.
The only requirements for the external host are that it needs network access to
the cluster and to have a 'corosync-qnetd' package available. We provide a
package for Debian based hosts; other Linux distributions should also have a
package available through their respective package manager.
NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
TCP/IP. The daemon may even run outside of the cluster's LAN and can have
longer latencies than 2 ms.
We support QDevices for clusters with an even number of nodes and recommend
it for 2-node clusters, if they should provide higher availability.
For clusters with an odd node count, we currently discourage the use of
QDevices. The reason for this is the difference in the votes which the QDevice
provides for each cluster type. Even numbered clusters get a single additional
vote, which only increases availability, because if the QDevice
itself fails, you are in the same position as with no QDevice at all.
Now, with an odd numbered cluster size, the QDevice provides '(N-1)' votes,
where 'N' corresponds to the cluster node count. This alternative behavior
makes sense; if it had only one additional vote, the cluster could get into a
split-brain situation.
This algorithm allows for all nodes but one (and naturally the
QDevice itself) to fail.
There are two drawbacks to this:
* If the QNet daemon itself fails, no other node may fail or the cluster
immediately loses quorum. For example, in a cluster with 15 nodes, 7
could fail before the cluster becomes inquorate. But, if a QDevice is
configured here and it itself fails, **no single node** of
the 15 may fail. The QDevice acts almost as a single point of failure in
this case.
* The fact that all but one node plus QDevice may fail sounds promising at
first, but this may result in a mass recovery of HA services, which could
overload the single remaining node. Furthermore, a Ceph server will stop
providing services if only '((N-1)/2)' nodes or less remain online.
If you understand the drawbacks and implications, you can decide yourself
whether you should use this technology in an odd numbered cluster setup.
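To make the vote arithmetic above concrete, here is a small sketch computing the total votes and the resulting quorum with a QDevice, following the rules described in this section (the helper names are ours, not part of any {pve} tool):

```shell
# Votes a QDevice contributes, as described above:
# 1 for an even-sized cluster, (N-1) for an odd-sized one.
qdevice_votes() {
  [ $(( $1 % 2 )) -eq 0 ] && echo 1 || echo $(( $1 - 1 ))
}

for n in 2 3 4 5; do
  total=$(( n + $(qdevice_votes "$n") ))
  echo "$n nodes + QDevice: $total votes total, quorum at $(( total / 2 + 1 ))"
done
```

For 5 nodes the total is 9 votes with quorum at 5, so one surviving node plus the QDevice's 4 votes stays quorate, which is exactly the "all nodes but one may fail" property and its drawbacks discussed above.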
QDevice-Net Setup
~~~~~~~~~~~~~~~~~

We recommend running any daemon which provides votes to corosync-qdevice as an
unprivileged user. {pve} and Debian provide a package which is already
configured to do so.
The traffic between the daemon and the cluster must be encrypted to ensure a
safe and secure integration of the QDevice in {pve}.
902 First install the 'corosync-qnetd' package on your external server and
903 the 'corosync-qdevice' package on all cluster nodes.
After that, ensure that all the nodes in the cluster are online.
You can now easily set up your QDevice by running the following command on one
of the cluster nodes:
911 pve# pvecm qdevice setup <QDEVICE-IP>
914 The SSH key from the cluster will be automatically copied to the QDevice. You
915 might need to enter an SSH password during this step.
917 After you enter the password and all the steps are successfully completed, you
918 will see "Done". You can check the status now:
925 Votequorum information
926 ~~~~~~~~~~~~~~~~~~~~~
931 Flags: Quorate Qdevice
933 Membership information
934 ~~~~~~~~~~~~~~~~~~~~~~
935 Nodeid Votes Qdevice Name
936 0x00000001 1 A,V,NMW 192.168.22.180 (local)
937 0x00000002 1 A,V,NMW 192.168.22.181
942 which means the QDevice is set up.
944 Frequently Asked Questions
945 ~~~~~~~~~~~~~~~~~~~~~~~~~~
In case of a tie, where two same-sized cluster partitions cannot see each other
but can see the QDevice, the QDevice chooses one of those partitions randomly
and provides a vote to it.
954 Possible Negative Implications
955 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For clusters with an even node count, there are no negative implications when
setting up a QDevice. If it fails to work, you are as good as without a QDevice
at all.
961 Adding/Deleting Nodes After QDevice Setup
962 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
964 If you want to add a new node or remove an existing one from a cluster with a
965 QDevice setup, you need to remove the QDevice first. After that, you can add or
966 remove nodes normally. Once you have a cluster with an even node count again,
967 you can set up the QDevice again as described above.
Removing the QDevice
^^^^^^^^^^^^^^^^^^^^

If you used the official `pvecm` tool to add the QDevice, you can remove it
trivially by running:
976 pve# pvecm qdevice remove
981 //There is still stuff to add here
984 Corosync Configuration
985 ----------------------
987 The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
988 controls the cluster membership and its network.
989 For further information about it, check the corosync.conf man page:
995 For node membership you should always use the `pvecm` tool provided by {pve}.
996 You may have to edit the configuration file manually for other changes.
997 Here are a few best practice tips for doing this.
999 [[pvecm_edit_corosync_conf]]
Edit corosync.conf
~~~~~~~~~~~~~~~~~~

Editing the corosync.conf file is not always very straightforward. There are
1004 two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
1005 `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
1006 propagate the changes to the local one, but not vice versa.
The configuration will get updated automatically, as soon as the file changes.
This means that changes which can be integrated in a running corosync will take
effect immediately. Thus, you should always make a copy and edit that instead,
to avoid triggering unintended changes when saving the file while editing.
1015 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
Then, open the config file with your favorite editor, such as `nano` or
`vim.tiny`, which come pre-installed on every {pve} node.
NOTE: Always increment the 'config_version' number when making configuration
changes; omitting this can lead to problems.
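If you want to guard against forgetting this bump, a one-liner like the following could increment 'config_version' in your working copy. It is only a sketch (the helper name is ours, not a {pve} tool) and assumes the `config_version: N` formatting shown in this chapter:

```shell
# Increment the config_version value of a corosync.conf read on stdin.
# Two-space indentation is assumed, matching the totem examples here.
bump_config_version() {
  awk '$1 == "config_version:" { $0 = "  config_version: " ($2 + 1) } { print }'
}

# Demo on a minimal totem section:
printf 'totem {\n  config_version: 4\n}\n' | bump_config_version
```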
After making the necessary changes, create another copy of the current working
configuration file. This serves as a backup if the new configuration fails to
apply or causes other problems.
1030 cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
1033 Then move the new configuration file over the old one:
1036 mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
With the following commands you can check whether the change was applied automatically:
1042 systemctl status corosync
1043 journalctl -b -u corosync
If the change could not be applied automatically, you may have to restart the
corosync service via:
1050 systemctl restart corosync
1053 On errors check the troubleshooting section below.
Troubleshooting
~~~~~~~~~~~~~~~

Issue: 'quorum.expected_votes must be configured'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1061 When corosync starts to fail and you get the following message in the system log:
1065 corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
1066 corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
1067 'configuration error: nodelist or quorum.expected_votes must be configured!'
This means that the hostname you set for a corosync 'ringX_addr' in the
configuration could not be resolved.
1074 Write Configuration When Not Quorate
1075 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and
you understand what you are doing, use:
1084 This sets the expected vote count to 1 and makes the cluster quorate. You can
1085 now fix your configuration, or revert it back to the last working backup.
This is not enough if corosync cannot start anymore. In that case, it is best
to edit the local copy of the corosync configuration in
'/etc/corosync/corosync.conf', so that corosync can start again.
Ensure that on all nodes, this configuration has the same content to avoid
split-brain situations. If you are not sure what went wrong, it's best to ask
the Proxmox Community to help you.
1094 [[pvecm_corosync_conf_glossary]]
1095 Corosync Configuration Glossary
1096 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This names the different link addresses for the kronosnet connections between
nodes.

Cluster Cold Start
------------------

1106 It is obvious that a cluster is not quorate when all nodes are
1107 offline. This is a common case after a power failure.
1109 NOTE: It is always a good idea to use an uninterruptible power supply
1110 (``UPS'', also called ``battery backup'') to avoid this state, especially if
1113 On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
When you turn on nodes, or when power comes back after power failure,
it is likely that some nodes will boot faster than others. Please keep in
mind that guest startup is delayed until you reach quorum.
Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a
1126 cluster. There are settings to control the behavior of such
1127 migrations. This can be done via the configuration file
`datacenter.cfg` or for a specific migration via API or command line
parameters.
It makes a difference whether a guest is online or offline, or if it has
local resources (like a local disk).
For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].
Migration Type
~~~~~~~~~~~~~~

The migration type defines if the migration data should be sent over an
1144 encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to `insecure` means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
1150 Therefore, we strongly recommend using the secure channel if you do
not have full control over the network and cannot guarantee that no one
is eavesdropping on it.
1154 NOTE: Storage migration does not follow this setting. Currently, it
1155 always sends the storage content over a secure channel.
Encryption requires a lot of computing power, so this setting is often
changed to `insecure` to achieve better performance. The impact on
modern systems is lower, because they implement AES encryption in
hardware. The performance impact is particularly evident in fast
networks, where you can transfer 10 Gbps or more.
Migration Network
~~~~~~~~~~~~~~~~~

By default, {pve} uses the network in which cluster communication
1167 takes place to send the migration traffic. This is not optimal because
1168 sensitive cluster traffic can be disrupted and this network may not
1169 have the best bandwidth available on the node.
1171 Setting the migration network parameter allows the use of a dedicated
1172 network for the entire migration traffic. In addition to the memory,
1173 this also affects the storage traffic for offline migrations.
1175 The migration network is set as a network in the CIDR notation. This
1176 has the advantage that you do not have to set individual IP addresses
1177 for each node. {pve} can determine the real address on the
1178 destination node from the network specified in the CIDR form. To
enable this, the network must be specified so that each node has exactly one
IP in the respective network.
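The address selection can be pictured as a simple subnet-membership test: each node uses whichever of its own addresses falls inside the configured CIDR network. An illustrative sketch in shell (the helpers are ours, not {pve} code):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_cidr ADDR NETWORK/PREFIX: succeed if ADDR lies in the network.
in_cidr() {
  local net=${2%/*} prefix=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_cidr 10.1.2.3 10.1.2.0/24 && echo "10.1.2.3 would be picked for 10.1.2.0/24"
```

This also illustrates why each node must have exactly one address in the network: with none, the test never succeeds; with two, the choice would be ambiguous.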
Example
^^^^^^^

We assume that we have a three-node setup, with three separate
1186 networks. One for public communication with the Internet, one for
1187 cluster communication and a very fast one, which we want to use as a
1188 dedicated network for migration.
1190 A network configuration for such a setup might look as follows:
1193 iface eno1 inet manual
1197 iface vmbr0 inet static
netmask 255.255.240.0
1207 iface eno2 inet static
1209 netmask 255.255.255.0
1213 iface eno3 inet static
1215 netmask 255.255.255.0
1218 Here, we will use the network 10.1.2.0/24 as a migration network. For
1219 a single migration, you can do this using the `migration_network`
1220 parameter of the command line tool:
1223 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
1226 To configure this as the default network for all migrations in the
1227 cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
1231 # use dedicated migration network
1232 migration: secure,network=10.1.2.0/24
NOTE: The migration type must always be set when the migration network
is set in `/etc/pve/datacenter.cfg`.
1240 include::pve-copyright.adoc[]