+Cluster Network
+---------------
+
+The cluster network is the core of a cluster. All messages sent over it have to
+be delivered reliably to all nodes in their respective order. In {pve} this
+part is done by corosync, an implementation of a high-performance, low-overhead,
+high-availability development toolkit. It serves our decentralized
+configuration file system (`pmxcfs`).
+
+[[pvecm_cluster_network_requirements]]
+Network Requirements
+~~~~~~~~~~~~~~~~~~~~
+This needs a reliable network with latencies under 2 milliseconds (LAN
+performance) to work properly. The network should not be used heavily by other
+members; ideally, corosync should run on its own network. Do not use a shared
+network for corosync and storage (except as a potential low-priority fallback
+in a xref:pvecm_redundancy[redundant] configuration).
+
+Before setting up a cluster, it is good practice to check if the network is fit
+for that purpose. To make sure the nodes can connect to each other on the
+cluster network, you can test the connectivity between them with the `ping`
+tool.
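+
+For example, from one node you could ping the designated cluster network
+address of every other node (the addresses here are just placeholders matching
+the examples below):
+
+[source,bash]
+----
+ping -c 3 10.10.10.2
+ping -c 3 10.10.10.3
+----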
+
+If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
+be generated - no manual action is required.
+
+NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
+Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
+communication, which, for now, only supports regular UDP unicast.
+
+CAUTION: You can still enable Multicast or legacy unicast by setting your
+transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
+but keep in mind that this will disable all cryptography and redundancy support.
+This is therefore not recommended.
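+
+For illustration only, a minimal sketch of such a (not recommended) transport
+override in the `totem` section of `corosync.conf` could look like:
+
+----
+totem {
+  ...
+  transport: udp
+}
+----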
+
+Separate Cluster Network
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+When creating a cluster without any parameters, the corosync cluster network is
+generally shared with the Web UI and the VMs, along with their traffic.
+Depending on your setup, even storage traffic may get sent over the same
+network. It is recommended to change that, as corosync is a time-critical,
+real-time application.
+
+Setting Up A New Network
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+First you have to set up a new network interface. It should be on a physically
+separate network. Ensure that your network fulfills the
+xref:pvecm_cluster_network_requirements[cluster network requirements].
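+
+For example, on a default {pve} installation using ifupdown, such an interface
+could be configured in `/etc/network/interfaces` like this (a sketch; the
+interface name `ens20` is an assumption and depends on your hardware):
+
+----
+auto ens20
+iface ens20 inet static
+        address 10.10.10.1/25
+----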
+
+Separate On Cluster Creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This is possible via the 'linkX' parameters of the 'pvecm create'
+command used for creating a new cluster.
+
+If you have set up an additional NIC with a static address on 10.10.10.1/25,
+and want to send and receive all cluster communication over this interface,
+you would execute:
+
+[source,bash]
+----
+pvecm create test --link0 10.10.10.1
+----
+
+To check if everything is working properly, execute:
+[source,bash]
+----
+systemctl status corosync
+----
+
+Afterwards, proceed as described above to
+xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
+
+[[pvecm_separate_cluster_net_after_creation]]
+Separate After Cluster Creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can do this if you have already created a cluster and want to switch
+its communication to another network, without rebuilding the whole cluster.
+This change may lead to short durations of quorum loss in the cluster, as nodes
+have to restart corosync and come up one after the other on the new network.
+
+Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
+Then, open it and you should see a file similar to:
+
+----
+logging {
+ debug: off
+ to_syslog: yes
+}
+
+nodelist {
+
+ node {
+ name: due
+ nodeid: 2
+ quorum_votes: 1
+ ring0_addr: due
+ }
+
+ node {
+ name: tre
+ nodeid: 3
+ quorum_votes: 1
+ ring0_addr: tre
+ }
+
+ node {
+ name: uno
+ nodeid: 1
+ quorum_votes: 1
+ ring0_addr: uno
+ }
+
+}
+
+quorum {
+ provider: corosync_votequorum
+}
+
+totem {
+ cluster_name: testcluster
+ config_version: 3
+ ip_version: ipv4-6
+ secauth: on
+ version: 2
+ interface {
+ linknumber: 0
+ }
+
+}
+----
+
+NOTE: `ringX_addr` actually specifies a corosync *link address*, the name "ring"
+is a remnant of older corosync versions that is kept for backwards
+compatibility.
+
+The first thing you want to do is add the 'name' properties in the node entries
+if you do not see them already. Those *must* match the node name.
+
+Then replace all addresses from the 'ring0_addr' properties of all nodes with
+the new addresses. You may use plain IP addresses or hostnames here. If you use
+hostnames, ensure that they are resolvable from all nodes (see also
+xref:pvecm_corosync_addresses[Link Address Types]).
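+
+For example, you could verify resolution of the example hostnames on every
+node with:
+
+[source,bash]
+----
+getent hosts uno due tre
+----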
+
+In this example, we want to switch the cluster communication to the
+10.10.10.1/25 network, so we replace all 'ring0_addr' values accordingly.
+
+NOTE: The exact same procedure can be used to change other 'ringX_addr' values
+as well. However, we recommend not changing multiple addresses at once, to make
+it easier to recover if something goes wrong.
+
+After we increase the 'config_version' property, the new configuration file
+should look like:
+
+----
+logging {
+ debug: off
+ to_syslog: yes
+}
+
+nodelist {
+
+ node {
+ name: due
+ nodeid: 2
+ quorum_votes: 1
+ ring0_addr: 10.10.10.2
+ }
+
+ node {
+ name: tre
+ nodeid: 3
+ quorum_votes: 1
+ ring0_addr: 10.10.10.3
+ }
+
+ node {
+ name: uno
+ nodeid: 1
+ quorum_votes: 1
+ ring0_addr: 10.10.10.1
+ }
+
+}
+
+quorum {
+ provider: corosync_votequorum
+}
+
+totem {
+ cluster_name: testcluster
+ config_version: 4
+ ip_version: ipv4-6
+ secauth: on
+ version: 2
+ interface {
+ linknumber: 0
+ }
+
+}
+----
+
+Then, after a final check that all the changed information is correct, we save
+it and once again follow the xref:pvecm_edit_corosync_conf[edit corosync.conf file]
+section to bring it into effect.
+
+The changes will be applied live, so restarting corosync is not strictly
+necessary. If you changed other settings as well, or notice corosync
+complaining, you can optionally trigger a restart.
+
+On a single node execute:
+
+[source,bash]
+----
+systemctl restart corosync
+----
+
+Now check if everything is fine:
+
+[source,bash]
+----
+systemctl status corosync
+----
+
+If corosync runs correctly again, restart it on all other nodes as well.
+They will then join the cluster membership one by one on the new network.
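+
+For example, from the first node, the remaining nodes could be restarted one
+after the other (a sketch using the example node names, assuming root SSH
+access between the nodes):
+
+[source,bash]
+----
+for node in due tre; do
+    ssh root@$node 'systemctl restart corosync'
+done
+----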
+
+[[pvecm_corosync_addresses]]
+Corosync addresses
+~~~~~~~~~~~~~~~~~~
+
+A corosync link address (for backwards compatibility denoted by 'ringX_addr' in
+`corosync.conf`) can be specified in two ways:
+
+* **IPv4/v6 addresses** will be used directly. They are recommended, since they
+are static and usually not changed carelessly.
+
+* **Hostnames** will be resolved using `getaddrinfo`, which means that by
+default, IPv6 addresses will be used first, if available (see also
+`man gai.conf`). Keep this in mind, especially when upgrading an existing
+cluster to IPv6.
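+
+To see which addresses a hostname resolves to, and in which order, you can
+query `getaddrinfo` via `getent` (NODENAME is a placeholder):
+
+[source,bash]
+----
+getent ahosts NODENAME
+----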
+
+CAUTION: Hostnames should be used with care, since the address they
+resolve to can be changed without touching corosync or the node it runs on -
+which may lead to a situation where an address is changed without considering
+the implications for corosync.
+
+A separate, static hostname specifically for corosync is recommended, if
+hostnames are preferred. Also, make sure that every node in the cluster can
+resolve all hostnames correctly.
+
+Since {pve} 5.1, hostnames, while still supported, will be resolved at the time
+of entry. Only the resolved IP is then saved to the configuration.
+
+Nodes that joined the cluster on earlier versions likely still use their
+unresolved hostname in `corosync.conf`. It might be a good idea to replace
+them with IPs or a separate hostname, as mentioned above.
+
+
+[[pvecm_redundancy]]
+Corosync Redundancy
+-------------------
+
+Corosync supports redundant networking via its integrated Kronosnet layer by
+default (it is not supported on the legacy udp/udpu transports). It can be
+enabled by specifying more than one link address, either via the '--linkX'
+parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
+adding a new node) or by specifying more than one 'ringX_addr' in
+`corosync.conf`.
+
+NOTE: To provide useful failover, every link should be on its own
+physical network connection.
+
+Links are used according to a priority setting. You can configure this priority
+by setting 'knet_link_priority' in the corresponding interface section in
+`corosync.conf`, or, preferably, using the 'priority' parameter when creating
+your cluster with `pvecm`:
+
+----
+ # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
+----
+
+This would cause 'link1' to be used first, since it has the higher priority.
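+
+The equivalent configuration, set directly in `corosync.conf`, could look like
+this (a sketch showing only the relevant `interface` subsections of `totem`):
+
+----
+totem {
+  interface {
+    linknumber: 0
+    knet_link_priority: 15
+  }
+  interface {
+    linknumber: 1
+    knet_link_priority: 20
+  }
+}
+----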
+
+If no priorities are configured manually (or two links have the same priority),
+links will be used in order of their number, with the lower number having higher
+priority.
+
+Even if all links are working, only the one with the highest priority will see
+corosync traffic. Link priorities cannot be mixed, i.e. links with different
+priorities will not be able to communicate with each other.
+
+Since lower priority links will not see traffic unless all higher priorities
+have failed, it becomes a useful strategy to specify even networks used for
+other tasks (VMs, storage, etc.) as low-priority links. If worst comes to
+worst, a higher-latency or more congested connection might be better than no
+connection at all.
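+
+For example, a storage network could be included as a pure fallback link by
+giving it a lower priority (the addresses here are assumptions):
+
+----
+ # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=20 --link1 192.168.2.1,priority=5
+----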
+
+Adding Redundant Links To An Existing Cluster
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To add a new link to a running configuration, first check how to
+xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
+
+Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
+sure that your 'X' is the same for every node you add it to, and that the
+address itself is unique for each node.
+
+Lastly, add a new 'interface' section, as shown below, to your `totem`
+section, replacing 'X' with the link number chosen above.
+
+Assuming you added a link with number 1, the new configuration file could look
+like this:
+
+----
+logging {
+ debug: off
+ to_syslog: yes
+}
+
+nodelist {
+
+ node {
+ name: due
+ nodeid: 2
+ quorum_votes: 1
+ ring0_addr: 10.10.10.2
+ ring1_addr: 10.20.20.2
+ }
+
+ node {
+ name: tre
+ nodeid: 3
+ quorum_votes: 1
+ ring0_addr: 10.10.10.3
+ ring1_addr: 10.20.20.3
+ }
+
+ node {
+ name: uno
+ nodeid: 1
+ quorum_votes: 1
+ ring0_addr: 10.10.10.1
+ ring1_addr: 10.20.20.1
+ }
+
+}
+
+quorum {
+ provider: corosync_votequorum
+}
+
+totem {
+ cluster_name: testcluster
+ config_version: 4
+ ip_version: ipv4-6
+ secauth: on
+ version: 2
+ interface {
+ linknumber: 0
+ }
+ interface {
+ linknumber: 1
+ }
+}
+----
+
+The new link will be enabled as soon as you follow the last steps to
+xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
+be necessary. You can check that corosync loaded the new link using:
+
+----
+journalctl -b -u corosync
+----
+
+It might be a good idea to test the new link by temporarily disconnecting the
+old link on one node and making sure that its status remains online while
+disconnected:
+
+----
+pvecm status
+----
+
+If you see a healthy cluster state, it means that your new link is being used.
+
+
+Corosync External Vote Support
+------------------------------
+
+This section describes a way to deploy an external voter in a {pve} cluster.
+When configured, the cluster can sustain more node failures without
+violating safety properties of the cluster communication.
+
+For this to work, there are two services involved:
+
+* a so-called QDevice daemon which runs on each {pve} node
+
+* an external vote daemon which runs on an independent server
+
+As a result, you can achieve higher availability, even in smaller setups (for
+example 2+1 nodes).
+
+QDevice Technical Overview
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
+node. It provides a configured number of votes to the cluster's quorum
+subsystem, based on the decision of an externally running third-party
+arbitrator.
+Its primary use is to allow a cluster to sustain more node failures than
+standard quorum rules allow. This can be done safely as the external device
+can see all nodes and thus choose only one set of nodes to give its vote.
+This will only be done if said set of nodes can have quorum (again) when
+receiving the third-party vote.
+
+Currently only 'QDevice Net' is supported as a third-party arbitrator. It is
+a daemon which provides a vote to a cluster partition if it can reach the
+partition members over the network. It will only give votes to one partition
+of a cluster at any time.
+It's designed to support multiple clusters and is almost configuration and
+state free. New clusters are handled dynamically and no configuration file
+is needed on the host running a QDevice.
+
+The only requirements for the external host are network access to the cluster
+and the availability of a corosync-qnetd package. We provide such a package
+for Debian based hosts; other Linux distributions should also have a package
+available through their respective package manager.
+
+NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
+TCP/IP. The daemon may even run outside of the cluster's LAN and can have longer
+latencies than 2 ms.
+
+Supported Setups
+~~~~~~~~~~~~~~~~
+
+We support QDevices for clusters with an even number of nodes and recommend
+it for 2-node clusters, if they should provide higher availability.
+For clusters with an odd node count, we currently discourage the use of
+QDevices. The reason for this is the difference in the number of votes which
+the QDevice provides for each cluster type. Even numbered clusters get a single
+additional vote, which only increases availability, because if the QDevice
+itself fails, we are in the same situation as with no QDevice at all. For
+example, a 4-node cluster plus QDevice has five votes in total; quorum requires
+three votes, so any two nodes may fail and the remainder stays quorate.
+
+Now, with an odd numbered cluster size, the QDevice provides '(N-1)' votes,
+where 'N' corresponds to the cluster node count. This difference makes sense:
+if we had only one additional vote, the cluster could get into a split-brain
+situation.
+This algorithm allows all nodes but one (and naturally the
+QDevice itself) to fail.
+There are two drawbacks with this:
+
+* If the QNet daemon itself fails, no other node may fail or the cluster
+ immediately loses quorum. For example, in a cluster with 15 nodes, 7
+ could fail before the cluster becomes inquorate. But, if a QDevice is
+ configured here and said QDevice fails itself, **no single node** of
+ the 15 may fail. The QDevice acts almost as a single point of failure in
+ this case.
+
+* The fact that all but one node plus QDevice may fail sounds promising at
+ first, but this may result in a mass recovery of HA services that would
+ overload the single node left. Also, a Ceph server will stop providing
+ services if only '((N-1)/2)' nodes or less are online.
+
+If you understand the drawbacks and implications, you can decide for yourself
+whether to use this technology in an odd numbered cluster setup.
+
+QDevice-Net Setup
+~~~~~~~~~~~~~~~~~
+
+We recommend running any daemon which provides votes to corosync-qdevice as an
+unprivileged user. {pve} and Debian provide a package which is already
+configured to do so.
+The traffic between the daemon and the cluster must be encrypted to ensure a
+safe and secure integration of the QDevice in {pve}.
+
+First install the 'corosync-qnetd' package on your external server and
+the 'corosync-qdevice' package on all cluster nodes.
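+
+On Debian based hosts, this can be done with `apt` (the `external#` prompt
+denotes the external server):
+
+----
+external# apt install corosync-qnetd
+pve# apt install corosync-qdevice
+----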
+
+After that, ensure that all the nodes in your cluster are online.
+
+You can now easily set up your QDevice by running the following command on one
+of the {pve} nodes:
+
+----
+pve# pvecm qdevice setup <QDEVICE-IP>
+----
+
+The SSH key from the cluster will be automatically copied to the QDevice. You
+might need to enter an SSH password during this step.
+
+After you enter the password and all the steps are successfully completed, you
+will see "Done". You can check the status now:
+
+----
+pve# pvecm status
+
+...
+
+Votequorum information
+~~~~~~~~~~~~~~~~~~~~~
+Expected votes: 3
+Highest expected: 3
+Total votes: 3
+Quorum: 2
+Flags: Quorate Qdevice
+
+Membership information
+~~~~~~~~~~~~~~~~~~~~~~
+ Nodeid Votes Qdevice Name
+ 0x00000001 1 A,V,NMW 192.168.22.180 (local)
+ 0x00000002 1 A,V,NMW 192.168.22.181
+ 0x00000000 1 Qdevice
+
+----
+
+which means the QDevice is set up.
+
+Frequently Asked Questions
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Tie Breaking
+^^^^^^^^^^^^
+
+In case of a tie, where two same-sized cluster partitions cannot see each other
+but can see the QDevice, the QDevice chooses one of those partitions randomly
+and provides a vote to it.
+
+Possible Negative Implications
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For clusters with an even node count, there are no negative implications when
+setting up a QDevice. If it fails to work, you are as good as without a QDevice
+at all.
+
+Adding/Deleting Nodes After QDevice Setup
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to add a new node or remove an existing one from a cluster with a
+QDevice setup, you need to remove the QDevice first. After that, you can add or
+remove nodes normally. Once you have a cluster with an even node count again,
+you can set up the QDevice again as described above.
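+
+Assuming the QDevice was set up with `pvecm` as shown above, the whole sequence
+could look like this (a sketch):
+
+----
+pve# pvecm qdevice remove
+# ... add or delete nodes as usual ...
+pve# pvecm qdevice setup <QDEVICE-IP>
+----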
+
+Removing the QDevice
+^^^^^^^^^^^^^^^^^^^^
+
+If you used the official `pvecm` tool to add the QDevice, you can remove it
+trivially by running:
+
+----
+pve# pvecm qdevice remove
+----
+
+//Still TODO
+//^^^^^^^^^^
+//There is still stuff to add here
+
+
+Corosync Configuration
+----------------------
+
+The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
+controls the cluster membership and its network.
+For further information about it, check the corosync.conf man page:
+[source,bash]
+----
+man corosync.conf
+----
+
+For node membership you should always use the `pvecm` tool provided by {pve}.
+You may have to edit the configuration file manually for other changes.
+Here are a few best practice tips for doing this.
+
+[[pvecm_edit_corosync_conf]]
+Edit corosync.conf
+~~~~~~~~~~~~~~~~~~
+
+Editing the corosync.conf file is not always very straightforward. There are
+two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
+`/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
+propagate the changes to the local one, but not vice versa.
+
+The configuration will get updated automatically, as soon as the file changes.
+This means that changes which can be integrated in a running corosync will take
+effect immediately. Therefore, you should always make a copy and edit that
+instead, to avoid triggering unintended changes from an intermediate save.
+
+[source,bash]
+----
+cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
+----
+
+Then, open the config file with your favorite editor, such as `nano` or
+`vim.tiny`, which come pre-installed on every {pve} node.
+
+NOTE: Always increment the 'config_version' number on configuration changes,
+omitting this can lead to problems.
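+
+For example, to quickly look up the currently used version before editing:
+
+[source,bash]
+----
+grep config_version /etc/pve/corosync.conf
+----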
+
+After making the necessary changes, create another copy of the current working
+configuration file. This serves as a backup if the new configuration fails to
+apply or causes other problems.
+
+[source,bash]
+----
+cp /etc/pve/corosync.conf /etc/pve/corosync.conf.bak
+----
+
+Then move the new configuration file over the old one:
+[source,bash]
+----
+mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
+----
+
+You can check with the following commands whether the change was applied
+automatically:
+[source,bash]
+----
+systemctl status corosync
+journalctl -b -u corosync
+----
+
+If not, you may have to restart the
+corosync service via:
+[source,bash]
+----
+systemctl restart corosync
+----
+
+On errors check the troubleshooting section below.
+
+Troubleshooting
+~~~~~~~~~~~~~~~
+
+Issue: 'quorum.expected_votes must be configured'
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When corosync starts to fail and you get the following message in the system log:
+
+----
+[...]
+corosync[1647]: [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
+corosync[1647]: [SERV ] Service engine 'corosync_quorum' failed to load for reason
+ 'configuration error: nodelist or quorum.expected_votes must be configured!'
+[...]
+----
+
+It means that the hostname you set for corosync 'ringX_addr' in the
+configuration could not be resolved.
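+
+You can test the resolution on each node with `getent` (replace NODENAME with
+the hostname used in your 'ringX_addr'):
+
+[source,bash]
+----
+getent hosts NODENAME
+----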
+
+Write Configuration When Not Quorate
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you need to change '/etc/pve/corosync.conf' on a node with no quorum, and you
+know what you are doing, use:
+[source,bash]
+----
+pvecm expected 1
+----
+
+This sets the expected vote count to 1 and makes the cluster quorate. You can
+now fix your configuration, or revert it to the last working backup.
+
+This is not enough if corosync cannot start anymore. In that case, it is best
+to edit the local copy of the corosync configuration in
+'/etc/corosync/corosync.conf', so that corosync can start again. Ensure that on
+all nodes, this configuration has the same content to avoid split-brain
+situations. If you are not sure what went wrong, it's best to ask the Proxmox
+Community to help you.
+
+
+[[pvecm_corosync_conf_glossary]]
+Corosync Configuration Glossary
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ringX_addr::
+This names the different link addresses for the Kronosnet connections between
+nodes.
+
+