+totem {
+ cluster_name: testcluster
+ config_version: 4
+ ip_version: ipv4-6
+ secauth: on
+ version: 2
+ interface {
+ linknumber: 0
+ }
+ interface {
+ linknumber: 1
+ }
+}
+----
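+
+For reference, the matching `nodelist` section then carries one address per
+link for every node. The addresses below are purely illustrative:
+
+----
+nodelist {
+  node {
+    name: node1
+    nodeid: 1
+    quorum_votes: 1
+    ring0_addr: 10.10.10.1
+    ring1_addr: 10.20.20.1
+  }
+}
+----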
+
+The new link will be enabled as soon as you follow the last steps to
+xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
+be necessary. You can check that corosync loaded the new link using:
+
+----
+journalctl -b -u corosync
+----
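+
+With the kronosnet (knet) transport, which {pve} uses by default, you can also
+query the link status directly. Note that the exact output format varies
+between corosync versions:
+
+----
+corosync-cfgtool -s
+----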
+
+It might be a good idea to test the new link by temporarily disconnecting the
+old link on one node and making sure that its status remains online while
+disconnected:
+
+----
+pvecm status
+----
+
+If you see a healthy cluster state, it means that your new link is being used.
+
+
+Role of SSH in {pve} Clusters
+-----------------------------
+
+{pve} utilizes SSH tunnels for various features.
+
+* Proxying console/shell sessions (node and guests)
++
+When using the shell for node B while being connected to node A, {pve}
+connects to a terminal proxy on node A, which is in turn connected to the
+login shell on node B via a non-interactive SSH tunnel.
+
+* VM and CT memory and local-storage migration in 'secure' mode.
++
+During the migration, one or more SSH tunnel(s) are established between the
+source and target nodes, in order to exchange migration information and
+transfer memory and disk contents.
+
+* Storage replication
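+
+The migration mode can be set cluster-wide in `/etc/pve/datacenter.cfg`. The
+snippet below is illustrative; 'secure' is the default mode and the network is
+an example value:
+
+----
+# /etc/pve/datacenter.cfg
+migration: secure,network=10.1.2.0/24
+----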
+
+.Pitfalls due to automatic execution of `.bashrc` and siblings
+[IMPORTANT]
+====
+In case you have a custom `.bashrc`, or similar files that get executed on
+login by the configured shell, `ssh` will automatically run it once the session
+is established successfully. This can cause unexpected behavior, as those
+commands may be executed with root permissions during any of the operations
+described above, with possibly problematic side-effects!
+
+In order to avoid such complications, it's recommended to add a check in
+`/root/.bashrc` to make sure the session is interactive, and only then run
+`.bashrc` commands.
+
+You can add this snippet at the beginning of your `.bashrc` file:
+
+----
+# Early exit if not running interactively to avoid side-effects!
+case $- in
+ *i*) ;;
+ *) return;;
+esac
+----
+====
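+
+To verify that non-interactive sessions no longer trigger such side-effects,
+you can run a command over SSH which should produce no output at all; any
+extra lines indicate that login scripts are still being executed
+non-interactively:
+
+----
+ssh root@<node> /bin/true
+----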
+
+
+Corosync External Vote Support
+------------------------------
+
+This section describes a way to deploy an external voter in a {pve} cluster.
+When configured, the cluster can sustain more node failures without
+violating safety properties of the cluster communication.
+
+For this to work, there are two services involved:
+
+* A QDevice daemon which runs on each {pve} node
+
+* An external vote daemon which runs on an independent server
+
+As a result, you can achieve higher availability, even in smaller setups (for
+example 2+1 nodes).
+
+QDevice Technical Overview
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
+node. It provides a configured number of votes to the cluster's quorum
+subsystem, based on an externally running third-party arbitrator's decision.
+Its primary use is to allow a cluster to sustain more node failures than
+standard quorum rules allow. This can be done safely as the external device
+can see all nodes and thus choose only one set of nodes to give its vote.
+This will only be done if said set of nodes can have quorum (again) after
+receiving the third-party vote.
+
+Currently, only 'QDevice Net' is supported as a third-party arbitrator. This is
+a daemon which provides a vote to a cluster partition, if it can reach the
+partition members over the network. It will only give votes to one partition
+of a cluster at any time.
+It's designed to support multiple clusters and is almost configuration and
+state free. New clusters are handled dynamically and no configuration file
+is needed on the host running a QDevice.
+
+The only requirements for the external host are network access to the cluster
+and the availability of a corosync-qnetd package. We provide a package for
+Debian-based hosts; other Linux distributions should also have a package
+available through their respective package manager.
+
+NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
+TCP/IP. The daemon may even run outside of the cluster's LAN and can have longer
+latencies than 2 ms.
+
+Supported Setups
+~~~~~~~~~~~~~~~~
+
+We support QDevices for clusters with an even number of nodes and recommend
+it for 2-node clusters, if higher availability is required.
+For clusters with an odd node count, we currently discourage the use of
+QDevices. The reason for this is the difference in the votes which the QDevice
+provides for each cluster type. Even-numbered clusters get a single additional
+vote, which only increases availability, because if the QDevice
+itself fails, you are in the same position as with no QDevice at all.
+
+On the other hand, with an odd-numbered cluster size, the QDevice provides
+'(N-1)' votes -- where 'N' corresponds to the cluster node count. This
+alternative behavior makes sense; if it had only one additional vote, the
+cluster could get into a split-brain situation. This algorithm allows for all
+nodes but one (and naturally the QDevice itself) to fail. However, there are two
+drawbacks to this:
+
+* If the QNet daemon itself fails, no other node may fail or the cluster
+ immediately loses quorum. For example, in a cluster with 15 nodes, 7
+ could fail before the cluster becomes inquorate. But, if a QDevice is
+ configured here and it itself fails, **no single node** of the 15 may fail.
+ The QDevice acts almost as a single point of failure in this case.
+
+* The fact that all but one node plus QDevice may fail sounds promising at
+ first, but this may result in a mass recovery of HA services, which could
+ overload the single remaining node. Furthermore, a Ceph server will stop
+ providing services if '((N-1)/2)' nodes or fewer remain online.
+
+If you understand the drawbacks and implications, you can decide yourself if
+you want to use this technology in an odd-numbered cluster setup.
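+
+The vote arithmetic behind the supported setups can be sketched as follows:
+
+----
+Even cluster, N=2, QDevice gives 1 vote:
+  total = 2 + 1 = 3, quorum = 2
+  -> one node (or the QDevice) may fail
+
+Odd cluster, N=3, QDevice gives N-1 = 2 votes:
+  total = 3 + 2 = 5, quorum = 3
+  -> one surviving node plus the QDevice keep quorum, but if the
+     QDevice fails, no node may fail (3 of 5 votes are needed)
+----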
+
+QDevice-Net Setup
+~~~~~~~~~~~~~~~~~
+
+We recommend running any daemon which provides votes to corosync-qdevice as an
+unprivileged user. {pve} and Debian provide a package which is already
+configured to do so.
+The traffic between the daemon and the cluster must be encrypted to ensure a
+safe and secure integration of the QDevice in {pve}.
+
+First, install the 'corosync-qnetd' package on your external server
+
+----
+external# apt install corosync-qnetd
+----
+
+and the 'corosync-qdevice' package on all cluster nodes