The {PVE} cluster manager `pvecm` is a tool to create a group of
physical servers. Such a group is called a *cluster*. We use the
http://www.corosync.org[Corosync Cluster Engine] for reliable group
communication. There's no explicit limit for the number of nodes in a cluster.
In practice, the actual possible node count may be limited by the host and
network performance. Currently (2021), there are reports of clusters (using
high-end enterprise hardware) with over 50 nodes in production.
`pvecm` can be used to create a new cluster, join nodes to a cluster,
leave the cluster, get status information, and do various other cluster-related
tasks.
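For a quick impression of the command surface, here are some typical
invocations (the cluster name and IP address below are placeholders):

----
# On the first node: create a new cluster
pvecm create CLUSTERNAME

# On each further node: join the existing cluster via the
# IP address of an already-joined member
pvecm add 192.168.1.10

# On any node: inspect cluster state and membership
pvecm status
pvecm nodes
----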
If you see a healthy cluster state, it means that your new link is being used.
Role of SSH in {PVE} Clusters
-----------------------------

{PVE} utilizes SSH tunnels for various features.
* Proxying console/shell sessions (node and guests)
+
When using the shell for node B while being connected to node A, this connects
to a terminal proxy on node A, which is in turn connected to the login shell on
node B via a non-interactive SSH tunnel.
* VM and CT memory and local-storage migration in 'secure' mode.
+
During the migration, one or more SSH tunnel(s) are established between the
source and target nodes, in order to exchange migration information and
transfer memory and disk contents.
* Storage replication
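As an illustration, the migration mode is configured cluster-wide in
`/etc/pve/datacenter.cfg`. The network below is a placeholder for a dedicated
migration network, and the exact option syntax may vary between versions:

----
# /etc/pve/datacenter.cfg
# 'secure' (the default) tunnels migration traffic through SSH;
# 'network' optionally pins migration traffic to a dedicated subnet
migration: secure,network=10.1.2.0/24
----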
.Pitfalls due to automatic execution of `.bashrc` and siblings
[IMPORTANT]
====
In case you have a custom `.bashrc`, or similar files that get executed on
login by the configured shell, `ssh` will automatically run them once the
session is established. This can cause unexpected behavior, as those commands
may be executed with root permissions during any of the operations described
above, with potentially problematic side-effects.
In order to avoid such complications, it's recommended to add a check in
`/root/.bashrc` to make sure the session is interactive, and only then run
commands.
You can add this snippet at the beginning of your `.bashrc` file:
----
# Early exit if not running interactively to avoid side-effects!
case $- in
    *i*) ;;
      *) return;;
esac
----
====
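To see the distinction the guard relies on, note that the shell flags in `$-`
contain `i` only for interactive sessions; a shell started non-interactively
(as for the SSH tunnels described above) falls through to the default branch:

----
# $- of a non-interactively started bash contains no 'i' flag
flags=$(bash -c 'echo $-')
case $flags in
    *i*) echo "interactive" ;;
      *) echo "non-interactive" ;;
esac
----

Running this prints `non-interactive`, which is exactly the branch where the
recommended `.bashrc` guard returns early.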
Corosync External Vote Support