* Date and time have to be synchronized.
* SSH tunnel on TCP port 22 between nodes is used.
* If you are interested in High Availability, you need to have at
least three nodes for reliable quorum. All nodes should have the
----
hp1# pvecm delnode hp4
+ Killing node 4
----
-If the operation succeeds no output is returned, just check the node
-list again with `pvecm nodes` or `pvecm status`. You should see
-something like:
+Use `pvecm nodes` or `pvecm status` to check the node list again. It should
+look something like:
----
hp1# pvecm status
[source,bash]
----
rm /etc/pve/corosync.conf
-rm /etc/corosync/*
+rm -r /etc/corosync/*
----
You can now start the filesystem again as a normal service:
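A sketch of that restart, assuming pmxcfs is managed by the `pve-cluster`
systemd unit, as on a standard {pve} installation:

----
killall pmxcfs
systemctl start pve-cluster
----
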
If you see a healthy cluster state, it means that your new link is being used.
+Role of SSH in {PVE} Clusters
+-----------------------------
+
+{PVE} utilizes SSH tunnels for various operations.
+
+* Proxying terminal sessions of nodes and containers between nodes
++
+When you connect to another node's shell through the web interface, for
+example, a non-interactive SSH tunnel is started in order to forward the
+necessary ports for the VNC connection.
+
+* VM and CT memory and local-storage migration, if the cluster-wide migration
+ settings are not configured to use 'insecure' mode. During a VM migration,
+ an SSH tunnel is established between the target and source nodes.
+
+* Storage replication
+
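+The cluster-wide migration setting mentioned above lives in
+`/etc/pve/datacenter.cfg`; a sketch of the 'insecure' mode, with an
+illustrative network value:
+
+----
+migration: type=insecure,network=10.1.2.0/24
+----
+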
+.Pitfalls due to automatic execution of `.bashrc` and siblings
+[IMPORTANT]
+====
+In case you have a custom `.bashrc`, or similar files that get executed on
+login by the configured shell, `ssh` will automatically run it once the
+session is established successfully. This can cause unexpected behavior, as
+those commands may be executed with root permissions on any of the operations
+described above, possibly resulting in problematic side effects!
+
+In order to avoid such complications, it's recommended to add a check in
+`/root/.bashrc` to make sure the session is interactive, and only then run
+`.bashrc` commands.
+
+You can add this snippet at the beginning of your `.bashrc` file:
+
+----
+# Early exit if not running interactively to avoid side-effects!
+case $- in
+ *i*) ;;
+ *) return;;
+esac
+----
+====
+
+
Corosync External Vote Support
------------------------------
The traffic between the daemon and the cluster must be encrypted to ensure a
safe and secure QDevice integration in {pve}.
-First, install the 'corosync-qnetd' package on your external server and
-the 'corosync-qdevice' package on all cluster nodes.
+First, install the 'corosync-qnetd' package on your external server
+
+----
+external# apt install corosync-qnetd
+----
+
+and the 'corosync-qdevice' package on all cluster nodes
+
+----
+pve# apt install corosync-qdevice
+----
After that, ensure that all the nodes in the cluster are online.
pve# pvecm qdevice setup <QDEVICE-IP>
----
-The SSH key from the cluster will be automatically copied to the QDevice. You
-might need to enter an SSH password during this step.
+The SSH key from the cluster will be automatically copied to the QDevice.
+
+NOTE: Make sure that the SSH configuration on your external server allows root
+login via password, if you are asked for a password during this step.
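+For reference, the relevant option in `/etc/ssh/sshd_config` on the external
+server would look like this (revert it after the setup if you do not want to
+keep password-based root login enabled):
+
+----
+PermitRootLogin yes
+----
+
+Reload the SSH daemon afterwards, for example with `systemctl reload ssh`.
+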
After you enter the password and all the steps are successfully completed, you
will see "Done". You can check the status now:
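----
pve# pvecm status
----

In the output, check the quorum and membership information to confirm that the
QDevice has been registered and contributes a vote.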
address 192.X.Y.57
netmask 255.255.255.0
gateway 192.X.Y.1
- bridge_ports eno1
- bridge_stp off
- bridge_fd 0
+ bridge-ports eno1
+ bridge-stp off
+ bridge-fd 0
# cluster network
auto eno2