* then join it, as explained in the previous section.
+The configuration files for the removed node will still reside in
+'/etc/pve/nodes/hp4'. Recover any configuration you still need and remove the
+directory afterwards.
+
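+For example, to keep a copy of anything you might still need and then drop the
+leftover directory, you could run the following (the backup path is just an
+example):
+
+----
+# cp -r /etc/pve/nodes/hp4 /root/hp4-config-backup
+# rm -r /etc/pve/nodes/hp4
+----
+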
NOTE: After removal of the node, its SSH fingerprint will still reside in the
'known_hosts' of the other nodes. If you receive an SSH error after rejoining
a node with the same IP or hostname, run `pvecm updatecerts` once on the
xref:pvecm_corosync_addresses[Link Address Types]).
In this example, we want to switch cluster communication to the
-10.10.10.1/25 network, so we change the 'ring0_addr' of each node respectively.
+10.10.10.0/25 network, so we change the 'ring0_addr' of each node respectively.
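+
+For illustration, a single entry in the 'nodelist' section of `corosync.conf`
+could then look like the following (node name, ID and host address are
+examples matching the new network):
+
+----
+  node {
+    name: due
+    nodeid: 2
+    quorum_votes: 1
+    ring0_addr: 10.10.10.2
+  }
+----
+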
NOTE: The exact same procedure can be used to change other 'ringX_addr' values
as well. However, we recommend only changing one link address at a time, so
* Storage replication
-.Pitfalls due to automatic execution of `.bashrc` and siblings
-[IMPORTANT]
-====
+SSH setup
+~~~~~~~~~
+
+On {pve} systems, the following changes are made to the SSH setup:
+
+* the `root` user's SSH client config gets set up to prefer `AES` over `ChaCha20`
+
+* the `root` user's `authorized_keys` file gets linked to
+ `/etc/pve/priv/authorized_keys`, merging all authorized keys within a cluster
+
+* `sshd` is configured to allow logging in as root with a password
+
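+For example, the merged `authorized_keys` file from the second point can be
+verified on any cluster node, as it is set up as a link into the cluster
+filesystem:
+
+----
+# readlink /root/.ssh/authorized_keys
+/etc/pve/priv/authorized_keys
+----
+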
+NOTE: Older systems might also have `/etc/ssh/ssh_known_hosts` set up as a
+symlink pointing to `/etc/pve/priv/known_hosts`, containing a merged version
+of all node host keys. This mechanism was replaced with explicit host key
+pinning in `pve-cluster <<INSERT VERSION>>`. If the symlink is still in place,
+it can be removed by running `pvecm updatecerts --unmerge-known-hosts`.
+
+Pitfalls due to automatic execution of `.bashrc` and siblings
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
In case you have a custom `.bashrc`, or similar files that get executed on
login by the configured shell, `ssh` will automatically run it once the session
is established successfully. This can cause some unexpected behavior, as those
*) return;;
esac
----
-====
-
Corosync External Vote Support
------------------------------
The SSH key from the cluster will be automatically copied to the QDevice.
-NOTE: Make sure that the SSH configuration on your external server allows root
-login via password, if you are asked for a password during this step.
+NOTE: Make sure to set up key-based access for the root user on your external
+server, or temporarily allow root login with a password during the setup phase.
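+
+If password login is available at least temporarily, the key can, for example,
+be copied over from one of the cluster nodes beforehand (replace the address
+with that of your external server):
+
+----
+# ssh-copy-id root@192.0.2.10
+----
+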
If you receive an error such as 'Host key verification failed.' at this
stage, running `pvecm updatecerts` could fix the issue.
-After you enter the password and all the steps have successfully completed, you
-will see "Done". You can verify that the QDevice has been set up with:
+After all the steps have successfully completed, you will see "Done". You can
+verify that the QDevice has been set up with:
----
pve# pvecm status
columns:
* `A` / `NA`: Alive or Not Alive. Indicates if the communication to the external
- `corosync-qndetd` daemon works.
+ `corosync-qnetd` daemon works.
* `V` / `NV`: If the QDevice will cast a vote for the node. In a split-brain
situation, where the corosync connection between the nodes is down, but they
both can still communicate with the external `corosync-qnetd` daemon,
https://manpages.debian.org/bookworm/libvotequorum-dev/votequorum_qdevice_master_wins.3.en.html].
* `NR`: QDevice is not registered.
+NOTE: If your QDevice is listed as `Not Alive` (`NA` in the output above),
+ensure that port `5403` (the default port of the qnetd server) of your external
+server is reachable via TCP/IP!
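+
+A quick way to check this from a cluster node is, for example, with `nc` from
+one of the netcat packages (replace the address with that of your external
+server):
+
+----
+# nc -zv 192.0.2.10 5403
+----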
+
Frequently Asked Questions
~~~~~~~~~~~~~~~~~~~~~~~~~~
Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
-`datacenter.cfg` or for a specific migration via API or command line
+`datacenter.cfg` or for a specific migration via API or command-line
parameters.
It makes a difference if a guest is online or offline, or if it has
Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
-parameter of the command line tool:
+parameter of the command-line tool:
----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24