[source,bash]
----
-pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
+# pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0
----
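+
+For example, assuming the existing cluster is reachable at 192.168.1.10 and
+this node's address on the separated link0 network is 10.10.10.2 (both
+addresses are purely illustrative), the join could look like:
+
+[source,bash]
+----
+# pvecm add 192.168.1.10 --link0 10.10.10.2
+----
+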
If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
* then join it, as explained in the previous section.
+The configuration files for the removed node will still reside in
+'/etc/pve/nodes/hp4'. Recover any configuration you still need and remove the
+directory afterwards.
+
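+A minimal sketch of such a recovery, assuming a guest with the hypothetical
+VMID 100 should be reassigned to the surviving node 'hp1' (moving a guest
+configuration file within '/etc/pve/nodes/' assigns the guest to that node;
+its storage must remain accessible from there):
+
+----
+# mv /etc/pve/nodes/hp4/qemu-server/100.conf /etc/pve/nodes/hp1/qemu-server/
+# rm -r /etc/pve/nodes/hp4
+----
+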
NOTE: After removal of the node, its SSH fingerprint will still reside in the
'known_hosts' of the other nodes. If you receive an SSH error after rejoining
a node with the same IP or hostname, run `pvecm updatecerts` once on the
xref:pvecm_corosync_addresses[Link Address Types]).
In this example, we want to switch cluster communication to the
-10.10.10.1/25 network, so we change the 'ring0_addr' of each node respectively.
+10.10.10.0/25 network, so we change the 'ring0_addr' of each node respectively.
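+
+As a sketch, one node entry in 'corosync.conf' might then look like this (the
+node name and ID are illustrative):
+
+----
+  node {
+    name: due
+    nodeid: 2
+    quorum_votes: 1
+    ring0_addr: 10.10.10.2
+  }
+----
+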
NOTE: The exact same procedure can be used to change other 'ringX_addr' values
as well. However, we recommend only changing one link address at a time, so
The SSH key from the cluster will be automatically copied to the QDevice.
-NOTE: Make sure that the SSH configuration on your external server allows root
-login via password, if you are asked for a password during this step.
+NOTE: Make sure to set up key-based access for the root user on your external
+server, or temporarily allow root login with password during the setup phase.
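+
+One way to prepare key-based access is to copy an SSH key from a cluster node
+beforehand, sketched here with a hypothetical QDevice address:
+
+----
+# ssh-copy-id root@192.168.22.190
+----
+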
If you receive an error such as 'Host key verification failed.' at this
stage, running `pvecm updatecerts` could fix the issue.
-After you enter the password and all the steps have successfully completed, you
-will see "Done". You can verify that the QDevice has been set up with:
+After all the steps have successfully completed, you will see "Done". You can
+verify that the QDevice has been set up with:
----
pve# pvecm status
Membership information
~~~~~~~~~~~~~~~~~~~~~~
Nodeid Votes Qdevice Name
- 0x00000001 1 A,V,NMW 192.168.22.180 (local)
- 0x00000002 1 A,V,NMW 192.168.22.181
- 0x00000000 1 Qdevice
+ 0x00000001 1 A,V,NMW 192.168.22.180 (local)
+ 0x00000002 1 A,V,NMW 192.168.22.181
+ 0x00000000 1 Qdevice
----
+[[pvecm_qdevice_status_flags]]
+QDevice Status Flags
+^^^^^^^^^^^^^^^^^^^^
+
+The status output of the QDevice, as seen above, will usually contain three
+comma-separated flags:
+
+* `A` / `NA`: Alive or Not Alive. Indicates whether communication with the
+  external `corosync-qnetd` daemon works.
+* `V` / `NV`: Whether the QDevice will cast a vote for the node. In a
+  split-brain situation, where the corosync connection between the nodes is
+  down but both nodes can still communicate with the external
+  `corosync-qnetd` daemon, only one node will get the vote.
+* `MW` / `NMW`: Master wins (`MW`) or not (`NMW`). Default is `NMW`, see
+ footnote:[`votequorum_qdevice_master_wins` manual page
+ https://manpages.debian.org/bookworm/libvotequorum-dev/votequorum_qdevice_master_wins.3.en.html].
+* `NR`: QDevice is not registered.
+
+NOTE: If your QDevice is listed as `Not Alive` (`NA` in the output above),
+ensure that port `5403` (the default port of the qnetd server) of your external
+server is reachable via TCP/IP!
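+
+A quick way to check this from a cluster node, assuming a hypothetical QDevice
+address of 192.168.22.190:
+
+----
+# nc -zv 192.168.22.190 5403
+----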
+
Frequently Asked Questions
~~~~~~~~~~~~~~~~~~~~~~~~~~
Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. This can be done via the configuration file
-`datacenter.cfg` or for a specific migration via API or command line
+`datacenter.cfg` or for a specific migration via API or command-line
parameters.
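+
+For a cluster-wide default, this can instead be set through the `migration`
+property in `/etc/pve/datacenter.cfg`; a sketch, with an illustrative network
+value:
+
+----
+# use dedicated migration network
+migration: secure,network=10.1.2.0/24
+----
+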
It makes a difference if a guest is online or offline, or if it has
Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the `migration_network`
-parameter of the command line tool:
+parameter of the command-line tool:
----
# qm migrate 106 tre --online --migration_network 10.1.2.0/24