X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pvecm.adoc;h=31dc18a74a2ff14ec04bab58820172f4d6732689;hb=508a36f599bd0f3707e84e84a96506ded9e3fb36;hp=4bf2d53ceb8843ef71db67a70cb661450d90352a;hpb=337a2d42384936e90900c938e9af9591131c435c;p=pve-docs.git

diff --git a/pvecm.adoc b/pvecm.adoc
index 4bf2d53..31dc18a 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -172,11 +172,14 @@ infrastructure for bigger clusters.
 Adding Nodes to the Cluster
 ---------------------------

-CAUTION: A node that is about to be added to the cluster cannot hold any guests.
-All existing configuration in `/etc/pve` is overwritten when joining a cluster,
-since guest IDs could otherwise conflict. As a workaround, you can create a
-backup of the guest (`vzdump`) and restore it under a different ID, after the
-node has been added to the cluster.
+CAUTION: All existing configuration in `/etc/pve` is overwritten when joining a
+cluster. In particular, a joining node cannot hold any guests, since guest IDs
+could otherwise conflict, and the node will inherit the cluster's storage
+configuration. To join a node with existing guests, as a workaround, you can
+create a backup of each guest (using `vzdump`) and restore it under a different
+ID after joining. If the node's storage layout differs, you will need to re-add
+the node's storages, and adapt each storage's node restriction to reflect on
+which nodes the storage is actually available.

 Join Node to Cluster via GUI
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -292,7 +295,7 @@ use the 'link0' parameter to set the nodes address on that network:

 [source,bash]
 ----
-pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
+# pvecm add IP-ADDRESS-CLUSTER --link0 LOCAL-IP-ADDRESS-LINK0
 ----

 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
@@ -387,6 +390,10 @@ you have to:

 * then join it, as explained in the previous section.

+The configuration files for the removed node will still reside in
+'/etc/pve/nodes/hp4'. Recover any configuration you still need and remove the
+directory afterwards.
+
 NOTE: After removal of the node, its SSH fingerprint will still reside in the
 'known_hosts' of the other nodes. If you receive an SSH error after rejoining
 a node with the same IP or hostname, run `pvecm updatecerts` once on the
@@ -654,7 +661,7 @@ hostnames, ensure that they are resolvable from all nodes (see also
 xref:pvecm_corosync_addresses[Link Address Types]).

 In this example, we want to switch cluster communication to the
-10.10.10.1/25 network, so we change the 'ring0_addr' of each node respectively.
+10.10.10.0/25 network, so we change the 'ring0_addr' of each node respectively.

 NOTE: The exact same procedure can be used to change other 'ringX_addr' values
 as well. However, we recommend only changing one link address at a time, so
@@ -1049,13 +1056,13 @@ pve# pvecm qdevice setup

 The SSH key from the cluster will be automatically copied to the QDevice.

-NOTE: Make sure that the SSH configuration on your external server allows root
-login via password, if you are asked for a password during this step.
+NOTE: Make sure to set up key-based access for the root user on your external
+server, or temporarily allow root login with password during the setup phase.

 If you receive an error such as 'Host key verification failed.' at this stage,
 running `pvecm updatecerts` could fix the issue.

-After you enter the password and all the steps have successfully completed, you
-will see "Done". You can verify that the QDevice has been set up with:
+After all the steps have successfully completed, you will see "Done". You can
+verify that the QDevice has been set up with:

 ----
 pve# pvecm status

@@ -1073,12 +1080,34 @@ Flags: Quorate Qdevice
 Membership information
 ~~~~~~~~~~~~~~~~~~~~~~
     Nodeid      Votes    Qdevice Name
-    0x00000001          1    A,V,NMW 192.168.22.180 (local)
-    0x00000002          1    A,V,NMW 192.168.22.181
-    0x00000000            1            Qdevice
+    0x00000001   1         A,V,NMW 192.168.22.180 (local)
+    0x00000002   1         A,V,NMW 192.168.22.181
+    0x00000000   1            Qdevice
 ----

+[[pvecm_qdevice_status_flags]]
+QDevice Status Flags
+^^^^^^^^^^^^^^^^^^^^
+
+The status output of the QDevice, as seen above, will usually show three
+flags in the `Qdevice` column:
+
+* `A` / `NA`: Alive or Not Alive. Indicates if the communication to the external
+  `corosync-qnetd` daemon works.
+* `V` / `NV`: If the QDevice will cast a vote for the node. In a split-brain
+  situation, where the corosync connection between the nodes is down, but they
+  both can still communicate with the external `corosync-qnetd` daemon,
+  only one node will get the vote.
+* `MW` / `NMW`: Master wins (`MW`) or not (`NMW`). Default is `NMW`, see
+  footnote:[`votequorum_qdevice_master_wins` manual page
+  https://manpages.debian.org/bookworm/libvotequorum-dev/votequorum_qdevice_master_wins.3.en.html].
+* `NR`: QDevice is not registered.
+
+NOTE: If your QDevice is listed as `Not Alive` (`NA` in the output above),
+ensure that port `5403` (the default port of the qnetd server) of your external
+server is reachable via TCP/IP!
+
 Frequently Asked Questions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -1285,7 +1314,7 @@ Guest Migration
 ---------------

 Migrating virtual guests to other nodes is a useful feature in a
 cluster. There are settings to control the behavior of such
 migrations. This can be done via the configuration file
-`datacenter.cfg` or for a specific migration via API or command line
+`datacenter.cfg` or for a specific migration via API or command-line
 parameters.

 It makes a difference if a guest is online or offline, or if it has
@@ -1374,7 +1403,7 @@ iface eno3 inet static

 Here, we will use the network 10.1.2.0/24 as a migration network.
 For a single migration, you can do this using the `migration_network`
-parameter of the command line tool:
+parameter of the command-line tool:

 ----
 # qm migrate 106 tre --online --migration_network 10.1.2.0/24
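For reference (not part of the diff above): besides the per-migration `--migration_network` parameter shown in the last hunk, the same dedicated network can also be configured cluster-wide through the `migration` property of `/etc/pve/datacenter.cfg`. The snippet below is only a minimal sketch that reuses the 10.1.2.0/24 subnet from the example; check the `datacenter.cfg` documentation for the authoritative syntax.

----
# /etc/pve/datacenter.cfg
# send all guest migrations over the dedicated 10.1.2.0/24 network,
# keeping the secure (encrypted) transport type
migration: secure,network=10.1.2.0/24
----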
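Likewise, for the QDevice NOTE added in the hunk above: a quick way to check that TCP port 5403 of the external qnetd server is reachable from a cluster node is a plain connection probe, for example with netcat. A minimal sketch, assuming a hypothetical server address of 192.168.22.90:

----
# probe the default corosync-qnetd port; -z closes the connection without
# sending data, -v reports whether the connection succeeded
nc -zv 192.168.22.90 5403
----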