Move all virtual machines from the node. Make sure you have no local
data or backups you want to keep, or save them accordingly.
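
For example, you could move a running virtual machine with `qm migrate` and a
container with `pct migrate`. The VMIDs `100` and `101` and the target node
`hp1` below are only placeholders, adapt them to your setup:

[source,bash]
----
# live-migrate a running VM to another node
qm migrate 100 hp1 --online

# migrate a container
pct migrate 101 hp1
----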

In the following example we will remove the node hp4 from the cluster.

Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes`
command to identify the node ID to remove:
----
hp1# pvecm nodes

Membership information
~~~~~~~~~~~~~~~~~~~~~~
    Nodeid      Votes Name
         1          1 hp1 (local)
         2          1 hp2
         3          1 hp3
         4          1 hp4
----

At this point you must power off hp4 and make sure that it will not power on
again (in the network) as it is.
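
For example, if you still have console access to hp4, you can shut it down
cleanly there; any other method that keeps the node powered off works just as
well:

[source,bash]
----
# run this on the console of the node being removed (hp4)
poweroff
----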

IMPORTANT: As said above, it is critical to power off the node *before*
removal, and to make sure that it will *never* power on again (in the
existing cluster network) as it is. If you power on the node as it is, your
cluster will end up in a broken state, and it could be difficult to restore a
clean cluster state.

After powering off the node hp4, we can safely remove it from the cluster.

----
hp1# pvecm delnode hp4
----

If the operation succeeds no output is returned. Check the node list again
with `pvecm nodes` or `pvecm status`; the membership information should now
only list the three remaining nodes, for example:

----
    Nodeid      Votes Name
0x00000001          1 192.168.15.90 (local)
0x00000002          1 192.168.15.91
0x00000003          1 192.168.15.92
----

If, for whatever reason, you want this server to join the same cluster again,
you have to

* reinstall {pve} on it from scratch
* then join it, as explained in the previous section.

You can also separate a node from a cluster without reinstalling it from
scratch. But after removing the node from the cluster, it will still have
access to any shared storage! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
storage with another cluster, as storage locking doesn't work over the
cluster boundary. Further, it may also lead to VMID conflicts.

It's suggested that you create a new storage, which only the node that you
want to separate has access to. This can be a new export on your NFS or a new
Ceph pool, to name a few examples. It is just important that the exact same
storage is not accessed by multiple clusters.
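
As a sketch, such a storage could be added with `pvesm`, restricted to the
node that will be separated. The storage ID, server address and export path
here are just examples:

[source,bash]
----
# add an NFS storage that only node hp4 may use
pvesm add nfs separate-nfs --path /mnt/pve/separate-nfs \
    --server 192.168.15.200 --export /space/separate \
    --content images,rootdir --nodes hp4
----

After setting this storage up, move all data from the node and its virtual
guests to it.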

* Ensure that multicast works in general and at high packet rates. This can be
  done with the `omping` tool. The final "%loss" number should be < 1%.
+
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----

* Ensure that multicast communication works over an extended period of time.
  This uncovers problems where IGMP snooping is activated on the network but
  no multicast querier is active. This test has a duration of around 10
  minutes.
+
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----

For example, if you have two networks, one on the 10.10.10.1/24 and the other
on the 10.10.20.1/24 subnet, you would execute:

[source,bash]
----
pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----
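
Once the cluster is running, you can check the state of both rings on a node
with `corosync-cfgtool`:

[source,bash]
----
# print the status of all configured rings on the local node
corosync-cfgtool -s
----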

RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~

You will take similar steps as described in
<<separate-cluster-net-after-creation,separating the cluster network>> to
enable RRP on an already running cluster. The single difference is that you
will add `ring1` and use it instead of `ring0`.

First add a new `interface` subsection in the `totem` section, set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
address of the subnet you have configured for your new ring.
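
As a rough sketch, and assuming 10.10.20.0/24 is the subnet of the new ring,
the `totem` section of `/etc/pve/corosync.conf` could then contain something
like:

----
totem {
  ...
  rrp_mode: passive
  interface {
    bindnetaddr: 10.10.20.0
    ringnumber: 1
  }
  ...
}
----

As with any change to `corosync.conf`, remember to also increase its
`config_version` property so that the new configuration gets applied.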

Guest Migration
---------------

Migrating virtual guests to other nodes is a useful feature in a cluster.
There are settings to control the behavior of such migrations. This can be
done via the configuration file `datacenter.cfg` or for a specific migration
via API or command line parameters.
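
For example, a cluster-wide default can be set in `/etc/pve/datacenter.cfg`,
while a single migration can override it on the command line. The network,
VMID and target node below are only placeholders:

----
# /etc/pve/datacenter.cfg
migration: secure,network=10.1.2.0/24
----

[source,bash]
----
# override the migration type for one specific migration
qm migrate 100 hp2 --online --migration_type insecure
----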

It makes a difference if a guest is online or offline, or if it has local
resources (like a local disk).

For details about virtual machine migration, see the
xref:qm_migration[QEMU/KVM Migration Chapter].

For details about container migration, see the
xref:pct_migration[Container Migration Chapter].


Migration Type
~~~~~~~~~~~~~~