scratch. But even after removal, the node will still have access to the shared
storages! This must be resolved before you start removing
the node from the cluster. A {pve} cluster cannot share the exact same
-storage with another cluster, as it leads to VMID conflicts.
+storage with another cluster, as storage locking doesn't work across the
+cluster boundary. Furthermore, it may also lead to VMID conflicts.
It is suggested that you create a new storage where only the node which you
want to separate has access. This can be a new export on your NFS or a new Ceph
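For example, such a node-restricted storage could be added like this (only a
sketch; the storage name, server address, export path and node name are
placeholders):

[source,bash]
----
# add an NFS storage that is only available on the node to be separated
pvesm add nfs separate-storage --server 192.168.1.10 \
  --export /export/separate --content images,rootdir --nodes nodename
----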
* Ensure that multicast works in general and with high packet rates. This can be
done with the `omping` tool. The final "%loss" number should be < 1%.
++
[source,bash]
----
omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
----
* Ensure that multicast communication works over an extended period of time.
- This covers up problems where IGMP snooping is activated on the network but
+ This uncovers problems where IGMP snooping is activated on the network but
no multicast querier is active. This test has a duration of around 10
minutes.
++
[source,bash]
----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----
hostnames ensure that they are resolvable from all nodes.
In my example I want to switch my cluster communication to the 10.10.10.1/25
-network. So I replace all 'ring0_addr' respectively. I also set the bindetaddr
+network. So I replace all 'ring0_addr' entries accordingly. I also set the bindnetaddr
in the totem section of the config to an address of the new network. It can be
any address from the subnet configured on the new network interface.
-bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
----
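To illustrate the 'ring0_addr'/'bindnetaddr' change described above, the edited
entries in `corosync.conf` could then look like this (only a sketch; node names
and IDs are placeholders, the relevant parts are the addresses from the new
10.10.10.1/25 network):

----
totem {
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  ...
}

nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }
  ...
}
----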
-RRP On A Created Cluster
+RRP On Existing Clusters
~~~~~~~~~~~~~~~~~~~~~~~~
-When enabling an already running cluster to use RRP you will take similar steps
-as describe in
-<<separate-cluster-net-after-creation,separating the cluster network>>. You
-just do it on another ring.
+You will take similar steps as described in
+<<separate-cluster-net-after-creation,separating the cluster network>> to
+enable RRP on an already running cluster. The only difference is that you
+will add `ring1` and use it instead of `ring0`.
First, add a new `interface` subsection in the `totem` section and set its
`ringnumber` property to `1`. Set the interface's `bindnetaddr` property to an
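A sketch of how the `totem` section could then look, assuming the second ring
should run on a 10.10.20.0/24 network (all addresses are placeholders; note
that RRP also needs an `rrp_mode`, usually `passive`):

----
totem {
  interface {
    bindnetaddr: 10.10.10.1
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.10.20.1
    ringnumber: 1
  }
  rrp_mode: passive
  ...
}
----

Each node entry in the `nodelist` section then additionally needs a matching
`ring1_addr` property with that node's address on the second network.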
Corosync Configuration
----------------------
-The `/ect/pve/corosync.conf` file plays a central role in {pve} cluster. It
+The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
controls the cluster membership and its network.
For more information about it, check the corosync.conf man page:
[source,bash]
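----
man corosync.conf
----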
(``UPS'', also called ``battery backup'') to avoid this state, especially if
you want HA.
-On node startup, service `pve-manager` is started and waits for
+On node startup, the `pve-guests` service is started and waits for
quorum. Once quorate, it starts all guests which have the `onboot`
flag set.
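For example, the `onboot` flag can be set on the command line (the VMID `100`
and CTID `101` below are just placeholders):

[source,bash]
----
# start this virtual machine automatically once the node is quorate after boot
qm set 100 --onboot 1
# the same for a container
pct set 101 --onboot 1
----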
`datacenter.cfg` or for a specific migration via API or command line
parameters.
+It makes a difference whether a guest is online or offline, or if it has
+local resources (like a local disk).
+
+For details about virtual machine migration, see the
+xref:qm_migration[QEMU/KVM Migration Chapter].
+
+For details about container migration, see the
+xref:pct_migration[Container Migration Chapter].
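As a quick sketch, a specific migration can also be triggered on the command
line (`100`, `101` and `targetnode` are placeholders):

[source,bash]
----
# live-migrate a running virtual machine to another cluster node
qm migrate 100 targetnode --online
# migrate a container
pct migrate 101 targetnode
----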
Migration Type
~~~~~~~~~~~~~~
The migration type defines whether the migration data should be sent over an
encrypted (`secure`) channel or an unencrypted (`insecure`) one.
Setting the migration type to insecure means that the RAM content of a
-virtual guest gets also transfered unencrypted, which can lead to
+virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example passwords or encryption keys).
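For example, a cluster-wide default can be set in `/etc/pve/datacenter.cfg`
(only a sketch):

----
migration: secure
----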
A network configuration for such a setup might look as follows:
----
-iface eth0 inet manual
+iface eno1 inet manual
# public network
auto vmbr0
iface vmbr0 inet static
address 192.X.Y.57
netmask 255.255.240.0
gateway 192.X.Y.1
- bridge_ports eth0
+ bridge_ports eno1
bridge_stp off
bridge_fd 0
# cluster network
-auto eth1
-iface eth1 inet static
+auto eno2
+iface eno2 inet static
address 10.1.1.1
netmask 255.255.255.0
# fast network
-auto eth2
-iface eth2 inet static
+auto eno3
+iface eno3 inet static
address 10.1.2.1
netmask 255.255.255.0
----
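With such a setup, the 'fast network' could then be used as a dedicated
migration network, for example with the following `/etc/pve/datacenter.cfg`
entry (a sketch matching the 10.1.2.0/24 subnet from the example above):

----
migration: secure,network=10.1.2.0/24
----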