X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pvecm.adoc;h=a8f017c7b9a9035d1bf7e9e96dfc3f1b3eeef3c3;hp=ed468f54e2014d36773a6e98ca6c5d7c25a41800;hb=c8a14deac5e67ade2198e01334d503b820d790e0;hpb=b2f242abe4c50227f5610767e6fcaa40654c2b88

diff --git a/pvecm.adoc b/pvecm.adoc
index ed468f5..a8f017c 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -1,7 +1,6 @@
 ifdef::manvolnum[]
 pvecm(1)
 ========
-include::attributes.txt[]
 :pve-toplevel:

 NAME
@@ -21,11 +20,8 @@ endif::manvolnum[]
 ifndef::manvolnum[]
 Cluster Manager
 ===============
-include::attributes.txt[]
-endif::manvolnum[]
-ifdef::wiki[]
 :pve-toplevel:
-endif::wiki[]
+endif::manvolnum[]

 The {PVE} cluster manager `pvecm` is a tool to create a group of
 physical servers. Such a group is called a *cluster*. We use the
@@ -297,6 +293,7 @@ cluster again, you have to

 * then join it, as explained in the previous section.

+[[pvecm_separate_node_without_reinstall]]
 Separate A Node Without Reinstalling
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -667,8 +664,9 @@ RRP On A Created Cluster
 ~~~~~~~~~~~~~~~~~~~~~~~~

 When enabling an already running cluster to use RRP you will take similar steps
-as describe in <>. You just do it on another ring.
+as described in
+<>. You
+just do it on another ring.

 First add a new `interface` subsection in the `totem` section, set its
 `ringnumber` property to `1`. Set the interfaces `bindnetaddr` property to an
@@ -723,8 +721,8 @@ nodelist {
 ----


-Bring it in effect like described in the <> section.
+Bring it into effect as described in the
+<> section.

 This is a change which cannot take live in effect and needs at least a restart
 of corosync. Recommended is a restart of the whole cluster.
@@ -883,6 +881,117 @@ it is likely that some nodes boots faster than others. Please keep in
 mind that guest startup is delayed until you reach quorum.


+Guest Migration
+---------------
+
+Migrating virtual guests to other nodes is a useful feature in a
+cluster. There are settings to control the behavior of such
+migrations. This can be done cluster-wide via the configuration file
+`datacenter.cfg`, or for a specific migration via API or command line
+parameters.
+
+
+Migration Type
+~~~~~~~~~~~~~~
+
+The migration type defines whether the migration data should be sent
+over an encrypted (`secure`) channel or an unencrypted (`insecure`)
+one. Setting the migration type to `insecure` means that the RAM
+content of a virtual guest is also transferred unencrypted, which can
+lead to information disclosure of critical data from inside the guest
+(for example, passwords or encryption keys).
+
+Therefore, we strongly recommend using the secure channel if you do
+not have full control over the network and cannot guarantee that
+nobody is eavesdropping on it.
+
+NOTE: Storage migration does not follow this setting. Currently, it
+always sends the storage content over a secure channel.
+
+Encryption requires a lot of computing power, so this setting is often
+changed to `insecure` to achieve better performance. The impact on
+modern systems is lower because they implement AES encryption in
+hardware. The performance impact is particularly evident in fast
+networks where you can transfer 10 Gbps or more.
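+
+For example, to make the encrypted channel the cluster-wide default on
+its own, the `migration` property in `/etc/pve/datacenter.cfg` can
+carry just the type. This is a minimal sketch; the combined form with a
+dedicated network is shown in the example further below:
+
+----
+# always send migration traffic over the encrypted channel
+migration: secure
+----
+
+A single migration can override this on the command line, for example
+via a `--migration_type insecure` parameter to `qm migrate`, assuming
+your `qm` version provides that option.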
+
+
+Migration Network
+~~~~~~~~~~~~~~~~~
+
+By default, {pve} uses the network in which cluster communication
+takes place to send the migration traffic. This is not optimal,
+because sensitive cluster traffic can be disrupted and this network
+may not have the best bandwidth available on the node.
+
+Setting the migration network parameter allows the use of a dedicated
+network for all migration traffic. In addition to the memory, this
+also affects the storage traffic for offline migrations.
+
+The migration network is set as a network using CIDR notation. This
+has the advantage that you do not have to set individual IP addresses
+for each node. {pve} can determine the real address on the
+destination node from the network specified in the CIDR form. To
+enable this, the network must be specified so that each node has
+exactly one IP in the respective network.
+
+
+Example
+^^^^^^^
+
+We assume that we have a three-node setup with three separate
+networks: one for public communication with the Internet, one for
+cluster communication, and a very fast one, which we want to use as a
+dedicated network for migration.
+
+A network configuration for such a setup might look as follows:
+
+----
+iface eth0 inet manual
+
+# public network
+auto vmbr0
+iface vmbr0 inet static
+        address 192.X.Y.57
+        netmask 255.255.240.0
+        gateway 192.X.Y.1
+        bridge_ports eth0
+        bridge_stp off
+        bridge_fd 0
+
+# cluster network
+auto eth1
+iface eth1 inet static
+        address 10.1.1.1
+        netmask 255.255.255.0
+
+# fast network
+auto eth2
+iface eth2 inet static
+        address 10.1.2.1
+        netmask 255.255.255.0
+----
+
+Here, we will use the network 10.1.2.0/24 as the migration network.
+For a single migration, you can set this using the
+`migration_network` parameter of the command line tool:
+
+----
+# qm migrate 106 tre --online --migration_network 10.1.2.0/24
+----
+
+To configure this as the default network for all migrations in the
+cluster, set the `migration` property of the `/etc/pve/datacenter.cfg`
+file:
+
+----
+# use dedicated migration network
+migration: secure,network=10.1.2.0/24
+----
+
+NOTE: The migration type must always be set when the migration network
+is set in `/etc/pve/datacenter.cfg`.
+
+
 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
 endif::manvolnum[]