X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pvecm.adoc;h=491b2ac9cb25a080a651ce85da94a04c22dade12;hp=0924c756e9963e5120fa2c6050a67c17dd2ad6c3;hb=da6c7dee9c59f7ccaa746a5bc644fc0a4c8c94c1;hpb=b174347352500151ab61ef4c3768ba859cf4057d diff --git a/pvecm.adoc b/pvecm.adoc index 0924c75..491b2ac 100644 --- a/pvecm.adoc +++ b/pvecm.adoc @@ -193,42 +193,10 @@ not be what you want or need. Move all virtual machines from the node. Make sure you have no local data or backups you want to keep, or save them accordingly. +In the following example we will remove the node hp4 from the cluster. -Log in to one remaining node via ssh. Issue a `pvecm nodes` command to -identify the node ID: - ----- -hp1# pvecm status - -Quorum information -~~~~~~~~~~~~~~~~~~ -Date: Mon Apr 20 12:30:13 2015 -Quorum provider: corosync_votequorum -Nodes: 4 -Node ID: 0x00000001 -Ring ID: 1928 -Quorate: Yes - -Votequorum information -~~~~~~~~~~~~~~~~~~~~~~ -Expected votes: 4 -Highest expected: 4 -Total votes: 4 -Quorum: 2 -Flags: Quorate - -Membership information -~~~~~~~~~~~~~~~~~~~~~~ - Nodeid Votes Name -0x00000001 1 192.168.15.91 (local) -0x00000002 1 192.168.15.92 -0x00000003 1 192.168.15.93 -0x00000004 1 192.168.15.94 ----- - -IMPORTANT: at this point you must power off the node to be removed and -make sure that it will not power on again (in the network) as it -is. +Log in to a *different* cluster node (not hp4), and issue a `pvecm nodes` +command to identify the node ID to remove: ---- hp1# pvecm nodes @@ -242,8 +210,18 @@ Membership information 4 1 hp4 ---- -Log in to one remaining node via ssh. Issue the delete command (here -deleting node `hp4`): + +At this point you must power off hp4 and +make sure that it will not power on again (in the network) as it +is. + +IMPORTANT: As said above, it is critical to power off the node +*before* removal, and make sure that it will *never* power on again +(in the existing cluster network) as it is. 
+If you power on the node as it is, your cluster configuration will be
+corrupted, and it can be difficult to restore a clean cluster state.
+
+After powering off the node hp4, we can safely remove it from the cluster.

 hp1# pvecm delnode hp4

@@ -279,13 +257,6 @@ Membership information
 0x00000003          1 192.168.15.92
 ----

-IMPORTANT: as said above, it is very important to power off the node
-*before* removal, and make sure that it will *never* power on again
-(in the existing cluster network) as it is.
-
-If you power on the node as it is, your cluster will be screwed up and
-it could be difficult to restore a clean cluster state.
-
 If, for whatever reason, you want this server to
 join the same cluster again, you have to

@@ -304,7 +275,8 @@ You can also separate a node from a cluster without reinstalling it from
 scratch.  But after removing the node from the cluster it will still have
 access to the shared storages! This must be resolved before you start removing
 the node from the cluster. A {pve} cluster cannot share the exact same
-storage with another cluster, as it leads to VMID conflicts.
+storage with another cluster, as storage locking doesn't work over the
+cluster boundary. Further, it may also lead to VMID conflicts.

 It's suggested that you create a new storage where only the node which you want
 to separate has access. This can be a new export on your NFS or a new Ceph

@@ -427,6 +399,7 @@ for that purpose.

 * Ensure that multicast works in general and at high packet rates. This can be
   done with the `omping` tool. The final "%loss" number should be < 1%.
++
 [source,bash]
 ----
 omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
 ----

 * Ensure that multicast communication works over an extended period of time.
   This uncovers problems where IGMP snooping is activated on the network but
   no multicast querier is active. This test has a duration of around 10
   minutes.
++
 [source,bash]
 ----
 omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
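As a side note, checking the final "%loss" figure against the 1% threshold can be automated. The sketch below extracts the loss percentage from an omping-style summary line; the sample line imitates omping's output format and is an assumption, so verify it against the output of your omping version before relying on the parsing.

```shell
# Hypothetical post-processing of an omping run: pull the %loss value out
# of a summary line and compare it against the 1% threshold named above.
# The sample line mimics omping's summary format (an assumption).
sample='192.168.15.92 : multicast, xmt/rcv/%loss = 10000/9998/0.020%, min/avg/max/std-dev = 0.080/0.158/1.355/0.095'

# Capture the number between the second "/" and the "%" in the %loss field.
loss=$(printf '%s\n' "$sample" | sed -n 's|.*%loss = [0-9]*/[0-9]*/\([0-9.]*\)%.*|\1|p')
echo "loss=$loss"

# awk handles the floating-point comparison that plain test(1) cannot.
if awk -v l="$loss" 'BEGIN { exit !(l < 1) }'; then
  echo "multicast OK"
else
  echo "multicast loss too high"
fi
```

In a real check you would feed the captured output of the omping commands above into the same pipeline, once per peer address.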
@@ -660,13 +634,13 @@ pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
 -bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
 ----

-RRP On A Created Cluster
+RRP On Existing Clusters
 ~~~~~~~~~~~~~~~~~~~~~~~~

-When enabling an already running cluster to use RRP you will take similar steps
-as describe in
-<>. You
-just do it on another ring.
+You will take similar steps as described in
+<> to
+enable RRP on an already running cluster. The single difference is that you
+will add `ring1` and use it instead of `ring0`.

 First add a new `interface` subsection in the `totem` section, set its
 `ringnumber` property to `1`. Set the interfaces `bindnetaddr` property to an

@@ -890,6 +864,14 @@ migrations. This can be done via the configuration file
 `datacenter.cfg` or for a specific migration via API or command line
 parameters.

+It makes a difference whether a guest is online or offline, and whether
+it has local resources (like a local disk).
+
+For details about virtual machine migration, see the
+xref:qm_migration[QEMU/KVM Migration Chapter].
+
+For details about container migration, see the
+xref:pct_migration[Container Migration Chapter].

 Migration Type
 ~~~~~~~~~~~~~~

@@ -918,29 +900,32 @@ networks where you can transfer 10 Gbps or more.

 Migration Network
 ~~~~~~~~~~~~~~~~~

-By default {pve} uses the network where the cluster communication happens
-for sending the migration traffic. This is may be suboptimal, for one the
-sensible cluster traffic can be disturbed and on the other hand it may not
-have the best bandwidth available from all network interfaces on the node.
+By default, {pve} uses the network in which cluster communication
+takes place to send the migration traffic. This is not optimal because
+sensitive cluster traffic can be disrupted and this network may not
+have the best bandwidth available on the node.

-Setting the migration network parameter allows using a dedicated network for
-sending all the migration traffic when migrating a guest system.
This
-includes the traffic for offline storage migrations.
+Setting the migration network parameter allows the use of a dedicated
+network for all migration traffic. In addition to the memory transfer,
+this also affects the storage traffic for offline migrations.
+
+The migration network is set as a network using CIDR notation. This
+has the advantage that you do not have to set individual IP addresses
+for each node. {pve} can determine the real address on the
+destination node from the network specified in CIDR form. To
+enable this, the network must be specified so that each node has
+exactly one IP in the respective network.

-The migration network is represented as a network in 'CIDR' notation. This
-has the advantage that you do not need to set a IP for each node, {pve} is
-able to figure out the real address from the given CIDR denoted network and
-the networks configured on the target node.
-To let this work the network must be specific enough, i.e. each node must
-have one and only one IP configured in the given network.

 Example
 ^^^^^^^

-Lets assume that we have a three node setup with three networks, one for the
-public communication with the Internet, one for the cluster communication
-and one very fast one, which we want to use as an dedicated migration
-network. A network configuration for such a setup could look like:
+We assume that we have a three-node setup with three separate
+networks: one for public communication with the Internet, one for
+cluster communication, and a very fast one, which we want to use as a
+dedicated migration network.
+
+A network configuration for such a setup might look as follows:

 ----
 iface eth0 inet manual

@@ -966,25 +951,28 @@ auto eth2
 iface eth2 inet static
         address  10.1.2.1
         netmask  255.255.255.0
-
-# [...]
 ----

-Here we want to use the 10.1.2.0/24 network as migration network.
-For a single migration you can achieve this by using the 'migration_network' -parameter: +Here, we will use the network 10.1.2.0/24 as a migration network. For +a single migration, you can do this using the `migration_network` +parameter of the command line tool: + ---- # qm migrate 106 tre --online --migration_network 10.1.2.0/24 ---- -To set this up as default network for all migrations cluster wide you can use -the migration property in '/etc/pve/datacenter.cfg': +To configure this as the default network for all migrations in the +cluster, set the `migration` property of the `/etc/pve/datacenter.cfg` +file: + ---- -# [...] +# use dedicated migration network migration: secure,network=10.1.2.0/24 ---- -Note that the migration type must be always set if the network gets set. +NOTE: The migration type must always be set when the migration network +gets set in `/etc/pve/datacenter.cfg`. + ifdef::manvolnum[] include::pve-copyright.adoc[]
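To illustrate how a CIDR-denoted migration network resolves to a concrete address on each node, here is a small standalone sketch. The 10.1.2.0/24 network and the addresses 10.1.1.1 and 10.1.2.1 come from the example configuration above; the public address 203.0.113.10 and the helper functions are purely illustrative assumptions, not how {pve} is implemented internally.

```shell
# Illustrative sketch (not pvecm's actual implementation): given a node's
# addresses, pick the single one that falls inside the migration network
# specified in CIDR form.
ip_to_int() {
  # Convert a dotted-quad IPv4 address to a 32-bit integer.
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_cidr() {  # usage: in_cidr ADDRESS NETWORK PREFIXLEN
  local a n m
  a=$(ip_to_int "$1")
  n=$(ip_to_int "$2")
  # Build the netmask for the given prefix length.
  m=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( a & m )) -eq $(( n & m )) ]
}

# 203.0.113.10 stands in for the node's public address (hypothetical);
# 10.1.1.1 and 10.1.2.1 are the cluster and migration addresses above.
for addr in 203.0.113.10 10.1.1.1 10.1.2.1; do
  if in_cidr "$addr" 10.1.2.0 24; then
    echo "migration address: $addr"
  fi
done
```

Because each node has exactly one address inside 10.1.2.0/24, the selection is unambiguous, which is why the text above requires one, and only one, IP per node in the migration network.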