For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, an HA managed resource should not depend on other resources.
The HA stack now tries to start the resources and keep them
running. Please note that you can configure the ``requested''
resource state. For example, you may want the HA stack to stop the
resource:
----
# ha-manager set vm:100 --state stopped
----

Running `qm stop` on an HA managed VM likewise just sets the requested
state to `stopped`.
NOTE: The HA stack works fully asynchronously and needs to communicate
with other cluster members, so it takes a few seconds until you see
the result of such actions.
To view the current HA resource configuration use:
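----
# ha-manager config
----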
NOTE: This does not start or stop the resource.
But all HA related tasks can be done in the GUI, so there is no need to
use the command line at all.
.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active once and working. As an
LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This lets us then recover any failed
HA services securely without any interference from the now unknown failed node.
This all gets supervised by the CRM, which currently holds the manager master
lock (see xref:ha_manager_package_updates[Package Updates] for what happens
when we restart the LRM daemon).
ignored::
Act as if the service were not managed by HA at all.
Useful when full control over the service is temporarily desired,
without removing it from the HA configuration.

migrate::
Migrate the service (live) to another node.
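For example, such a migration can be requested with the `ha-manager`
command line tool; a sketch, where `vm:100` and `node2` are placeholders
for your service ID and target node:

----
# ha-manager migrate vm:100 node2
----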
After the LRM enters the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
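The manager status file is plain JSON, so if you are curious you can take
a quick, read-only look at it directly:

----
# cat /etc/pve/ha/manager_status
----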
For each command a worker gets started; these workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved for
the CRM. The default of at most 4 concurrent workers may be unsuited for a
specific setup. For example, 4 live migrations may happen at the same time,
which can lead to network congestion with slower networks and/or big
(memory wise) services. Ensure that even in the worst case no congestion
happens, and lower the `max_worker` value if needed. Conversely, if you
have a particularly powerful, high-end setup you may also want to increase it.
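For example, to allow only two parallel workers, you could set the
respective key in `/etc/pve/datacenter.cfg` (a minimal sketch; check
`man datacenter.cfg` for the exact option name on your release):

----
max_workers: 2
----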
Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine act on the command's outcome.
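Like the manager status, the LRM status file is plain JSON and can be
inspected directly, for example on the local node (read-only; `$(hostname)`
assumes the node name matches the short hostname):

----
# cat /etc/pve/nodes/$(hostname)/lrm_status
----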
The actions on each service between CRM and LRM are always synced during
normal operation.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The exceptions are the `stop` and the `error` command; these two do not
depend on the result produced and are always executed in the case of the
stopped state, and once in the case of the error state.
The CRM lost its lock; this means a failure happened and quorum was lost.
Its main task is to manage the services which are configured to be highly
available and to try to always enforce the requested state. For example, a
service with the requested state 'started' will be started if it's not
already running. If it crashes, it will be automatically started again.
Thus the CRM dictates the actions the LRM needs to execute.
When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.
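To get an overview of the current CRM master, the LRM states and all
managed services, you can use:

----
# ha-manager status
----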
Resources
~~~~~~~~~
-[thumbnail="gui-ha-manager-status.png"]
+[thumbnail="screenshot/gui-ha-manager-status.png"]
The resource configuration file `/etc/pve/ha/resources.cfg` stores
the list of resources managed by `ha-manager`. A resource configuration
inside that list looks like this:
----
<type>: <name>
        <property> <value>
        ...
----
include::ha-resources-opts.adoc[]
Here is a real world example with one VM and one container. As you see,
the syntax of those files is really simple, so it is even possible to
read or edit those files using your favorite editor:
.Configuration Example (`/etc/pve/ha/resources.cfg`)
----
vm: 501
    state started
    max_relocate 2

ct: 102
    # Note: use default settings for everything
----
-[thumbnail="gui-ha-manager-add-resource.png"]
+[thumbnail="screenshot/gui-ha-manager-add-resource.png"]
The above config was generated using the `ha-manager` command line tool:
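----
# ha-manager add vm:501 --state started --max_relocate 2
# ha-manager add ct:102
----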
Groups
~~~~~~
-[thumbnail="gui-ha-manager-groups-view.png"]
+[thumbnail="screenshot/gui-ha-manager-groups-view.png"]
The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group.
include::ha-groups-opts.adoc[]
-[thumbnail="gui-ha-manager-add-group.png"]
+[thumbnail="screenshot/gui-ha-manager-add-group.png"]
A common requirement is that a resource should run on a specific
node. Usually the resource is able to run on other nodes, so you can define
an unrestricted group with a single member:
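----
# ha-manager groupadd prefer_node1 --nodes node1
----

The group name `prefer_node1` is just an example, of course.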
For bigger clusters, it makes sense to define a more detailed failover
behavior. For example, you may want to run a set of services on
`node1` if possible. If `node1` is not available, you want to run them
equally split on `node2` and `node3`. If those nodes also fail, the
services should run on `node4`. To achieve this you could set the node
list to:
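----
# ha-manager groupadd mygroup1 --nodes "node1:2,node2:1,node3:1,node4"
----

Here, a higher number means a higher priority: `node1` is preferred, the
services get split equally between `node2` and `node3` if it fails, and
`node4`, having no explicit priority, is used last. The group name
`mygroup1` is again just an example.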
The `nofailback` option is mostly useful to avoid unwanted resource
movements during administration tasks. For example, if you need to
migrate a service to a node which doesn't have the highest priority in the
group, you need to tell the HA manager not to move this service
instantly back by setting the `nofailback` option.
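For example, assuming the group from above, you could set the option like
this (a sketch using the `ha-manager` tool):

----
# ha-manager groupset mygroup1 --nofailback 1
----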
really important task, because without it, it would not be possible to
recover a resource on another node.
If a node did not get fenced, it would be in an unknown state, where
it may still have access to shared resources. This is really
dangerous! Imagine that every network but the storage one broke. Now,
while not reachable from the public network, the VM still runs and
writes to the shared storage.

Start Failure Policy
---------------------
The start failure policy comes into effect if a service failed to start on a
node one or more times. It can be used to configure how often a restart
should be triggered on the same node and how often a service should be
relocated, so that it has a chance to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
storage on a specific node.
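The policy is configured per resource, through the `max_restart` and
`max_relocate` options. For example, to try two restarts on the same node
before a single relocation attempt (`vm:100` is again a placeholder):

----
# ha-manager set vm:100 --max_restart 2 --max_relocate 1
----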
the LRM makes a request to the CRM to freeze all its services. This prevents
them from being touched by the cluster during the short time the LRM is
restarting. After that, the LRM may safely close the watchdog during a
restart.
Such a restart happens normally during a package update and, as already stated,
an active master CRM is needed to acknowledge the requests from the LRM. If
this is not the case the update process can take too long which, in the worst
case, may result in a reset triggered by the watchdog.
Node Maintenance
----------------
The LRM tells the CRM that it wants to restart, and waits until the
CRM puts all resources into the `freeze` state (same mechanism is used
for xref:ha_manager_package_updates[Package Updates]). This prevents
those resources from being moved to other nodes. Instead, the CRM starts
the resources on the same node after the reboot.