How It Works
------------
-This section provides an in detail description of the {PVE} HA-manager
-internals. It describes how the CRM and the LRM work together.
-
-To provide High Availability two daemons run on each node:
+This section provides a detailed description of the {PVE} HA manager
+internals. It describes all involved daemons and how they work
+together. To provide HA, two daemons run on each node:
`pve-ha-lrm`::
This all gets supervised by the CRM, which currently holds the manager
master lock.
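+
+To quickly check that both daemons are up and running on a node, you
+can query their systemd units (a sketch, assuming units named after
+the daemons, `pve-ha-lrm` and `pve-ha-crm`):
+
+----
+# systemctl status pve-ha-lrm pve-ha-crm
+----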
+
+Service States
+~~~~~~~~~~~~~~
+
+The CRM uses a service state enumeration to record the current
+service state. We display this state on the GUI and you can query it
+using the `ha-manager` command line tool:
+
+----
+# ha-manager status
+quorum OK
+master elsa (active, Mon Nov 21 07:23:29 2016)
+lrm elsa (active, Mon Nov 21 07:23:22 2016)
+service ct:100 (elsa, stopped)
+service ct:102 (elsa, started)
+service vm:501 (elsa, started)
+----
+
+Here is the list of possible states (a usage example follows the
+list):
+
+stopped::
+
+Service is stopped (confirmed by LRM). If the LRM detects a stopped
+service is still running, it will stop it again.
+
+request_stop::
+
+Service should be stopped. The CRM waits for confirmation from the
+LRM.
+
+started::
+
+Service is active, and the LRM should start it ASAP if not already
+running. If the service fails and is detected to be not running, the
+LRM restarts it
+(see xref:ha_manager_start_failure_policy[Start Failure Policy]).
+
+fence::
+
+Wait for node fencing (the service's node is not inside the quorate
+cluster partition). As soon as the node gets fenced successfully, the
+service will be recovered to another node, if possible
+(see xref:ha_manager_fencing[Fencing]).
+
+freeze::
+
+Do not touch the service state. We use this state while we reboot a
+node, or when we restart the LRM daemon
+(see xref:ha_manager_package_updates[Package Updates]).
+
+migrate::
+
+Migrate the service (live) to another node.
+
+error::
+
+Service is disabled because of LRM errors. Needs manual intervention
+(see xref:ha_manager_error_recovery[Error Recovery]).
+
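+You normally request these states through the CLI. The following
+sketch uses the `vm:501` service from the example above (the set of
+accepted `--state` values can differ between versions). Requesting
+`stopped` moves the service through `request_stop` until the LRM
+confirms the stop:
+
+----
+# ha-manager set vm:501 --state stopped
+# ha-manager status
+----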
+
Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop the LRM while it still has active
services.
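+
+For example, once the node has no active HA services left, the
+daemons could be stopped like this (a sketch, assuming the standard
+systemd unit names):
+
+----
+# systemctl stop pve-ha-lrm
+# systemctl stop pve-ha-crm
+----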
+
+[[ha_manager_package_updates]]
Package Updates
---------------
Fencing
-------
-What is Fencing
-~~~~~~~~~~~~~~~
+On node failures, fencing ensures that the failed node is guaranteed
+to be offline. This is required to make sure that no resource runs
+twice when it gets recovered on another node. This is a really
+important task, because without fencing it would not be safe to
+recover a resource on another node.
-Fencing secures that on a node failure the dangerous node gets will be rendered
-unable to do any damage and that no resource runs twice when it gets recovered
-from the failed node. This is a really important task and one of the base
-principles to make a system Highly Available.
+If a node were not fenced, it would be in an unknown state where it
+may still have access to shared resources. This is really
+dangerous! Imagine that every network but the storage one broke. Now,
+while not reachable from the public network, the VM still runs and
+writes to the shared storage.
-If a node would not get fenced it would be in an unknown state where it may
-have still access to shared resources, this is really dangerous!
-Imagine that every network but the storage one broke, now while not
-reachable from the public network the VM still runs and writes on the shared
-storage. If we would not fence the node and just start up this VM on another
-Node we would get dangerous race conditions, atomicity violations the whole VM
-could be rendered unusable. The recovery could also simply fail if the storage
-protects from multiple mounts and thus defeat the purpose of HA.
+If we then simply start up this VM on another node, we would get a
+dangerous race condition, because both nodes write to the same data.
+Such a condition can destroy all VM data, and the whole VM could be
+rendered unusable. The recovery could also fail if the storage
+protects against multiple mounts.
-How {pve} Fences
-~~~~~~~~~~~~~~~~~
-There are different methods to fence a node, for example fence devices which
-cut off the power from the node or disable their communication completely.
-
-Those are often quite expensive and bring additional critical components in
-a system, because if they fail you cannot recover any service.
-
-We thus wanted to integrate a simpler method in the HA Manager first, namely
-self fencing with watchdogs.
-
-Watchdogs are widely used in critical and dependable systems since the
-beginning of micro controllers, they are often independent and simple
-integrated circuit which programs can use to watch them. After opening they need to
-report periodically. If, for whatever reason, a program becomes unable to do
-so the watchdogs triggers a reset of the whole server.
-
-Server motherboards often already include such hardware watchdogs, these need
-to be configured. If no watchdog is available or configured we fall back to the
-Linux Kernel softdog while still reliable it is not independent of the servers
-Hardware and thus has a lower reliability then a hardware watchdog.
+How {pve} Fences
+~~~~~~~~~~~~~~~~
+
+There are different methods to fence a node, for example, fence
+devices which cut off the power from the node or disable its
+communication completely. Those are often quite expensive and bring
+additional critical components into a system, because if they fail you
+cannot recover any service.
+
+We thus wanted to integrate a simpler fencing method, which does not
+require additional external hardware. This can be done using
+watchdog timers.
+
+.Possible Fencing Methods
+- external power switches
+- isolate nodes by disabling complete network traffic on the switch
+- self fencing using watchdog timers
+
+Watchdog timers are widely used in critical and dependable systems
+since the beginning of micro controllers. They are often independent
+and simple integrated circuits which are used to detect and recover
+from computer malfunctions.
+
+During normal operation, `ha-manager` regularly resets the watchdog
+timer to prevent it from elapsing. If, due to a hardware fault or
+program error, the computer fails to reset the watchdog, the timer
+will elapse and trigger a reset of the whole server (reboot).
+
+Recent server motherboards often include such hardware watchdogs, but
+these need to be configured. If no watchdog is available or
+configured, we fall back to the Linux Kernel 'softdog'. While still
+reliable, it is not independent of the server's hardware, and thus has
+a lower reliability than a hardware watchdog.
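+
+As a quick check on a test machine, you can, for example, load the
+'softdog' module manually and verify that the watchdog device node
+appears:
+
+----
+# modprobe softdog
+# ls -l /dev/watchdog
+----
+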
Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~
cluster.
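+
+For example, to use the Intel TCO watchdog instead of the 'softdog'
+fallback, the kernel module to load can be set in
+'/etc/default/pve-ha-manager' (the right module depends on your
+hardware; `iTCO_wdt` is shown here only as an illustration):
+
+----
+WATCHDOG_MODULE=iTCO_wdt
+----
+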
+[[ha_manager_start_failure_policy]]
Start Failure Policy
---------------------
re-enabled without fixing the error only the restart policy gets
repeated.
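+
+The policy parameters are set per service. As a sketch, using the
+`vm:501` service from above with the `max_restart` and `max_relocate`
+resource options (try two local restarts, then one relocation):
+
+----
+# ha-manager set vm:501 --max_restart 2 --max_relocate 1
+----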
+
+[[ha_manager_error_recovery]]
Error Recovery
--------------
service state (enabled, disabled).
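+
+For example, after the underlying problem has been fixed, recovering
+a service from the `error` state could look like this sketch (the
+exact `--state` names depend on your version):
+
+----
+# ha-manager set vm:501 --state disabled
+# ha-manager set vm:501 --state started
+----
+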
-Service States
---------------
-
-stopped::
-
-Service is stopped (confirmed by LRM), if detected running it will get stopped
-again.
-
-request_stop::
-
-Service should be stopped. Waiting for confirmation from LRM.
-
-started::
-
-Service is active an LRM should start it ASAP if not already running.
-If the Service fails and is detected to be not running the LRM restarts it.
-
-fence::
-
-Wait for node fencing (service node is not inside quorate cluster
-partition).
-As soon as node gets fenced successfully the service will be recovered to
-another node, if possible.
-
-freeze::
-
-Do not touch the service state. We use this state while we reboot a
-node, or when we restart the LRM daemon.
-
-migrate::
-
-Migrate service (live) to other node.
-
-error::
-
-Service disabled because of LRM errors. Needs manual intervention.
-
-
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]