+Our modern society depends heavily on information provided by
+computers over the network. Mobile devices have amplified that
+dependency, because people can access the network any time and from
+anywhere. If you
+provide such services, it is very important that they are available
+most of the time.
+
+We can mathematically define availability as the ratio of (A) the
+total time a service is capable of being used during a given interval
+to (B) the length of the interval. It is normally expressed as a
+percentage of uptime in a given year.
+
+.Availability - Downtime per Year
+[width="60%",cols="<d,d",options="header"]
+|===========================================================
+|Availability % |Downtime per year
+|99 |3.65 days
+|99.9 |8.76 hours
+|99.99 |52.56 minutes
+|99.999 |5.26 minutes
+|99.9999 |31.5 seconds
+|99.99999 |3.15 seconds
+|===========================================================
+
+There are several ways to increase availability. The most elegant
+solution is to rewrite your software, so that you can run it on
+several hosts at the same time. The software itself needs a way to
+detect errors and perform failover. This is relatively easy if you
+just want to serve read-only web pages. But in general this is
+complex, and sometimes impossible because you cannot modify the
+software yourself. The following solutions work without modifying the
+software:
+
+* Use reliable ``server'' components
++
+NOTE: Computer components with the same functionality can have varying
+reliability numbers, depending on the component quality. Most vendors
+sell components with higher reliability as ``server'' components -
+usually at a higher price.
+
+* Eliminate single points of failure (redundant components)
+** use an uninterruptible power supply (UPS)
+** use redundant power supplies on the main boards
+** use ECC-RAM
+** use redundant network hardware
+** use RAID for local storage
+** use distributed, redundant storage for VM data
+
+* Reduce downtime
+** rapidly accessible administrators (24/7)
+** availability of spare parts (other nodes in a {pve} cluster)
+** automatic error detection (provided by `ha-manager`)
+** automatic failover (provided by `ha-manager`)
+
+Virtualization environments like {pve} make it much easier to reach
+high availability because they remove the ``hardware'' dependency. They
+also support the setup and use of redundant storage and network
+devices, so if one host fails, you can simply start those services on
+another host within your cluster.
+
+Even better, {pve} provides a software stack called `ha-manager`,
+which can do that automatically for you. It automatically detects
+errors and handles failover.
+
+{pve} `ha-manager` works like an ``automated'' administrator. First, you
+configure what resources (VMs, containers, ...) it should
+manage. `ha-manager` then observes correct functionality, and handles
+service failover to another node in case of errors. `ha-manager` can
+also handle normal user requests which may start, stop, relocate and
+migrate a service.
+
+But high availability comes at a price. High quality components are
+more expensive, and making them redundant at least doubles the
+costs. Additional spare parts increase costs further. So you should
+carefully calculate the benefits, and compare them with those
+additional costs.
+
+TIP: Increasing availability from 99% to 99.9% is relatively
+simple. But increasing availability from 99.9999% to 99.99999% is very
+hard and costly. `ha-manager` has typical error detection and failover
+times of about 2 minutes, so you can get no more than 99.999%
+availability.
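+
+As a rough sketch of that limit: if each failure costs about 2 minutes
+of downtime, the achievable availability for `n` failures per year can
+be estimated as follows (plain `awk`; `n = 2` is an assumed example
+value, not a measured figure):

```shell
# availability = 1 - (n failures * 2 minutes) / minutes per year
awk 'BEGIN { n = 2; total = 365*24*60; printf "%.4f%%\n", 100 * (1 - 2*n/total) }'
```

+Already a handful of such incidents per year uses up the 5.26 minutes
+of downtime that 99.999% availability allows.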
+
+
+Requirements
+------------
+
+You must meet the following requirements before you start with HA:
+
+* at least three cluster nodes (to get reliable quorum)
+
+* shared storage for VMs and containers
+
+* hardware redundancy (everywhere)
+
+* use reliable ``server'' components
+
+* hardware watchdog - if not available we fall back to the
+ linux kernel software watchdog (`softdog`)
+
+* optional hardware fencing devices
+
+
+[[ha_manager_resources]]
+Resources
+---------
+
+We call the primary management unit handled by `ha-manager` a
+resource. A resource (also called ``service'') is uniquely
+identified by a service ID (SID), which consists of the resource type
+and a type-specific ID, e.g.: `vm:100`. That example would be a
+resource of type `vm` (virtual machine) with the ID 100.
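+
+For illustration, an SID can be split into its two parts with plain
+shell parameter expansion (a generic sketch, not a `ha-manager`
+command):

```shell
# Split a service ID of the form <type>:<id>
sid="vm:100"
echo "type=${sid%%:*} id=${sid##*:}"
```

+This prints `type=vm id=100`.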
+
+For now we have two important resource types - virtual machines and
+containers. One basic idea here is that we can bundle related software
+into such a VM or container, so there is no need to compose one big
+service from other services, as was done with `rgmanager`. In
+general, an HA managed resource should not depend on other resources.
+
+
+Management Tasks
+----------------
+
+This section provides a short overview of common management tasks. The
+first step is to enable HA for a resource. This is done by adding the
+resource to the HA resource configuration. You can do this using the
+GUI, or simply use the command line tool, for example:
+
+----
+# ha-manager add vm:100
+----
+
+The HA stack now tries to start the resource and keep it
+running. Please note that you can configure the ``requested''
+resource state. For example, you may want the HA stack to stop the
+resource:
+
+----
+# ha-manager set vm:100 --state stopped
+----
+
+and start it again later:
+
+----
+# ha-manager set vm:100 --state started
+----
+
+You can also use the normal VM and container management commands. They
+automatically forward the commands to the HA stack, so
+
+----
+# qm start 100
+----
+
+simply sets the requested state to `started`. The same applies to
+`qm stop`, which sets the requested state to `stopped`.
+
+NOTE: The HA stack works fully asynchronously and needs to communicate
+with other cluster members. So it takes a few seconds until you see
+the result of such actions.
+
+To view the current HA resource configuration use:
+
+----
+# ha-manager config
+vm:100
+ state stopped
+----
+
+And you can view the actual HA manager and resource state with:
+
+----
+# ha-manager status
+quorum OK
+master node1 (active, Wed Nov 23 11:07:23 2016)
+lrm elsa (active, Wed Nov 23 11:07:19 2016)
+service vm:100 (node1, started)
+----
+
+You can also initiate resource migration to other nodes:
+
+----
+# ha-manager migrate vm:100 node2
+----
+
+This uses online migration and tries to keep the VM running. Online
+migration needs to transfer all used memory over the network, so it is
+sometimes faster to stop the VM and then restart it on the new
+node. This can be done using the `relocate` command:
+
+----
+# ha-manager relocate vm:100 node2
+----
+
+Finally, you can remove the resource from the HA configuration using
+the following command:
+
+----
+# ha-manager remove vm:100
+----
+
+NOTE: This does not start or stop the resource.
+
+All HA related tasks can also be done in the GUI, so there is no need
+to use the command line at all.
+