-[[chapter-ha-manager]]
+[[chapter_ha_manager]]
ifdef::manvolnum[]
-PVE({manvolnum})
-================
-include::attributes.txt[]
+ha-manager(1)
+=============
+:pve-toplevel:
NAME
----
ha-manager - Proxmox VE HA Manager
-SYNOPSYS
+SYNOPSIS
--------
include::ha-manager.1-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
-
ifndef::manvolnum[]
High Availability
=================
-include::attributes.txt[]
+:pve-toplevel:
endif::manvolnum[]
-
Our modern society depends heavily on information provided by
computers over the network. Mobile devices amplified that dependency,
because people can access the network any time from anywhere. If you
software:
* Use reliable ``server'' components
-
++
-NOTE: Computer components with same functionality can have varying
+NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
times of about 2 minutes, so you can get no more than 99.999%
availability.
+
Requirements
------------
+You must meet the following requirements before you start with HA:
+
* at least three cluster nodes (to get reliable quorum)
* shared storage for VMs and containers
* hardware redundancy (everywhere)
+* use reliable ``server'' components
+
* hardware watchdog - if not available we fall back to the
-  linux kernel software watchdog (`softdog`)
+  Linux kernel software watchdog (`softdog`)
* optional hardware fencing devices
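+
+For example, you can verify the quorum requirement from this list on
+any cluster node with the standard `pvecm` tool (the exact output
+depends on your cluster):
+
+[source,bash]
+ pvecm status
+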
+[[ha_manager_resources]]
Resources
---------
`pve-ha-lrm`::
-The local resource manager (LRM), it controls the services running on
-the local node.
-It reads the requested states for its services from the current manager
-status file and executes the respective commands.
+The local resource manager (LRM), which controls the services running on
+the local node. It reads the requested states for its services from
+the current manager status file and executes the respective commands.
`pve-ha-crm`::
-The cluster resource manager (CRM), it controls the cluster wide
-actions of the services, processes the LRM results and includes the state
-machine which controls the state of each service.
+The cluster resource manager (CRM), which makes the cluster wide
+decisions. It sends commands to the LRM, processes the results,
+and moves resources to other nodes if something fails. The CRM also
+handles node fencing.
+
.Locks in the LRM & CRM
[NOTE]
quorum the node cannot reset the watchdog. This will trigger a reboot
-after the watchdog then times out, this happens after 60 seconds.
+after the watchdog times out; this happens after 60 seconds.
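+
+Both daemons run as normal systemd services on each cluster node, so a
+quick sanity check - not part of the HA configuration itself - is to
+query their unit status:
+
+[source,bash]
+ systemctl status pve-ha-crm pve-ha-lrm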
+
Configuration
-------------
-The HA stack is well integrated in the Proxmox VE API2. So, for
-example, HA can be configured via `ha-manager` or the PVE web
-interface, which both provide an easy to use tool.
+The HA stack is well integrated into the {pve} API. So, for example,
+HA can be configured via the `ha-manager` command line interface, or
+the {pve} web interface - both interfaces provide an easy way to
+manage HA. Automation tools can use the API directly.
+
+All HA configuration files are within `/etc/pve/ha/`, so they get
+automatically distributed to the cluster nodes, and all nodes share
+the same HA configuration.
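+
+For example, you can list the currently configured HA resources over
+the API with `pvesh`:
+
+[source,bash]
+ pvesh get /cluster/ha/resources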
+
+
+Resources
+~~~~~~~~~
+
+The resource configuration file `/etc/pve/ha/resources.cfg` stores
+the list of resources managed by `ha-manager`. A resource configuration
+inside that list looks like this:
+
+----
+<type>:<name>
+ <property> <value>
+ ...
+----
+
+It starts with a resource type followed by a resource-specific name,
+separated by a colon. Together this forms the HA resource ID, which is
+used by all `ha-manager` commands to uniquely identify a resource
+(example: `vm:100` or `ct:101`). The next lines contain additional
+properties:
+
+include::ha-resources-opts.adoc[]
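+
+For example, an entry for VM 100, placed in a hypothetical group
+`mygroup` (groups are described in the next section), could look like
+this:
+
+----
+vm:100
+    group mygroup
+    comment my important VM
+----
+
+The same resource could be added with the command line tool:
+
+[source,bash]
+ ha-manager add vm:100 -group mygroup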
+
+
+Groups
+~~~~~~
+
+The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
+define groups of cluster nodes. A resource can be restricted to run
+only on the members of such a group. A group configuration looks like
+this:
+
+----
+group: <group>
+ nodes <node_list>
+ <property> <value>
+ ...
+----
+
+include::ha-groups-opts.adoc[]
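+
+For example, a group preferring `node1` over `node2` and `node3`,
+matching the `groupset` example further below, could look like this:
+
+----
+group: mygroup
+    nodes node1:2,node2:1,node3:1,node4
+----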
-The resource configuration file can be located at
-`/etc/pve/ha/resources.cfg` and the group configuration file at
-`/etc/pve/ha/groups.cfg`. Use the provided tools to make changes,
-there shouldn't be any need to edit them manually.
Node Power Status
-----------------
a watchdog reset.
+[[ha_manager_fencing]]
Fencing
-------
unresponsive node and as a result a chain reaction of node failures in the
cluster.
+[[ha_manager_groups]]
Groups
------
-get distributed to those node if not already there. The priorities have a
+get distributed to those nodes if not already there. The priorities have a
relative meaning only.
Example;;
- You want to run all services from a group on node1 if possible, if this node
- is not available you want them to run equally splitted on node2 and node3 and
- if those fail it should use the other group members.
+ You want to run all services from a group on `node1` if possible. If this node
+  is not available, you want them to run equally split between `node2` and `node3`, and
+ if those fail it should use `node4`.
To achieve this you could set the node list to:
[source,bash]
ha-manager groupset mygroup -nodes "node1:2,node2:1,node3:1,node4"
group. If no group node member is available the resource will be
placed in the stopped state.
Example;;
- A Service can run just on a few nodes, as he uses resources from only found
- on those, we created a group with said nodes and as we know that else all
- other nodes get implicitly added with lowest priority we set the restricted
- option.
+  Let's say a service uses resources only available on `node1` and `node2`,
+  so we need to make sure that the HA manager does not use other nodes.
+ We need to create a 'restricted' group with said nodes:
+[source,bash]
+ ha-manager groupset mygroup -nodes "node1,node2" -restricted
nofailback::
Examples;;
-* You need to migrate a service to a node which hasn't the highest priority
-  in the group at the moment, to tell the HA manager to not move this service
+* You need to migrate a service to a node which doesn't have the highest priority
+  in the group at the moment. To tell the HA manager not to move this service
- instantly back set the nofailnback option and the service will stay on
+  instantly back, set the 'nofailback' option and the service will stay on
+ the current node.
- * A service was fenced and he got recovered to another node. The admin
- repaired the node and brang it up online again but does not want that the
+ * A service was fenced and it got recovered to another node. The admin
+  repaired the node and brought it online again, but does not want the
-  recovered services move straight back to the repaired node as he wants to
-  first investigate the failure cause and check if it runs stable. He can use
+  recovered services to move straight back to the repaired node, as he wants to
+  first investigate the failure cause and check that it runs stably. He can use
- the nofailback option to achieve this.
+ the 'nofailback' option to achieve this.
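+
+Setting the flag on an existing group could look like this (assuming
+the usual `0/1` value syntax for boolean options):
+[source,bash]
+ ha-manager groupset mygroup -nofailback 1
+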
Start Failure Policy
-* *after* you fixed all errors you may enable the service again
+* *after* you have fixed all errors, you may enable the service again
+[[ha_manager_service_operations]]
Service Operations
------------------