X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=ha-manager.adoc;h=a357d16c6592e70aed7d54ce1467e063f4b8469e;hp=eadf60e0810be148b8aecc438dbdc0495ad54251;hb=345f5fe0fcfb01f6bad73162c68b21b8c6427745;hpb=8c1189b640ae7d10119ff1c046580f48749d38bd

diff --git a/ha-manager.adoc b/ha-manager.adoc
index eadf60e..a357d16 100644
--- a/ha-manager.adoc
+++ b/ha-manager.adoc
@@ -165,7 +165,7 @@ Locks are provided by our distributed configuration file system (pmxcfs). They
 are used to guarantee that each LRM is active once and working. As an
 LRM only executes actions when it holds its lock, we can mark a failed
 node as fenced if we can acquire its lock. This lets us then recover any failed
-HA services securely without any interference from the now unknown failed Node.
+HA services securely without any interference from the now unknown failed node.

 This all gets supervised by the CRM, which currently holds the manager master
 lock.
@@ -178,24 +178,31 @@ locks are working.

 It can be in three states:

-* *wait for agent lock*: the LRM waits for our exclusive lock. This is
-  also used as idle sate if no service is configured
-* *active*: the LRM holds its exclusive lock and has services configured
-* *lost agent lock*: the LRM lost its lock, this means a failure happened
-  and quorum was lost.
+wait for agent lock::
+
+The LRM waits for our exclusive lock. This is also used as the idle state if no
+service is configured.
+
+active::
+
+The LRM holds its exclusive lock and has services configured.
+
+lost agent lock::
+
+The LRM lost its lock, which means a failure happened and quorum was lost.

 After the LRM gets in the active state it reads the manager status
 file in `/etc/pve/ha/manager_status` and determines the commands it
 has to execute for the services it owns.
 For each command a worker gets started; these workers run in
-parallel and are limited to maximal 4 by default. This default setting
+parallel and are limited to at most 4 by default. This default setting
 may be changed through the datacenter configuration key
 `max_worker`. When finished, the worker process gets collected and its
 result saved for the CRM.

-.Maximal Concurrent Worker Adjustment Tips
+.Maximum Concurrent Worker Adjustment Tips
 [NOTE]
-The default value of 4 maximal concurrent Workers may be unsuited for
+The default value of at most 4 concurrent workers may be unsuited for
 a specific setup. For example, 4 live migrations may happen at the same
 time, which can lead to network congestion with slower networks and/or
 big (memory-wise) services. Ensure that even in the worst case no congestion
@@ -235,11 +242,18 @@ promoted to the CRM master.

 It can be in three states:

-* *wait for agent lock*: the LRM waits for our exclusive lock. This is
-  also used as idle sate if no service is configured
-* *active*: the LRM holds its exclusive lock and has services configured
-* *lost agent lock*: the LRM lost its lock, this means a failure happened
-  and quorum was lost.
+wait for agent lock::
+
+The CRM waits for our exclusive lock. This is also used as the idle state if no
+service is configured.
+
+active::
+
+The CRM holds its exclusive lock and has services configured.
+
+lost agent lock::
+
+The CRM lost its lock, which means a failure happened and quorum was lost.

 Its main task is to manage the services which are configured to be highly
 available and to try to always enforce them to the wanted state, e.g.: a
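A quick way to observe the LRM and CRM states described above is sketched below. It is only an illustration and assumes a standard Proxmox VE installation; the `/etc/pve/ha/manager_status` path is taken from the text above, while the `status` sub-command of `ha-manager` is an assumption and may differ between versions.

[source,bash]
  # show quorum, the node currently holding the manager master lock, and each LRM state
  ha-manager status
  # inspect the raw manager status file an active LRM reads for its commands
  cat /etc/pve/ha/manager_status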
@@ -300,7 +314,7 @@ a watchdog reset.
 Fencing
 -------

-What Is Fencing
+What is Fencing
 ~~~~~~~~~~~~~~~

 Fencing secures that on a node failure the dangerous node will be rendered
@@ -381,17 +395,41 @@ A service bound to this group will run on the nodes with the highest priority
 available. If more nodes are in the highest priority class the services
 will get distributed to those nodes if not already there. The priorities
 have a relative meaning only.
+  Example;;
+  You want to run all services from a group on `node1` if possible. If this node
+  is not available, you want them to run split equally on `node2` and `node3`, and
+  if those fail it should use `node4`.
+  To achieve this you could set the node list to:
+[source,bash]
+  ha-manager groupset mygroup -nodes "node1:2,node2:1,node3:1,node4"

 restricted::

-resources bound to this group may only run on nodes defined by the
+Resources bound to this group may only run on nodes defined by the
 group. If no group node member is available the resource will be
 placed in the stopped state.
+  Example;;
+  Let's say a service uses resources only available on `node1` and `node2`,
+  so we need to make sure that the HA manager does not use other nodes.
+  We need to create a 'restricted' group with said nodes:
+[source,bash]
+  ha-manager groupset mygroup -nodes "node1,node2" -restricted

 nofailback::

-the resource won't automatically fail back when a more preferred node
+The resource won't automatically fail back when a more preferred node
 (re)joins the cluster.
+  Examples;;
+  * You need to migrate a service to a node which currently does not have the
+    highest priority in the group. To tell the HA manager not to move this
+    service back instantly, set the 'nofailback' option and the service will
+    stay on the current node.
+
+  * A service was fenced and recovered to another node. The admin repaired the
+    node and brought it back online, but does not want the recovered services
+    to move straight back to the repaired node, as he first wants to
+    investigate the failure cause and check that it runs stably. He can use
+    the 'nofailback' option to achieve this.


 Start Failure Policy
@@ -411,12 +449,12 @@ specific for each resource.

 max_restart::

-maximal number of tries to restart an failed service on the actual
+Maximum number of tries to restart a failed service on the current
 node. The default is set to one.

 max_relocate::

-maximal number of tries to relocate the service to a different node.
+Maximum number of tries to relocate the service to a different node.
 A relocate only happens after the max_restart value is exceeded on the
 current node. The default is set to one.
@@ -433,7 +471,7 @@ placed in an error state. In this state the service won't get touched by the
 HA stack anymore. To recover from this state you should follow these steps:

-* bring the resource back into an safe and consistent state (e.g:
+* bring the resource back into a safe and consistent state (e.g.,
 killing its process)

 * disable the HA resource to place it in a stopped state
@@ -451,19 +489,19 @@ This is how the basic user-initiated service operations (via
 `ha-manager`) work.

 enable::

-the service will be started by the LRM if not already running.
+The service will be started by the LRM if not already running.

 disable::

-the service will be stopped by the LRM if running.
+The service will be stopped by the LRM if running.

 migrate/relocate::

-the service will be relocated (live) to another node.
+The service will be relocated (live) to another node.

 remove::

-the service will be removed from the HA managed resource list. Its
+The service will be removed from the HA managed resource list. Its
 current state will not be touched.

 start/stop::
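The start failure policy parameters and the user-initiated operations described above map onto `ha-manager` calls. The snippet below is a hypothetical sketch only: `vm:100` is a placeholder service ID, and the exact option and sub-command names should be checked against the `ha-manager` manual page of the installed version.

[source,bash]
  # allow two restart attempts on the current node, then up to two relocations
  ha-manager set vm:100 -max_restart 2 -max_relocate 2
  # recover a service from the error state: fix the underlying issue first, then
  ha-manager disable vm:100    # place the service in the stopped state
  ha-manager enable vm:100     # start it again once the cause is resolved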