[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
High Availability
=================
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices amplified that dependency,
because people can access the network any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.

We can mathematically define the availability as the ratio of (A), the
total time a service is capable of being used during a given interval,
to (B), the length of the interval. It is normally expressed as a
percentage of uptime in a given year.

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99             |3.65 days
|99.9           |8.76 hours
|99.99          |52.56 minutes
|99.999         |5.26 minutes
|99.9999        |31.5 seconds
|99.99999       |3.15 seconds
|===========================================================


How It Works
------------

To provide High Availability, two daemons run on each cluster node:
the local resource manager (`pve-ha-lrm`) and the cluster resource
manager (`pve-ha-crm`).


Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) controls the services
running on the local node. It executes the commands requested by the
CRM and writes the result of each command back to the node's
`lrm_status` file. There the CRM may collect it and let its state
machine - respectively the command's output - act on it.

The actions on each service between CRM and LRM are normally always
synced. This means that the CRM requests a state uniquely marked by a
UID, the LRM then executes this action *one time* and writes back the
result, which is also identifiable by the same UID. This is needed so
that the LRM does not execute an outdated command. The only exceptions
are the `stop` and the `error` commands; those two do not depend on
the result produced, and are executed always in the case of the
stopped state, and once in the case of the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
and also why something happens in the cluster. Here it is important to
see what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for `pve-ha-crm` on the node which is the current
master.


Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as idle state
if no service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock, which means a failure happened and quorum was
lost.
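
To watch this machinery at work, you can query the HA stack's view of
the cluster with the `ha-manager status` command, and follow the
decisions of both daemons in the journal. The `journalctl` invocation
below simply combines the two units mentioned in the note above:

----
# ha-manager status
# journalctl -u pve-ha-crm -u pve-ha-lrm --since "1 hour ago"
----

Keep in mind that CRM decisions only show up in the journal of the
node which currently holds the manager lock.
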
Its main task is to manage the services which are configured to be
highly available, and to always try to enforce the requested state. For
example, a service with the requested state 'started' will be started
if it is not already running. If it crashes, it will be automatically
started again. Thus the CRM dictates the actions the LRM needs to
execute.

When a node leaves the cluster quorum, its state changes to
unknown. If the current CRM can then secure the failed node's lock, the
services will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out, which happens after 60 seconds.


Configuration
-------------

The HA stack is well integrated into the {pve} API. So, for example,
HA can be configured via the `ha-manager` command line interface, or
the {pve} web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.

All HA configuration files are within `/etc/pve/ha/`, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.


[[ha_manager_resource_config]]
Resources
~~~~~~~~~

[thumbnail="gui-ha-manager-status.png"]

The resource configuration file `/etc/pve/ha/resources.cfg` stores
the list of resources managed by `ha-manager`. A resource
configuration inside that list looks like this:

----
<type>: <name>
        <property> <value>
        ...
----

It starts with a resource type, followed by a resource specific name,
separated by a colon. Together this forms the HA resource ID, which is
used by all `ha-manager` commands to uniquely identify a resource
(example: `vm:100` or `ct:101`). The next lines contain additional
properties:

include::ha-resources-opts.adoc[]

Here is a real world example with one VM and one container. As you can
see, the syntax of those files is really simple, so it is even
possible to read or edit those files using your favorite editor:

.Configuration Example (`/etc/pve/ha/resources.cfg`)
----
vm: 501
    state started
    max_relocate 2

ct: 102
    # Note: use default settings for everything
----

[thumbnail="gui-ha-manager-add-resource.png"]

The above config was generated using the `ha-manager` command line
tool:

----
# ha-manager add vm:501 --state started --max_relocate 2
# ha-manager add ct:102
----
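
Since the HA configuration is regular API state, automation tools can
manage it without the CLI and without editing the files directly. The
following is only a sketch using `pvesh`, assuming the standard
`/cluster/ha/resources` API path:

----
# pvesh get /cluster/ha/resources
# pvesh set /cluster/ha/resources/vm:501 --state started
----

The first command lists all managed resources; the second requests a
state change for one of them, just like `ha-manager set` would.
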
[[ha_manager_groups]]
Groups
~~~~~~

[thumbnail="gui-ha-manager-groups-view.png"]

The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:

----
group: <group>
        nodes <node_list>
        <property> <value>
        ...
----

include::ha-groups-opts.adoc[]

[thumbnail="gui-ha-manager-add-group.png"]

A common requirement is that a resource should run on a specific
node. Usually the resource is able to run on other nodes, so you can
define an unrestricted group with a single member:

----
# ha-manager groupadd prefer_node1 --nodes node1
----

For bigger clusters, it makes sense to define a more detailed failover
behavior. For example, you may want to run a set of services on
`node1` if possible. If `node1` is not available, you want to run them
equally split on `node2` and `node3`. If those nodes also fail, the
services should run on `node4`. To achieve this you could set the node
list to:

----
# ha-manager groupadd mygroup1 -nodes "node1:2,node2:1,node3:1,node4"
----

Another use case is if a resource uses other resources only available
on specific nodes, let's say `node1` and `node2`. We need to make sure
that the HA manager does not use other nodes, so we need to create a
restricted group with said nodes:

----
# ha-manager groupadd mygroup2 -nodes "node1,node2" -restricted
----

The above commands created the following group configuration file:

.Configuration Example (`/etc/pve/ha/groups.cfg`)
----
group: prefer_node1
       nodes node1

group: mygroup1
       nodes node2:1,node4,node1:2,node3:1

group: mygroup2
       nodes node2,node1
       restricted 1
----


The `nofailback` option is mostly useful to avoid unwanted resource
movements during administration tasks. For example, if you need to
migrate a service to a node which doesn't have the highest priority in
the group, you need to tell the HA manager not to move this service
instantly back by setting the `nofailback` option.

Another scenario is when a service was fenced and it got recovered to
another node. The admin tries to repair the fenced node and brings it
up online again, to investigate the failure cause and check if it runs
stably again. Setting the `nofailback` flag prevents the recovered
services from moving straight back to the fenced node.
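
A group only takes effect once a resource references it. Assuming the
`group` resource property from the options above, and the resources
and groups created in the previous examples, binding a service to a
group is a single command:

----
# ha-manager set vm:501 --group prefer_node1
----

The CRM then places the service on the best available node of that
group, subject to the `restricted` and `nofailback` rules described
above.
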
[[ha_manager_fencing]]
Fencing
-------

On node failures, fencing ensures that the erroneous node is
guaranteed to be offline. This is required to make sure that no
resource runs twice when it gets recovered on another node. This is a
really important task, because without it, it would not be possible to
recover a resource on another node.

If a node did not get fenced, it would be in an unknown state, where
it may still have access to shared resources. This is really
dangerous! Imagine that every network but the storage one broke. Now,
while not reachable from the public network, the VM still runs and
writes to the shared storage.

If we then simply started up this VM on another node, we would get a
dangerous race condition, because we would write from both nodes. Such
a condition can destroy all VM data, and the whole VM could be
rendered unusable. The recovery could also fail if the storage
protects against multiple mounts.


How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example, fence
devices which cut off the power from the node, or disable their
communication completely. Those are often quite expensive, and bring
additional critical components into a system, because if they fail you
cannot recover any service.

We thus wanted to integrate a simpler fencing method, which does not
require additional external hardware. This can be done using watchdog
timers.

.Possible Fencing Methods
- external power switches
- isolate nodes by disabling complete network traffic on the switch
- self fencing using watchdog timers

Watchdog timers have been widely used in critical and dependable
systems since the beginning of microcontrollers. They are often
independent and simple integrated circuits which are used to detect
and recover from computer malfunctions.

During normal operation, `ha-manager` regularly resets the watchdog
timer to prevent it from elapsing. If, due to a hardware fault or
program error, the computer fails to reset the watchdog, the timer
will elapse and trigger a reset of the whole server (reboot).

Recent server motherboards often include such hardware watchdogs, but
these need to be configured. If no watchdog is available or
configured, we fall back to the Linux Kernel 'softdog'. While still
reliable, it is not independent of the server's hardware, and thus has
a lower reliability than a hardware watchdog.


Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, all hardware watchdog modules are blocked for security
reasons. They are like a loaded gun if not correctly initialized. To
enable a hardware watchdog, you need to specify the module to load in
'/etc/default/pve-ha-manager', for example:

----
# select watchdog module (default is softdog)
WATCHDOG_MODULE=iTCO_wdt
----

This configuration is read by the 'watchdog-mux' service, which loads
the specified module at startup.
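
After the next reboot, you may want to verify that the intended module
- and not the 'softdog' fallback - is actually in use. A quick sketch,
where `iTCO_wdt` is just the example module from above:

----
# lsmod | grep iTCO_wdt
# journalctl -b -u watchdog-mux
----

The 'watchdog-mux' journal for the current boot should show whether
the hardware watchdog was picked up.
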
Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node failed and its fencing was successful, the CRM tries to
move services from the failed node to nodes which are still online.

The selection of nodes, on which those services get recovered, is
influenced by the resource `group` settings, the list of currently
active nodes, and their respective active service count.

The CRM first builds a set out of the intersection between
user-selected nodes (from the `group` setting) and available nodes. It
then chooses the subset of nodes with the highest priority, and
finally selects the node with the lowest active service count. This
minimizes the possibility of an overloaded node.

CAUTION: On node failure, the CRM distributes services to the
remaining nodes. This increases the service count on those nodes, and
can lead to high load, especially on small clusters. Please design
your cluster so that it can handle such worst case scenarios.


[[ha_manager_start_failure_policy]]
Start Failure Policy
--------------------

The start failure policy comes into effect if a service fails to start
on a node one or more times. It can be used to configure how often a
restart should be triggered on the same node, and how often a service
should be relocated, so that it gets a chance to be started on another
node. The aim of this policy is to circumvent temporary unavailability
of shared resources on a specific node. For example, if a shared
storage isn't available on a quorate node anymore (e.g. because of
network problems), but is still available on other nodes, the relocate
policy allows the service to get started nonetheless.

There are two service start recovery policy settings which can be
configured specifically for each resource.

max_restart::

Maximum number of attempts to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of attempts to relocate the service to a different
node. A relocate only happens after the max_restart value is exceeded
on the actual node. The default is set to one.

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-started without fixing the error, only the restart policy gets
repeated.


[[ha_manager_error_recovery]]
Error Recovery
--------------

If, after all attempts, the service state could not be recovered, it
gets placed in an error state. In this state, the service won't get
touched by the HA stack anymore. The only way out is disabling the
service:

----
# ha-manager set vm:100 --state disabled
----

This can also be done in the web interface.

To recover from the error state you should do the following:

* bring the resource back into a safe and consistent state (e.g.,
kill its process if the service could not be stopped)

* disable the resource to remove the error flag

* fix the error which led to these failures

* *after* you fixed all errors, you may request that the service
starts again
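
Putting those steps together for the example service `vm:100`, a
recovery session could look like the following sketch. Here, `qm stop`
is just one way to force the VM into a safe state, and the placeholder
line stands for whatever repair the root cause actually needs:

----
# qm stop 100
# ha-manager set vm:100 --state disabled
# ...fix the underlying problem, e.g. the shared storage...
# ha-manager set vm:100 --state started
----
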
[[ha_manager_package_updates]]
Package Updates
---------------

When updating the ha-manager, you should do one node after the other,
never all at once, for various reasons. First, while we test our
software thoroughly, a bug affecting your specific setup cannot
totally be ruled out. Updating one node after the other, and checking
the functionality of each node after finishing the update, helps to
recover from eventual problems, while updating all nodes at once could
leave you with a broken cluster state, and is generally not good
practice.

Also, the {pve} HA stack uses a request acknowledge protocol to
perform actions between the cluster and the local resource
manager. For restarting, the LRM makes a request to the CRM to freeze
all its services. This prevents them from getting touched by the
cluster during the short time the LRM is restarting. After that, the
LRM may safely close the watchdog during a restart. Such a restart
normally happens during a package update and, as already stated, an
active master CRM is needed to acknowledge the requests from the
LRM. If this is not the case, the update process can take too long
which, in the worst case, may result in a watchdog reset.


Node Maintenance
----------------

It is sometimes necessary to shutdown or reboot a node to do
maintenance tasks, for example to replace hardware, or simply to
install a new kernel image.


Shutdown
~~~~~~~~

A shutdown ('poweroff') is usually done if the node is planned to stay
down for some time. The LRM stops all managed services in that
case. This means that other nodes will take over those services
afterwards.

NOTE: Recent hardware has large amounts of RAM. So we stop all
resources, then restart them, to avoid online migration of all that
RAM. If you want to use online migration, you need to invoke that
manually before you shutdown the node.


Reboot
~~~~~~

Node reboots are initiated with the 'reboot' command. This is usually
done after installing a new kernel. Please note that this is different
from ``shutdown'', because the node immediately starts again.

The LRM tells the CRM that it wants to restart, and waits until the
CRM puts all resources into the `freeze` state (the same mechanism is
used for xref:ha_manager_package_updates[Package Updates]). This
prevents those resources from being moved to other nodes. Instead, the
CRM starts the resources after the reboot on the same node.


Manual Resource Movement
~~~~~~~~~~~~~~~~~~~~~~~~

Last but not least, you can also move resources manually to other
nodes before you shutdown or restart a node. The advantage is that you
have full control, and you can decide if you want to use online
migration or not.

NOTE: Please do not 'kill' services like `pve-ha-crm`, `pve-ha-lrm` or
`watchdog-mux`. They manage and use the watchdog, so this can result
in a node reboot.

ifdef::manvolnum[]