+When updating the ha-manager, you should do one node after the other, never
+all at once, for various reasons. First, while we test our software
+thoroughly, a bug affecting your specific setup cannot totally be ruled out.
+Updating one node after the other and checking the functionality of each node
+after finishing the update helps to recover from eventual problems, while
+updating all at once could leave you with a broken cluster state and is
+generally not good practice.
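+
+For example, updating a single node could look like the following (these are
+the standard Debian package commands; check that the node and its services
+are healthy before moving on to the next node):
+
+----
+# apt-get update
+# apt-get dist-upgrade
+# ha-manager status    # verify that all services are in their expected state
+----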
+
+Also, the {pve} HA stack uses a request acknowledge protocol to perform
+actions between the cluster and the local resource manager. For restarting,
+the LRM makes a request to the CRM to freeze all its services. This prevents
+them from getting touched by the cluster during the short time the LRM is
+restarting. After that, the LRM may safely close the watchdog during a
+restart. Such a restart happens normally during a package update and, as
+already stated, an active master CRM is needed to acknowledge the requests
+from the LRM. If this is not the case, the update process can take too long,
+which, in the worst case, may result in a watchdog reset.
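+
+The following is a minimal, self-contained sketch of this request
+acknowledge hand-shake. The real implementation lives in the pve-ha-manager
+daemons; all class, method, and state names here are purely illustrative:
+
+[source,python]
+----
+class CRM:
+    """Stands in for the active master CRM."""
+    def __init__(self, active=True):
+        self.active = active
+        self.frozen = set()  # nodes whose services are currently frozen
+
+    def handle_request(self, node, request):
+        if not self.active:
+            return None  # no active master -> the request is never acknowledged
+        if request == "freeze":
+            self.frozen.add(node)  # services on this node won't be touched
+            return "ack"
+
+class LRM:
+    def __init__(self, node, crm):
+        self.node, self.crm = node, crm
+
+    def restart(self):
+        # Ask the CRM to freeze our services before closing the watchdog.
+        if self.crm.handle_request(self.node, "freeze") != "ack":
+            raise TimeoutError("no active master CRM, watchdog may expire")
+        print(f"{self.node}: services frozen, safe to close watchdog and restart")
+
+LRM("node1", CRM(active=True)).restart()
+----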
+
+
+[[ha_manager_fencing]]
+Fencing
+-------
+
+On node failures, fencing ensures that the erroneous node is
+guaranteed to be offline. This is required to make sure that no
+resource runs twice when it gets recovered on another node. This is a
+really important task, because without it, it would not be possible to
+recover a resource on another node.
+
+If a node did not get fenced, it would be in an unknown state where
+it may still have access to shared resources. This is really
+dangerous! Imagine that every network but the storage one broke. Now,
+while not reachable from the public network, the VM still runs and
+writes to the shared storage.
+
+If we then simply started up this VM on another node, we would get a
+dangerous race condition, because we would write from both nodes. Such
+a condition can destroy all VM data, and the whole VM could be rendered
+unusable. The recovery could also fail if the storage protects against
+multiple mounts.
+
+
+How {pve} Fences
+~~~~~~~~~~~~~~~~
+
+There are different methods to fence a node, for example, fence devices
+which cut off the power from the node or disable its communication
+completely.
+
+Those are often quite expensive and bring additional critical components
+into a system, because if they fail you cannot recover any service.
+
+We thus wanted to integrate a simpler method into the HA Manager first,
+namely self fencing with watchdogs.
+
+Watchdogs have been widely used in critical and dependable systems since the
+beginning of microcontrollers. They are often independent and simple
+integrated circuits which programs can use to watch them. After opening the
+watchdog, a program needs to report to it periodically. If, for whatever
+reason, it becomes unable to do so, the watchdog triggers a reset of the
+whole server.
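+
+On Linux, this reporting happens through the watchdog device node. The
+following sketch shows the general pattern; it assumes an armed watchdog at
+`/dev/watchdog` and root privileges, so do not run it on a production node:
+
+[source,python]
+----
+import time
+
+# Opening the device arms the watchdog; from now on we must report
+# periodically, or the machine gets reset.
+with open("/dev/watchdog", "wb", buffering=0) as wd:
+    for _ in range(10):
+        wd.write(b"\0")  # heartbeat: any write resets the timer
+        time.sleep(1)
+    # "Magic close": drivers supporting it disarm the timer when the last
+    # write before closing is the character 'V'.
+    wd.write(b"V")
+----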
+
+Server motherboards often already include such hardware watchdogs; these need
+to be configured. If no watchdog is available or configured, we fall back to
+the Linux kernel softdog. While still reliable, it is not independent of the
+server's hardware and thus has lower reliability than a hardware watchdog.
+
+Configure Hardware Watchdog
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default, all watchdog modules are blocked for security reasons, as they
+are like a loaded gun if not correctly initialized. If you have a hardware
+watchdog available, remove its kernel module from the blacklist, load it
+with `insmod`, and restart the `watchdog-mux` service or reboot the node.
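+
+Assuming, for example, an Intel iTCO watchdog (the module name and the
+location of the blacklist entry vary between setups; `modprobe` is shown
+instead of `insmod` since it resolves the module path itself), this could
+look like:
+
+----
+# grep -r iTCO_wdt /etc/modprobe.d/ /lib/modprobe.d/  # locate the blacklist
+#                                                     # entry, then remove it
+# modprobe iTCO_wdt
+# systemctl restart watchdog-mux
+----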
+
+Recover Fenced Services
+~~~~~~~~~~~~~~~~~~~~~~~
+
+After a node failed and its fencing was successful, we start to recover
+services to other available nodes and restart them there so that they can
+provide service again.
+
+The selection of the node on which the services get recovered is influenced
+by the user's group settings, the currently active nodes, and their
+respective active service counts.
+First, we build a set out of the intersection between the user-selected nodes
+and the available nodes. Then the subset of those nodes with the highest
+priority gets chosen as possible nodes for recovery. Finally, we select the
+node with the currently lowest active service count as the new node for the
+service. This minimizes the possibility of an overload, which otherwise could
+cause an unresponsive node and, as a result, a chain reaction of node
+failures in the cluster.
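+
+The following self-contained sketch illustrates that selection. The real
+logic lives in the CRM; the function and data structures here are purely
+illustrative:
+
+[source,python]
+----
+def select_recovery_node(group_nodes, online_nodes, service_count):
+    """group_nodes: {node: priority} from the user's group settings,
+    online_nodes: set of currently active nodes,
+    service_count: {node: number of active services}."""
+    # 1. Intersect the user-selected nodes with the nodes actually online.
+    candidates = {n: p for n, p in group_nodes.items() if n in online_nodes}
+    if not candidates:
+        return None
+    # 2. Keep only the subset with the highest priority.
+    top = max(candidates.values())
+    best = [n for n, p in candidates.items() if p == top]
+    # 3. Pick the node with the lowest active service count to avoid overload.
+    return min(best, key=lambda n: service_count.get(n, 0))
+
+# Priority wins over service count: node2 is chosen despite being busier.
+print(select_recovery_node({"node1": 2, "node2": 2, "node3": 1},
+                           {"node2", "node3"}, {"node2": 5, "node3": 1}))
+# Within the same priority, the least loaded node wins: node2 again.
+print(select_recovery_node({"node1": 1, "node2": 1},
+                           {"node1", "node2"}, {"node1": 3, "node2": 1}))
+----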
+
+
+[[ha_manager_start_failure_policy]]
+Start Failure Policy
+---------------------
+
+The start failure policy comes into effect if a service failed to start on a
+node one or more times. It can be used to configure how often a restart
+should be triggered on the same node and how often a service should be
+relocated, so that it has an attempt to be started on another node.
+The aim of this policy is to circumvent temporary unavailability of shared
+resources on a specific node. For example, if a shared storage isn't
+available on a quorate node anymore, e.g. because of network problems, but is
+still available on other nodes, the relocate policy allows the service to get
+started nonetheless.
+
+There are two service start recover policy settings which can be configured