+When updating the ha-manager, you should do one node after the other, never
+all at once, for various reasons. First, while we test our software
+thoroughly, a bug affecting your specific setup cannot be entirely ruled out.
+Updating one node after the other and checking the functionality of each node
+after finishing the update helps to recover from eventual problems, while
+updating all nodes at once could leave you with a broken cluster and is
+generally not good practice.
+
+Also, the {pve} HA stack uses a request-acknowledge protocol to perform
+actions between the cluster and the local resource manager. For restarting,
+the LRM makes a request to the CRM to freeze all its services. This prevents
+them from being touched by the cluster during the short time the LRM is
+restarting. After that, the LRM may safely close the watchdog during a
+restart. Such a restart normally happens during a package update and, as
+already stated, an active master CRM is needed to acknowledge the requests
+from the LRM. If this is not the case, the update process can take too long
+which, in the worst case, may result in a watchdog reset.
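The freeze handshake described above can be sketched roughly as follows. This is a minimal, illustrative simulation in Python; the class and method names are hypothetical and do not reflect the actual ha-manager API, which is implemented in Perl and communicates through shared cluster state rather than direct calls.

```python
# Illustrative simulation of the LRM/CRM freeze handshake during a restart.
# All names here are hypothetical, not the real ha-manager interfaces.

class CRM:
    """Cluster Resource Manager: acknowledges freeze requests."""
    def __init__(self):
        self.frozen_services = set()

    def acknowledge_freeze(self, services):
        # While frozen, the CRM will not touch (schedule/recover) these
        # services, so the restarting LRM cannot race against the cluster.
        self.frozen_services.update(services)
        return True

class LRM:
    """Local Resource Manager: must not restart before the CRM acknowledges."""
    def __init__(self, crm, services):
        self.crm = crm
        self.services = services
        self.watchdog_open = True

    def restart_for_update(self):
        # 1. Request the CRM to freeze all local services.
        acked = self.crm.acknowledge_freeze(self.services)
        if not acked:
            # Without an active master CRM, the request would stall; if it
            # stalls too long, the watchdog could fire and reset the node.
            raise RuntimeError("no active CRM master to acknowledge freeze")
        # 2. Only after the acknowledgement may the watchdog be closed safely.
        self.watchdog_open = False
        return "restarting"

crm = CRM()
lrm = LRM(crm, services={"vm:100", "ct:101"})
print(lrm.restart_for_update())  # restarting
```

The key ordering constraint is that the watchdog is only closed after the freeze has been acknowledged; the services are guaranteed untouched before the node gives up its ability to self-fence.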
+
+
+Fencing
+-------
+
+What Is Fencing
+~~~~~~~~~~~~~~~
+
+Fencing ensures that, on a node failure, the failed node is rendered unable
+to do any damage and that no resource runs twice when it gets recovered
+from the failed node. This is a very important task and one of the base
+principles of making a system highly available.
+
+If a node were not fenced, it would be in an unknown state where it may
+still have access to shared resources; this is really dangerous!
+Imagine that every network but the storage one broke. Now, while not
+reachable from the public network, the VM still runs and writes to the shared
+storage. If we did not fence the node and simply started up this VM on another
+node, we would get dangerous race conditions and atomicity violations, and the
+whole VM could be rendered unusable. The recovery could also simply fail if
+the storage protects against multiple mounts, and thus defeat the purpose of HA.
+
+How {pve} Fences
+~~~~~~~~~~~~~~~~~
+
+There are different methods to fence a node, for example, fence devices
+which cut off the power from the node or disable its communication completely.
+
+Those are often quite expensive and bring additional critical components into
+a system, because if they fail you cannot recover any service.
+
+We thus wanted to integrate a simpler method in the HA Manager first, namely
+self fencing with watchdogs.
+
+Watchdogs have been widely used in critical and dependable systems since the
+beginning of microcontrollers. They are often independent, simple
+integrated circuits which programs can use to watch them. After opening the
+watchdog, a program needs to report to it periodically. If, for whatever
+reason, the program becomes unable to do so, the watchdog triggers a reset of
+the whole server.
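The report-or-reset principle can be illustrated with a small simulation. This is pure, illustrative Python; a real hardware watchdog is a separate circuit, typically driven through the Linux `/dev/watchdog` interface rather than application code like this.

```python
import time

# Tiny simulation of the watchdog principle (illustrative only; a real
# hardware watchdog is an independent circuit, not in-process Python).

class Watchdog:
    def __init__(self, timeout):
        self.timeout = timeout              # seconds before a reset fires
        self.last_report = time.monotonic()

    def report(self):
        """The periodic 'I am alive' report from the watched program."""
        self.last_report = time.monotonic()

    def expired(self):
        """True once the program failed to report in time -> server reset."""
        return time.monotonic() - self.last_report > self.timeout

wd = Watchdog(timeout=0.2)
wd.report()
print(wd.expired())   # False: the program reported recently
time.sleep(0.3)       # simulate a hung program that stops reporting
print(wd.expired())   # True: the watchdog would now reset the server
```

The important property is that the reset does not depend on the watched program cooperating: once reports stop, for any reason, the timeout fires on its own.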
+
+Server motherboards often already include such hardware watchdogs, but these
+need to be configured. If no watchdog is available or configured, we fall back
+to the Linux kernel softdog. While still reliable, it is not independent of
+the server's hardware and thus has lower reliability than a hardware watchdog.
+
+Configure Hardware Watchdog
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+By default, all watchdog modules are blocked for security reasons, as they are
+like a loaded gun if not correctly initialized.
+If you have a hardware watchdog available, remove its kernel module from the
+blacklist, load it with insmod, and restart the 'watchdog-mux' service or
+reboot the node.
+
+Recover Fenced Services
+~~~~~~~~~~~~~~~~~~~~~~~
+
+After a node failed and its fencing was successful, we start to recover
+services to other available nodes and restart them there, so that they can
+provide service again.
+
+The selection of the node on which the services get recovered is influenced
+by the user's group settings, the currently active nodes, and their respective
+active service counts.
+First, we build a set out of the intersection between user-selected nodes and
+available nodes. Then the subset of those nodes with the highest priority
+gets chosen as the possible recovery nodes. Finally, we select the node with
+the currently lowest active service count as the new node for the service.
+This minimizes the possibility of an overload, which otherwise could cause an
+unresponsive node and, as a result, a chain reaction of node failures in the
+cluster.
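The three selection steps above can be sketched as follows. This is an illustrative Python sketch under the assumptions stated in the text (group priorities, online nodes, and per-node service counts as inputs); the actual ha-manager implements this logic in Perl, and the function and parameter names here are hypothetical.

```python
# Sketch of recovery node selection: intersect with online nodes, keep the
# highest-priority subset, then pick the least-loaded node. Illustrative
# only; names do not reflect the real ha-manager code.

def select_recovery_node(group_nodes, online_nodes, service_count):
    """group_nodes: {node: priority} from the user's group settings.
    online_nodes: set of currently active nodes.
    service_count: {node: number of active services}."""
    # 1. Intersection of user-selected nodes and available nodes.
    candidates = {n: p for n, p in group_nodes.items() if n in online_nodes}
    if not candidates:
        return None
    # 2. Keep only the subset with the highest priority.
    top_priority = max(candidates.values())
    best = [n for n, p in candidates.items() if p == top_priority]
    # 3. Pick the node with the lowest active service count.
    return min(best, key=lambda n: service_count.get(n, 0))

group = {"node1": 2, "node2": 2, "node3": 1}
online = {"node2", "node3"}                # node1 is the failed node
counts = {"node2": 5, "node3": 1}
print(select_recovery_node(group, online, counts))  # node2
```

Note that priority wins over load: `node3` carries fewer services, but `node2` has the higher group priority, so the service count only breaks ties within the highest-priority subset.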
+
+Groups