+Failover
+^^^^^^^^
+
+This mode ensures that all services get stopped, but also that they will be
+recovered if the current node does not come back online soon. It can be useful
+for cluster-wide maintenance, where live-migrating VMs may not be possible if
+too many nodes are powered off at a time, but you still want to ensure that HA
+services get recovered and started again as soon as possible.
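+
+For example, assuming the shutdown policy is configured through the `ha`
+property of `datacenter.cfg`, this mode can be selected with:
+
+----
+ha: shutdown_policy=failover
+----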
+
+Freeze
+^^^^^^
+
+This mode ensures that all services get stopped and frozen, so that they won't
+get recovered until the current node is online again.
+
+Conditional
+^^^^^^^^^^^
+
+The 'Conditional' shutdown policy automatically detects if a shutdown or a
+reboot is requested, and changes behaviour accordingly.
+
+.Shutdown
+
+A shutdown ('poweroff') is usually done if it is planned for the node to stay
+down for some time. The LRM stops all managed services in this case. This means
+that other nodes will take over those services afterwards.
+
+NOTE: Recent hardware has large amounts of memory (RAM), so we stop all
+resources and restart them, to avoid the online migration of all that RAM. If
+you want to use online migration, you need to invoke it manually before you
+shut down the node.
+
+
+.Reboot
+
+Node reboots are initiated with the 'reboot' command. This is usually done
+after installing a new kernel. Please note that this is different from
+``shutdown'', because the node immediately starts again.
+
+The LRM tells the CRM that it wants to restart, and waits until the CRM puts
+all resources into the `freeze` state (the same mechanism is used for
+xref:ha_manager_package_updates[Package Updates]). This prevents those resources
+from being moved to other nodes. Instead, the CRM starts the resources after the
+reboot on the same node.
+
+
+Manual Resource Movement
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Last but not least, you can also manually move resources to other nodes before
+you shut down or restart a node. The advantage is that you have full control,
+and you can decide whether you want to use online migration or not.
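+
+For example, assuming an HA-managed service `vm:100` and a target node `node2`
+(both hypothetical names), an online migration could be requested with the
+`ha-manager` CLI:
+
+----
+# ha-manager migrate vm:100 node2
+----
+
+or, to stop the service and restart it on the target node instead of migrating
+it online:
+
+----
+# ha-manager relocate vm:100 node2
+----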
+
+NOTE: Please do not 'kill' services like `pve-ha-crm`, `pve-ha-lrm` or
+`watchdog-mux`. They manage and use the watchdog, so this can result in an
+immediate node reboot or even reset.
+
+
+[[ha_manager_crs]]
+Cluster Resource Scheduling
+---------------------------
+
+The cluster resource scheduler (CRS) mode controls how HA selects nodes for the
+recovery of a service as well as for migrations that are triggered by a
+shutdown policy. The default mode is `basic`; you can change it in the Web UI
+(`Datacenter` -> `Options`), or directly in `datacenter.cfg`:
+
+----
+crs: ha=static
+----
+
+[thumbnail="screenshot/gui-datacenter-options-crs.png"]
+
+The change will be in effect starting with the next manager round (after a few
+seconds).
+
+For each service that needs to be recovered or migrated, the scheduler
+iteratively chooses the best node among the nodes with the highest priority in
+the service's group.
+
+NOTE: There are plans to add modes for (static and dynamic) load-balancing in
+the future.
+
+Basic Scheduler
+~~~~~~~~~~~~~~~
+
+The number of active HA services on each node is used to choose a recovery node.
+Non-HA-managed services are currently not counted.
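+
+The basic mode can be pictured as a simple count-based selection. The following
+Python sketch (hypothetical names, not the actual `pve-ha-manager`
+implementation) illustrates the idea:

```python
def choose_recovery_node(candidates, active_services):
    """Pick the candidate node running the fewest active HA services.

    candidates: nodes with the highest priority in the service's group
    active_services: mapping of node name -> number of active HA services
    (Illustrative sketch only, not the actual pve-ha-manager code.)
    """
    return min(candidates, key=lambda node: active_services.get(node, 0))

# node2 currently runs the fewest HA services, so it is selected
counts = {"node1": 5, "node2": 2, "node3": 4}
print(choose_recovery_node(["node1", "node2", "node3"], counts))  # node2
```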
+
+Static-Load Scheduler
+~~~~~~~~~~~~~~~~~~~~~
+
+IMPORTANT: The static mode is still a technology preview.
+
+Static usage information from HA services on each node is used to choose a
+recovery node. Usage of non-HA-managed services is currently not considered.
+
+For this selection, each node in turn is considered as if the service was
+already running on it, using CPU and memory usage from the associated guest
+configuration. Then for each such alternative, CPU and memory usage of all nodes
+are considered, with memory being weighted much more, because it's a truly
+limited resource. For both CPU and memory, the highest usage among nodes
+(weighted more, as ideally no node should be overcommitted) and the average
+usage of all nodes (to still be able to distinguish in case there already is a
+more highly committed node) are considered.
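+
+The selection described above can be sketched in Python. The weights and names
+below are purely illustrative assumptions; the actual implementation and its
+weighting differ:

```python
def score(nodes_usage):
    """Score a cluster-wide usage distribution; lower is better.

    nodes_usage: list of (cpu, mem) usage fractions, one entry per node.
    Memory is weighted more than CPU, and the highest-loaded node is
    weighted more than the average (illustrative weights only).
    """
    cpus = [cpu for cpu, _ in nodes_usage]
    mems = [mem for _, mem in nodes_usage]
    avg = lambda xs: sum(xs) / len(xs)
    return 4 * (2 * max(mems) + avg(mems)) + (2 * max(cpus) + avg(cpus))

def choose_recovery_node(candidates, usage, service):
    """Try the service on each candidate node and keep the best alternative.

    usage: mapping node name -> (cpu, mem) usage; service: (cpu, mem) demand.
    """
    scpu, smem = service
    best, best_score = None, float("inf")
    for node in candidates:
        trial = dict(usage)
        cpu, mem = trial[node]
        trial[node] = (cpu + scpu, mem + smem)  # as if already running here
        s = score(list(trial.values()))
        if s < best_score:
            best, best_score = node, s
    return best

usage = {"node1": (0.5, 0.7), "node2": (0.2, 0.3), "node3": (0.4, 0.6)}
print(choose_recovery_node(list(usage), usage, (0.1, 0.2)))  # node2
```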
+
+IMPORTANT: The more services there are, the more possible combinations exist,
+so it is currently not recommended to use this mode if you have thousands of
+HA-managed services.
+
+
+CRS Scheduling Points
+~~~~~~~~~~~~~~~~~~~~~
+
+The CRS algorithm is not applied for every service in every round, since this
+would mean a large number of constant migrations. Depending on the workload,
+this could put more strain on the cluster than the constant balancing would
+save. That's why the {pve} HA manager favors keeping services on their current
+node.
+
+The CRS is currently used at the following scheduling points:
+
+- Service recovery (always active). When a node with active HA services fails,
+ all its services need to be recovered to other nodes. The CRS algorithm will
+ be used here to balance that recovery over the remaining nodes.