X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=ha-manager.adoc;h=79c90435601012813df4253b0f748a0f2f1c5c3d;hp=cef806d7af723599410a96f8c8a9a7b41e38b206;hb=b179764dcbed98e70fccfcab3f8476fe1eeef074;hpb=01911cf3ca3ed6f4560fe510f3cbbbf8b1219e0d

diff --git a/ha-manager.adoc b/ha-manager.adoc
index cef806d..79c9043 100644
--- a/ha-manager.adoc
+++ b/ha-manager.adoc
@@ -1,15 +1,15 @@
-[[chapter-ha-manager]]
+[[chapter_ha_manager]]
 ifdef::manvolnum[]
-PVE({manvolnum})
-================
-include::attributes.txt[]
+ha-manager(1)
+=============
+:pve-toplevel:
 
 NAME
 ----
 
 ha-manager - Proxmox VE HA Manager
 
-SYNOPSYS
+SYNOPSIS
 --------
 
 include::ha-manager.1-synopsis.adoc[]
@@ -17,14 +17,12 @@ include::ha-manager.1-synopsis.adoc[]
 
 DESCRIPTION
 -----------
 endif::manvolnum[]
-
 ifndef::manvolnum[]
 High Availability
 =================
-include::attributes.txt[]
+:pve-toplevel:
 endif::manvolnum[]
-
 Our modern society depends heavily on information provided by
 computers over the network. Mobile devices amplified that dependency,
 because people can access the network any time from anywhere. If you
@@ -122,6 +120,7 @@ Requirements
 * optional hardware fencing devices
 
 
+[[ha_manager_resources]]
 Resources
 ---------
 
@@ -311,6 +310,7 @@ the update process can be too long which, in the worst case, may result
 in a watchdog reset.
 
 
+[[ha_manager_fencing]]
 Fencing
 -------
 
@@ -380,6 +380,7 @@ That minimizes the possibility of an overload, which else could cause
 an unresponsive node and as a result a chain reaction of node failures
 in the cluster.
 
+[[ha_manager_groups]]
 Groups
 ------
 
@@ -422,13 +423,14 @@ The resource won't automatically fail back when a more preferred node
 Examples;;
 * You need to migrate a service to a node which hasn't the highest priority
   in the group at the moment, to tell the HA manager to not move this service
-  instantly back set the nofailnback option and the service will stay on
+  instantly back set the 'nofailback' option and the service will stay on
+  the current node.
 
-* A service was fenced and he got recovered to another node. The admin
-  repaired the node and brang it up online again but does not want that the
+* A service was fenced and it got recovered to another node. The admin
+  repaired the node and brought it up online again but does not want that the
   recovered services move straight back to the repaired node as he wants to
   first investigate the failure cause and check if it runs stable. He can use
-  the nofailback option to achieve this.
+  the 'nofailback' option to achieve this.
 
 
 Start Failure Policy
@@ -480,6 +482,7 @@ killing its process)
 
 * *after* you fixed all errors you may enable the service again
 
 
+[[ha_manager_service_operations]]
 Service Operations
 ------------------
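For context on the 'nofailback' wording fixed in the hunk above, a sketch of how the option appears in a Proxmox VE HA group definition. The group name and node priorities are invented for illustration, and the exact file layout may differ between PVE versions:

```
# sketch of /etc/pve/ha/groups.cfg (group name and priorities are examples)
group: prefer_node1
        nodes node1:2,node2:1
        nofailback 1
```

With `nofailback 1` set, a service that was recovered to `node2` stays there when the higher-priority `node1` rejoins, matching the admin workflow described in the patched examples.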