include::attributes.txt[]

ha-manager - Proxmox VE HA Manager

include::ha-manager.1-synopsis.adoc[]

include::attributes.txt[]
'ha-manager' handles management of user-defined cluster services. This
includes handling of user requests which may start, stop, relocate or
migrate a service.
The cluster resource manager daemon also handles restarting and relocating
services to another node in the event of failures.
A service (also called resource) is uniquely identified by a service ID
(SID), which consists of the service type and a type-specific ID, e.g.:
'vm:100'. That example would be a service of type vm (virtual machine)
with the ID 100.
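Such a service could be put under HA control with the 'ha-manager' command
line tool, for example (an illustrative sketch; see the tool's man page for
the exact syntax of your version):

----
# put the virtual machine with VMID 100 under HA control
ha-manager add vm:100

# list all managed services and their SIDs
ha-manager status
----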
* at least three nodes
* hardware watchdog - if not available we fall back to the
  linux kernel software watchdog ('softdog')
This section provides a detailed description of the {PVE} HA-manager
internals. It describes how the CRM and the LRM work together.

To provide High Availability two daemons run on each node:
The local resource manager (LRM) controls the services running on
the local node.
It reads the requested states for its services from the current manager
status file and executes the respective commands.
The cluster resource manager (CRM) controls the cluster-wide
actions of the services, processes the LRM results and includes the state
machine which controls the state of each service.
.Locks in the LRM & CRM

Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active and working. As an
LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This lets us then recover the failed HA
services securely without the failed (but maybe still running) LRM interfering.
All this gets supervised by the CRM, which currently holds the manager master
lock.
Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager ('pve-ha-lrm') is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster-wide
locks are working.

It can be in three states:
* *wait for agent lock*: the LRM waits for our exclusive lock. This is
  also used as idle state if no service is configured
* *active*: the LRM holds its exclusive lock and has services configured
* *lost agent lock*: the LRM lost its lock, this means a failure happened
  and quorum was lost.
After the LRM gets in the active state it reads the manager status
file in '/etc/pve/ha/manager_status' and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key "max_worker".
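A hedged sketch of how this could look in the datacenter configuration (the
key name follows the text above; check the datacenter.cfg documentation of
your version before relying on it):

----
# /etc/pve/datacenter.cfg
# allow up to 8 concurrent LRM workers instead of the default 4
max_worker: 8
----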
.Maximal Concurrent Worker Adjustment Tips

The default value of 4 maximal concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory-wise) services. Ensure that no congestion happens even in the
worst case, and lower the "max_worker" value if needed. On the contrary, if
you have a particularly powerful, high-end setup you may also want to
increase it.
Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file '/etc/pve/nodes/<nodename>/lrm_status'. There the CRM may collect
it and let its state machine act on the command's result.
The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is
also identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions are the 'stop' and the 'error' commands;
those two do not depend on the result produced and are executed
always in the case of the stopped state and once in the case of
the error state.
The HA Stack logs every action it makes. This helps to understand what
happened in the cluster, and also why. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for 'pve-ha-crm' on the node which is the current master.
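For example (an illustrative sketch; the unit names are the daemons named
above):

----
# on the node currently running the service
journalctl -u pve-ha-lrm

# on the current CRM master node
journalctl -u pve-ha-crm
----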
Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager ('pve-ha-crm') starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.
It can be in three states:

* *wait for manager lock*: the CRM waits for the exclusive manager lock. This
  is also used as idle state if no service is configured
* *active*: the CRM holds its exclusive lock and has services configured
* *lost manager lock*: the CRM lost its lock, this means a failure happened
  and quorum was lost.
Its main task is to manage the services which are configured to be highly
available and to always try to bring them into the wanted state. E.g.: an
enabled service will be started if it is not running; if it crashes it will
be started again. Thus it dictates the wanted actions to the LRM.
When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.
When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out.
The HA stack is well integrated into the Proxmox VE API2. So, for
example, HA can be configured via 'ha-manager' or the PVE web
interface, which both provide an easy-to-use tool.
The resource configuration file can be found at
'/etc/pve/ha/resources.cfg' and the group configuration file at
'/etc/pve/ha/groups.cfg'. Use the provided tools to make changes;
there shouldn't be any need to edit them manually.
If a node needs maintenance, you should first migrate and/or relocate all
services which are required to keep running to another node.
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop the LRM with active services.
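A hedged sketch of such a maintenance shutdown (assuming all HA services
were already moved away from this node, as described above):

----
# stop the node's HA daemons; safe only once no HA service runs here
systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm
----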
Fencing ensures that on a node failure the failed node is rendered
unable to do any damage, and that no resource runs twice when it gets
recovered from the failed node.
Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default all watchdog modules are blocked for security reasons, as they are
like a loaded gun if not correctly initialized.
If you have a hardware watchdog available, remove its module from the
blacklist and restart the 'watchdog-mux' service.
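A hedged sketch of these steps (the module name 'iTCO_wdt' and the blacklist
location are examples only; both depend on your hardware and system):

----
# remove the module's blacklist entry, e.g. delete or comment the
# matching line in the relevant file under /etc/modprobe.d/, then
# load the hardware watchdog module
modprobe iTCO_wdt

# restart the multiplexer so it picks up the hardware watchdog
systemctl restart watchdog-mux
----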
Resource/Service Agents
-----------------------

A resource, also called a service, can be managed by the
ha-manager. Currently we support virtual machines and containers.
A group is a collection of cluster nodes which a service may be bound to.

nodes::

list of group node members
restricted::

resources bound to this group may only run on nodes defined by the
group. If no group node member is available, the resource will be
placed in the stopped state.
nofailback::

the resource won't automatically fail back when a more preferred node
(re)joins the cluster.
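A groups.cfg entry could look roughly like this (an illustrative sketch; the
group and node names are made up, and the exact format may differ between
versions, so prefer the provided tools over manual edits):

----
# /etc/pve/ha/groups.cfg
group: mygroup
	nodes node1,node2,node3
	restricted
	nofailback
----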
There are two service recovery policy settings which can be configured
specifically for each resource.
max_restart::

maximal number of tries to restart a failed service on the actual
node. The default is set to one.
max_relocate::

maximal number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.
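These limits could be raised per resource, for example (an illustrative
sketch; verify the option names against the 'ha-manager' man page of your
version):

----
# allow two restarts on the same node and two relocations for vm:100
ha-manager set vm:100 --max_restart 2 --max_relocate 2
----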
Note that the relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.
If the service state could not be recovered after all tries, it gets
placed in an error state. In this state the service won't get touched
by the HA stack anymore. To recover from this state you should follow
these steps:
* bring the resource back into a safe and consistent state (e.g.:
  killing its process)

* disable the HA resource to place it in a stopped state

* fix the error which led to these failures

* *after* you fixed all errors you may enable the service again
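Such a recovery could look like this (an illustrative sketch; verify the
subcommand names against the 'ha-manager' man page of your version):

----
# place the failed service in the stopped state
ha-manager disable vm:100

# ... fix the underlying problem, then re-enable it
ha-manager enable vm:100
----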
This is how the basic user-initiated service operations (via
'ha-manager') work.

enable::

the service will be started by the LRM if not already running.

disable::

the service will be stopped by the LRM if running.

migrate/relocate::

the service will be relocated (live) to another node.

remove::

the service will be removed from the HA managed resource list. Its
current state will not be touched.
start and stop commands can be issued to the resource-specific tools
(like 'qm' or 'pct'); they will forward the request to the
'ha-manager', which then will execute the action and set the resulting
service state (enabled, disabled).
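For example (illustrative; VMID 100 is assumed to be an HA-managed VM):

----
# this is forwarded to the ha-manager, which starts the service
# and sets its HA state to enabled
qm start 100
----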
stopped::

Service is stopped (confirmed by LRM).

request_stop::

Service should be stopped. Waiting for confirmation from the LRM.

started::

Service is active, and the LRM should start it ASAP if not already running.

fence::

Wait for node fencing (service node is not inside the quorate cluster
partition).

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate service (live) to other node.

error::

Service disabled because of LRM errors. Needs manual intervention.
include::pve-copyright.adoc[]