[[chapter-ha-manager]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
High Availability
=================
include::attributes.txt[]
endif::manvolnum[]

'ha-manager' handles management of user-defined cluster services. This
includes handling of user requests which may start, stop, relocate or
migrate a service.
The cluster resource manager daemon also handles restarting and relocating
services to another node in the event of failures.

A service (also called resource) is uniquely identified by a service ID
(SID), which consists of the service type and a type-specific id, e.g.:
'vm:100'. That example would be a service of type vm (Virtual machine)
with the VMID 100.

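For example, a virtual machine can be put under HA management via its SID
(a sketch; assumes a VM with VMID 100 already exists on the cluster):

```
ha-manager add vm:100     # manage VM 100 as HA service 'vm:100'
ha-manager status         # show the current state of all HA services
```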
Requirements
------------

* at least three nodes

* shared storage

* hardware redundancy

* hardware watchdog - if not available we fall back to the
Linux kernel software watchdog (softdog)

How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes how the CRM and the LRM work together.

To provide High Availability two daemons run on each node:

'pve-ha-lrm'::

The local resource manager (LRM), it controls the services running on
the local node.
It reads the requested states for its services from the current manager
status file and executes the respective commands.

'pve-ha-crm'::

The cluster resource manager (CRM), it controls the cluster-wide
actions of the services, processes the LRM results, and includes the
state machine which determines the current state of each service.

.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active and working. As an
LRM only executes actions when it holds its lock, we can mark a failed
node as fenced if we can acquire its lock. This lets us then recover the
failed HA services securely, without the failed (but maybe still running)
LRM interfering. This all gets supervised by the CRM, which currently
holds the manager master lock.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager ('pve-ha-lrm') is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster-wide
locks are working.

It can be in three states:

* *wait for agent lock*: the LRM waits for our exclusive lock. This is
also used as the idle state if no service is configured.
* *active*: the LRM holds its exclusive lock and has services configured.
* *lost agent lock*: the LRM lost its lock, this means a failure happened
and quorum was lost.

After the LRM gets in the active state, it reads the manager status
file in '/etc/pve/ha/manager_status' and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers run in parallel
and are limited to a maximum of 4 by default. This default setting
may be changed through the datacenter configuration key "max_worker".

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory-wise) services. Ensure that no congestion happens even in
the worst case, and lower the "max_worker" value if needed. On the
contrary, if you have a particularly powerful, high-end setup you may
also want to increase it.

Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file '/etc/pve/nodes/<nodename>/lrm_status'. There the CRM may collect
it and let its state machine - respective to the command's output - act on it.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is
also identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The 'stop' and the 'error' commands are exceptions; these two do not
depend on the result produced and are executed always in the case of
the stopped state, and once in the case of the error state.

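The once-per-UID rule can be sketched with a toy model (illustrative only,
not the actual PVE code; the 'stop' exemption is simplified here):

```python
class Lrm:
    """Toy model of the LRM side of the UID-tagged command handshake."""

    def __init__(self):
        self.executed = 0   # how many times a command actually ran
        self.results = {}   # UID -> result, as written to lrm_status

    def handle(self, uid, command):
        # A duplicate or outdated request with an already-known UID is not
        # re-run; only 'stop' is exempt and executes every time it arrives.
        if command != "stop" and uid in self.results:
            return self.results[uid]
        self.executed += 1
        self.results[uid] = command + ": ok"
        return self.results[uid]
```

Requesting 'start' twice under the same UID runs it only once, while a
repeated 'stop' request is executed again each time.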
.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
happens in the cluster, and also why. Here it is important to see what
both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for 'pve-ha-crm' on the node which is the current master.

Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager ('pve-ha-crm') starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

* *wait for agent lock*: the CRM waits for our exclusive lock. This is
also used as the idle state if no service is configured.
* *active*: the CRM holds its exclusive lock and has services configured.
* *lost agent lock*: the CRM lost its lock, this means a failure happened
and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and to always try to bring them into the wanted state. For
example, an enabled service will be started if it's not running; if it
crashes, it will be started again. Thus it dictates the wanted actions
to the LRM.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after 60 seconds.

Configuration
-------------

The HA stack is well integrated into the Proxmox VE API2. So, for
example, HA can be configured via 'ha-manager' or the PVE web
interface, which both provide an easy-to-use tool.

The resource configuration file can be found at
'/etc/pve/ha/resources.cfg' and the group configuration file at
'/etc/pve/ha/groups.cfg'. Use the provided tools to make changes;
there shouldn't be any need to edit them manually.

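For orientation, a minimal 'resources.cfg' entry might look like this
(an illustrative sketch only; the SID and group name are made up):

```
vm: 100
	group mygroup
	state enabled
```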
Node Power Status
-----------------

If a node needs maintenance, you should first migrate and/or relocate
all services which are required to always run to another node.
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop them with active services.

Fencing
-------

What Is Fencing
~~~~~~~~~~~~~~~

Fencing ensures that on a node failure the failed node is rendered
unable to do any damage, and that no resource runs twice when it gets
recovered from the failed node.

Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default all watchdog modules are blocked for security reasons, as they
are like a loaded gun if not correctly initialized.
If you have a hardware watchdog available, remove its module from the
blacklist and restart the 'watchdog-mux' service.


Resource/Service Agents
-----------------------

A resource, also called a service, can be managed by the
ha-manager. Currently we support virtual machines and containers.

Groups
------

A group is a collection of cluster nodes which a service may be bound to.

Group Settings
~~~~~~~~~~~~~~

nodes::

list of group node members

restricted::

resources bound to this group may only run on nodes defined by the
group. If no group node member is available, the resource will be
placed in the stopped state.

nofailback::

the resource won't automatically fail back when a more preferred node
(re)joins the cluster.


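A 'groups.cfg' entry could look like the following sketch (node names are
made up, and the 'node:priority' notation for preferring node2 is an
assumption of this example):

```
group: mygroup
	nodes node1,node2:2,node3
	restricted 1
	nofailback 0
```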
Recovery Policy
---------------

There are two service recovery policy settings which can be configured
specifically for each resource.

max_restart::

maximum number of attempts to restart a failed service on the actual
node. The default is set to one.

max_relocate::

maximum number of attempts to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.

Note that the relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.

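Both settings are per-resource keys; a 'resources.cfg' entry could set them
like this (values illustrative only):

```
vm: 100
	max_restart 2
	max_relocate 1
```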
Error Recovery
--------------

If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state the service won't be touched
by the HA stack anymore. To recover from this state you should follow
these steps:

* bring the resource back into a safe and consistent state (e.g.:
killing its process)

* disable the HA resource to place it in a stopped state

* fix the error which led to these failures

* *after* you fixed all errors you may enable the service again


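For a hypothetical service 'vm:100', the steps above could look like this
on the command line (a sketch using the 'disable'/'enable' subcommands of
'ha-manager'):

```
ha-manager disable vm:100   # place the resource in the stopped state
# ... fix the root cause of the failure ...
ha-manager enable vm:100    # only *after* all errors are fixed
```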
Service Operations
------------------

This is how the basic user-initiated service operations (via
'ha-manager') work.

enable::

the service will be started by the LRM if not already running.

disable::

the service will be stopped by the LRM if running.

migrate/relocate::

the service will be relocated (live) to another node.

remove::

the service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

start and stop commands can be issued to the resource-specific tools
(like 'qm' or 'pct'); they will forward the request to the
'ha-manager', which then will execute the action and set the resulting
service state (enabled, disabled).


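The operations above map onto 'ha-manager' subcommands, for example
(sketch with a hypothetical SID and target node):

```
ha-manager enable vm:100            # request the started state
ha-manager disable vm:100           # request the stopped state
ha-manager migrate vm:100 node2     # move the service to node2
ha-manager remove vm:100            # stop managing the service via HA
```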
Service States
--------------

stopped::

Service is stopped (confirmed by LRM)

request_stop::

Service should be stopped. Waiting for confirmation from LRM.

started::

Service is active and the LRM should start it ASAP if not already running.

fence::

Wait for node fencing (service node is not inside the quorate cluster
partition).

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate service (live) to another node.

error::

Service disabled because of LRM errors. Needs manual intervention.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]
