[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
High Availability
=================
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices have amplified that dependency,
because people can access the network at any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.

We can mathematically define the availability as the ratio of (A) the
total time a service is capable of being used during a given interval
to (B) the length of the interval. It is normally expressed as a
percentage of uptime in a given year.

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99 |3.65 days
|99.9 |8.76 hours
|99.99 |52.56 minutes
|99.999 |5.26 minutes
|99.9999 |31.5 seconds
|99.99999 |3.15 seconds
|===========================================================
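
For example, 99.99% availability over one year leaves room for roughly
52.6 minutes of downtime, which is how the values in the table above are
computed:

----
downtime = (1 - availability) * interval
         = (1 - 0.9999) * 365 days
         = 0.0001 * 525600 minutes
         ≈ 52.56 minutes
----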

There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a way
to detect errors and do failover. This is relatively easy if you just
want to serve read-only web pages. But in general this is complex, and
sometimes impossible because you cannot modify the software
yourself. The following solutions work without modifying the
software:

* Use reliable ``server'' components

NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies on the main boards
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also make it easy to set up and use redundant storage and network
devices. So if one host fails, you can simply start those services on
another host within your cluster.

Even better, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to automatically
detect errors and handle failover.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. `ha-manager` then observes correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least doubles the
costs. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those additional
costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.

Requirements
------------

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* hardware watchdog - if not available we fall back to the
Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices


[[ha_manager_resources]]
Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type specific ID, e.g.: `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, an HA enabled resource should not depend on other resources.

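For example, to put the virtual machine with ID 100 under `ha-manager`
control, you add it by its SID (a minimal sketch; see `ha-manager help`
for the options available in your version):

[source,bash]
ha-manager add vm:100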

How It Works
------------

This section provides a detailed description of the {PVE} HA-manager
internals. It describes how the CRM and the LRM work together.

To provide High Availability, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM), which controls the services running on
the local node. It reads the requested states for its services from
the current manager status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM), which makes the cluster wide
decisions. It sends commands to the LRM, processes the results,
and moves resources to other nodes if something fails. The CRM also
handles node fencing.


.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active exactly once and working. As an
LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This lets us then recover any failed
HA services securely without any interference from the now unknown failed node.
This is all supervised by the CRM, which currently holds the manager master
lock.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as idle state if no
service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock, this means a failure happened and quorum was lost.

Once the LRM is in the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory wise) services. Ensure that no congestion happens even in the
worst case, and lower the `max_worker` value if needed. On the contrary, if you
have a particularly powerful, high end setup you may also want to increase it.

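A minimal sketch of such an adjustment - here we assume the key is set in
the cluster-wide datacenter configuration file `/etc/pve/datacenter.cfg`;
check the datacenter configuration documentation for the exact key name
and allowed range:

----
max_worker: 8
----
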
Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine - respective to the command's output - act on it.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions are the `stop` and the `error` commands;
these two do not depend on the result produced and are executed
always in the case of the stopped state and once in the case of
the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
and also why something happens in the cluster. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is and
the same command for `pve-ha-crm` on the node which is the current master.

Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as idle state if no
service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock, this means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and to try to always enforce the wanted state. For example, an
enabled service will be started if it's not running; if it crashes, it will
be started again. Thus the CRM dictates the actions the LRM needs to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out; this happens after 60 seconds.


Configuration
-------------

The HA stack is well integrated into the {pve} API. So, for example,
HA can be configured via the `ha-manager` command line interface, or
the {pve} web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.

All HA configuration files are within `/etc/pve/ha/`, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.


Resources
~~~~~~~~~

The resource configuration file `/etc/pve/ha/resources.cfg` stores
the list of resources managed by `ha-manager`. A resource configuration
inside that list looks like this:

----
<sid>:
        <property> <value>
        ...
----

It starts with the service ID followed by a colon. The next lines
contain additional properties:

include::ha-resources-opts.adoc[]
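
A minimal example, assuming a group named `prefer_node1` has already been
defined and using the optional `comment` property from the list above:

----
vm: 100
        group prefer_node1
        comment my important production VM
----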


Groups
~~~~~~

The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:

----
group: <group>
        nodes <node_list>
        <property> <value>
        ...
----

include::ha-groups-opts.adoc[]
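
For example, a restricted group that prefers `node1` over `node2` and
`node3` could look like this (a sketch using the `nodes` and `restricted`
properties described above):

----
group: mygroup
        nodes node1:2,node2:1,node3:1
        restricted 1
----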


Node Power Status
-----------------

If a node needs maintenance, you should first migrate and/or relocate all
services which need to keep running to another node.
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop the LRM while it still has active services.
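
For example, once all services have been moved away from the node, the
two daemons could be stopped like this (a sketch; the unit names
correspond to the daemons described earlier):

[source,bash]
----
systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm
----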

Package Updates
---------------

When updating the ha-manager you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Upgrading one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all nodes at once could leave you with a broken cluster and is generally not
good practice.

Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from being touched by the cluster during the short time the LRM is restarting.
After that, the LRM may safely close the watchdog during a restart.
Such a restart happens during a package update and, as already stated, an active master
CRM is needed to acknowledge the requests from the LRM. If this is not the case,
the update process can take too long which, in the worst case, may result in
a watchdog reset.


[[ha_manager_fencing]]
Fencing
-------

What is Fencing
~~~~~~~~~~~~~~~

Fencing ensures that on a node failure the failed node is rendered
unable to do any damage and that no resource runs twice when it gets recovered
from the failed node. This is a really important task and one of the base
principles of making a system Highly Available.

If a node did not get fenced, it would be in an unknown state where it may
still have access to shared resources - this is really dangerous!
Imagine that every network but the storage one broke. Now, while not
reachable from the public network, the VM still runs and writes to the shared
storage. If we did not fence the node and just started up this VM on another
node, we would get dangerous race conditions and atomicity violations; the whole VM
could be rendered unusable. The recovery could also simply fail if the storage
protects against multiple mounts, which would defeat the purpose of HA.

How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example fence devices which
cut off the power from the node or disable its communication completely.

Those are often quite expensive and bring additional critical components into
a system, because if they fail you cannot recover any service.

We thus wanted to integrate a simpler method into the HA Manager first, namely
self fencing with watchdogs.

Watchdogs have been widely used in critical and dependable systems since the
beginning of microcontrollers. They are often independent and simple
integrated circuits which programs can use to watch them. After opening, a program needs to
report to the watchdog periodically. If, for whatever reason, it becomes unable to do
so, the watchdog triggers a reset of the whole server.

Server motherboards often already include such hardware watchdogs, but these need
to be configured. If no watchdog is available or configured, we fall back to the
Linux kernel softdog. While still reliable, it is not independent of the server's
hardware and thus has a lower reliability than a hardware watchdog.

Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default all watchdog modules are blocked for security reasons, as they are
like a loaded gun if not correctly initialized.
If you have a hardware watchdog available, remove its kernel module from the
blacklist, load it with `insmod`, and restart the `watchdog-mux` service or reboot
the node.
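
A rough sketch of these steps - `iTCO_wdt` is only an example module name
for boards with an Intel TCO watchdog, and we use `modprobe` here instead
of a direct `insmod` call; adapt both to your hardware and setup:

[source,bash]
----
# after removing your watchdog module from the blacklist:
modprobe iTCO_wdt
systemctl restart watchdog-mux
----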

Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node failed and its fencing was successful, we start to recover services
to other available nodes and restart them there so that they can provide service
again.

The selection of the node on which the services get recovered is influenced
by the user's group settings, the currently active nodes and their respective
active service count.
First we build a set out of the intersection between user selected nodes and
available nodes. Then the subset of those nodes with the highest priority
gets chosen as possible nodes for recovery. We select the node with the
currently lowest active service count as the new node for the service.
That minimizes the possibility of an overload, which could otherwise cause an
unresponsive node and, as a result, a chain reaction of node failures in the
cluster.

[[ha_manager_groups]]
Groups
------

A group is a collection of cluster nodes which a service may be bound to.

Group Settings
~~~~~~~~~~~~~~

nodes::

List of group node members, where a priority can be given to each node.
A service bound to this group will run on the available nodes with the highest
priority. If more nodes are in the highest priority class, the services will
get distributed to those nodes if not already there. The priorities have a
relative meaning only.
Example;;
You want to run all services from a group on `node1` if possible. If this node
is not available, you want them to run equally split on `node2` and `node3`, and
if those fail it should use `node4`.
To achieve this you could set the node list to:
[source,bash]
ha-manager groupset mygroup -nodes "node1:2,node2:1,node3:1,node4"

restricted::

Resources bound to this group may only run on nodes defined by the
group. If no group node member is available, the resource will be
placed in the stopped state.
Example;;
Let's say a service uses resources only available on `node1` and `node2`,
so we need to make sure that the HA manager does not use other nodes.
We need to create a 'restricted' group with said nodes:
[source,bash]
ha-manager groupset mygroup -nodes "node1,node2" -restricted

nofailback::

The resource won't automatically fail back when a more preferred node
(re)joins the cluster.
Examples;;
* You need to migrate a service to a node which currently doesn't have the
highest priority in the group. To tell the HA manager not to move this service
instantly back, set the 'nofailback' option and the service will stay on
the current node.

* A service was fenced and got recovered to another node. The admin
repaired the node and brought it online again, but does not want the
recovered services to move straight back to the repaired node, as he wants to
first investigate the failure cause and check that it runs stably. He can use
the 'nofailback' option to achieve this (see the sketch after this list).
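
A sketch of setting this option on an existing group, following the
`groupset` syntax used in the examples above:

[source,bash]
ha-manager groupset mygroup -nofailback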


Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to start on a
node one or more times. It can be used to configure how often a restart
should be triggered on the same node and how often a service should be
relocated, so that it gets a chance to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't available
on a quorate node anymore, e.g. due to network problems, but still is on other nodes,
the relocate policy allows the service to get started nonetheless.

There are two service start recovery policy settings which can be configured
specifically for each resource.

max_restart::

Maximum number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.
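
For example, to allow two restarts and two relocation attempts for the
resource `vm:100`, the corresponding entry in `resources.cfg` could look
like this (a sketch using the two properties described above):

----
vm: 100
        max_restart 2
        max_relocate 2
----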

Error Recovery
--------------

If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state the service won't get touched
by the HA stack anymore. To recover from this state you should follow
these steps (see the sketch after this list):

* bring the resource back into a safe and consistent state (e.g.,
killing its process)

* disable the HA resource to place it in a stopped state

* fix the error which led to these failures

* *after* you fixed all errors you may enable the service again
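
A sketch of the disable/enable steps for a resource `vm:100`, assuming the
`ha-manager` sub-commands shown in the service operations section below:

[source,bash]
----
# place the resource in the stopped state
ha-manager disable vm:100
# fix the underlying problem, then re-enable the service
ha-manager enable vm:100
----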


[[ha_manager_service_operations]]
Service Operations
------------------

This is how the basic user-initiated service operations (via
`ha-manager`) work; a short command example follows the list.

enable::

The service will be started by the LRM if not already running.

disable::

The service will be stopped by the LRM if running.

migrate/relocate::

The service will be relocated (live) to another node.

remove::

The service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

`start` and `stop` commands can be issued to the resource specific tools
(like `qm` or `pct`). They will forward the request to the
`ha-manager`, which will then execute the action and set the resulting
service state (enabled, disabled).
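
For example, assuming a resource `vm:100` and a target node `node2`
(a sketch; check `ha-manager help` for the exact syntax of your version):

[source,bash]
----
ha-manager enable vm:100
ha-manager migrate vm:100 node2
ha-manager disable vm:100
----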


Service States
--------------

stopped::

Service is stopped (confirmed by LRM). If detected to be running, it will get
stopped again.

request_stop::

Service should be stopped. Waiting for confirmation from the LRM.

started::

Service is active and the LRM should start it ASAP if not already running.
If the service fails and is detected to be not running, the LRM restarts it.

fence::

Wait for node fencing (the service node is not inside the quorate cluster
partition).
As soon as the node gets fenced successfully, the service will be recovered to
another node, if possible.

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate service (live) to another node.

error::

Service disabled because of LRM errors. Needs manual intervention.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]