[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
High Availability
=================
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices amplified that dependency,
because people can access the network any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.

We can mathematically define the availability as the ratio of (A) the
total time a service is capable of being used during a given interval
to (B) the length of the interval. It is normally expressed as a
percentage of uptime in a given year.

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99 |3.65 days
|99.9 |8.76 hours
|99.99 |52.56 minutes
|99.999 |5.26 minutes
|99.9999 |31.5 seconds
|99.99999 |3.15 seconds
|===========================================================

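The table values follow directly from this definition. For example, for
the 99.99% row:

----
1 year ≈ 365 days × 24 × 60 = 525,600 minutes
downtime = (1 − 0.9999) × 525,600 minutes ≈ 52.56 minutes
----
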
There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a
way to detect errors and do failover. This is relatively easy if you
just want to serve read-only web pages. But in general this is
complex, and sometimes impossible, because you cannot modify the
software yourself. The following solutions work without modifying the
software:
57
58 * Use reliable ``server'' components
59 +
60 NOTE: Computer components with same functionality can have varying
61 reliability numbers, depending on the component quality. Most vendors
62 sell components with higher reliability as ``server'' components -
63 usually at higher price.
64
65 * Eliminate single point of failure (redundant components)
66 ** use an uninterruptible power supply (UPS)
67 ** use redundant power supplies on the main boards
68 ** use ECC-RAM
69 ** use redundant network hardware
70 ** use RAID for local storage
71 ** use distributed, redundant storage for VM data
72
73 * Reduce downtime
74 ** rapidly accessible administrators (24/7)
75 ** availability of spare parts (other nodes in a {pve} cluster)
76 ** automatic error detection (provided by `ha-manager`)
77 ** automatic failover (provided by `ha-manager`)
78
Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also make it easy to set up and use redundant storage and network
devices, so if one host fails, you can simply start those services on
another host within your cluster.

Even better, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to automatically
detect errors and trigger failover.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. `ha-manager` then observes correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least doubles the
costs. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those
additional costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.


Requirements
------------

You must meet the following requirements before you start with HA:

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* use reliable ``server'' components

* hardware watchdog - if not available we fall back to the
Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices


[[ha_manager_resources]]
Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type-specific ID, e.g.: `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, an HA enabled resource should not depend on other resources.

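All `ha-manager` commands identify a resource by its SID. As a minimal
sketch (assuming a VM with ID 100 and a container with ID 101 already
exist), resources can be put under HA management like this:

[source,bash]
----
# put VM 100 and container 101 under HA management
ha-manager add vm:100
ha-manager add ct:101

# show the current state of all managed resources
ha-manager status
----
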

How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes how the CRM and the LRM work together.

To provide High Availability two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM), which controls the services running on
the local node. It reads the requested states for its services from
the current manager status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM), which makes the cluster wide
decisions. It sends commands to the LRM, processes the results,
and moves resources to other nodes if something fails. The CRM also
handles node fencing.

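Both daemons run as regular systemd services, so their state can be
checked on any node with standard tools, for example:

[source,bash]
----
# check the state of both HA daemons on the local node
systemctl status pve-ha-lrm pve-ha-crm
----
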

.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active once and working. As an
LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This lets us then recover any failed
HA services securely, without any interference from the now unknown failed
node. This is all supervised by the CRM, which currently holds the manager
master lock.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as the idle state
if no service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock. This means a failure happened and quorum was lost.

After the LRM gets in the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started. These workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory wise) services. Ensure that no congestion happens, even in
the worst case, and lower the `max_worker` value if needed. Conversely,
if you have a particularly powerful, high end setup you may also want to
increase it.

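As a sketch, the limit could be lowered cluster wide in the datacenter
configuration file. The key name here follows the text above; verify it
against the datacenter configuration documentation of your version:

----
# /etc/pve/datacenter.cfg
max_worker: 2
----
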
Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may
collect it and let its state machine act on the command's result.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is
also identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions are the `stop` and the `error` commands; these two do
not depend on the result produced and are executed always in the case of
the stopped state, and once in the case of the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
happens in the cluster and why. Here it is important to see what both
daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for the `pve-ha-crm` on the node which is the current
master.

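For example:

[source,bash]
----
# on the node(s) where the service is located
journalctl -u pve-ha-lrm

# on the node which is the current CRM master
journalctl -u pve-ha-crm
----
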
Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as the idle state
if no service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock. This means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available, and to always try to enforce the wanted state. For example, an
enabled service will be started if it is not running; if it crashes, it
will be started again. Thus the CRM dictates to the LRM the actions it
needs to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out; this happens after 60 seconds.


Configuration
-------------

The HA stack is well integrated into the {pve} API. So, for example,
HA can be configured via the `ha-manager` command line interface, or
the {pve} web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.

All HA configuration files are within `/etc/pve/ha/`, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.

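For example, an automation tool could read the HA configuration through
the API; a minimal sketch using the `pvesh` CLI wrapper:

[source,bash]
----
# list the configured HA resources via the API
pvesh get /cluster/ha/resources

# list the configured HA groups
pvesh get /cluster/ha/groups
----
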

Resources
~~~~~~~~~

The resource configuration file `/etc/pve/ha/resources.cfg` stores
the list of resources managed by `ha-manager`. A resource configuration
inside that list looks like this:

----
<type>: <name>
    <property> <value>
    ...
----

It starts with a resource type, followed by a resource specific name,
separated by a colon. Together this forms the HA resource ID, which is
used by all `ha-manager` commands to uniquely identify a resource
(example: `vm:100` or `ct:101`). The next lines contain additional
properties:

include::ha-resources-opts.adoc[]

Here is a real world example with one VM and one container. As you see,
the syntax of those files is really simple, so it is even possible to
read or edit those files using your favorite editor:

.Configuration Example (`/etc/pve/ha/resources.cfg`)
----
vm: 501
    state started
    max_relocate 2

ct: 102
# use default settings for everything
----

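The same configuration could also be created with the `ha-manager`
command line tool; a sketch equivalent to the example above:

[source,bash]
----
# VM 501: keep it started, allow up to two relocations on start failure
ha-manager add vm:501 --state started --max_relocate 2

# container 102: default settings for everything
ha-manager add ct:102
----
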

Groups
~~~~~~

The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:

----
group: <group>
    nodes <node_list>
    <property> <value>
    ...
----

include::ha-groups-opts.adoc[]

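For example, a group preferring a hypothetical `node1` could look like
this, or equivalently be created with the `ha-manager` tool:

.Example (`/etc/pve/ha/groups.cfg`)
----
group: prefer_node1
    nodes node1:2,node2:1
----

[source,bash]
----
ha-manager groupadd prefer_node1 --nodes "node1:2,node2:1"
----
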

Node Power Status
-----------------

If a node needs maintenance, you should first migrate and/or relocate
all services which need to keep running to another node.
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop them with active services.

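A minimal sketch of such a maintenance procedure, with a hypothetical
service ID and target node:

[source,bash]
----
# first move the service to another node
ha-manager migrate vm:100 node2

# once no services are active on this node anymore, stop the HA daemons
systemctl stop pve-ha-lrm pve-ha-crm
----
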
Package Updates
---------------

When updating the ha-manager, you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Updating one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all nodes at once could leave you with a broken cluster state and
is generally not good practice.

Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from getting touched by the cluster during the short time the LRM is
restarting. After that, the LRM may safely close the watchdog during a
restart. Such a restart happens during an update and, as already stated, an
active master CRM is needed to acknowledge the requests from the LRM. If
this is not the case, the update process can take too long which, in the
worst case, may result in a watchdog reset.


[[ha_manager_fencing]]
Fencing
-------

What is Fencing
~~~~~~~~~~~~~~~

Fencing ensures that, on a node failure, the failed node is rendered
unable to do any damage, and that no resource runs twice when it gets
recovered from the failed node. This is a really important task, and one
of the base principles for making a system highly available.

If a node were not fenced, it would be in an unknown state where it may
still have access to shared resources. This is really dangerous!
Imagine that every network but the storage one broke. Now, while not
reachable from the public network, the VM still runs and writes to the
shared storage. If we would not fence the node and just started this VM
on another node, we would get dangerous race conditions and atomicity
violations, and the whole VM could be rendered unusable. The recovery
could also simply fail, if the storage protects against multiple mounts,
and thus defeat the purpose of HA.

How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example, fence devices
which cut off the power from the node or disable its communication
completely.

Those are often quite expensive and bring additional critical components
into a system, because if they fail you cannot recover any service.

We thus wanted to integrate a simpler method into the HA Manager first,
namely self fencing with watchdogs.

Watchdogs have been widely used in critical and dependable systems since
the beginning of microcontrollers. They are often independent, simple
integrated circuits which programs can use to watch them. After opening,
a program needs to report to the watchdog periodically. If, for whatever
reason, it becomes unable to do so, the watchdog triggers a reset of the
whole server.

Server motherboards often already include such hardware watchdogs, but
these need to be configured. If no watchdog is available or configured,
we fall back to the Linux kernel softdog. While still reliable, it is not
independent of the server's hardware and thus has a lower reliability
than a hardware watchdog.

Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, all watchdog modules are blocked for security reasons, as
they are like a loaded gun if not correctly initialized.
If you have a hardware watchdog available, remove its kernel module from
the blacklist, load it with insmod, and restart the `watchdog-mux`
service or reboot the node.

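A hedged sketch, assuming a board with an Intel TCO watchdog
(`iTCO_wdt`); the module name and blacklist location depend on your
hardware and installation. `modprobe` is used here instead of plain
insmod, as it resolves module dependencies automatically:

[source,bash]
----
# after removing iTCO_wdt from the blacklist, load the module
modprobe iTCO_wdt

# then restart the watchdog multiplexer
systemctl restart watchdog-mux
----
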
Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node failed and its fencing was successful, we start to recover
services to other available nodes and restart them there, so that they
can provide service again.

The selection of the node on which the services get recovered is
influenced by the user's group settings, the currently active nodes,
and their respective active service counts.
First, we build a set out of the intersection between user selected nodes
and available nodes. Then the subset with the highest priority of those
nodes gets chosen as possible nodes for recovery. We select the node with
the currently lowest active service count as the new node for the service.
That minimizes the possibility of an overload, which otherwise could
cause an unresponsive node and, as a result, a chain reaction of node
failures in the cluster.

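A hypothetical example of this selection: assume a group configured with
`nodes node1:2,node2:1,node3:1`, where `node1` has just failed and the
remaining nodes currently run 3 and 1 active services respectively:

----
intersection of group and online nodes:  node2 (prio 1), node3 (prio 1)
highest available priority class:        node2, node3
active service count:                    node2 = 3, node3 = 1
chosen recovery node:                    node3
----
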
[[ha_manager_groups]]
Groups
------

A group is a collection of cluster nodes which a service may be bound to.

Group Settings
~~~~~~~~~~~~~~

nodes::

List of group node members, where a priority can be given to each node.
A service bound to this group will run on the available nodes with the
highest priority. If more nodes are in the highest priority class, the
services will get distributed to those nodes if not already there. The
priorities have a relative meaning only.
Example;;
You want to run all services from a group on `node1` if possible. If this
node is not available, you want them to run equally split on `node2` and
`node3`, and if those fail, it should use `node4`.
To achieve this you could set the node list to:
[source,bash]
ha-manager groupset mygroup -nodes "node1:2,node2:1,node3:1,node4"

restricted::

Resources bound to this group may only run on nodes defined by the
group. If no group node member is available, the resource will be
placed in the stopped state.
Example;;
Let's say a service uses resources only available on `node1` and `node2`,
so we need to make sure that the HA manager does not use other nodes.
We need to create a 'restricted' group with said nodes:
[source,bash]
ha-manager groupset mygroup -nodes "node1,node2" -restricted

nofailback::

The resource won't automatically fail back when a more preferred node
(re)joins the cluster.
Examples;;
* You need to migrate a service to a node which doesn't have the highest
priority in the group at the moment. To tell the HA manager not to move
this service back instantly, set the 'nofailback' option and the service
will stay on the current node.

* A service was fenced and it got recovered to another node. The admin
repaired the node and brought it back online, but does not want the
recovered services to move straight back to the repaired node, as they
want to first investigate the failure cause and check that it runs
stably. They can use the 'nofailback' option to achieve this.


Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to start
on a node one or more times. It can be used to configure how often a
restart should be triggered on the same node, and how often a service
should be relocated, so that it has an attempt to be started on another
node.
The aim of this policy is to circumvent temporary unavailability of
shared resources on a specific node. For example, if a shared storage
isn't available on a quorate node anymore (e.g. due to network problems),
but is still available on other nodes, the relocate policy allows the
service to get started nonetheless.

There are two service start recover policy settings which can be
configured specifically for each resource.

max_restart::

Maximum number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.

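Both settings can be adjusted per resource, for example (with a
hypothetical service ID):

[source,bash]
----
# try two local restarts, then up to two relocations to other nodes
ha-manager set vm:100 --max_restart 2 --max_relocate 2
----
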
Error Recovery
--------------

If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state, the service won't get touched
by the HA stack anymore. To recover from this state you should follow
these steps:

* bring the resource back into a safe and consistent state (e.g.,
killing its process)

* disable the HA resource to place it in a stopped state (see the sketch
after this list)

* fix the error which led to these failures

* *after* you fixed all errors you may enable the service again

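A sketch of the last two administrative steps on the command line, with a
hypothetical service ID:

[source,bash]
----
# place the resource in the stopped state
ha-manager set vm:100 --state disabled

# after all errors are fixed, enable it again
ha-manager set vm:100 --state enabled
----
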

[[ha_manager_service_operations]]
Service Operations
------------------

This is how the basic user-initiated service operations (via
`ha-manager`) work.

enable::

The service will be started by the LRM if not already running.

disable::

The service will be stopped by the LRM if running.

migrate/relocate::

The service will be relocated (live) to another node (see the example
after this list).

remove::

The service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

`start` and `stop` commands can be issued to the resource specific tools
(like `qm` or `pct`). They will forward the request to the
`ha-manager`, which then will execute the action and set the resulting
service state (enabled, disabled).

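For example, a migration could be requested like this (with a
hypothetical service ID and target node):

[source,bash]
----
# live migrate the service to node2
ha-manager migrate vm:100 node2

# or stop it, move it, and start it again on the target node
ha-manager relocate vm:100 node2
----
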

Service States
--------------

stopped::

Service is stopped (confirmed by the LRM). If detected running, it will
get stopped again.

request_stop::

Service should be stopped. Waiting for confirmation from the LRM.

started::

Service is active and the LRM should start it ASAP if not already running.
If the service fails and is detected as not running, the LRM restarts it.

fence::

Wait for node fencing (the service node is not inside the quorate cluster
partition).
As soon as the node gets fenced successfully, the service will be
recovered to another node, if possible.

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate the service (live) to another node.

error::

Service disabled because of LRM errors. Needs manual intervention.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]