[[chapter-ha-manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
include::attributes.txt[]
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
High Availability
=================
include::attributes.txt[]
endif::manvolnum[]
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices amplified that dependency,
because people can access the network any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.

We can mathematically define the availability as the ratio of (A) the
total time a service is capable of being used during a given interval
to (B) the length of the interval. It is normally expressed as a
percentage of uptime in a given year.

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99 |3.65 days
|99.9 |8.76 hours
|99.99 |52.56 minutes
|99.999 |5.26 minutes
|99.9999 |31.5 seconds
|99.99999 |3.15 seconds
|===========================================================

There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a
way to detect errors and do failover. This is relatively easy if you
just want to serve read-only web pages. But in general this is
complex, and sometimes impossible, because you cannot modify the
software yourself. The following solutions work without modifying the
software:

* Use reliable ``server'' components

NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies on the main boards
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also make it easy to set up and use redundant storage and network
devices. So if one host fails, you can simply start those services on
another host within your cluster.

Even better, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to automatically
detect errors and handle failover.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. `ha-manager` then observes correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least duplicates the
costs. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those
additional costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.

Requirements
------------

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* hardware watchdog - if not available we fall back to the
  Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices


Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type specific ID, e.g.: `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as it was done with `rgmanager`. In
general, an HA enabled resource should not depend on other resources.


How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes how the CRM and the LRM work together.

To provide High Availability, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM) controls the services running on
the local node.
It reads the requested states for its services from the current manager
status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM) controls the cluster wide
actions of the services, processes the LRM results, and includes the state
machine which controls the state of each service.

.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active exactly once and working.
As an LRM only executes actions when it holds its lock, we can mark a failed
node as fenced if we can acquire its lock. This lets us then recover any
failed HA services securely, without any interference from the now unknown
failed node. All this gets supervised by the CRM, which currently holds the
manager master lock.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as the idle state
if no service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock; this means a failure happened and quorum was lost.

After the LRM gets in the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers are running in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory wise) services. Ensure that no congestion happens even in the
worst case, and lower the `max_worker` value if needed. On the contrary, if
you have a particularly powerful, high end setup you may also want to
increase it.

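For example, a minimal sketch of lowering the limit cluster-wide, assuming
the `max_worker` key can simply be appended to the datacenter configuration
file as described above and is not already set there:
[source,bash]
 echo "max_worker: 2" >> /etc/pve/datacenter.cfg
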
Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine - respective to the command's output - act on it.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID; the LRM
then executes this action *one time* and writes back the result, which is
also identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions are the `stop` and the `error` commands;
those two do not depend on the result produced and are always executed
in the case of the stopped state, and once in the case of
the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
happens in the cluster, and also why. Here it's important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for `pve-ha-crm` on the node which is the current master.

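For instance, a quick way to follow both logs live while debugging a
failover (run each command on the respective node):
[source,bash]
 journalctl -f -u pve-ha-lrm   # on the node(s) where the service runs
 journalctl -f -u pve-ha-crm   # on the node which is the current master
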
Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as the idle state
if no service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock; this means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and to always try to enforce the wanted state, e.g.: an
enabled service will be started if it's not running; if it crashes, it will
be started again. Thus it dictates to the LRM the actions it needs to
execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out; this happens after 60 seconds.

Configuration
-------------

The HA stack is well integrated into the Proxmox VE API2. So, for
example, HA can be configured via `ha-manager` or the PVE web
interface, which both provide an easy to use tool.

The resource configuration file can be found at
`/etc/pve/ha/resources.cfg` and the group configuration file at
`/etc/pve/ha/groups.cfg`. Use the provided tools to make changes;
there shouldn't be any need to edit them manually.

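For example, a brief sketch of putting an existing guest under HA control
on the command line (`vm:100` is just an example SID):
[source,bash]
 # add the resource, then verify that the manager picked it up
 ha-manager add vm:100
 ha-manager status
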
Node Power Status
-----------------

If a node needs maintenance, you should first migrate and/or relocate
all services which need to keep running to another node.
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop the LRM while it still has active services.

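A minimal sketch of that sequence (the SID and node name are examples):
[source,bash]
 # move the service away first, then stop the HA daemons on this node
 ha-manager migrate vm:100 node2
 systemctl stop pve-ha-lrm pve-ha-crm
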
Package Updates
---------------

When updating the ha-manager, you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Upgrading one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all at once could leave you with a broken cluster state and is
generally not good practice.

Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from being touched by the cluster during the short time the LRM is
restarting. After that, the LRM may safely close the watchdog during a
restart. Such a restart happens during a package update and, as already
stated, an active master CRM is needed to acknowledge the requests from the
LRM. If this is not the case, the update process can take too long which, in
the worst case, may result in a watchdog reset.


Fencing
-------

What is Fencing
~~~~~~~~~~~~~~~

Fencing ensures that, on a node failure, the failed node is rendered
unable to do any damage, and that no resource runs twice when it gets
recovered from the failed node. This is a really important task, and one of
the base principles to make a system Highly Available.

If a node were not fenced, it would be in an unknown state where it may
still have access to shared resources; this is really dangerous!
Imagine that every network but the storage one broke. Now, while not
reachable from the public network, the VM still runs and writes to the shared
storage. If we would not fence the node and just start up this VM on another
node, we would get dangerous race conditions and atomicity violations; the
whole VM could be rendered unusable. The recovery could also simply fail if
the storage protects against multiple mounts, and thus defeat the purpose
of HA.

How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example fence devices which
cut off the power from the node or disable its communication completely.

Those are often quite expensive and bring additional critical components into
a system, because if they fail you cannot recover any service.

We thus wanted to integrate a simpler method into the HA Manager first,
namely self fencing with watchdogs.

Watchdogs have been widely used in critical and dependable systems since the
beginning of microcontrollers. They are often simple, independent integrated
circuits which programs can use to watch them. After opening, the program
needs to report in periodically. If, for whatever reason, it becomes unable
to do so, the watchdog triggers a reset of the whole server.

Server motherboards often already include such hardware watchdogs; these need
to be configured. If no watchdog is available or configured, we fall back to
the Linux kernel `softdog`. While still reliable, it is not independent of
the server's hardware and thus has a lower reliability than a hardware
watchdog.

Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, all watchdog modules are blocked for security reasons, as they
are like a loaded gun if not correctly initialized.
If you have a hardware watchdog available, remove its kernel module from the
blacklist, load it with insmod, and restart the `watchdog-mux` service or
reboot the node.

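A rough sketch of those steps for an Intel TCO watchdog; the module name
(`iTCO_wdt`) and the blacklist file path are examples only and depend on
your hardware and system (we use `modprobe` here instead of a full-path
`insmod` call):
[source,bash]
 # drop the example module from the (example) blacklist file
 sed -i '/iTCO_wdt/d' /etc/modprobe.d/blacklist.conf
 modprobe iTCO_wdt
 systemctl restart watchdog-mux
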
Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node failed and its fencing was successful, we start to recover
services to other available nodes and restart them there so that they can
provide service again.

The selection of the node on which the services get recovered is influenced
by the user's group settings, the currently active nodes, and their
respective active service count.
First we build a set out of the intersection between user selected nodes and
available nodes. Then the subset with the highest priority of those nodes
gets chosen as possible nodes for recovery. We select the node with the
currently lowest active service count as the new node for the service.
This minimizes the possibility of an overload, which otherwise could cause an
unresponsive node and, as a result, a chain reaction of node failures in the
cluster.

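The following toy script only illustrates that selection order; it is not the
actual `ha-manager` code, and the node names, priorities, and service counts
are made up:
[source,bash]
 #!/bin/bash
 # candidates: group members as name:priority:active_service_count
 candidates="node1:2:3 node2:2:1 node3:1:0"
 online="node1 node2"            # assume node3 is offline
 best=""; best_prio=-1; best_cnt=0
 for c in $candidates; do
   IFS=: read -r name prio cnt <<< "$c"
   # 1. intersection: keep only online group members
   grep -qw "$name" <<< "$online" || continue
   # 2./3. highest priority class wins; ties go to the lowest service count
   if (( prio > best_prio )) || { (( prio == best_prio )) && (( cnt < best_cnt )); }; then
     best=$name; best_prio=$prio; best_cnt=$cnt
   fi
 done
 echo "recover to: $best"        # -> node2: same priority as node1, fewer services
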
Groups
------

A group is a collection of cluster nodes which a service may be bound to.

Group Settings
~~~~~~~~~~~~~~

nodes::

List of group node members, where a priority can be given to each node.
A service bound to this group will run on the nodes with the highest priority
available. If more nodes are in the highest priority class, the services will
get distributed to those nodes if not already there. The priorities have a
relative meaning only.
 Example;;
 You want to run all services from a group on `node1` if possible. If this node
 is not available, you want them to run equally split on `node2` and `node3`, and
 if those fail it should use `node4`.
 To achieve this you could set the node list to:
[source,bash]
 ha-manager groupset mygroup -nodes "node1:2,node2:1,node3:1,node4"

restricted::

Resources bound to this group may only run on nodes defined by the
group. If no group node member is available, the resource will be
placed in the stopped state.
 Example;;
 Let's say a service uses resources only available on `node1` and `node2`,
 so we need to make sure that the HA manager does not use other nodes.
 We need to create a 'restricted' group with said nodes:
[source,bash]
 ha-manager groupset mygroup -nodes "node1,node2" -restricted

nofailback::

The resource won't automatically fail back when a more preferred node
(re)joins the cluster.
 Examples;;
 * You need to migrate a service to a node which currently doesn't have the
 highest priority in the group. To tell the HA manager not to move this
 service instantly back, set the 'nofailback' option and the service will
 stay on the current node.

 * A service was fenced and got recovered to another node. The admin
 repaired the node and brought it online again, but does not want the
 recovered services to move straight back to the repaired node, as he wants
 to first investigate the failure cause and check that it runs stably. He
 can use the 'nofailback' option to achieve this.

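A sketch of enabling it for a group, assuming the flag matches the option
name, like the 'restricted' flag above:
[source,bash]
 ha-manager groupset mygroup -nodes "node1:2,node2:1" -nofailback
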
Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to start on a
node once or more times. It can be used to configure how often a restart
should be triggered on the same node, and how often a service should be
relocated so that it gets a chance to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't
available on a quorate node anymore, e.g. because of network problems, but
still is on other nodes, the relocate policy allows the service to get
started nonetheless.

There are two service start recover policy settings which can be configured
specifically for each resource.

max_restart::

Maximum number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.

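For instance, a sketch of raising both limits for a single resource, assuming
they can be set through `ha-manager set` like other resource parameters:
[source,bash]
 ha-manager set vm:100 -max_restart 2 -max_relocate 2
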
Error Recovery
--------------

If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state, the service won't get touched
by the HA stack anymore. To recover from this state you should follow
these steps (see the command sketch below):

* bring the resource back into a safe and consistent state (e.g.,
killing its process)

* disable the HA resource to place it in a stopped state

* fix the error which led to these failures

* *after* you fixed all errors you may enable the service again

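A brief sketch of the disable/enable part of those steps (`vm:100` is an
example SID; the actual error has to be found and fixed in between):
[source,bash]
 ha-manager disable vm:100
 # ... investigate and fix the root cause here ...
 ha-manager enable vm:100
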
Service Operations
------------------

This is how the basic user-initiated service operations (via
`ha-manager`) work.

enable::

The service will be started by the LRM if not already running.

disable::

The service will be stopped by the LRM if running.

migrate/relocate::

The service will be relocated (live) to another node.

remove::

The service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

`start` and `stop` commands can be issued to the resource specific tools
(like `qm` or `pct`); they will forward the request to the
`ha-manager`, which then will execute the action and set the resulting
service state (enabled, disabled).

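For example, two of these operations issued on the command line (the SID and
node name are again placeholders):
[source,bash]
 ha-manager migrate vm:100 node2
 ha-manager remove vm:100
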
Service States
--------------

stopped::

Service is stopped (confirmed by LRM). If detected running, it will get
stopped again.

request_stop::

Service should be stopped. Waiting for confirmation from the LRM.

started::

Service is active, and the LRM should start it ASAP if not already running.
If the service fails and is detected to be not running, the LRM restarts it.

fence::

Wait for node fencing (service node is not inside the quorate cluster
partition).
As soon as the node gets fenced successfully, the service will be recovered
to another node, if possible.

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate service (live) to another node.

error::

Service disabled because of LRM errors. Needs manual intervention.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]