[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
include::attributes.txt[]
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
High Availability
=================
include::attributes.txt[]
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices amplified that dependency,
because people can access the network any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.

We can mathematically define the availability as the ratio of (A) the
total time a service is capable of being used during a given interval
to (B) the length of the interval. It is normally expressed as a
percentage of uptime in a given year.

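For example, an availability of 99.99% leaves (1 - 0.9999) * 365 * 24 * 60
= 52.56 minutes of downtime per year, as the following table shows.
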
.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99 |3.65 days
|99.9 |8.76 hours
|99.99 |52.56 minutes
|99.999 |5.26 minutes
|99.9999 |31.5 seconds
|99.99999 |3.15 seconds
|===========================================================

There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a
way to detect errors and do failover. This is relatively easy if you
just want to serve read-only web pages. But in general this is
complex, and sometimes impossible because you cannot modify the
software yourself. The following solutions work without modifying the
software:

* Use reliable ``server'' components

NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies on the main boards
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also make it easy to set up and use redundant storage and network
devices. So if one host fails, you can simply start those services on
another host within your cluster.

Even better, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to detect errors
automatically and handle the failover.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. `ha-manager` then observes correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least doubles the
costs. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those
additional costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.

Requirements
------------

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* hardware watchdog - if not available we fall back to the
  Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices


[[ha_manager_resources]]
Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type-specific ID, e.g.: `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.
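
For example, such a resource could be put under HA management from the
command line like this (a sketch, assuming a VM with the ID 100 already
exists):

[source,bash]
----
# add VM 100 to the HA managed resource list
ha-manager add vm:100
# verify the current resource states
ha-manager status
----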

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as it was done with `rgmanager`. In
general, an HA enabled resource should not depend on other resources.


How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes how the CRM and the LRM work together.

To provide High Availability, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM) controls the services running on
the local node.
It reads the requested states for its services from the current manager
status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM) controls the cluster-wide
actions of the services, processes the LRM results and includes the state
machine which controls the state of each service.

.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active exactly once and working.
As an LRM only executes actions when it holds its lock, we can mark a failed
node as fenced if we can acquire its lock. This lets us then recover any failed
HA services securely, without any interference from the now unknown failed node.
This all gets supervised by the CRM, which currently holds the manager master
lock.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster-wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as the idle state
if no service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock; this means a failure happened and quorum was lost.

After the LRM gets into the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory-wise) services. Ensure that no congestion happens even in the
worst case, and lower the `max_worker` value if needed. On the contrary,
if you have a particularly powerful, high-end setup you may also want to
increase it.
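
For example, to limit the LRM to two concurrent workers, you could add a
line like the following to `/etc/pve/datacenter.cfg` (a sketch only; the
value is hypothetical and the key is assumed to use the usual `key: value`
syntax of that file):

[source,bash]
----
# /etc/pve/datacenter.cfg (excerpt)
max_worker: 2
----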

Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine act on the command's output.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID; the LRM
then executes this action *one time* and writes back the result, which is
also identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions are the `stop` and the `error` commands;
those two do not depend on the result produced and are executed
always in the case of the stopped state, and once in the case of
the error state.

.Read the Logs
[NOTE]
The HA stack logs every action it makes. This helps to understand what
happens in the cluster, and also why. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is and
the same command for `pve-ha-crm` on the node which is the current master.
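
For example, to follow both logs live (`journalctl -f` keeps printing new
entries; the CRM log is only of interest on the current master node):

[source,bash]
----
journalctl -u pve-ha-lrm -f   # on the node running the service
journalctl -u pve-ha-crm -f   # on the current master node
----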

Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as the idle state
if no service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock; this means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and to try to always enforce the wanted state, e.g.: an
enabled service will be started if it is not running; if it crashes, it will
be started again. Thus, it dictates to the LRM the actions it needs to
execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out, which happens after 60 seconds.

Configuration
-------------

The HA stack is well integrated into the Proxmox VE API2. So, for
example, HA can be configured via `ha-manager` or the PVE web
interface, which both provide an easy-to-use tool.

The resource configuration file can be found at
`/etc/pve/ha/resources.cfg`, and the group configuration file at
`/etc/pve/ha/groups.cfg`. Use the provided tools to make changes;
there shouldn't be any need to edit them manually.
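
As an illustration, an entry in `resources.cfg` might look roughly like
this (a sketch only; the service `vm:100` and the group name are
hypothetical):

[source,bash]
----
# /etc/pve/ha/resources.cfg (sketch)
vm: 100
        group mygroup
----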

Node Power Status
-----------------

If a node needs maintenance, you should first migrate and/or relocate
all services which need to keep running to another node.
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop the LRM while it still has active services.
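
For example, once all HA services have been moved away from the node, the
daemons could be stopped like this (a sketch; only do this when no HA
service is active on the node anymore):

[source,bash]
----
systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm
----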

Package Updates
---------------

When updating the ha-manager, you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Upgrading one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all at once could leave you with a broken cluster state and is
generally not good practice.

Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from being touched by the cluster during the short time the LRM is
restarting. After that, the LRM may safely close the watchdog during a
restart. Such a restart happens during an update and, as already stated, an
active master CRM is needed to acknowledge the requests from the LRM. If
this is not the case, the update process can take too long which, in the
worst case, may result in a watchdog reset.


[[ha_manager_fencing]]
Fencing
-------

What is Fencing
~~~~~~~~~~~~~~~

Fencing ensures that, on a node failure, the failed node is rendered
unable to do any damage and that no resource runs twice when it gets
recovered from the failed node. This is a really important task, and one
of the base principles to make a system Highly Available.

If a node did not get fenced, it would be in an unknown state where it may
still have access to shared resources; this is really dangerous!
Imagine that every network but the storage one broke. Now, while not
reachable from the public network, the VM still runs and writes to the shared
storage. If we would not fence the node and just start up this VM on another
node, we would get dangerous race conditions and atomicity violations; the
whole VM could be rendered unusable. The recovery could also simply fail if
the storage protects against multiple mounts, and thus defeat the purpose
of HA.

How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example fence devices which
cut off the power from the node or disable its communication completely.

Those are often quite expensive and bring additional critical components into
a system, because if they fail you cannot recover any service.

We thus wanted to integrate a simpler method into the HA Manager first,
namely self fencing with watchdogs.

Watchdogs have been widely used in critical and dependable systems since the
beginning of microcontrollers. They are often simple, independent integrated
circuits which programs can use to watch them. After opening, the program
needs to report in periodically. If, for whatever reason, it becomes unable
to do so, the watchdog triggers a reset of the whole server.

Server motherboards often already include such hardware watchdogs, but these
need to be configured. If no watchdog is available or configured, we fall
back to the Linux kernel softdog. While still reliable, it is not independent
of the server's hardware and thus has a lower reliability than a hardware
watchdog.

Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, all watchdog modules are blocked for security reasons, as they
are like a loaded gun if not correctly initialized.
If you have a hardware watchdog available, remove its kernel module from the
blacklist, load it with `insmod`, and restart the `watchdog-mux` service or
reboot the node.
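
A minimal sketch of the last two steps; the module name `iTCO_wdt` is just
an example and depends on your hardware (here loaded via `modprobe` instead
of a manual `insmod`):

[source,bash]
----
# load the hardware watchdog module (hardware specific)
modprobe iTCO_wdt
# restart the multiplexer so it picks up the new watchdog device
systemctl restart watchdog-mux
----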

Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node failed and its fencing was successful, we start to recover
services to other available nodes and restart them there, so that they can
provide service again.

The selection of the node on which the services get recovered is influenced
by the user's group settings, the list of currently active nodes, and their
respective active service count.
First, we build a set out of the intersection between user-selected nodes and
available nodes. Then the subset of those nodes with the highest priority
gets chosen as possible nodes for recovery. We select the node with the
currently lowest active service count as the new node for the service.
This minimizes the possibility of an overload, which otherwise could cause an
unresponsive node and, as a result, a chain reaction of node failures in the
cluster.
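
The following is only a toy illustration of that last tie-breaking step, not
the actual implementation: given hypothetical candidate nodes in the format
`name:priority:active_services`, it picks the highest-priority node with the
fewest active services.

[source,bash]
----
nodes="node1:2:3 node2:1:1 node3:2:1"
echo $nodes | tr ' ' '\n' \
  | sort -t: -k2,2nr -k3,3n \
  | head -n1 | cut -d: -f1    # prints: node3
----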

[[ha_manager_groups]]
Groups
------

A group is a collection of cluster nodes which a service may be bound to.

Group Settings
~~~~~~~~~~~~~~

nodes::

List of group node members where a priority can be given to each node.
A service bound to this group will run on the nodes with the highest priority
available. If more nodes are in the highest priority class, the services will
get distributed to those nodes if not already there. The priorities have a
relative meaning only.
 Example;;
 You want to run all services from a group on `node1` if possible. If this node
 is not available, you want them to run equally split on `node2` and `node3`, and
 if those fail, it should use `node4`.
 To achieve this you could set the node list to:
[source,bash]
 ha-manager groupset mygroup -nodes "node1:2,node2:1,node3:1,node4"

restricted::

Resources bound to this group may only run on nodes defined by the
group. If no group node member is available, the resource will be
placed in the stopped state.
 Example;;
 Let's say a service uses resources only available on `node1` and `node2`,
 so we need to make sure that the HA manager does not use other nodes.
 We need to create a 'restricted' group with said nodes:
[source,bash]
 ha-manager groupset mygroup -nodes "node1,node2" -restricted

nofailback::

The resource won't automatically fail back when a more preferred node
(re)joins the cluster.
 Examples;;
 * You need to migrate a service to a node which currently doesn't have the
 highest priority in the group. To tell the HA manager not to move this
 service back instantly, set the 'nofailback' option and the service will
 stay on the current node.

 * A service was fenced and got recovered to another node. The admin
 repaired the node and brought it online again, but does not want the
 recovered services to move straight back to the repaired node, as he first
 wants to investigate the failure cause and check whether it runs stably. He
 can use the 'nofailback' option to achieve this.
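
A sketch of enabling the option on an existing group, assuming it can be
toggled through `groupset` like the other group settings shown above:
[source,bash]
 ha-manager groupset mygroup -nofailback 1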


Start Failure Policy
--------------------

The start failure policy comes into effect if a service fails to start on a
node one or more times. It can be used to configure how often a restart
should be triggered on the same node and how often a service should be
relocated, so that it gets a chance to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't available
on a quorate node anymore, e.g. because of network problems, but still is on
other nodes, the relocate policy then allows the service to get started
nonetheless.

There are two service start recovery policy settings which can be configured
specifically for each resource.

max_restart::

Maximum number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of tries to relocate the service to a different node.
A relocate only happens after the `max_restart` value is exceeded on the
actual node. The default is set to one.

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.
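
A sketch of raising both limits for a single resource (assuming the service
`vm:100` is HA managed and that these options can be changed through
`ha-manager set`):
[source,bash]
 ha-manager set vm:100 -max_restart 2 -max_relocate 2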

Error Recovery
--------------

If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state, the service won't get touched
by the HA stack anymore. To recover from this state you should follow
these steps:

* bring the resource back into a safe and consistent state (e.g.,
killing its process)

* disable the HA resource to place it in a stopped state

* fix the error which led to these failures

* *after* you fixed all errors you may enable the service again (see the
sketch below)
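
A sketch of the disable/enable cycle, assuming the affected service is
`vm:100`:

[source,bash]
----
ha-manager disable vm:100
# ... fix the underlying problem ...
ha-manager enable vm:100
----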


Service Operations
------------------

This is how the basic user-initiated service operations (via
`ha-manager`) work.

enable::

The service will be started by the LRM if not already running.

disable::

The service will be stopped by the LRM if running.

migrate/relocate::

The service will be moved to another node: `migrate` keeps it running
(live migration), while `relocate` stops it and starts it again on the
target node.

remove::

The service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

`start` and `stop` commands can be issued to the resource-specific tools
(like `qm` or `pct`); they will forward the request to the
`ha-manager`, which will then execute the action and set the resulting
service state (enabled, disabled).

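For example, using hypothetical IDs:

[source,bash]
----
# live-migrate the HA service vm:100 to node2
ha-manager migrate vm:100 node2
# stop an HA managed container through its regular tool;
# the request gets forwarded to the HA manager
pct stop 101
----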

Service States
--------------

stopped::

Service is stopped (confirmed by the LRM); if detected running, it will get
stopped again.

request_stop::

Service should be stopped. Waiting for confirmation from the LRM.

started::

Service is active, and the LRM should start it ASAP if not already running.
If the service fails and is detected to be not running, the LRM restarts it.

fence::

Wait for node fencing (the service's node is not inside the quorate cluster
partition).
As soon as the node gets fenced successfully, the service will be recovered
to another node, if possible.

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate the service (live) to another node.

error::

Service disabled because of LRM errors. Needs manual intervention.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]