[[chapter-ha-manager]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
High Availability
=================
include::attributes.txt[]
endif::manvolnum[]


Our modern society depends heavily on information provided by
computers over the network. Mobile devices have amplified that dependency,
because people can access the network any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.

We can mathematically define the availability as the ratio of (A), the
total time a service is capable of being used during a given interval,
to (B), the length of the interval. It is normally expressed as a
percentage of uptime in a given year.
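
For example, a service with 99.9% availability may be unavailable for at most
(1 - 0.999) * 365 * 24 hours ≈ 8.76 hours per year - the second row of the
table below.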

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99 |3.65 days
|99.9 |8.76 hours
|99.99 |52.56 minutes
|99.999 |5.26 minutes
|99.9999 |31.5 seconds
|99.99999 |3.15 seconds
|===========================================================

There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a way
to detect errors and do failover. This is relatively easy if you just
want to serve read-only web pages. But in general this is complex, and
sometimes impossible, because you cannot modify the software
yourself. The following solutions work without modifying the
software:

* Use reliable ``server'' components

NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies on the main boards
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also make it easy to set up and use redundant storage and network
devices. So if one host fails, you can simply start those services on
another host within your cluster.

Even better, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to automatically
detect errors and trigger failover on its own.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. `ha-manager` then observes correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least doubles the
costs. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those additional
costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.

Requirements
------------

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* hardware watchdog - if not available we fall back to the
Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices


Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type-specific ID, e.g.: `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.
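
As a sketch, the following commands would put a virtual machine and a
container under HA management (the IDs are placeholders):

[source,bash]
----
ha-manager add vm:100   # resource type vm (virtual machine), ID 100
ha-manager add ct:101   # resource type ct (container), ID 101
----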

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, an HA enabled resource should not depend on other resources.


How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes how the CRM and the LRM work together.

To provide High Availability, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM) controls the services running on
the local node.
It reads the requested states for its services from the current manager
status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM) controls the cluster-wide
actions of the services, processes the LRM results, and includes the state
machine which controls the state of each service.
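
You can verify that both daemons are running on a node by querying their
systemd units (unit names as above):
[source,bash]
systemctl status pve-ha-lrm pve-ha-crm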

.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active exactly once and working. As
an LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This lets us then recover any failed
HA services securely, without any interference from the now unknown failed node.
This all gets supervised by the CRM, which currently holds the manager master
lock.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as the idle state if no
service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock; this means a failure happened and quorum was lost.

Once the LRM is in the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result is saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion on slower networks and/or with
big (memory-wise) services. Ensure that no congestion happens even in the
worst case, and lower the `max_worker` value if needed. On the contrary, if
you have a particularly powerful, high-end setup you may also want to increase it.
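
For instance, to allow up to 8 parallel workers, the key could be set in the
datacenter configuration file (a sketch; `max_worker` is the key name used
above, and 8 is an arbitrary example value):

----
# /etc/pve/datacenter.cfg
max_worker: 8
----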

Each command requested by the CRM is uniquely identifiable by a UID. When
a worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine act on it, based on the command's output.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The `stop` and the `error` commands are the exceptions;
those two do not depend on the result produced and are executed
always in the case of the stopped state and once in the case of
the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
happened in the cluster, and why. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is and
the same command for `pve-ha-crm` on the node which is the current master.
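
For example, to follow both logs live (run each command on the appropriate
node):

[source,bash]
----
journalctl -f -u pve-ha-lrm   # on the node(s) where the service runs
journalctl -f -u pve-ha-crm   # on the current CRM master node
----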

Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as the idle state if no
service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock; this means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and to always try to enforce the wanted state, e.g.: an
enabled service will be started if it is not running; if it crashes, it will
be started again. Thus it dictates to the LRM the actions it needs to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out; this happens after 60 seconds.

Configuration
-------------

The HA stack is well integrated into the Proxmox VE API2. So, for
example, HA can be configured via `ha-manager` or the PVE web
interface, both of which provide an easy-to-use tool.

The resource configuration file is located at
`/etc/pve/ha/resources.cfg` and the group configuration file at
`/etc/pve/ha/groups.cfg`. Use the provided tools to make changes;
there shouldn't be any need to edit them manually.
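
For illustration only (the exact syntax may vary between versions, so prefer
the tools over manual edits), a `resources.cfg` with one managed VM could look
roughly like this:

----
vm: 100
	state enabled
	group mygroup
----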

Node Power Status
-----------------

If a node needs maintenance, you should first migrate and/or relocate all
services which need to keep running to another node.
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop the LRM while it still has active services.
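
A sketch of this procedure, assuming `vm:100` must keep running and `node2` is
the migration target (both names are hypothetical):

[source,bash]
----
# move the service away first
ha-manager migrate vm:100 node2
# then stop the HA daemons on this node
systemctl stop pve-ha-lrm pve-ha-crm
----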

Package Updates
---------------

When updating ha-manager you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot be totally ruled out.
Upgrading one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all at once could leave you with a broken cluster state and is
generally not good practice.

Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from being touched by the cluster during the short time the LRM is restarting.
After that the LRM may safely close the watchdog during a restart.
Such a restart happens during an update and, as already stated, an active master
CRM is needed to acknowledge the requests from the LRM. If this is not the case,
the update process can take too long which, in the worst case, may result in
a watchdog reset.


Fencing
-------

What is Fencing
~~~~~~~~~~~~~~~

Fencing ensures that, on a node failure, the failed node is rendered
unable to do any damage and that no resource runs twice when it gets recovered
from the failed node. This is a really important task and one of the base
principles to make a system Highly Available.

If a node were not fenced, it would be in an unknown state where it may
still have access to shared resources; this is really dangerous!
Imagine that every network but the storage one broke. Now, while not
reachable from the public network, the VM still runs and writes to the shared
storage. If we then simply started up this VM on another node without fencing
the failed one, we would get dangerous race conditions and atomicity violations,
and the whole VM could be rendered unusable. The recovery could also simply fail
if the storage protects against multiple mounts, and thus defeat the purpose of HA.

How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example fence devices which
cut off the power from the node or disable their communication completely.

Those are often quite expensive and bring additional critical components into
a system, because if they fail you cannot recover any service.

We thus wanted to integrate a simpler method into the HA Manager first, namely
self fencing with watchdogs.

Watchdogs have been widely used in critical and dependable systems since the
beginning of microcontrollers. They are often independent, simple
integrated circuits which programs can use to watch them. After opening the
watchdog, a program needs to report to it periodically. If, for whatever reason,
it becomes unable to do so, the watchdog triggers a reset of the whole server.

Server motherboards often already include such hardware watchdogs, but these
need to be configured. If no watchdog is available or configured, we fall back
to the Linux kernel `softdog`; while still reliable, it is not independent of
the server's hardware and thus has a lower reliability than a hardware watchdog.

Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default all watchdog modules are blocked for security reasons, as they are
like a loaded gun if not correctly initialized.
If you have a hardware watchdog available, remove its kernel module from the
blacklist, load it with `insmod`, and restart the `watchdog-mux` service or
reboot the node.
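
A sketch of these steps, assuming the common Intel TCO watchdog module
`iTCO_wdt` (your hardware may need a different module, and the location of the
blacklist entry depends on your setup):

[source,bash]
----
# 1. remove or comment out the "blacklist iTCO_wdt" line in the
#    corresponding file under /etc/modprobe.d/
# 2. load the module (modprobe resolves the module path; plain insmod
#    with the full .ko path works as well)
modprobe iTCO_wdt
# 3. restart the watchdog multiplexer
systemctl restart watchdog-mux
----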

Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node has failed and its fencing was successful, we start to recover
services to other available nodes and restart them there so that they can
provide service again.

The selection of the node on which the services get recovered is influenced
by the user's group settings, the currently active nodes, and their respective
active service counts.
First we build a set out of the intersection between the user-selected nodes
and the available nodes. Then the subset of those nodes with the highest
priority gets chosen as the possible nodes for recovery. From these, we select
the node with the currently lowest active service count as the new node for
the service.
This minimizes the possibility of an overload, which otherwise could cause an
unresponsive node and, as a result, a chain reaction of node failures in the
cluster.

Groups
------

A group is a collection of cluster nodes which a service may be bound to.

Group Settings
~~~~~~~~~~~~~~

nodes::

List of group node members, where a priority can be given to each node.
A service bound to this group will run on the available nodes with the
highest priority. If more nodes are in the highest priority class, the
services will get distributed to those nodes if not already there. The
priorities have a relative meaning only.
Example;;
You want to run all services from a group on `node1` if possible. If this node
is not available, you want them to run equally split on `node2` and `node3`,
and if those fail, it should use `node4`.
To achieve this you could set the node list to:
[source,bash]
ha-manager groupset mygroup -nodes "node1:2,node2:1,node3:1,node4"

restricted::

Resources bound to this group may only run on nodes defined by the
group. If no group node member is available, the resource will be
placed in the stopped state.
Example;;
Let's say a service uses resources only available on `node1` and `node2`,
so we need to make sure that the HA manager does not use other nodes.
We need to create a 'restricted' group with said nodes:
[source,bash]
ha-manager groupset mygroup -nodes "node1,node2" -restricted

nofailback::

The resource won't automatically fail back when a more preferred node
(re)joins the cluster.
Examples;;
* You need to migrate a service to a node which currently does not have the
highest priority in the group. To tell the HA manager not to move this service
back instantly, set the 'nofailback' option and the service will stay on
the current node.

* A service was fenced and got recovered to another node. The admin
repaired the node and brought it back online, but does not want the
recovered services to move straight back to the repaired node, as they want to
first investigate the failure cause and check that it runs stably. They can
use the 'nofailback' option to achieve this; see the sketch after this list.
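
For illustration, the hypothetical group from the examples above could be
extended with the 'nofailback' option like this:
[source,bash]
ha-manager groupset mygroup -nodes "node1:2,node2" -nofailback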


Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to start on a
node once or more times. It can be used to configure how often a restart
should be triggered on the same node and how often a service should be
relocated, so that it gets a chance to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't available
on a quorate node anymore, e.g. due to network problems, but is still available
on other nodes, the relocate policy allows the service to get started nonetheless.

There are two service start recovery policy settings which can be configured
specifically for each resource.

max_restart::

Maximum number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.
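
Both settings can be adjusted per resource, for example (a sketch with a
hypothetical SID and arbitrary values):
[source,bash]
ha-manager set vm:100 -max_restart 2 -max_relocate 2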

Error Recovery
--------------

If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state the service won't get touched
by the HA stack anymore. To recover from this state you should follow
these steps:

* bring the resource back into a safe and consistent state (e.g.,
killing its process)

* disable the HA resource to place it in a stopped state

* fix the error which led to these failures

* *after* you fixed all errors you may enable the service again (see the
sketch after this list)
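
A minimal command line sketch of the disable and enable steps, assuming
`vm:100` is the affected service (hypothetical SID):

[source,bash]
----
# place the service in the stopped state
ha-manager disable vm:100
# ... fix the underlying problem ...
# then re-enable the service
ha-manager enable vm:100
----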


Service Operations
------------------

This is how the basic user-initiated service operations (via
`ha-manager`) work.

enable::

The service will be started by the LRM if not already running.

disable::

The service will be stopped by the LRM if running.

migrate/relocate::

The service will be relocated (live) to another node.

remove::

The service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

`start` and `stop` commands can be issued to the resource specific tools
(like `qm` or `pct`); they will forward the request to
`ha-manager`, which will then execute the action and set the resulting
service state (enabled, disabled).
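
For example, stopping an HA managed virtual machine with its regular tool
(hypothetical VMID) gets forwarded to the HA stack:
[source,bash]
qm stop 100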


Service States
--------------

stopped::

Service is stopped (confirmed by the LRM). If it is detected as running, it
will get stopped again.

request_stop::

Service should be stopped. Waiting for confirmation from the LRM.

started::

Service is active, and the LRM should start it ASAP if not already running.
If the service fails and is detected as not running, the LRM restarts it.

fence::

Wait for node fencing (the service's node is not inside the quorate cluster
partition).
As soon as the node gets fenced successfully, the service will be recovered to
another node, if possible.

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate the service (live) to another node.

error::

Service is disabled because of LRM errors. Needs manual intervention.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]
556 |