[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
High Availability
=================
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices amplified that dependency,
because people can access the network at any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.

We can mathematically define availability as the ratio of (A), the
total time a service is capable of being used during a given interval,
to (B), the length of the interval. It is normally expressed as a
percentage of uptime in a given year.
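
For example, at 99.99% availability, the expected downtime over one
year works out as:

----
downtime = (1 - availability) × interval
         = (1 - 0.9999) × 365 days × 24 h × 60 min
         ≈ 52.56 minutes per year
----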

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99             |3.65 days
|99.9           |8.76 hours
|99.99          |52.56 minutes
|99.999         |5.26 minutes
|99.9999        |31.5 seconds
|99.99999       |3.15 seconds
|===========================================================

There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a way
to detect errors and do failover. If you only want to serve read-only
web pages, then this is relatively simple. However, this is generally complex
and sometimes impossible, because you cannot modify the software yourself. The
following solutions work without modifying the software:

* Use reliable ``server'' components
+
NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies in your servers
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also support the setup and use of redundant storage and network
devices, so if one host fails, you can simply start those services on
another host within your cluster.

Better still, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to automatically
detect errors and do automatic failover.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. Then, `ha-manager` observes the correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least doubles the costs.
Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those additional
costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.


Requirements
------------

You must meet the following requirements before you start with HA:

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* use reliable ``server'' components

* hardware watchdog - if not available we fall back to the
Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices


[[ha_manager_resources]]
Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type-specific ID, for example `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, an HA-managed resource should not depend on other resources.


Management Tasks
----------------

This section provides a short overview of common management tasks. The
first step is to enable HA for a resource. This is done by adding the
resource to the HA resource configuration. You can do this using the
GUI, or simply use the command-line tool, for example:

----
# ha-manager add vm:100
----

The HA stack now tries to start the resources and keep them
running. Please note that you can configure the ``requested''
resource state. For example, you may want the HA stack to stop the
resource:

----
# ha-manager set vm:100 --state stopped
----

and start it again later:

----
# ha-manager set vm:100 --state started
----

You can also use the normal VM and container management commands. They
automatically forward the commands to the HA stack, so

----
# qm start 100
----

simply sets the requested state to `started`. The same applies to `qm
stop`, which sets the requested state to `stopped`.

NOTE: The HA stack works fully asynchronously and needs to communicate
with other cluster members. Therefore, it takes a few seconds until you see
the result of such actions.
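
If you script such actions, you can poll `ha-manager status` until the CRM
reports the new state. A minimal sketch, using the example resource from
above:

----
# wait until vm:100 is reported as started (polls every 2 seconds)
until ha-manager status | grep -q 'vm:100.*started'; do
    sleep 2
done
----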

To view the current HA resource configuration use:

----
# ha-manager config
vm:100
    state stopped
----

And you can view the actual HA manager and resource state with:

----
# ha-manager status
quorum OK
master node1 (active, Wed Nov 23 11:07:23 2016)
lrm elsa (active, Wed Nov 23 11:07:19 2016)
service vm:100 (node1, started)
----

You can also initiate resource migration to other nodes:

----
# ha-manager migrate vm:100 node2
----

This uses online migration and tries to keep the VM running. Online
migration needs to transfer all used memory over the network, so it is
sometimes faster to stop the VM, then restart it on the new node. This can be
done using the `relocate` command:

----
# ha-manager relocate vm:100 node2
----

Finally, you can remove the resource from the HA configuration using
the following command:

----
# ha-manager remove vm:100
----

NOTE: This does not start or stop the resource.

But all HA related tasks can be done in the GUI, so there is no need to
use the command line at all.


How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes all involved daemons and how they work
together. To provide HA, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM), which controls the services running on
the local node. It reads the requested states for its services from
the current manager status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM), which makes the cluster-wide
decisions. It sends commands to the LRM, processes the results,
and moves resources to other nodes if something fails. The CRM also
handles node fencing.
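
Both daemons run as ordinary systemd services on every node, so a quick way
to check their health is:

----
# systemctl status pve-ha-lrm pve-ha-crm
----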


.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active once and working. As an
LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This then lets us recover any failed
HA services securely, without any interference from the now unknown failed
node. This all gets supervised by the CRM, which currently holds the manager
master lock.


[[ha_manager_service_states]]
Service States
~~~~~~~~~~~~~~

The CRM uses a service state enumeration to record the current service
state. This state is displayed on the GUI and can be queried using
the `ha-manager` command-line tool:

----
# ha-manager status
quorum OK
master elsa (active, Mon Nov 21 07:23:29 2016)
lrm elsa (active, Mon Nov 21 07:23:22 2016)
service ct:100 (elsa, stopped)
service ct:102 (elsa, started)
service vm:501 (elsa, started)
----

Here is the list of possible states:

stopped::

Service is stopped (confirmed by the LRM). If the LRM detects that a stopped
service is still running, it will stop it again.

request_stop::

Service should be stopped. The CRM waits for confirmation from the
LRM.

stopping::

Pending stop request, which the CRM has not seen yet.

started::

Service is active, and the LRM should start it immediately if it is not
already running. If the service fails and is detected as not running,
the LRM restarts it
(see xref:ha_manager_start_failure_policy[Start Failure Policy]).

starting::

Pending start request. The CRM has not yet received confirmation from the
LRM that the service is running.

fence::

Wait for node fencing, as the service node is not inside the quorate cluster
partition (see xref:ha_manager_fencing[Fencing]).
As soon as the node gets fenced successfully, the service will be placed into
the recovery state.

recovery::

Wait for recovery of the service. The HA manager tries to find a new node
where the service can run. This search depends not only on the list of online
and quorate nodes, but also on whether the service is a group member and how
such a group is limited.
As soon as a new available node is found, the service will be moved there and
initially placed into the stopped state. If it is configured to run, the new
node will start it.

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon
(see xref:ha_manager_package_updates[Package Updates]).

ignored::

Act as if the service were not managed by HA at all.
Useful when full control over the service is desired temporarily, without
removing it from the HA configuration.

migrate::

Migrate the service (live) to another node.

error::

Service is disabled because of LRM errors. Needs manual intervention
(see xref:ha_manager_error_recovery[Error Recovery]).

queued::

Service is newly added, and the CRM has not seen it so far.

disabled::

Service is stopped and marked as `disabled`.


[[ha_manager_lrm]]
Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster-wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as the idle state
if no service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock; this means a failure happened and quorum was lost.

After the LRM gets into the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started. These workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may occur at the same
time, which can lead to network congestion with slower networks and/or
large (memory-wise) services. Also, ensure that in the worst case, congestion
is at a minimum, even if this means lowering the `max_worker` value.
Conversely, if you have a particularly powerful, high-end setup you may also
want to increase it.
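
A minimal sketch of such an adjustment in `/etc/pve/datacenter.cfg`, assuming
the `max_worker` key named above (verify the exact key name against the
`datacenter.cfg` documentation of your version):

----
max_worker: 8
----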

Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine act on it, according to the command's output.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions to this behaviour are the `stop` and `error` commands;
these two do not depend on the result produced, and are executed
always in the case of the stopped state, and once in the case of
the error state.

.Read the Logs
[NOTE]
The HA stack logs every action it takes. This helps to understand what
happens in the cluster, and also why. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for `pve-ha-crm` on the node which is the current master.
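
For example, to look at the last hour of HA activity, run the following on
the node that runs the service, and the `pve-ha-crm` variant on the current
master node:

----
# journalctl -u pve-ha-lrm --since "-1h"
# journalctl -u pve-ha-crm --since "-1h"
----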


[[ha_manager_crm]]
Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as the idle state
if no service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock; this means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available, and to try to always enforce the requested state. For example, a
service with the requested state 'started' will be started if it's not
already running. If it crashes, it will be automatically started again.
Thus, the CRM dictates the actions the LRM needs to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out (this happens after 60 seconds).


HA Simulator
------------

[thumbnail="screenshot/gui-ha-manager-status.png"]

By using the HA simulator you can test and learn all functionalities of the
{pve} HA solution.

By default, the simulator allows you to watch and test the behaviour of a
real-world 3-node cluster with 6 VMs. You can also add or remove additional
VMs or containers.

You do not have to set up or configure a real cluster; the HA simulator runs
out of the box.

Install it with apt:

----
apt install pve-ha-simulator
----

You can even install the package on any Debian-based system without any
other {pve} packages. For that you will need to download the package and
copy it to the system you want to run it on for installation. When you install
the package with apt from the local file system, it will also resolve the
required dependencies for you.


To start the simulator on a remote machine you need X11 redirection to
your current system.

If you are on a Linux machine you can use:

----
ssh root@<IPofPVE> -Y
----

On Windows it works with https://mobaxterm.mobatek.net/[MobaXterm].

After connecting to an existing {pve} node with the simulator installed, or
after installing it on your local Debian-based system manually, you can try
it out as follows.

First you need to create a working directory, where the simulator saves its
current state and writes its default config:

----
mkdir working
----

Then, simply pass the created directory as a parameter to 'pve-ha-simulator':

----
pve-ha-simulator working/
----

You can then start, stop, and migrate the simulated HA services, or even
check out what happens on a node failure.

Configuration
-------------

The HA stack is well integrated into the {pve} API. So, for example,
HA can be configured via the `ha-manager` command-line interface, or
the {pve} web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.

All HA configuration files are within `/etc/pve/ha/`, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.


[[ha_manager_resource_config]]
Resources
~~~~~~~~~

[thumbnail="screenshot/gui-ha-manager-status.png"]


The resource configuration file `/etc/pve/ha/resources.cfg` stores
the list of resources managed by `ha-manager`. A resource configuration
inside that list looks like this:

----
<type>: <name>
    <property> <value>
    ...
----

It starts with a resource type, followed by a resource-specific name,
separated by a colon. Together this forms the HA resource ID, which is
used by all `ha-manager` commands to uniquely identify a resource
(example: `vm:100` or `ct:101`). The next lines contain additional
properties:

include::ha-resources-opts.adoc[]

Here is a real-world example with one VM and one container. As you see,
the syntax of those files is really simple, so it is even possible to
read or edit those files using your favorite editor:

.Configuration Example (`/etc/pve/ha/resources.cfg`)
----
vm: 501
    state started
    max_relocate 2

ct: 102
    # Note: use default settings for everything
----

[thumbnail="screenshot/gui-ha-manager-add-resource.png"]

The above config was generated using the `ha-manager` command-line tool:

----
# ha-manager add vm:501 --state started --max_relocate 2
# ha-manager add ct:102
----


[[ha_manager_groups]]
Groups
~~~~~~

[thumbnail="screenshot/gui-ha-manager-groups-view.png"]

The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:

----
group: <group>
    nodes <node_list>
    <property> <value>
    ...
----

include::ha-groups-opts.adoc[]

[thumbnail="screenshot/gui-ha-manager-add-group.png"]

A common requirement is that a resource should run on a specific
node. Usually the resource is able to run on other nodes, so you can define
an unrestricted group with a single member:

----
# ha-manager groupadd prefer_node1 --nodes node1
----

For bigger clusters, it makes sense to define a more detailed failover
behavior. For example, you may want to run a set of services on
`node1` if possible. If `node1` is not available, you want to run them
equally split on `node2` and `node3`. If those nodes also fail, the
services should run on `node4`. To achieve this you could set the node
list to:

----
# ha-manager groupadd mygroup1 -nodes "node1:2,node2:1,node3:1,node4"
----

Another use case is if a resource uses other resources only available
on specific nodes, let's say `node1` and `node2`. We need to make sure
that the HA manager does not use other nodes, so we need to create a
restricted group with said nodes:

----
# ha-manager groupadd mygroup2 -nodes "node1,node2" -restricted
----

The above commands created the following group configuration file:

.Configuration Example (`/etc/pve/ha/groups.cfg`)
----
group: prefer_node1
    nodes node1

group: mygroup1
    nodes node2:1,node4,node1:2,node3:1

group: mygroup2
    nodes node2,node1
    restricted 1
----


The `nofailback` option is mostly useful to avoid unwanted resource
movements during administration tasks. For example, if you need to
migrate a service to a node which doesn't have the highest priority in the
group, you need to tell the HA manager not to instantly move this service
back by setting the `nofailback` option.

Another scenario is when a service was fenced and recovered to
another node. The admin repairs the fenced node and brings it
back online to investigate the cause of the failure and to check that it runs
stably again. Setting the `nofailback` flag prevents the recovered services
from moving straight back to the fenced node.
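
A sketch of such a group, combining a node preference with `nofailback`
(`mygroup3` is a hypothetical group name):

----
# ha-manager groupadd mygroup3 -nodes "node1:2,node2:1" -nofailback
----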


[[ha_manager_fencing]]
Fencing
-------

On node failures, fencing ensures that the erroneous node is
guaranteed to be offline. This is required to make sure that no
resource runs twice when it gets recovered on another node. This is a
really important task, because without this, it would not be possible to
recover a resource on another node.

If a node did not get fenced, it would be in an unknown state, where
it may still have access to shared resources. This is really
dangerous! Imagine that every network but the storage one broke. Now,
while not reachable from the public network, the VM still runs and
writes to the shared storage.

If we then simply start up this VM on another node, we would get a
dangerous race condition, because we write from both nodes. Such
conditions can destroy all VM data, and the whole VM could be rendered
unusable. The recovery could also fail if the storage protects against
multiple mounts.


How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example, fence
devices which cut off the power from the node, or disable their
communication completely. Those are often quite expensive, and bring
additional critical components into a system, because if they fail, you
cannot recover any service.

We thus wanted to integrate a simpler fencing method, which does not
require additional external hardware. This can be done using
watchdog timers.

.Possible Fencing Methods
- external power switches
- isolate nodes by disabling complete network traffic on the switch
- self fencing using watchdog timers

Watchdog timers have been widely used in critical and dependable systems
since the beginning of microcontrollers. They are often simple, independent
integrated circuits which are used to detect and recover from computer
malfunctions.

During normal operation, `ha-manager` regularly resets the watchdog
timer to prevent it from elapsing. If, due to a hardware fault or
program error, the computer fails to reset the watchdog, the timer
will elapse and trigger a reset of the whole server (reboot).

Recent server motherboards often include such hardware watchdogs, but
these need to be configured. If no watchdog is available or
configured, we fall back to the Linux kernel 'softdog'. While still
reliable, it is not independent of the server's hardware, and thus has
a lower reliability than a hardware watchdog.


Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, all hardware watchdog modules are blocked for security
reasons. They are like a loaded gun if not correctly initialized. To
enable a hardware watchdog, you need to specify the module to load in
'/etc/default/pve-ha-manager', for example:

----
# select watchdog module (default is softdog)
WATCHDOG_MODULE=iTCO_wdt
----

This configuration is read by the 'watchdog-mux' service, which loads
the specified module at startup.
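
After the next boot, you can verify that the intended module was actually
loaded, for example with the `iTCO_wdt` module from above:

----
# lsmod | grep -i wdt
----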


Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node failed and its fencing was successful, the CRM tries to
move services from the failed node to nodes which are still online.

The selection of nodes on which those services get recovered is
influenced by the resource `group` settings, the list of currently active
nodes, and their respective active service counts.

The CRM first builds a set out of the intersection between user-selected
nodes (from the `group` setting) and available nodes. It then chooses the
subset of nodes with the highest priority, and finally selects the node
with the lowest active service count. This minimizes the possibility
of an overloaded node.

CAUTION: On node failure, the CRM distributes services to the
remaining nodes. This increases the service count on those nodes, and
can lead to high load, especially on small clusters. Please design
your cluster so that it can handle such worst case scenarios.


[[ha_manager_start_failure_policy]]
Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to start on a
node one or more times. It can be used to configure how often a restart
should be triggered on the same node, and how often a service should be
relocated, so that it can be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't available
on a quorate node anymore, for instance due to network problems, but is still
available on other nodes, the relocate policy allows the service to start
nonetheless.

There are two service start recovery policy settings which can be configured
specifically for each resource.

max_restart::

Maximum number of attempts to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of attempts to relocate the service to a different node.
A relocation only happens after the `max_restart` value is exceeded on the
actual node. The default is set to one.

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-started without fixing the error, only the restart policy gets
repeated.
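
Both settings are ordinary resource properties, so they can be adjusted with
the usual `ha-manager set` command, for example for the `vm:100` resource
used throughout this chapter:

----
# ha-manager set vm:100 --max_restart 2 --max_relocate 2
----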


[[ha_manager_error_recovery]]
Error Recovery
--------------

If, after all attempts, the service state could not be recovered, it gets
placed in an error state. In this state, the service won't get touched
by the HA stack anymore. The only way out is disabling a service:

----
# ha-manager set vm:100 --state disabled
----

This can also be done in the web interface.

To recover from the error state you should do the following:

* bring the resource back into a safe and consistent state (e.g.,
kill its process if the service could not be stopped)

* disable the resource to remove the error flag

* fix the error which led to these failures

* *after* you fixed all errors you may request that the service starts again,
for example as shown below
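
Putting this together, a typical recovery sequence for the example resource
`vm:100` is to first clear the error flag and then request a fresh start:

----
# ha-manager set vm:100 --state disabled
# ha-manager set vm:100 --state started
----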


[[ha_manager_package_updates]]
Package Updates
---------------

When updating the ha-manager, you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Updating one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all at once could result in a broken cluster and is generally not
good practice.

Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from getting touched by the cluster during the short time the LRM is
restarting. After that, the LRM may safely close the watchdog during a
restart. Such a restart happens normally during a package update and, as
already stated, an active master CRM is needed to acknowledge the requests
from the LRM. If this is not the case, the update process can take too long
which, in the worst case, may result in a reset triggered by the watchdog.
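
In practice, this means updating and verifying each node in turn before
moving on to the next one, for example:

----
# apt update && apt full-upgrade
# systemctl status pve-ha-lrm pve-ha-crm
# ha-manager status
----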


[[ha_manager_node_maintenance]]
Node Maintenance
----------------

Sometimes it is necessary to perform maintenance on a node, such as replacing
hardware or simply installing a new kernel image. This also applies while the
HA stack is in use.

The HA stack can support you mainly in two types of maintenance:

* for general shutdowns or reboots, the behavior can be configured, see
xref:ha_manager_shutdown_policy[Shutdown Policy].
* for maintenance that does not require a shutdown or reboot, or that should
not be switched off automatically after only one reboot, you can enable the
manual maintenance mode.


Maintenance Mode
~~~~~~~~~~~~~~~~

You can use the manual maintenance mode to mark the node as unavailable for HA
operation, prompting all services managed by HA to migrate to other nodes.

The target nodes for these migrations are selected from the other currently
available nodes, and determined by the HA group configuration and the
configured cluster resource scheduler (CRS) mode.
During each migration, the original node will be recorded in the HA manager's
state, so that the service can be moved back again automatically once the
maintenance mode is disabled and the node is back online.

Currently you can enable or disable the maintenance mode using the
`ha-manager` CLI tool.

.Enabling maintenance mode for a node
----
# ha-manager crm-command node-maintenance enable NODENAME
----

This will queue a CRM command; when the manager processes this command, it
will record the request for maintenance mode in the manager status. This
allows you to submit the command on any node, not just on the one you want to
place in or out of maintenance mode.

Once the LRM on the respective node picks the command up, it will mark itself
as unavailable, but still process all migration commands. This means that the
LRM self-fencing watchdog will stay active until all active services have been
moved and all running workers have finished.

Note that the LRM status will read `maintenance` as soon as the LRM has
picked up the requested state, not only when all services have been moved
away; this user experience is planned to be improved in the future.
For now, you can check for any active HA services left on the node, or watch
for a log line like `pve-ha-lrm[PID]: watchdog closed (disabled)` to know
when the node finished its transition into the maintenance mode.

NOTE: The manual maintenance mode is not automatically deleted on node reboot,
but only if it is either manually deactivated using the `ha-manager` CLI or if
the manager-status is manually cleared.

.Disabling maintenance mode for a node
----
# ha-manager crm-command node-maintenance disable NODENAME
----

The process of disabling the manual maintenance mode is similar to enabling
it. Using the `ha-manager` CLI command shown above will queue a CRM command
that, once processed, marks the respective LRM node as available again.

If you deactivate the maintenance mode, all services that were on the node
when the maintenance mode was activated will be moved back.
911 | ||
912 | [[ha_manager_shutdown_policy]] | |
913 | Shutdown Policy | |
914 | ~~~~~~~~~~~~~~~ | |
915 | ||
916 | Below you will find a description of the different HA policies for a node | |
917 | shutdown. Currently 'Conditional' is the default due to backward compatibility. | |
918 | Some users may find that 'Migrate' behaves more as expected. | |
919 | ||
920 | The shutdown policy can be configured in the Web UI (`Datacenter` -> `Options` | |
921 | -> `HA Settings`), or directly in `datacenter.cfg`: | |
922 | ||
923 | ---- | |
924 | ha: shutdown_policy=<value> | |
925 | ---- | |
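
For example, to always migrate services away on shutdown, set the policy to
`migrate` (the possible values, described below, are `conditional`, `migrate`,
`failover` and `freeze`):

----
ha: shutdown_policy=migrate
----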

Migrate
^^^^^^^

Once the Local Resource Manager (LRM) gets a shutdown request and this policy
is enabled, it will mark itself as unavailable for the current HA manager.
This triggers a migration of all HA services currently located on this node.
The LRM will try to delay the shutdown process until all running services get
moved away. But this expects that the running services *can* be migrated to
another node. In other words, the service must not be locally bound, for
example by using hardware passthrough. As non-group member nodes are
considered as runnable targets if no group member is available, this policy
can still be used when making use of HA groups with only some nodes selected.
But marking a group as 'restricted' tells the HA manager that the service
cannot run outside of the chosen set of nodes. If all of those nodes are
unavailable, the shutdown will hang until you manually intervene. Once the
shut-down node comes back online again, the previously displaced services
will be moved back, if they were not already manually migrated in-between.

NOTE: The watchdog is still active during the migration process on shutdown.
If the node loses quorum, it will be fenced and the services will be
recovered.

If you start a (previously stopped) service on a node which is currently being
maintained, the node needs to be fenced to ensure that the service can be
moved and started on another available node.

Failover
^^^^^^^^

This mode ensures that all services get stopped, but that they will also be
recovered, if the current node is not online soon. It can be useful when doing
maintenance on a cluster scale, where live-migrating VMs may not be possible
if too many nodes are powered off at a time, but you still want to ensure HA
services get recovered and started again as soon as possible.

Freeze
^^^^^^

This mode ensures that all services get stopped and frozen, so that they won't
get recovered until the current node is online again.

Conditional
^^^^^^^^^^^

The 'Conditional' shutdown policy automatically detects if a shutdown or a
reboot is requested, and changes behaviour accordingly.

.Shutdown

A shutdown ('poweroff') is usually done if it is planned for the node to stay
down for some time. The LRM stops all managed services in this case. This
means that other nodes will take over those services afterwards.

NOTE: Recent hardware has large amounts of memory (RAM). So we stop all
resources, then restart them to avoid online migration of all that RAM. If you
want to use online migration, you need to invoke that manually before you
shut down the node.


.Reboot

Node reboots are initiated with the 'reboot' command. This is usually done
after installing a new kernel. Please note that this is different from
``shutdown'', because the node immediately starts again.

The LRM tells the CRM that it wants to restart, and waits until the CRM puts
all resources into the `freeze` state (the same mechanism is used for
xref:ha_manager_package_updates[Package Updates]). This prevents those
resources from being moved to other nodes. Instead, the CRM starts the
resources after the reboot on the same node.


Manual Resource Movement
^^^^^^^^^^^^^^^^^^^^^^^^

Last but not least, you can also manually move resources to other nodes,
before you shut down or restart a node. The advantage is that you have full
control, and you can decide if you want to use online migration or not.

NOTE: Please do not 'kill' services like `pve-ha-crm`, `pve-ha-lrm` or
`watchdog-mux`. They manage and use the watchdog, so this can result in an
immediate node reboot or even reset.


[[ha_manager_crs]]
Cluster Resource Scheduling
---------------------------

The cluster resource scheduler (CRS) mode controls how HA selects nodes for
the recovery of a service, as well as for migrations that are triggered by a
shutdown policy. The default mode is `basic`; you can change it in the Web UI
(`Datacenter` -> `Options`), or directly in `datacenter.cfg`:

----
crs: ha=static
----

[thumbnail="screenshot/gui-datacenter-options-crs.png"]

The change will be in effect starting with the next manager round (after a few
seconds).

For each service that needs to be recovered or migrated, the scheduler
iteratively chooses the best node among the nodes with the highest priority in
the service's group.

NOTE: There are plans to add modes for (static and dynamic) load-balancing in
the future.

Basic Scheduler
~~~~~~~~~~~~~~~

The number of active HA services on each node is used to choose a recovery
node. Non-HA-managed services are currently not counted.

Static-Load Scheduler
~~~~~~~~~~~~~~~~~~~~~

IMPORTANT: The static mode is still a technology preview.

Static usage information from HA services on each node is used to choose a
recovery node. Usage of non-HA-managed services is currently not considered.

For this selection, each node in turn is considered as if the service was
already running on it, using CPU and memory usage from the associated guest
configuration. Then, for each such alternative, CPU and memory usage of all
nodes are considered, with memory being weighted much more, because it's a
truly limited resource. For both CPU and memory, the highest usage among
nodes (weighted more, as ideally no node should be overcommitted) and the
average usage of all nodes (to still be able to distinguish in case there
already is a more highly committed node) are considered.

IMPORTANT: The more services there are, the more possible combinations there
are, so it's currently not recommended to use it if you have thousands of
HA-managed services.


CRS Scheduling Points
~~~~~~~~~~~~~~~~~~~~~

The CRS algorithm is not applied for every service in every round, since this
would mean a large number of constant migrations. Depending on the workload,
this could put more strain on the cluster than could be avoided by constant
balancing.
That's why the {pve} HA manager favors keeping services on their current node.

The CRS is currently used at the following scheduling points:

- Service recovery (always active). When a node with active HA services fails,
  all its services need to be recovered to other nodes. The CRS algorithm will
  be used here to balance that recovery over the remaining nodes.

- HA group config changes (always active). If a node is removed from a group,
  or its priority is reduced, the HA stack will use the CRS algorithm to find
  a new target node for the HA services in that group, matching the adapted
  priority constraints.

- HA service stopped -> start transition (opt-in). Requesting that a stopped
  service should be started is a good opportunity to check for the best suited
  node as per the CRS algorithm, as moving stopped services is cheaper to do
  than moving them started, especially if their disk volumes reside on shared
  storage. You can enable this by setting the **`ha-rebalance-on-start`**
  CRS option in the datacenter config, as shown below. You can change that
  option also in the Web UI, under `Datacenter` -> `Options` ->
  `Cluster Resource Scheduling`.
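
A minimal sketch of the corresponding `datacenter.cfg` entry, assuming the
option is combined with the static scheduler mode:

----
crs: ha=static,ha-rebalance-on-start=1
----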

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]