[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
High Availability
=================
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices amplified that dependency,
because people can access the network any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.

We can mathematically define availability as the ratio of (A), the
total time a service is capable of being used during a given interval,
to (B), the length of the interval. It is normally expressed as a
percentage of uptime in a given year.
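
For example, a service that accumulates 8.76 hours of downtime over one year
(8,760 hours) has an availability of (8760 - 8.76) / 8760 = 99.9% - the second
row of the table below.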

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99 |3.65 days
|99.9 |8.76 hours
|99.99 |52.56 minutes
|99.999 |5.26 minutes
|99.9999 |31.5 seconds
|99.99999 |3.15 seconds
|===========================================================

There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a way
to detect errors and do failover. If you only want to serve read-only
web pages, then this is relatively simple. However, this is generally complex,
and sometimes impossible, because you cannot modify the software yourself. The
following solutions work without modifying the software:

* Use reliable ``server'' components
+
NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies on the main boards
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also support the setup and use of redundant storage and network
devices, so if one host fails, you can simply start those services on
another host within your cluster.

Better still, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to automatically
detect errors and handle failover.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. Then, `ha-manager` observes the correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests, which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High-quality components are
more expensive, and making them redundant at least doubles the costs.
Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with the additional
costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.


Requirements
------------

You must meet the following requirements before you start with HA:

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* use reliable ``server'' components

* hardware watchdog - if not available, we fall back to the
Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices


[[ha_manager_resources]]
Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type-specific ID, for example `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, an HA managed resource should not depend on other resources.


Management Tasks
----------------

This section provides a short overview of common management tasks. The
first step is to enable HA for a resource. This is done by adding the
resource to the HA resource configuration. You can do this using the
GUI, or simply use the command line tool, for example:

----
# ha-manager add vm:100
----

The HA stack now tries to start the resources and keep them
running. Please note that you can configure the ``requested''
resource state. For example, you may want the HA stack to stop the
resource:

----
# ha-manager set vm:100 --state stopped
----

and start it again later:

----
# ha-manager set vm:100 --state started
----

You can also use the normal VM and container management commands. They
automatically forward the commands to the HA stack, so

----
# qm start 100
----

simply sets the requested state to `started`. The same applies to `qm
stop`, which sets the requested state to `stopped`.

NOTE: The HA stack works fully asynchronously and needs to communicate
with other cluster members. Therefore, it takes some seconds until you see
the result of such actions.

To view the current HA resource configuration use:

----
# ha-manager config
vm:100
    state stopped
----

And you can view the actual HA manager and resource state with:

----
# ha-manager status
quorum OK
master node1 (active, Wed Nov 23 11:07:23 2016)
lrm elsa (active, Wed Nov 23 11:07:19 2016)
service vm:100 (node1, started)
----

You can also initiate resource migration to other nodes:

----
# ha-manager migrate vm:100 node2
----

This uses online migration and tries to keep the VM running. Online
migration needs to transfer all used memory over the network, so it is
sometimes faster to stop the VM, then restart it on the new node. This can be
done using the `relocate` command:

----
# ha-manager relocate vm:100 node2
----

Finally, you can remove the resource from the HA configuration using
the following command:

----
# ha-manager remove vm:100
----

NOTE: This does not start or stop the resource.

All HA related tasks can also be done in the GUI, so there is no need to
use the command line at all.


How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes all involved daemons and how they work
together. To provide HA, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM), which controls the services running on
the local node. It reads the requested states for its services from
the current manager status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM), which makes the cluster-wide
decisions. It sends commands to the LRM, processes the results,
and moves resources to other nodes if something fails. The CRM also
handles node fencing.

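Both daemons run as ordinary systemd services, so their state can be inspected
with standard tooling; a quick sketch:

----
# systemctl status pve-ha-lrm pve-ha-crm
----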

.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active exactly once and working. As an
LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This then lets us recover any failed
HA services securely, without any interference from the now unknown failed node.
All of this is supervised by the CRM, which currently holds the manager master
lock.


Service States
~~~~~~~~~~~~~~

The CRM uses a service state enumeration to record the current service
state. This state is displayed on the GUI and can be queried using
the `ha-manager` command line tool:

----
# ha-manager status
quorum OK
master elsa (active, Mon Nov 21 07:23:29 2016)
lrm elsa (active, Mon Nov 21 07:23:22 2016)
service ct:100 (elsa, stopped)
service ct:102 (elsa, started)
service vm:501 (elsa, started)
----

Here is the list of possible states:

stopped::

Service is stopped (confirmed by the LRM). If the LRM detects that a stopped
service is still running, it will stop it again.

request_stop::

Service should be stopped. The CRM waits for confirmation from the
LRM.

stopping::

Pending stop request, which the CRM has not yet processed.

started::

Service is active, and the LRM should start it immediately if it is not
already running. If the service fails and is detected as not running,
the LRM restarts it
(see xref:ha_manager_start_failure_policy[Start Failure Policy]).

starting::

Pending start request. The CRM has not yet received confirmation from the
LRM that the service is running.

fence::

Wait for node fencing, as the service node is not inside the quorate cluster
partition (see xref:ha_manager_fencing[Fencing]).
As soon as the node gets fenced successfully, the service will be placed into the
recovery state.

recovery::

Wait for recovery of the service. The HA manager tries to find a new node on
which the service can run. This search depends not only on the list of online
and quorate nodes, but also on whether the service is a group member and how
such a group is limited.
As soon as a new available node is found, the service will be moved there and
initially placed into the stopped state. If it is configured to run, the new
node will start it.

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon
(see xref:ha_manager_package_updates[Package Updates]).

ignored::

Act as if the service were not managed by HA at all.
Useful when full control over the service is desired temporarily, without
removing it from the HA configuration (see the example after this list).

migrate::

Migrate the service (live) to another node.

error::

Service is disabled because of LRM errors. This needs manual intervention
(see xref:ha_manager_error_recovery[Error Recovery]).

queued::

Service is newly added and has not yet been seen by the CRM.

disabled::

Service is stopped and marked as `disabled`.

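The requested state of a resource is set with the same `--state` option shown
earlier. For example, to temporarily hand control of a service back to you
without removing it from the HA configuration (a sketch; `ignored` is accepted
as a requested state by recent versions of the HA stack):

----
# ha-manager set vm:100 --state ignored
----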

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster-wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as the idle state if no
service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock. This means a failure happened and quorum was lost.

After the LRM enters the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker is started; these workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When a worker finishes, it gets collected and its result is saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may occur at the same
time, which can lead to network congestion with slower networks and/or
big (memory wise) services. Also, ensure that in the worst case, congestion
is kept to a minimum, even if this means lowering the `max_worker` value.
Conversely, if you have a particularly powerful, high-end setup you may also
want to increase it.

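As an illustration, a minimal sketch of such a change in the datacenter
configuration file (note that the datacenter configuration reference spells
the option `max_workers`; verify the exact name for your release):

----
# /etc/pve/datacenter.cfg - raise the worker limit (sketch)
max_workers: 8
----
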
Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine - according to the command's output - act on it.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions to this behaviour are the `stop` and `error` commands;
these two do not depend on the result produced and are always executed
in the case of the stopped state, and once in the case of
the error state.

.Read the Logs
[NOTE]
The HA stack logs every action it makes. This helps to understand what
happened in the cluster, and also why. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for `pve-ha-crm` on the node which is the current master.

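For example, a sketch that inspects both daemons at once and limits the output
to the current boot (standard `journalctl` options):

----
# journalctl -b -u pve-ha-lrm -u pve-ha-crm
----
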
Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as the idle state if no
service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock. This means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and to try to always enforce the requested state. For example, a
service with the requested state 'started' will be started if it's not
already running. If it crashes, it will be automatically started again.
Thus, the CRM dictates the actions the LRM needs to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out (this happens after 60 seconds).


HA Simulator
------------

[thumbnail="screenshot/gui-ha-manager-status.png"]

By using the HA simulator, you can test and learn all the functionality of the
Proxmox VE HA solution.

By default, the simulator allows you to watch and test the behaviour of a
real-world 3 node cluster with 6 VMs. You can also add or remove additional VMs
or containers.

You do not have to set up or configure a real cluster; the HA simulator runs out
of the box.

Install with apt:

----
apt install pve-ha-simulator
----

You can even install the package on any Debian-based system without any
other Proxmox VE packages. For that, you will need to download the package and
copy it to the system you want to run it on for installation. When you install
the package with apt from the local file system, it will also resolve the
required dependencies for you.


To start the simulator on a remote machine, you must have X11 redirection to
your current system.

If you are on a Linux machine, you can use:

----
ssh root@<IPofPVE> -Y
----

On Windows, it works with https://mobaxterm.mobatek.net/[MobaXterm].

After connecting to an existing {pve} host with the simulator installed, or
after installing it on your local Debian-based system manually, you can try it
out as follows.

First, you need to create a working directory, where the simulator saves its
current state and writes its default config:

----
mkdir working
----

Then, simply pass the created directory as a parameter to 'pve-ha-simulator':

----
pve-ha-simulator working/
----

You can then start, stop, and migrate the simulated HA services, or even check
out what happens on a node failure.

Configuration
-------------

The HA stack is well integrated into the {pve} API. So, for example,
HA can be configured via the `ha-manager` command line interface, or
the {pve} web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.

All HA configuration files are within `/etc/pve/ha/`, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.


[[ha_manager_resource_config]]
Resources
~~~~~~~~~

[thumbnail="screenshot/gui-ha-manager-status.png"]


The resource configuration file `/etc/pve/ha/resources.cfg` stores
the list of resources managed by `ha-manager`. A resource configuration
inside that list looks like this:

----
<type>: <name>
    <property> <value>
    ...
----

It starts with a resource type, followed by a resource-specific name,
separated by a colon. Together, this forms the HA resource ID, which is
used by all `ha-manager` commands to uniquely identify a resource
(example: `vm:100` or `ct:101`). The next lines contain additional
properties:

include::ha-resources-opts.adoc[]

Here is a real-world example with one VM and one container. As you see,
the syntax of those files is really simple, so it is even possible to
read or edit those files using your favorite editor:

.Configuration Example (`/etc/pve/ha/resources.cfg`)
----
vm: 501
    state started
    max_relocate 2

ct: 102
# Note: use default settings for everything
----

[thumbnail="screenshot/gui-ha-manager-add-resource.png"]

The above config was generated using the `ha-manager` command line tool:

----
# ha-manager add vm:501 --state started --max_relocate 2
# ha-manager add ct:102
----


[[ha_manager_groups]]
Groups
~~~~~~

[thumbnail="screenshot/gui-ha-manager-groups-view.png"]

The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:

----
group: <group>
    nodes <node_list>
    <property> <value>
    ...
----

include::ha-groups-opts.adoc[]

[thumbnail="screenshot/gui-ha-manager-add-group.png"]

A common requirement is that a resource should run on a specific
node. Usually, the resource is able to run on other nodes, so you can define
an unrestricted group with a single member:

----
# ha-manager groupadd prefer_node1 --nodes node1
----

For bigger clusters, it makes sense to define a more detailed failover
behavior. For example, you may want to run a set of services on
`node1` if possible. If `node1` is not available, you want to run them
equally split on `node2` and `node3`. If those nodes also fail, the
services should run on `node4`. To achieve this, you could set the node
list to:

----
# ha-manager groupadd mygroup1 -nodes "node1:2,node2:1,node3:1,node4"
----

Another use case is if a resource uses other resources only available
on specific nodes, let's say `node1` and `node2`. We need to make sure
that the HA manager does not use the other nodes, so we need to create a
restricted group with said nodes:

----
# ha-manager groupadd mygroup2 -nodes "node1,node2" -restricted
----

The above commands created the following group configuration file:

.Configuration Example (`/etc/pve/ha/groups.cfg`)
----
group: prefer_node1
    nodes node1

group: mygroup1
    nodes node2:1,node4,node1:2,node3:1

group: mygroup2
    nodes node2,node1
    restricted 1
----


The `nofailback` option is mostly useful to avoid unwanted resource
movements during administration tasks. For example, if you need to
migrate a service to a node which doesn't have the highest priority in the
group, you need to tell the HA manager not to instantly move this service
back by setting the `nofailback` option.

Another scenario is when a service was fenced and recovered to
another node. The admin repairs the fenced node and brings it
back online, to investigate the cause of the failure and check if it runs
stably again. Setting the `nofailback` flag prevents the recovered services
from moving straight back to the fenced node.
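
As a sketch, the flag can be set when creating a group, or later via the
`groupset` subcommand (assuming `ha-manager groupset` is available in your
release):

----
# ha-manager groupset prefer_node1 -nofailback 1
----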


[[ha_manager_fencing]]
Fencing
-------

On node failures, fencing ensures that the erroneous node is
guaranteed to be offline. This is required to make sure that no
resource runs twice when it gets recovered on another node. This is a
really important task, because without this, it would not be possible to
recover a resource on another node.

If a node did not get fenced, it would be in an unknown state, where
it may still have access to shared resources. This is really
dangerous! Imagine that every network but the storage one broke. Now,
while not reachable from the public network, the VM still runs and
writes to the shared storage.

If we then simply start up this VM on another node, we would get a
dangerous race condition, because we would write from both nodes. Such
conditions can destroy all VM data, and the whole VM could be rendered
unusable. The recovery could also fail if the storage protects against
multiple mounts.


How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example, fence
devices which cut off the node's power or completely disable its
communication. Those are often quite expensive and bring
additional critical components into a system, because if they fail, you
cannot recover any service.

We thus wanted to integrate a simpler fencing method, which does not
require additional external hardware. This can be done using
watchdog timers.

.Possible Fencing Methods
- external power switches
- isolate nodes by disabling complete network traffic on the switch
- self fencing using watchdog timers

Watchdog timers have been widely used in critical and dependable systems
since the beginning of microcontrollers. They are often simple, independent
integrated circuits which are used to detect and recover from computer malfunctions.

During normal operation, `ha-manager` regularly resets the watchdog
timer to prevent it from elapsing. If, due to a hardware fault or
program error, the computer fails to reset the watchdog, the timer
will elapse and trigger a reset of the whole server (reboot).

Recent server motherboards often include such hardware watchdogs, but
these need to be configured. If no watchdog is available or
configured, we fall back to the Linux kernel 'softdog'. While still
reliable, it is not independent of the server's hardware, and thus has
a lower reliability than a hardware watchdog.


Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, all hardware watchdog modules are blocked for security
reasons. They are like a loaded gun if not correctly initialized. To
enable a hardware watchdog, you need to specify the module to load in
'/etc/default/pve-ha-manager', for example:

----
# select watchdog module (default is softdog)
WATCHDOG_MODULE=iTCO_wdt
----

This configuration is read by the 'watchdog-mux' service, which loads
the specified module at startup.

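To check which watchdog driver actually took over after a reboot, a hedged
sketch using standard tools (the output varies by hardware):

----
# dmesg | grep -i -e wdt -e watchdog
----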

Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node has failed and its fencing was successful, the CRM tries to
move services from the failed node to nodes which are still online.

The selection of nodes on which those services get recovered is
influenced by the resource `group` settings, the list of currently active
nodes, and their respective active service count.

The CRM first builds a set out of the intersection between user-selected
nodes (from the `group` setting) and available nodes. It then chooses the
subset of nodes with the highest priority, and finally selects the node
with the lowest active service count. This minimizes the chance
of an overloaded node.

CAUTION: On node failure, the CRM distributes services to the
remaining nodes. This increases the service count on those nodes, and
can lead to high load, especially on small clusters. Please design
your cluster so that it can handle such worst-case scenarios.


[[ha_manager_start_failure_policy]]
Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to start on a
node one or more times. It can be used to configure how often a restart
should be triggered on the same node and how often a service should be
relocated, so that it can be started on another node instead.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't available
on a quorate node anymore, for instance due to network problems, but is still
available on other nodes, the relocate policy allows the service to start
nonetheless.

There are two service start recovery policy settings, which can be configured
specifically for each resource.

max_restart::

Maximum number of attempts to restart a failed service on the current
node. The default is set to one.

max_relocate::

Maximum number of attempts to relocate the service to a different node.
A relocation only happens after the `max_restart` value is exceeded on the
current node. The default is set to one.

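Both are normal resource options and can be set with the `ha-manager`
commands shown earlier, for example:

----
# ha-manager set vm:100 --max_restart 2 --max_relocate 2
----
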
NOTE: The relocate count resets to zero only when the service has had at
least one successful start. This means that if a service is restarted
without the error being fixed, only the restart policy is
repeated.


[[ha_manager_error_recovery]]
Error Recovery
--------------

If, after all attempts, the service state could not be recovered, it gets
placed in an error state. In this state, the service won't get touched
by the HA stack anymore. The only way out is disabling the service:

----
# ha-manager set vm:100 --state disabled
----

This can also be done in the web interface.

To recover from the error state you should do the following:

* bring the resource back into a safe and consistent state (e.g.,
kill its process if the service could not be stopped)

* disable the resource to remove the error flag

* fix the error which led to these failures

* *after* you fixed all errors you may request that the service starts again
(see the sketch below)

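Putting the last two steps together, a sketch using the commands from this
chapter:

----
# ha-manager set vm:100 --state disabled
# ha-manager set vm:100 --state started
----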

[[ha_manager_package_updates]]
Package Updates
---------------

When updating the ha-manager, you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Updating one node after the other and checking the functionality of each node
after finishing the update helps to recover from potential problems, while
updating all at once could result in a broken cluster, and is generally not
good practice.

Also, the {pve} HA stack uses a request-acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from getting touched by the cluster during the short time the LRM is
restarting. After that, the LRM may safely close the watchdog during a restart.
Such a restart normally happens during a package update and, as already stated,
an active master CRM is needed to acknowledge the requests from the LRM. If
this is not the case, the update process can take too long, which, in the worst
case, may result in a reset triggered by the watchdog.
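
A per-node update sequence could therefore look like this (a sketch using
standard commands; verify the HA status before moving on to the next node):

----
# apt update
# apt full-upgrade
# ha-manager status
----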


Node Maintenance
----------------

It is sometimes necessary to shut down or reboot a node to do maintenance
tasks, such as to replace hardware, or simply to install a new kernel image.
This is also true when using the HA stack. The behaviour of the HA stack
during a shutdown can be configured.

[[ha_manager_shutdown_policy]]
Shutdown Policy
~~~~~~~~~~~~~~~

Below you will find a description of the different HA policies for a node
shutdown. Currently, 'Conditional' is the default, due to backward
compatibility. Some users may find that 'Migrate' behaves more as expected.

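The policy can be set cluster-wide in the datacenter configuration; as a
sketch (assuming the `ha` option syntax of `datacenter.cfg`), to select
'Migrate':

----
# /etc/pve/datacenter.cfg
ha: shutdown_policy=migrate
----
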
Migrate
^^^^^^^

Once the Local Resource Manager (LRM) gets a shutdown request and this policy
is enabled, it will mark itself as unavailable for the current HA manager.
This triggers a migration of all HA services currently located on this node.
The LRM will try to delay the shutdown process until all running services get
moved away. However, this expects that the running services *can* be migrated
to another node. In other words, the service must not be locally bound, for
example by using hardware passthrough. As non-group member nodes are considered
runnable targets if no group member is available, this policy can still be used
when making use of HA groups with only some nodes selected. But, marking a group
as 'restricted' tells the HA manager that the service cannot run outside of the
chosen set of nodes. If all of those nodes are unavailable, the shutdown will
hang until you manually intervene. Once the shut-down node comes back online
again, the previously displaced services will be moved back, if they were not
already manually migrated in the meantime.

NOTE: The watchdog is still active during the migration process on shutdown.
If the node loses quorum, it will be fenced and the services will be recovered.

If you start a (previously stopped) service on a node which is currently being
maintained, the node needs to be fenced to ensure that the service can be moved
and started on another available node.

Failover
^^^^^^^^

This mode ensures that all services get stopped, but that they will also be
recovered, if the current node is not online soon. It can be useful when doing
maintenance on a cluster scale, where live-migrating VMs may not be possible if
too many nodes are powered off at a time, but you still want to ensure HA
services get recovered and started again as soon as possible.

Freeze
^^^^^^

This mode ensures that all services get stopped and frozen, so that they won't
get recovered until the current node is online again.

Conditional
^^^^^^^^^^^

The 'Conditional' shutdown policy automatically detects if a shutdown or a
reboot is requested, and changes behaviour accordingly.

.Shutdown

A shutdown ('poweroff') is usually done if it is planned for the node to stay
down for some time. The LRM stops all managed services in this case. This means
that other nodes will take over those services afterwards.

NOTE: Recent hardware has large amounts of memory (RAM). So we stop all
resources, then restart them, to avoid online migration of all that RAM. If you
want to use online migration, you need to invoke that manually before you
shut down the node.


.Reboot

Node reboots are initiated with the 'reboot' command. This is usually done
after installing a new kernel. Please note that this is different from
``shutdown'', because the node immediately starts again.

The LRM tells the CRM that it wants to restart, and waits until the CRM puts
all resources into the `freeze` state (the same mechanism is used for
xref:ha_manager_package_updates[Package Updates]). This prevents those resources
from being moved to other nodes. Instead, the CRM starts the resources after the
reboot on the same node.


Manual Resource Movement
^^^^^^^^^^^^^^^^^^^^^^^^

Last but not least, you can also manually move resources to other nodes, before
you shut down or restart a node. The advantage is that you have full control,
and you can decide if you want to use online migration or not.

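For example, a sketch that live-migrates an HA-managed VM away before the
maintenance, using the `migrate` command shown earlier:

----
# ha-manager migrate vm:100 node2
----
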
NOTE: Please do not 'kill' services like `pve-ha-crm`, `pve-ha-lrm` or
`watchdog-mux`. They manage and use the watchdog, so this can result in an
immediate node reboot or even reset.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]
