1 [[chapter_ha_manager]]
2 ifdef::manvolnum[]
3 ha-manager(1)
4 =============
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 ha-manager - Proxmox VE HA Manager
11
12 SYNOPSIS
13 --------
14
15 include::ha-manager.1-synopsis.adoc[]
16
17 DESCRIPTION
18 -----------
19 endif::manvolnum[]
20 ifndef::manvolnum[]
21 High Availability
22 =================
23 :pve-toplevel:
24 endif::manvolnum[]
25
26 Our modern society depends heavily on information provided by
27 computers over the network. Mobile devices amplified that dependency,
28 because people can access the network any time from anywhere. If you
29 provide such services, it is very important that they are available
30 most of the time.
31
32 We can mathematically define the availability as the ratio of (A) the
33 total time a service is capable of being used during a given interval
34 to (B) the length of the interval. It is normally expressed as a
35 percentage of uptime in a given year.
36
37 .Availability - Downtime per Year
38 [width="60%",cols="<d,d",options="header"]
39 |===========================================================
40 |Availability % |Downtime per year
41 |99 |3.65 days
42 |99.9 |8.76 hours
43 |99.99 |52.56 minutes
44 |99.999 |5.26 minutes
45 |99.9999 |31.5 seconds
46 |99.99999 |3.15 seconds
47 |===========================================================
48
There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a way
to detect errors and do failover. This is relatively easy if you just
want to serve read-only web pages. But in general this is complex, and
sometimes impossible because you cannot modify the software
yourself. The following solutions work without modifying the
software:
57
58 * Use reliable ``server'' components
59 +
NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.
64
* Eliminate single points of failure (redundant components)
66 ** use an uninterruptible power supply (UPS)
67 ** use redundant power supplies on the main boards
68 ** use ECC-RAM
69 ** use redundant network hardware
70 ** use RAID for local storage
71 ** use distributed, redundant storage for VM data
72
73 * Reduce downtime
74 ** rapidly accessible administrators (24/7)
75 ** availability of spare parts (other nodes in a {pve} cluster)
76 ** automatic error detection (provided by `ha-manager`)
77 ** automatic failover (provided by `ha-manager`)
78
Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also make it easy to set up and use redundant storage and network
devices, so if one host fails, you can simply start those services on
another host within your cluster.
84
85 Even better, {pve} provides a software stack called `ha-manager`,
86 which can do that automatically for you. It is able to automatically
87 detect errors and do automatic failover.
88
89 {pve} `ha-manager` works like an ``automated'' administrator. First, you
90 configure what resources (VMs, containers, ...) it should
91 manage. `ha-manager` then observes correct functionality, and handles
92 service failover to another node in case of errors. `ha-manager` can
93 also handle normal user requests which may start, stop, relocate and
94 migrate a service.
95
But high availability comes at a price. High quality components are
more expensive, and making them redundant at least doubles the costs.
Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those additional
costs.
101
102 TIP: Increasing availability from 99% to 99.9% is relatively
103 simple. But increasing availability from 99.9999% to 99.99999% is very
104 hard and costly. `ha-manager` has typical error detection and failover
105 times of about 2 minutes, so you can get no more than 99.999%
106 availability.
107
108
109 Requirements
110 ------------
111
112 You must meet the following requirements before you start with HA:
113
114 * at least three cluster nodes (to get reliable quorum)
115
116 * shared storage for VMs and containers
117
118 * hardware redundancy (everywhere)
119
* use reliable ``server'' components
121
* hardware watchdog - if not available we fall back to the
  Linux kernel software watchdog (`softdog`)
124
125 * optional hardware fencing devices
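
Before you start, you can check on any node that the first requirement - a
quorate cluster - is met, for example with the standard cluster status command:

----
# pvecm status
----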
126
127
128 [[ha_manager_resources]]
129 Resources
130 ---------
131
We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type specific ID, for example `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.
137
For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, an HA managed resource should not depend on other resources.
143
144
145 Management Tasks
146 ----------------
147
148 This section provides a short overview of common management tasks. The
149 first step is to enable HA for a resource. This is done by adding the
150 resource to the HA resource configuration. You can do this using the
151 GUI, or simply use the command line tool, for example:
152
153 ----
154 # ha-manager add vm:100
155 ----
156
The HA stack now tries to start the resource and keep it
running. Please note that you can configure the ``requested''
resource state. For example, you may want the HA stack to stop the
resource:
161
162 ----
163 # ha-manager set vm:100 --state stopped
164 ----
165
166 and start it again later:
167
168 ----
169 # ha-manager set vm:100 --state started
170 ----
171
172 You can also use the normal VM and container management commands. They
173 automatically forward the commands to the HA stack, so
174
175 ----
176 # qm start 100
177 ----
178
simply sets the requested state to `started`. The same applies to `qm
stop`, which sets the requested state to `stopped`.
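
For example, stopping the VM with the regular management command just updates
the requested HA state as well:

----
# qm stop 100
----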
181
NOTE: The HA stack works fully asynchronously and needs to communicate
with other cluster members. Therefore, it takes a few seconds until you see
the result of such actions.
185
186 To view the current HA resource configuration use:
187
188 ----
189 # ha-manager config
190 vm:100
191 state stopped
192 ----
193
194 And you can view the actual HA manager and resource state with:
195
196 ----
197 # ha-manager status
198 quorum OK
199 master node1 (active, Wed Nov 23 11:07:23 2016)
200 lrm elsa (active, Wed Nov 23 11:07:19 2016)
201 service vm:100 (node1, started)
202 ----
203
204 You can also initiate resource migration to other nodes:
205
206 ----
207 # ha-manager migrate vm:100 node2
208 ----
209
This uses online migration and tries to keep the VM running. Online
migration needs to transfer all used memory over the network, so it is
sometimes faster to stop the VM and then restart it on the new node. This can be
done using the `relocate` command:
214
215 ----
216 # ha-manager relocate vm:100 node2
217 ----
218
219 Finally, you can remove the resource from the HA configuration using
220 the following command:
221
222 ----
223 # ha-manager remove vm:100
224 ----
225
226 NOTE: This does not start or stop the resource.
227
All HA related tasks can also be done in the GUI, so there is no need to
use the command line at all.
230
231
232 How It Works
233 ------------
234
235 This section provides a detailed description of the {PVE} HA manager
236 internals. It describes all involved daemons and how they work
237 together. To provide HA, two daemons run on each node:
238
239 `pve-ha-lrm`::
240
241 The local resource manager (LRM), which controls the services running on
242 the local node. It reads the requested states for its services from
243 the current manager status file and executes the respective commands.
244
245 `pve-ha-crm`::
246
247 The cluster resource manager (CRM), which makes the cluster wide
248 decisions. It sends commands to the LRM, processes the results,
249 and moves resources to other nodes if something fails. The CRM also
250 handles node fencing.
251
252
253 .Locks in the LRM & CRM
254 [NOTE]
255 Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active only once and working. As an
257 LRM only executes actions when it holds its lock, we can mark a failed node
258 as fenced if we can acquire its lock. This lets us then recover any failed
259 HA services securely without any interference from the now unknown failed node.
This all gets supervised by the CRM, which currently holds the manager master
lock.
262
263
264 Service States
265 ~~~~~~~~~~~~~~
266
The CRM uses a service state enumeration to record the current service
state. We display this state in the GUI and you can query it using
the `ha-manager` command line tool:
270
271 ----
272 # ha-manager status
273 quorum OK
274 master elsa (active, Mon Nov 21 07:23:29 2016)
275 lrm elsa (active, Mon Nov 21 07:23:22 2016)
276 service ct:100 (elsa, stopped)
277 service ct:102 (elsa, started)
278 service vm:501 (elsa, started)
279 ----
280
281 Here is the list of possible states:
282
283 stopped::
284
285 Service is stopped (confirmed by LRM). If the LRM detects a stopped
286 service is still running, it will stop it again.
287
288 request_stop::
289
290 Service should be stopped. The CRM waits for confirmation from the
291 LRM.
292
293 stopping::
294
A stop request is pending, but the CRM has not received it so far.
296
297 started::
298
Service is active and the LRM should start it ASAP, if not already running.
If the service fails and is detected to be not running, the LRM
restarts it
302 (see xref:ha_manager_start_failure_policy[Start Failure Policy]).
303
304 starting::
305
A start request is pending, but the CRM has not yet received confirmation from the
LRM that the service is running.
308
309 fence::
310
Wait for node fencing (the service's node is not inside the quorate cluster
partition). As soon as the node gets fenced successfully, the service will
be recovered to another node, if possible
314 (see xref:ha_manager_fencing[Fencing]).
315
316 freeze::
317
318 Do not touch the service state. We use this state while we reboot a
319 node, or when we restart the LRM daemon
320 (see xref:ha_manager_package_updates[Package Updates]).
321
322 ignored::
323
Act as if the service were not managed by HA at all.
Useful when full control over the service is temporarily desired,
without removing it from the HA configuration.
327
328
329 migrate::
330
331 Migrate service (live) to other node.
332
333 error::
334
335 Service is disabled because of LRM errors. Needs manual intervention
336 (see xref:ha_manager_error_recovery[Error Recovery]).
337
338 queued::
339
340 Service is newly added, and the CRM has not seen it so far.
341
342 disabled::
343
Service is stopped and marked as `disabled`.
345
346
347 Local Resource Manager
348 ~~~~~~~~~~~~~~~~~~~~~~
349
350 The local resource manager (`pve-ha-lrm`) is started as a daemon on
351 boot and waits until the HA cluster is quorate and thus cluster wide
352 locks are working.
353
354 It can be in three states:
355
356 wait for agent lock::
357
358 The LRM waits for our exclusive lock. This is also used as idle state if no
359 service is configured.
360
361 active::
362
363 The LRM holds its exclusive lock and has services configured.
364
365 lost agent lock::
366
The LRM lost its lock; this means a failure happened and quorum was lost.
368
After the LRM gets in the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started. These workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved for
the CRM.
377
378 .Maximum Concurrent Worker Adjustment Tips
379 [NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory wise) services. Make sure that no congestion happens even in the
worst case, and lower the `max_worker` value if needed. On the contrary, if you
have a particularly powerful, high end setup you may also want to increase it.
386
Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine - respective to the command's output - act on it.
391
The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions are the `stop` and the `error` command;
these two do not depend on the result produced and are executed
always in the case of the stopped state and once in the case of
the error state.
401
402 .Read the Logs
403 [NOTE]
The HA stack logs every action it makes. This helps to understand what
happens in the cluster, and also why it happens. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for `pve-ha-crm` on the node which is the current master.
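
For example, to inspect both daemons while analyzing a failover (add `-f` to
follow new log entries as they arrive):

----
# journalctl -u pve-ha-lrm    # on the node(s) running the service
# journalctl -u pve-ha-crm    # on the current CRM master node
----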
409
410 Cluster Resource Manager
411 ~~~~~~~~~~~~~~~~~~~~~~~~
412
413 The cluster resource manager (`pve-ha-crm`) starts on each node and
414 waits there for the manager lock, which can only be held by one node
415 at a time. The node which successfully acquires the manager lock gets
416 promoted to the CRM master.
417
418 It can be in three states:
419
420 wait for agent lock::
421
422 The CRM waits for our exclusive lock. This is also used as idle state if no
service is configured.
424
425 active::
426
The CRM holds its exclusive lock and has services configured.
428
429 lost agent lock::
430
The CRM lost its lock; this means a failure happened and quorum was lost.
432
Its main task is to manage the services which are configured to be highly
available and to try to always enforce the requested state. For example, a
service with the requested state 'started' will be started if it's not
already running. If it crashes, it will be automatically started again.
Thus the CRM dictates the actions the LRM needs to execute.
438
When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.
442
When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out; this happens after 60 seconds.
447
448
449 HA Simulator
450 ------------
451
452 [thumbnail="screenshot/gui-ha-manager-status.png"]
453
By using the HA simulator you can test and learn all functionalities of the
Proxmox VE HA solution.

By default, the simulator allows you to watch and test the behaviour of a
real-world 3 node cluster with 6 VMs. You can also add or remove additional VMs
or containers.
460
You do not have to set up or configure a real cluster; the HA simulator runs out
of the box.
463
464 Install with apt:
465
466 ----
467 apt install pve-ha-simulator
468 ----
469
You can even install the package on any Debian-based system without any
other Proxmox VE packages. For that you will need to download the package and
copy it to the system you want to run it on. When you install
the package with apt from the local file system, it will also resolve the
required dependencies for you.
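
For example, after copying the downloaded package to the target system (the
exact file name depends on the version you downloaded):

----
# apt install ./pve-ha-simulator_*.deb
----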
475
476
To start the simulator on a remote machine you must have X11 redirection to
your current system.
479
480 If you are on a Linux machine you can use:
481
482 ----
483 ssh root@<IPofPVE> -Y
484 ----
485
On Windows it works with https://mobaxterm.mobatek.net/[MobaXterm].
487
After either connecting to an existing {pve} node with the simulator installed, or
installing it on your local, Debian-based system manually, you can try it out as
follows.

First you need to create a working directory, where the simulator saves its
current state and writes its default config:
494
495 ----
496 mkdir working
497 ----
498
499 Then, simply pass the created directory as parameter to 'pve-ha-simulator':
500
501 ----
502 pve-ha-simulator working/
503 ----
504
505 You can then start, stop, migrate the simulated HA services, or even check out
506 what happens on a node failure.
507
508 Configuration
509 -------------
510
511 The HA stack is well integrated into the {pve} API. So, for example,
512 HA can be configured via the `ha-manager` command line interface, or
513 the {pve} web interface - both interfaces provide an easy way to
514 manage HA. Automation tools can use the API directly.
515
516 All HA configuration files are within `/etc/pve/ha/`, so they get
517 automatically distributed to the cluster nodes, and all nodes share
518 the same HA configuration.
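
For example, an automation tool could list the configured HA resources through
the API; a quick sketch using the `pvesh` command line tool and the cluster HA
endpoint:

----
# pvesh get /cluster/ha/resources
----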
519
520
521 [[ha_manager_resource_config]]
522 Resources
523 ~~~~~~~~~
524
525 [thumbnail="screenshot/gui-ha-manager-status.png"]
526
527
528 The resource configuration file `/etc/pve/ha/resources.cfg` stores
529 the list of resources managed by `ha-manager`. A resource configuration
530 inside that list looks like this:
531
532 ----
533 <type>: <name>
534 <property> <value>
535 ...
536 ----
537
It starts with a resource type followed by a resource specific name,
separated by a colon. Together this forms the HA resource ID, which is
540 used by all `ha-manager` commands to uniquely identify a resource
541 (example: `vm:100` or `ct:101`). The next lines contain additional
542 properties:
543
544 include::ha-resources-opts.adoc[]
545
546 Here is a real world example with one VM and one container. As you see,
547 the syntax of those files is really simple, so it is even possible to
548 read or edit those files using your favorite editor:
549
550 .Configuration Example (`/etc/pve/ha/resources.cfg`)
551 ----
552 vm: 501
553 state started
554 max_relocate 2
555
556 ct: 102
557 # Note: use default settings for everything
558 ----
559
560 [thumbnail="screenshot/gui-ha-manager-add-resource.png"]
561
The above config was generated using the `ha-manager` command line tool:
563
564 ----
565 # ha-manager add vm:501 --state started --max_relocate 2
566 # ha-manager add ct:102
567 ----
568
569
570 [[ha_manager_groups]]
571 Groups
572 ~~~~~~
573
574 [thumbnail="screenshot/gui-ha-manager-groups-view.png"]
575
The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:
580
581 ----
582 group: <group>
583 nodes <node_list>
584 <property> <value>
585 ...
586 ----
587
588 include::ha-groups-opts.adoc[]
589
590 [thumbnail="screenshot/gui-ha-manager-add-group.png"]
591
592 A common requirement is that a resource should run on a specific
593 node. Usually the resource is able to run on other nodes, so you can define
594 an unrestricted group with a single member:
595
596 ----
597 # ha-manager groupadd prefer_node1 --nodes node1
598 ----
599
600 For bigger clusters, it makes sense to define a more detailed failover
601 behavior. For example, you may want to run a set of services on
602 `node1` if possible. If `node1` is not available, you want to run them
603 equally split on `node2` and `node3`. If those nodes also fail the
604 services should run on `node4`. To achieve this you could set the node
605 list to:
606
607 ----
608 # ha-manager groupadd mygroup1 -nodes "node1:2,node2:1,node3:1,node4"
609 ----
610
Another use case is if a resource uses other resources only available
on specific nodes, let's say `node1` and `node2`. We need to make sure
that the HA manager does not use other nodes, so we need to create a
restricted group with said nodes:
615
616 ----
617 # ha-manager groupadd mygroup2 -nodes "node1,node2" -restricted
618 ----
619
The above commands created the following group configuration file:
621
622 .Configuration Example (`/etc/pve/ha/groups.cfg`)
623 ----
624 group: prefer_node1
625 nodes node1
626
627 group: mygroup1
628 nodes node2:1,node4,node1:2,node3:1
629
630 group: mygroup2
631 nodes node2,node1
632 restricted 1
633 ----
634
635
The `nofailback` option is mostly useful to avoid unwanted resource
movements during administration tasks. For example, if you need to
migrate a service to a node which doesn't have the highest priority in the
group, you need to tell the HA manager to not move this service
instantly back by setting the `nofailback` option.
641
Another scenario is when a service was fenced and it got recovered to
another node. The admin tries to repair the fenced node and brings it
back online again, to investigate the failure cause and check if it runs
stably again. Setting the `nofailback` flag prevents the
recovered services from moving straight back to the fenced node.
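
For example, to create a group that prefers `node1`, but does not automatically
move services back to it once it recovers, and to put a resource into that
group (the group name and resource ID are only illustrative):

----
# ha-manager groupadd prefer_node1_nofb --nodes "node1:2,node2" --nofailback
# ha-manager set vm:100 --group prefer_node1_nofb
----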
647
648
649 [[ha_manager_fencing]]
650 Fencing
651 -------
652
On node failures, fencing ensures that the erroneous node is
guaranteed to be offline. This is required to make sure that no
resource runs twice when it gets recovered on another node. This is a
really important task, because without it, it would not be possible to
recover a resource on another node.
658
If a node did not get fenced, it would be in an unknown state, where
it may still have access to shared resources. This is really
dangerous! Imagine that every network but the storage one broke. Now,
while not reachable from the public network, the VM still runs and
writes to the shared storage.
664
If we then simply started up this VM on another node, we would get a
dangerous race condition, because we would write from both nodes. Such a
condition can destroy all VM data and the whole VM could be rendered
unusable. The recovery could also fail if the storage protects against
multiple mounts.
670
671
672 How {pve} Fences
673 ~~~~~~~~~~~~~~~~
674
675 There are different methods to fence a node, for example, fence
devices which cut off the power from the node or disable its
communication completely. Those are often quite expensive and bring
678 additional critical components into a system, because if they fail you
679 cannot recover any service.
680
681 We thus wanted to integrate a simpler fencing method, which does not
682 require additional external hardware. This can be done using
683 watchdog timers.
684
685 .Possible Fencing Methods
686 - external power switches
687 - isolate nodes by disabling complete network traffic on the switch
688 - self fencing using watchdog timers
689
Watchdog timers have been widely used in critical and dependable systems
since the beginning of microcontrollers. They are often independent
and simple integrated circuits which are used to detect and recover
from computer malfunctions.
694
During normal operation, `ha-manager` regularly resets the watchdog
timer to prevent it from elapsing. If, due to a hardware fault or
program error, the computer fails to reset the watchdog, the timer
will elapse and trigger a reset of the whole server (reboot).
699
Recent server motherboards often include such hardware watchdogs, but
these need to be configured. If no watchdog is available or
configured, we fall back to the Linux kernel 'softdog'. While still
reliable, it is not independent of the server's hardware, and thus has
a lower reliability than a hardware watchdog.
705
706
707 Configure Hardware Watchdog
708 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
709
710 By default, all hardware watchdog modules are blocked for security
711 reasons. They are like a loaded gun if not correctly initialized. To
712 enable a hardware watchdog, you need to specify the module to load in
713 '/etc/default/pve-ha-manager', for example:
714
715 ----
716 # select watchdog module (default is softdog)
717 WATCHDOG_MODULE=iTCO_wdt
718 ----
719
This configuration is read by the 'watchdog-mux' service, which loads
the specified module at startup.
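
To verify which watchdog was picked up after the next reboot, you can check the
service log (an informal check, not an official procedure):

----
# journalctl -b -u watchdog-mux
----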
722
723
724 Recover Fenced Services
725 ~~~~~~~~~~~~~~~~~~~~~~~
726
727 After a node failed and its fencing was successful, the CRM tries to
728 move services from the failed node to nodes which are still online.
729
The selection of nodes, on which those services get recovered, is
influenced by the resource `group` settings, the list of currently active
nodes, and their respective active service count.

The CRM first builds a set out of the intersection between user selected
nodes (from the `group` setting) and available nodes. It then chooses the
subset of nodes with the highest priority, and finally selects the node
with the lowest active service count. This minimizes the possibility
of an overloaded node.
739
CAUTION: On node failure, the CRM distributes services to the
remaining nodes. This increases the service count on those nodes, and
can lead to high load, especially on small clusters. Please design
your cluster so that it can handle such worst case scenarios.
744
745
746 [[ha_manager_start_failure_policy]]
747 Start Failure Policy
748 ---------------------
749
The start failure policy comes into effect if a service failed to start on a
node one or more times. It can be used to configure how often a restart
should be triggered on the same node, and how often a service should be
relocated, so that it gets a chance to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't available
on a quorate node anymore, e.g. due to network problems, but is still available
on other nodes, the relocate policy allows the service to be started nonetheless.
758
There are two service start recovery policy settings which can be configured
specifically for each resource.
761
762 max_restart::
763
764 Maximum number of tries to restart a failed service on the actual
765 node. The default is set to one.
766
767 max_relocate::
768
769 Maximum number of tries to relocate the service to a different node.
770 A relocate only happens after the max_restart value is exceeded on the
771 actual node. The default is set to one.
772
NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-started without fixing the error, only the restart policy gets
repeated.
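
For example, to allow two restart attempts on the current node and up to three
relocation attempts for `vm:100` (the values are chosen purely for
illustration):

----
# ha-manager set vm:100 --max_restart 2 --max_relocate 3
----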
777
778
779 [[ha_manager_error_recovery]]
780 Error Recovery
781 --------------
782
If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state, the service won't get touched
by the HA stack anymore. The only way out is disabling the service:
786
787 ----
788 # ha-manager set vm:100 --state disabled
789 ----
790
791 This can also be done in the web interface.
792
793 To recover from the error state you should do the following:
794
795 * bring the resource back into a safe and consistent state (e.g.:
796 kill its process if the service could not be stopped)
797
798 * disable the resource to remove the error flag
799
* fix the error which led to these failures
801
* *after* you fixed all errors, you may request that the service starts again, as shown in the example below
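
Putting this together for the `vm:100` example used above, the command line
sequence could look like this (after the underlying problem has been fixed):

----
# ha-manager set vm:100 --state disabled
# ha-manager set vm:100 --state started
----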
803
804
805 [[ha_manager_package_updates]]
806 Package Updates
807 ---------------
808
When updating the ha-manager, you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Updating one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all at once could result in a broken cluster and is generally not
good practice.
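
A per-node update cycle could, for example, look like this (the details depend
on your usual upgrade procedure; the important part is to verify the HA status
before moving on to the next node):

----
# apt update
# apt dist-upgrade
# ha-manager status
----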
816
Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from being touched by the cluster during the short time the LRM is restarting.
After that, the LRM may safely close the watchdog during a restart.
Such a restart happens normally during a package update and, as already stated,
an active master CRM is needed to acknowledge the requests from the LRM. If
this is not the case, the update process can take too long which, in the worst
case, may result in a reset triggered by the watchdog.
826
827
828 Node Maintenance
829 ----------------
830
It is sometimes necessary to shut down or reboot a node to do maintenance tasks,
either to replace hardware, or simply to install a new kernel image.
This is also true when using the HA stack. The behaviour of the HA stack during
a shutdown can be configured.
835
836 [[ha_manager_shutdown_policy]]
837 Shutdown Policy
838 ~~~~~~~~~~~~~~~
839
Below you will find a description of the different HA policies for a node
shutdown. Currently, 'Conditional' is the default due to backward compatibility.
Some users may find that 'Migrate' behaves more as expected.
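
The policy is a cluster-wide datacenter option. As a sketch, the corresponding
entry in `/etc/pve/datacenter.cfg` could look like this - please check the
datacenter configuration documentation for the exact syntax:

----
ha: shutdown_policy=migrate
----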
843
844 Migrate
845 ^^^^^^^
846
Once the Local Resource Manager (LRM) gets a shutdown request and this policy
is enabled, it will mark itself as unavailable for the current HA manager.
This triggers a migration of all HA services currently located on this node.
Until all running services have been moved away, the LRM will try to delay the
shutdown process. But this expects that the running services *can* be migrated
to another node. In other words, the service must not be locally bound, for
example by using hardware passthrough. As non-group member nodes are considered
as runnable targets if no group member is available, this policy can still be
used when making use of HA groups with only some nodes selected. But marking a
group as 'restricted' tells the HA manager that the service cannot run outside
of the chosen set of nodes. If all of those nodes are unavailable, the shutdown
will hang until you manually intervene. Once the shut down node comes back
online again, the previously displaced services will be moved back, if they did
not get migrated manually in-between.
861
862 NOTE: The watchdog is still active during the migration process on shutdown.
863 If the node loses quorum it will be fenced and the services will be recovered.
864
865 If you start a (previously stopped) service on a node which is currently being
866 maintained, the node needs to be fenced to ensure that the service can be moved
867 and started on another, available, node.
868
869 Failover
870 ^^^^^^^^
871
This mode ensures that all services get stopped, but that they will also be
recovered, if the current node is not online soon. It can be useful when doing
maintenance on a cluster scale, where live-migrating VMs may not be possible if
too many nodes are powered off at a time, but you still want to ensure HA
services get recovered and started again as soon as possible.
877
878 Freeze
879 ^^^^^^
880
881 This mode ensures that all services get stopped and frozen, so that they won't
882 get recovered until the current node is online again.
883
884 Conditional
885 ^^^^^^^^^^^
886
887 The 'Conditional' shutdown policy automatically detects if a shutdown or a
888 reboot is requested, and changes behaviour accordingly.
889
890 .Shutdown
891
A shutdown ('poweroff') is usually done if the node is planned to stay down for
some time. The LRM stops all managed services in this case. This means that
other nodes will take over those services afterwards.
895
896 NOTE: Recent hardware has large amounts of memory (RAM). So we stop all
897 resources, then restart them to avoid online migration of all that RAM. If you
898 want to use online migration, you need to invoke that manually before you
899 shutdown the node.
900
901
902 .Reboot
903
904 Node reboots are initiated with the 'reboot' command. This is usually done
905 after installing a new kernel. Please note that this is different from
906 ``shutdown'', because the node immediately starts again.
907
The LRM tells the CRM that it wants to restart, and waits until the CRM puts
all resources into the `freeze` state (the same mechanism is used for
xref:ha_manager_package_updates[Package Updates]). This prevents those
resources from being moved to other nodes. Instead, the CRM starts the resources
on the same node after the reboot.
913
914
915 Manual Resource Movement
916 ^^^^^^^^^^^^^^^^^^^^^^^^
917
Last but not least, you can also move resources manually to other nodes, before
you shut down or restart a node. The advantage is that you have full control,
920 and you can decide if you want to use online migration or not.
921
NOTE: Please do not 'kill' services like `pve-ha-crm`, `pve-ha-lrm` or
`watchdog-mux`. They manage and use the watchdog, so this can result in an
immediate node reboot or even reset.
925
926
927 ifdef::manvolnum[]
928 include::pve-copyright.adoc[]
929 endif::manvolnum[]
930