[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
High Availability
=================
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices have amplified that dependency,
because people can access the network any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.

We can mathematically define the availability as the ratio of (A) the
total time a service is capable of being used during a given interval
to (B) the length of the interval. It is normally expressed as a
percentage of uptime in a given year.

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99 |3.65 days
|99.9 |8.76 hours
|99.99 |52.56 minutes
|99.999 |5.26 minutes
|99.9999 |31.5 seconds
|99.99999 |3.15 seconds
|===========================================================

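These values follow directly from the definition above. For example, the
99.99% row is obtained as (using a 365 day year):

----
(1 - 0.9999) x 365 days x 24 h x 60 min = 52.56 minutes
----
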
There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a way
to detect errors and do failover. This is relatively easy if you just
want to serve read-only web pages. But in general this is complex, and
sometimes impossible, because you cannot modify the software
yourself. The following solutions work without modifying the
software:

* Use reliable ``server'' components
+
NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies on the main boards
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also support setting up and using redundant storage and network
devices, so if one host fails, you can simply start those services on
another host within your cluster.

Even better, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to automatically
detect errors and handle the failover for you.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. `ha-manager` then observes correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests, which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least doubles the
costs. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those additional
costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple, but increasing it from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.


Requirements
------------

You must meet the following requirements before you start with HA:

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* use reliable ``server'' components

* hardware watchdog - if not available we fall back to the
  Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices

[[ha_manager_resources]]
Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type specific ID, e.g.: `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, an HA managed resource should not depend on other resources.


Management Tasks
----------------

This section provides a short overview of common management tasks. The
first step is to enable HA for a resource. This is done by adding the
resource to the HA resource configuration. You can do this using the
GUI, or simply use the command line tool, for example:

----
# ha-manager add vm:100
----

The HA stack now tries to start the resource and keep it
running. Please note that you can configure the ``requested''
resource state. For example, you may want the HA stack to stop the
resource:

----
# ha-manager set vm:100 --state stopped
----

and start it again later:

----
# ha-manager set vm:100 --state started
----

You can also use the normal VM and container management commands. They
automatically forward the commands to the HA stack, so

----
# qm start 100
----

simply sets the requested state to `started`. The same applies to `qm
stop`, which sets the requested state to `stopped`.

NOTE: The HA stack works fully asynchronously and needs to communicate
with other cluster members, so it takes a few seconds until you see
the result of such actions.

To view the current HA resource configuration use:

----
# ha-manager config
vm:100
    state stopped
----

And you can view the actual HA manager and resource state with:

----
# ha-manager status
quorum OK
master node1 (active, Wed Nov 23 11:07:23 2016)
lrm elsa (active, Wed Nov 23 11:07:19 2016)
service vm:100 (node1, started)
----

You can also initiate resource migration to other nodes:

----
# ha-manager migrate vm:100 node2
----

This uses online migration and tries to keep the VM running. Online
migration needs to transfer all used memory over the network, so it is
sometimes faster to stop the VM, then restart it on the new node. This can be
done using the `relocate` command:

----
# ha-manager relocate vm:100 node2
----

Finally, you can remove the resource from the HA configuration using
the following command:

----
# ha-manager remove vm:100
----

NOTE: This does not start or stop the resource.

All HA related tasks can also be done in the GUI, so there is no need to
use the command line at all.


How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes all involved daemons and how they work
together. To provide HA, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM), which controls the services running on
the local node. It reads the requested states for its services from
the current manager status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM), which makes the cluster wide
decisions. It sends commands to the LRM, processes the results,
and moves resources to other nodes if something fails. The CRM also
handles node fencing.

.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active exactly once and working. As
an LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This lets us then recover any failed
HA services securely, without any interference from the now unknown failed node.
This is all supervised by the CRM, which currently holds the manager master
lock.
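
Both daemons run as regular system services. To check that they are up on a
node, you can query their status (the systemd unit names below are assumed to
match the daemon names):

----
# systemctl status pve-ha-crm pve-ha-lrm
----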
Service States
~~~~~~~~~~~~~~

The CRM uses a service state enumeration to record the current service
state. We display this state in the GUI, and you can query it using
the `ha-manager` command line tool:

----
# ha-manager status
quorum OK
master elsa (active, Mon Nov 21 07:23:29 2016)
lrm elsa (active, Mon Nov 21 07:23:22 2016)
service ct:100 (elsa, stopped)
service ct:102 (elsa, started)
service vm:501 (elsa, started)
----

Here is the list of possible states:

stopped::

Service is stopped (confirmed by LRM). If the LRM detects a stopped
service is still running, it will stop it again.

request_stop::

Service should be stopped. The CRM waits for confirmation from the
LRM.

stopping::

Pending stop request. But the CRM did not get the request so far.

started::

Service is active, and the LRM should start it ASAP if not already running.
If the service fails and is detected to be not running, the LRM
restarts it
(see xref:ha_manager_start_failure_policy[Start Failure Policy]).

starting::

Pending start request. But the CRM has not got any confirmation from the
LRM that the service is running.

fence::

Wait for node fencing (service node is not inside the quorate cluster
partition). As soon as the node gets fenced successfully, the service will
be recovered to another node, if possible
(see xref:ha_manager_fencing[Fencing]).

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon
(see xref:ha_manager_package_updates[Package Updates]).

ignored::

Act as if the service were not managed by HA at all.
Useful when full control over the service is temporarily desired,
without removing it from the HA configuration.

migrate::

Migrate service (live) to another node.

error::

Service is disabled because of LRM errors. Needs manual intervention
(see xref:ha_manager_error_recovery[Error Recovery]).

queued::

Service is newly added, and the CRM has not seen it so far.

disabled::

Service is stopped and marked as `disabled`.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as idle state if no
service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock, this means a failure happened and quorum was lost.

After the LRM gets into the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started. These workers are running in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory wise) services. Ensure that also in the worst case no congestion
happens, and lower the `max_worker` value if needed. On the contrary, if you
have a particularly powerful, high end setup you may also want to increase it.

Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine act on the command's output.
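
Both status files are JSON encoded, so for debugging you can simply read them
on any node, for example (assuming the `jq` tool is installed for
pretty-printing; `<nodename>` is a placeholder):

----
# jq . /etc/pve/ha/manager_status
# jq . /etc/pve/nodes/<nodename>/lrm_status
----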

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions are the `stop` and the `error` commands;
these two do not depend on the result produced and are executed
always in the case of the stopped state, and once in the case of
the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
happens in the cluster, and also why. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for `pve-ha-crm` on the node which is the current master.


Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as idle state if no
service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock, this means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and try to always enforce the requested state. For example, a
service with the requested state 'started' will be started if it is not
already running. If it crashes, it will be automatically started again.
Thus the CRM dictates the actions the LRM needs to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out; this happens after 60 seconds.


HA Simulator
------------

[thumbnail="screenshot/gui-ha-manager-status.png"]

By using the HA simulator you can test and learn all functionalities of the
Proxmox VE HA solution.

The simulator allows you to watch and test the behaviour of a real-world 3 node
cluster with 6 VMs. You can also add or remove additional VMs or containers.

You do not have to setup or configure a real cluster, the HA simulator runs out
of the box.

Install with apt:

----
apt install pve-ha-simulator
----

You can even install the package on a Debian or Debian-based system without any
other Proxmox VE packages. For that you will need to download the package and
copy it to the system you want to run it on for installation. When you install
the package with apt from the local file system, it will also resolve the
required dependencies for you.
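
For example, assuming the downloaded package file was already copied to the
target system, a local installation which also pulls in the dependencies could
look like this (the exact file name depends on the package version):

----
apt install ./pve-ha-simulator_*.deb
----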

To start the simulator on a remote machine you must have an X11 redirection to
your current system.

If you are on a Linux machine you can use:

----
ssh root@<IPofPVE4> -Y
----

On Windows it works with https://mobaxterm.mobatek.net/[mobaxterm].

To start the simulator, first create a working directory:

----
mkdir working
----

Then start the simulator with the directory as parameter:

----
pve-ha-simulator working/
----


Configuration
-------------

The HA stack is well integrated into the {pve} API. So, for example,
HA can be configured via the `ha-manager` command line interface, or
the {pve} web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.
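
For example, an automation tool could read the current HA resource
configuration through the API. From a node's shell, this can be tried with the
`pvesh` CLI (the path below assumes the standard cluster HA API layout):

----
# pvesh get /cluster/ha/resources
----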

All HA configuration files are within `/etc/pve/ha/`, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.


[[ha_manager_resource_config]]
Resources
~~~~~~~~~

[thumbnail="screenshot/gui-ha-manager-status.png"]

The resource configuration file `/etc/pve/ha/resources.cfg` stores
the list of resources managed by `ha-manager`. A resource configuration
inside that list looks like this:

----
<type>: <name>
    <property> <value>
    ...
----

It starts with a resource type, followed by a resource specific name,
separated by a colon. Together this forms the HA resource ID, which is
used by all `ha-manager` commands to uniquely identify a resource
(example: `vm:100` or `ct:101`). The next lines contain additional
properties:

include::ha-resources-opts.adoc[]

Here is a real world example with one VM and one container. As you see,
the syntax of those files is really simple, so it is even possible to
read or edit those files using your favorite editor:

.Configuration Example (`/etc/pve/ha/resources.cfg`)
----
vm: 501
    state started
    max_relocate 2

ct: 102
    # Note: use default settings for everything
----

[thumbnail="screenshot/gui-ha-manager-add-resource.png"]

The above config was generated using the `ha-manager` command line tool:

----
# ha-manager add vm:501 --state started --max_relocate 2
# ha-manager add ct:102
----


[[ha_manager_groups]]
Groups
~~~~~~

[thumbnail="screenshot/gui-ha-manager-groups-view.png"]

The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:

----
group: <group>
    nodes <node_list>
    <property> <value>
    ...
----

include::ha-groups-opts.adoc[]

[thumbnail="screenshot/gui-ha-manager-add-group.png"]

A common requirement is that a resource should run on a specific
node. Usually the resource is able to run on other nodes, so you can define
an unrestricted group with a single member:

----
# ha-manager groupadd prefer_node1 --nodes node1
----

For bigger clusters, it makes sense to define a more detailed failover
behavior. For example, you may want to run a set of services on
`node1` if possible. If `node1` is not available, you want to run them
equally split on `node2` and `node3`. If those nodes also fail, the
services should run on `node4`. To achieve this you could set the node
list to:

----
# ha-manager groupadd mygroup1 -nodes "node1:2,node2:1,node3:1,node4"
----

Another use case is if a resource uses other resources only available
on specific nodes, let's say `node1` and `node2`. We need to make sure
that the HA manager does not use other nodes, so we need to create a
restricted group with said nodes:

----
# ha-manager groupadd mygroup2 -nodes "node1,node2" -restricted
----

The above commands created the following group configuration file:

.Configuration Example (`/etc/pve/ha/groups.cfg`)
----
group: prefer_node1
    nodes node1

group: mygroup1
    nodes node2:1,node4,node1:2,node3:1

group: mygroup2
    nodes node2,node1
    restricted 1
----


The `nofailback` option is mostly useful to avoid unwanted resource
movements during administration tasks. For example, if you need to
migrate a service to a node which doesn't have the highest priority in the
group, you need to tell the HA manager not to move this service
instantly back by setting the `nofailback` option.

Another scenario is when a service was fenced and it got recovered to
another node. The admin tries to repair the fenced node and brings it
up online again to investigate the failure cause and check if it runs
stably again. Setting the `nofailback` flag prevents the
recovered services from moving straight back to the fenced node.

[[ha_manager_fencing]]
Fencing
-------

On node failures, fencing ensures that the erroneous node is
guaranteed to be offline. This is required to make sure that no
resource runs twice when it gets recovered on another node. This is a
really important task, because without it, it would not be possible to
recover a resource on another node.

If a node did not get fenced, it would be in an unknown state, where
it may still have access to shared resources. This is really
dangerous! Imagine that every network but the storage one broke. Now,
while not reachable from the public network, the VM still runs and
writes to the shared storage.

If we then simply start up this VM on another node, we would get a
dangerous race condition, because we write from both nodes. Such
conditions can destroy all VM data, and the whole VM could be rendered
unusable. The recovery could also fail if the storage protects against
multiple mounts.


How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example, fence
devices which cut off the power from the node or disable their
communication completely. Those are often quite expensive and bring
additional critical components into a system, because if they fail you
cannot recover any service.

We thus wanted to integrate a simpler fencing method, which does not
require additional external hardware. This can be done using
watchdog timers.

.Possible Fencing Methods
- external power switches
- isolate nodes by disabling complete network traffic on the switch
- self fencing using watchdog timers

Watchdog timers have been widely used in critical and dependable systems
since the beginning of microcontrollers. They are often independent
and simple integrated circuits, which are used to detect and recover
from computer malfunctions.

During normal operation, `ha-manager` regularly resets the watchdog
timer to prevent it from elapsing. If, due to a hardware fault or
program error, the computer fails to reset the watchdog, the timer
will elapse and trigger a reset of the whole server (reboot).

Recent server motherboards often include such hardware watchdogs, but
these need to be configured. If no watchdog is available or
configured, we fall back to the Linux Kernel 'softdog'. While still
reliable, it is not independent of the server's hardware, and thus has
a lower reliability than a hardware watchdog.


Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, all hardware watchdog modules are blocked for security
reasons. They are like a loaded gun if not correctly initialized. To
enable a hardware watchdog, you need to specify the module to load in
'/etc/default/pve-ha-manager', for example:

----
# select watchdog module (default is softdog)
WATCHDOG_MODULE=iTCO_wdt
----

This configuration is read by the 'watchdog-mux' service, which loads
the specified module at startup.
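
After the next boot you can verify that the intended module was actually
loaded, for example (using the `iTCO_wdt` module from the example above):

----
# lsmod | grep iTCO_wdt
----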
Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node failed and its fencing was successful, the CRM tries to
move services from the failed node to nodes which are still online.

The selection of nodes, on which those services get recovered, is
influenced by the resource `group` settings, the list of currently active
nodes, and their respective active service count.

The CRM first builds a set out of the intersection between user selected
nodes (from the `group` setting) and available nodes. It then chooses the
subset of nodes with the highest priority, and finally selects the node
with the lowest active service count. This minimizes the possibility
of an overloaded node.
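
The following short sketch illustrates this selection logic. It is only an
illustration of the behavior described above, written in Python with made-up
names - it is not the actual `ha-manager` implementation:

[source,python]
----
def select_recovery_node(group_nodes, online_nodes, service_count):
    # 1. intersect the user selected group nodes with the online nodes
    candidates = {n: p for n, p in group_nodes.items() if n in online_nodes}
    if not candidates:
        return None  # no recovery target available
    # 2. keep only the subset with the highest priority
    top = max(candidates.values())
    best = [n for n, p in candidates.items() if p == top]
    # 3. pick the node with the lowest active service count
    return min(best, key=lambda n: service_count.get(n, 0))

# node1 (priority 2) failed; node2 and node3 share the next priority,
# node3 runs fewer services, so it is chosen:
print(select_recovery_node(
    {"node1": 2, "node2": 1, "node3": 1},
    {"node2", "node3"},
    {"node2": 3, "node3": 1},
))  # -> node3
----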

CAUTION: On node failure, the CRM distributes services to the
remaining nodes. This increases the service count on those nodes, and
can lead to high load, especially on small clusters. Please design
your cluster so that it can handle such worst case scenarios.

[[ha_manager_start_failure_policy]]
Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to start on a
node one or more times. It can be used to configure how often a restart
should be triggered on the same node, and how often a service should be
relocated, so that it has a chance to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't available
on a quorate node anymore, e.g. because of network problems, but still is on
other nodes, the relocate policy allows the service to be started nonetheless.

There are two service start recovery policy settings which can be configured
specifically for each resource.

max_restart::

Maximum number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.
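
Both parameters are normal resource properties, so they can be set when adding
a resource or changed later with `ha-manager set`. For example, to allow two
restart and two relocation tries for `vm:100`:

----
# ha-manager set vm:100 --max_restart 2 --max_relocate 2
----
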
NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-started without fixing the error, only the restart policy gets
repeated.


[[ha_manager_error_recovery]]
Error Recovery
--------------

If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state, the service won't get touched
by the HA stack anymore. The only way out is disabling a service:

----
# ha-manager set vm:100 --state disabled
----

This can also be done in the web interface.

To recover from the error state you should do the following:

* bring the resource back into a safe and consistent state (e.g.:
kill its process if the service could not be stopped)

* disable the resource to remove the error flag

* fix the error which led to these failures

* *after* you fixed all errors you may request that the service starts
again, as shown below
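
The final start request is just the normal requested state change already
shown in the Management Tasks section:

----
# ha-manager set vm:100 --state started
----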


[[ha_manager_package_updates]]
Package Updates
---------------

When updating the ha-manager, you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Updating one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all at once could leave you with a broken cluster state, and is
generally not good practice.
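
A conservative update of a single node could then look like this (standard
Debian package management commands; verify the HA status before continuing
with the next node):

----
# apt update
# apt full-upgrade
# ha-manager status
----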

Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from being touched by the cluster during the short time the LRM is
restarting. After that, the LRM may safely close the watchdog during a restart.
Such a restart normally happens during a package update and, as already stated,
an active master CRM is needed to acknowledge the requests from the LRM. If
this is not the case, the update process can take too long which, in the worst
case, may result in a reset triggered by the watchdog.


Node Maintenance
----------------

It is sometimes necessary to shutdown or reboot a node to do maintenance
tasks, such as to replace hardware, or simply to install a new kernel image.


Shutdown
~~~~~~~~

A shutdown ('poweroff') is usually done if the node is planned to stay
down for some time. The LRM stops all managed services in that
case. This means that other nodes will take over those services
afterwards.

NOTE: Recent hardware has large amounts of RAM. So we stop all
resources, then restart them, to avoid online migration of all that
RAM. If you want to use online migration, you need to invoke that
manually before you shutdown the node.


Reboot
~~~~~~

Node reboots are initiated with the 'reboot' command. This is usually
done after installing a new kernel. Please note that this is different
from ``shutdown'', because the node immediately starts again.

The LRM tells the CRM that it wants to restart, and waits until the
CRM puts all resources into the `freeze` state (the same mechanism is used
for xref:ha_manager_package_updates[Package Updates]). This prevents
those resources from being moved to other nodes. Instead, the CRM starts
the resources on the same node after the reboot.


Manual Resource Movement
~~~~~~~~~~~~~~~~~~~~~~~~

Last but not least, you can also move resources manually to other
nodes before you shutdown or restart a node. The advantage is that you
have full control, and you can decide if you want to use online
migration or not.
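
For example, to clear a node before maintenance, you could move its HA
services away with the commands already shown in the Management Tasks section:

----
# ha-manager migrate vm:100 node2
# ha-manager relocate ct:102 node2
----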

NOTE: Please do not 'kill' services like `pve-ha-crm`, `pve-ha-lrm` or
`watchdog-mux`. They manage and use the watchdog, so this can result
in a node reboot.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]