[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
High Availability
=================
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices have amplified that
dependency, because people can access the network at any time from
anywhere. If you provide such services, it is very important that
they are available most of the time.

We can mathematically define availability as the ratio of (A), the
total time a service is capable of being used during a given
interval, to (B), the length of the interval. It is normally
expressed as a percentage of uptime in a given year.
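
As a quick cross-check of the table below, the downtime for a given
availability can be computed as `(1 - availability) * length of the
interval`. For one year (365 days):

----
one year         = 365 * 24 * 60 minutes = 525600 minutes
99.99% available -> 0.0001 * 525600 minutes = 52.56 minutes downtime
----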

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99 |3.65 days
|99.9 |8.76 hours
|99.99 |52.56 minutes
|99.999 |5.26 minutes
|99.9999 |31.5 seconds
|99.99999 |3.15 seconds
|===========================================================

There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a way
to detect errors and do failover. If you only want to serve read-only
web pages, then this is relatively simple. However, this is generally complex
and sometimes impossible, because you cannot modify the software yourself. The
following solutions work without modifying the software:

* Use reliable ``server'' components
+
NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies on the main boards
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also support the setup and use of redundant storage and network
devices, so if one host fails, you can simply start those services on
another host within your cluster.

Better still, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to automatically
detect errors and perform failover.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. Then, `ha-manager` observes the correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High-quality components are
more expensive, and making them redundant at least doubles the costs.
Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with the additional
costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.


Requirements
------------

You must meet the following requirements before you start with HA:

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* use reliable ``server'' components

* hardware watchdog - if not available we fall back to the
  Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices


[[ha_manager_resources]]
Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type-specific ID, for example `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, an HA-managed resource should not depend on other resources.


Management Tasks
----------------

This section provides a short overview of common management tasks. The
first step is to enable HA for a resource. This is done by adding the
resource to the HA resource configuration. You can do this using the
GUI, or simply use the command line tool, for example:

----
# ha-manager add vm:100
----

The HA stack now tries to start the resource and keep it
running. Please note that you can configure the ``requested''
resource state. For example, you may want the HA stack to stop the
resource:

----
# ha-manager set vm:100 --state stopped
----

and start it again later:

----
# ha-manager set vm:100 --state started
----

You can also use the normal VM and container management commands. They
automatically forward the commands to the HA stack, so

----
# qm start 100
----

simply sets the requested state to `started`. The same applies to `qm
stop`, which sets the requested state to `stopped`.

NOTE: The HA stack works fully asynchronously and needs to communicate
with other cluster members. Therefore, it takes some seconds until you see
the result of such actions.

To view the current HA resource configuration use:

----
# ha-manager config
vm:100
    state stopped
----

And you can view the actual HA manager and resource state with:

----
# ha-manager status
quorum OK
master node1 (active, Wed Nov 23 11:07:23 2016)
lrm elsa (active, Wed Nov 23 11:07:19 2016)
service vm:100 (node1, started)
----

You can also initiate resource migration to other nodes:

----
# ha-manager migrate vm:100 node2
----

This uses online migration and tries to keep the VM running. Online
migration needs to transfer all used memory over the network, so it is
sometimes faster to stop the VM, then restart it on the new node. This can be
done using the `relocate` command:

----
# ha-manager relocate vm:100 node2
----

Finally, you can remove the resource from the HA configuration using
the following command:

----
# ha-manager remove vm:100
----

NOTE: This does not start or stop the resource.

But all HA-related tasks can be done in the GUI, so there is no need to
use the command line at all.


How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes all involved daemons and how they work
together. To provide HA, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM), which controls the services running on
the local node. It reads the requested states for its services from
the current manager status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM), which makes the cluster-wide
decisions. It sends commands to the LRM, processes the results,
and moves resources to other nodes if something fails. The CRM also
handles node fencing.
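
Both daemons run as system services named after them; if you are unsure
whether they are running on a node, a quick check with the standard systemd
tooling (assuming the unit names match the daemon names above) is:

----
# systemctl status pve-ha-lrm pve-ha-crm
----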

.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active only once and working. As an
LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This then lets us recover any failed
HA services securely, without any interference from the now unknown failed node.
This all gets supervised by the CRM, which currently holds the manager master
lock.


Service States
~~~~~~~~~~~~~~

The CRM uses a service state enumeration to record the current service
state. This state is displayed in the GUI and can be queried using
the `ha-manager` command line tool:

----
# ha-manager status
quorum OK
master elsa (active, Mon Nov 21 07:23:29 2016)
lrm elsa (active, Mon Nov 21 07:23:22 2016)
service ct:100 (elsa, stopped)
service ct:102 (elsa, started)
service vm:501 (elsa, started)
----

Here is the list of possible states:

stopped::

Service is stopped (confirmed by LRM). If the LRM detects a stopped
service is still running, it will stop it again.

request_stop::

Service should be stopped. The CRM waits for confirmation from the
LRM.

stopping::

Pending stop request, but the CRM has not gotten the request so far.

started::

Service is active, and the LRM should start it ASAP if not already running.
If the service fails and is detected to be not running, the LRM
restarts it
(see xref:ha_manager_start_failure_policy[Start Failure Policy]).

starting::

Pending start request, but the CRM has not received any confirmation from the
LRM that the service is running.

fence::

Wait for node fencing, as the service node is not inside the quorate cluster
partition (see xref:ha_manager_fencing[Fencing]).
As soon as the node gets fenced successfully, the service will be placed into
the recovery state.

recovery::

Wait for recovery of the service. The HA manager tries to find a new node the
service can run on. This search depends not only on the list of online and
quorate nodes, but also on whether the service is a group member and how such
a group is limited.
As soon as a new available node is found, the service will be moved there and
initially placed into the stopped state. If it's configured to run, the new
node will do so.

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon
(see xref:ha_manager_package_updates[Package Updates]).

ignored::

Act as if the service were not managed by HA at all.
Useful when full control over the service is desired temporarily, without
removing it from the HA configuration.

migrate::

Migrate the service (live) to another node.

error::

Service is disabled because of LRM errors. Needs manual intervention
(see xref:ha_manager_error_recovery[Error Recovery]).

queued::

Service is newly added, and the CRM has not seen it so far.

disabled::

Service is stopped and marked as `disabled`.


Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster-wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as idle state if no
service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock. This means a failure happened and quorum was lost.

After the LRM is in the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started. These workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result is saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may occur at the same
time, which can lead to network congestion with slower networks and/or
big (memory-wise) services. Also, ensure that in the worst case, congestion is
at a minimum, even if this means lowering the `max_worker` value. On the
contrary, if you have a particularly powerful, high-end setup you may also want
to increase it.
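
As a hedged sketch only: assuming the key is spelled `max_worker`, as
referenced above, and is set in `/etc/pve/datacenter.cfg` using that file's
usual `key: value` syntax, raising the limit could look like this (verify the
exact key name against the datacenter configuration documentation of your
version):

----
# /etc/pve/datacenter.cfg
max_worker: 8
----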

Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine - with respect to the command's output - act on it.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions to this behaviour are the `stop` and `error` commands;
these two do not depend on the result produced, and are always executed in
the case of the stopped state and once in the case of the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it takes. This helps to understand what
happens in the cluster, and also why. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for `pve-ha-crm` on the node which is the current master.

Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as idle state if no
service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock. This means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and to always try to enforce the requested state. For example, a
service with the requested state 'started' will be started if it's not
already running. If it crashes, it will be automatically started again.
Thus, the CRM dictates the actions the LRM needs to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out (this happens after 60 seconds).


HA Simulator
------------

[thumbnail="screenshot/gui-ha-manager-status.png"]

By using the HA simulator you can test and learn all the functionality of the
Proxmox VE HA solution.

By default, the simulator allows you to watch and test the behaviour of a
real-world 3-node cluster with 6 VMs. You can also add or remove additional
VMs or containers.

You do not have to set up or configure a real cluster; the HA simulator runs
out of the box.

Install with apt:

----
apt install pve-ha-simulator
----

You can even install the package on any Debian-based system without any
other Proxmox VE packages. For that you will need to download the package and
copy it to the system you want to run it on for installation. When you install
the package with apt from the local file system, it will also resolve the
required dependencies for you.


To start the simulator on a remote machine you must have X11 redirection to
your current system.

If you are on a Linux machine you can use:

----
ssh root@<IPofPVE> -Y
----

On Windows it works with https://mobaxterm.mobatek.net/[mobaxterm].

After connecting to an existing {pve} with the simulator installed or
installing it on your local Debian-based system manually, you can try it out as
follows.

First, you need to create a working directory where the simulator saves its
current state and writes its default config:

----
mkdir working
----

Then, simply pass the created directory as a parameter to 'pve-ha-simulator':

----
pve-ha-simulator working/
----

You can then start, stop, migrate the simulated HA services, or even check out
what happens on a node failure.

Configuration
-------------

The HA stack is well integrated into the {pve} API. So, for example,
HA can be configured via the `ha-manager` command line interface, or
the {pve} web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.

All HA configuration files are within `/etc/pve/ha/`, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.
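
For example, once the resources and groups described in the following
sections have been configured, listing the directory will typically show at
least the two files discussed below (runtime files, such as the manager
status, may also be present):

----
# ls /etc/pve/ha/
groups.cfg  resources.cfg
----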
528
206c2476 529
4c34defd 530[[ha_manager_resource_config]]
206c2476
DM
531Resources
532~~~~~~~~~
533
1ff5e4e8 534[thumbnail="screenshot/gui-ha-manager-status.png"]
863a8f3a 535
4d63b3cc 536
85363588
DM
537The resource configuration file `/etc/pve/ha/resources.cfg` stores
538the list of resources managed by `ha-manager`. A resource configuration
a35aad4a 539inside that list looks like this:
85363588
DM
540
541----
8bdc398c 542<type>: <name>
85363588
DM
543 <property> <value>
544 ...
545----
546
698e5dd2
DM
547It starts with a resource type followed by a resource specific name,
548separated with colon. Together this forms the HA resource ID, which is
549used by all `ha-manager` commands to uniquely identify a resource
a9c77fec
DM
550(example: `vm:100` or `ct:101`). The next lines contain additional
551properties:
85363588
DM
552
553include::ha-resources-opts.adoc[]
554
8bdc398c 555Here is a real world example with one VM and one container. As you see,
470d4313 556the syntax of those files is really simple, so it is even possible to
8bdc398c
DM
557read or edit those files using your favorite editor:
558
e7b9b0ac 559.Configuration Example (`/etc/pve/ha/resources.cfg`)
8bdc398c
DM
560----
561vm: 501
562 state started
563 max_relocate 2
564
565ct: 102
a319e18b
DM
566 # Note: use default settings for everything
567----
568
1ff5e4e8 569[thumbnail="screenshot/gui-ha-manager-add-resource.png"]
4d63b3cc 570
049fc557 571The above config was generated using the `ha-manager` command line tool:
a319e18b
DM
572
573----
574# ha-manager add vm:501 --state started --max_relocate 2
575# ha-manager add ct:102
8bdc398c
DM
576----
577
85363588 578
1acab952 579[[ha_manager_groups]]
206c2476
DM
580Groups
581~~~~~~
582
1ff5e4e8 583[thumbnail="screenshot/gui-ha-manager-groups-view.png"]
4d63b3cc 584
85363588
DM
585The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
586define groups of cluster nodes. A resource can be restricted to run
206c2476
DM
587only on the members of such group. A group configuration look like
588this:
85363588 589
206c2476
DM
590----
591group: <group>
592 nodes <node_list>
593 <property> <value>
594 ...
595----
85363588 596
206c2476 597include::ha-groups-opts.adoc[]
22653ac8 598
1ff5e4e8 599[thumbnail="screenshot/gui-ha-manager-add-group.png"]
4d63b3cc 600
e60ce90c 601A common requirement is that a resource should run on a specific
1acab952
DM
602node. Usually the resource is able to run on other nodes, so you can define
603an unrestricted group with a single member:
604
605----
606# ha-manager groupadd prefer_node1 --nodes node1
607----
608
609For bigger clusters, it makes sense to define a more detailed failover
610behavior. For example, you may want to run a set of services on
611`node1` if possible. If `node1` is not available, you want to run them
049fc557 612equally split on `node2` and `node3`. If those nodes also fail, the
1acab952
DM
613services should run on `node4`. To achieve this you could set the node
614list to:
615
616----
617# ha-manager groupadd mygroup1 -nodes "node1:2,node2:1,node3:1,node4"
618----
619
620Another use case is if a resource uses other resources only available
621on specific nodes, lets say `node1` and `node2`. We need to make sure
622that HA manager does not use other nodes, so we need to create a
623restricted group with said nodes:
624
625----
626# ha-manager groupadd mygroup2 -nodes "node1,node2" -restricted
627----
628
049fc557 629The above commands created the following group configuration file:
1acab952
DM
630
631.Configuration Example (`/etc/pve/ha/groups.cfg`)
632----
633group: prefer_node1
634 nodes node1
635
636group: mygroup1
637 nodes node2:1,node4,node1:2,node3:1
638
639group: mygroup2
640 nodes node2,node1
641 restricted 1
642----
643
644
645The `nofailback` options is mostly useful to avoid unwanted resource
e60ce90c 646movements during administration tasks. For example, if you need to
049fc557
DW
647migrate a service to a node which doesn't have the highest priority in the
648group, you need to tell the HA manager not to instantly move this service
649back by setting the `nofailback` option.
1acab952
DM
650
651Another scenario is when a service was fenced and it got recovered to
652another node. The admin tries to repair the fenced node and brings it
049fc557
DW
653up online again to investigate the cause of failure and check if it runs
654stably again. Setting the `nofailback` flag prevents the recovered services from
655moving straight back to the fenced node.
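
As a sketch, and assuming the `nofailback` group option can be passed on the
command line like the other group options shown above (the group name
`prefer_node1_nofb` is just a placeholder), this could look like:

----
# ha-manager groupadd prefer_node1_nofb --nodes node1 --nofailback 1
----

Alternatively, set the flag in the GUI or edit `groups.cfg` directly.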


[[ha_manager_fencing]]
Fencing
-------

On node failures, fencing ensures that the erroneous node is
guaranteed to be offline. This is required to make sure that no
resource runs twice when it gets recovered on another node. This is a
really important task, because without this, it would not be possible to
recover a resource on another node.

If a node did not get fenced, it would be in an unknown state where
it may still have access to shared resources. This is really
dangerous! Imagine that every network but the storage one broke. Now,
while not reachable from the public network, the VM still runs and
writes to the shared storage.

If we then simply start up this VM on another node, we would get a
dangerous race condition, because we write from both nodes. Such
conditions can destroy all VM data and the whole VM could be rendered
unusable. The recovery could also fail if the storage protects against
multiple mounts.


How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example, fence
devices which cut off the power from the node or disable their
communication completely. Those are often quite expensive and bring
additional critical components into a system, because if they fail, you
cannot recover any service.

We thus wanted to integrate a simpler fencing method, which does not
require additional external hardware. This can be done using
watchdog timers.

.Possible Fencing Methods
- external power switches
- isolate nodes by disabling complete network traffic on the switch
- self fencing using watchdog timers

Watchdog timers have been widely used in critical and dependable systems
since the beginning of microcontrollers. They are often simple, independent
integrated circuits which are used to detect and recover from computer malfunctions.

During normal operation, `ha-manager` regularly resets the watchdog
timer to prevent it from elapsing. If, due to a hardware fault or
program error, the computer fails to reset the watchdog, the timer
will elapse and trigger a reset of the whole server (reboot).

Recent server motherboards often include such hardware watchdogs, but
these need to be configured. If no watchdog is available or
configured, we fall back to the Linux Kernel 'softdog'. While still
reliable, it is not independent of the server's hardware, and thus has
a lower reliability than a hardware watchdog.


Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, all hardware watchdog modules are blocked for security
reasons. They are like a loaded gun if not correctly initialized. To
enable a hardware watchdog, you need to specify the module to load in
'/etc/default/pve-ha-manager', for example:

----
# select watchdog module (default is softdog)
WATCHDOG_MODULE=iTCO_wdt
----

This configuration is read by the 'watchdog-mux' service, which loads
the specified module at startup.


Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node has failed and its fencing was successful, the CRM tries to
move services from the failed node to nodes which are still online.

The selection of nodes on which those services get recovered is
influenced by the resource `group` settings, the list of currently active
nodes, and their respective active service count.

The CRM first builds a set out of the intersection between user-selected
nodes (from the `group` setting) and available nodes. It then chooses the
subset of nodes with the highest priority, and finally selects the node
with the lowest active service count. This minimizes the possibility
of an overloaded node.

CAUTION: On node failure, the CRM distributes services to the
remaining nodes. This increases the service count on those nodes, and
can lead to high load, especially on small clusters. Please design
your cluster so that it can handle such worst-case scenarios.


[[ha_manager_start_failure_policy]]
Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to start on a
node one or more times. It can be used to configure how often a restart
should be triggered on the same node and how often a service should be
relocated, so that it has an attempt to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't available
on a quorate node anymore, for instance due to network problems, but is still
available on other nodes, the relocate policy allows the service to start
nonetheless.

There are two service start recovery policy settings which can be configured
specifically for each resource.

max_restart::

Maximum number of attempts to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of attempts to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.
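
Both settings are normal resource properties, so they can be set when adding
a resource or changed later. A short example, reusing `vm:100` from the
earlier management tasks section:

----
# ha-manager set vm:100 --max_restart 2 --max_relocate 2
----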

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-started without fixing the error, only the restart policy gets
repeated.


[[ha_manager_error_recovery]]
Error Recovery
--------------

If, after all attempts, the service state could not be recovered, it gets
placed in an error state. In this state, the service won't get touched
by the HA stack anymore. The only way out is disabling the service:

----
# ha-manager set vm:100 --state disabled
----

This can also be done in the web interface.

To recover from the error state you should do the following:

* bring the resource back into a safe and consistent state (e.g.:
kill its process if the service could not be stopped)

* disable the resource to remove the error flag

* fix the error which led to these failures

* *after* you fixed all errors, you may request that the service starts again
(see the example below)


[[ha_manager_package_updates]]
Package Updates
---------------

When updating the ha-manager, you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Updating one node after the other and checking the functionality of each node
after finishing the update helps to recover from potential problems, while
updating all at once could result in a broken cluster, and is generally not
good practice.

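A hedged sketch of this per-node procedure, using the standard Debian package
tools (run it on one node, verify the HA status, then continue with the next
node):

----
# apt update
# apt full-upgrade
# ha-manager status
----
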
Also, the {pve} HA stack uses a request-acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from getting touched by the cluster during the short time the LRM is restarting.
After that, the LRM may safely close the watchdog during a restart.
Such a restart normally happens during a package update and, as already stated,
an active master CRM is needed to acknowledge the requests from the LRM. If
this is not the case, the update process can take too long, which, in the worst
case, may result in a reset triggered by the watchdog.


Node Maintenance
----------------

It is sometimes necessary to shut down or reboot a node to do maintenance tasks,
such as to replace hardware, or simply to install a new kernel image. This is
also true when using the HA stack. The behaviour of the HA stack during a
shutdown can be configured.

[[ha_manager_shutdown_policy]]
Shutdown Policy
~~~~~~~~~~~~~~~

Below you will find a description of the different HA policies for a node
shutdown. Currently 'Conditional' is the default due to backward compatibility.
Some users may find that 'Migrate' behaves more as expected.

Migrate
^^^^^^^

Once the Local Resource Manager (LRM) gets a shutdown request and this policy
is enabled, it will mark itself as unavailable for the current HA manager.
This triggers a migration of all HA services currently located on this node.
The LRM will try to delay the shutdown process until all running services are
moved away. However, this expects that the running services *can* be migrated to
another node. In other words, the service must not be locally bound, for example
by using hardware passthrough. As non-group member nodes are considered as
runnable targets if no group member is available, this policy can still be used
when making use of HA groups with only some nodes selected. However, marking a
group as 'restricted' tells the HA manager that the service cannot run outside of
the chosen set of nodes. If all of those nodes are unavailable, the shutdown will
hang until you manually intervene. Once the shut-down node comes back online
again, the previously displaced services will be moved back, if they were not
already manually migrated in-between.

NOTE: The watchdog is still active during the migration process on shutdown.
If the node loses quorum, it will be fenced and the services will be recovered.

If you start a (previously stopped) service on a node which is currently being
maintained, the node needs to be fenced to ensure that the service can be moved
and started on another available node.

Failover
^^^^^^^^

This mode ensures that all services get stopped, but that they will also be
recovered, if the current node is not online soon. It can be useful when doing
maintenance on a cluster scale, where live-migrating VMs may not be possible if
too many nodes are powered off at a time, but you still want to ensure HA
services get recovered and started again as soon as possible.

Freeze
^^^^^^

This mode ensures that all services get stopped and frozen, so that they won't
get recovered until the current node is online again.

Conditional
^^^^^^^^^^^

The 'Conditional' shutdown policy automatically detects if a shutdown or a
reboot is requested, and changes behaviour accordingly.

.Shutdown

A shutdown ('poweroff') is usually done if it is planned for the node to stay
down for some time. The LRM stops all managed services in this case. This means
that other nodes will take over those services afterwards.

NOTE: Recent hardware has large amounts of memory (RAM). So we stop all
resources, then restart them to avoid online migration of all that RAM. If you
want to use online migration, you need to invoke that manually before you
shut down the node.


.Reboot

Node reboots are initiated with the 'reboot' command. This is usually done
after installing a new kernel. Please note that this is different from
``shutdown'', because the node immediately starts again.

The LRM tells the CRM that it wants to restart, and waits until the CRM puts
all resources into the `freeze` state (the same mechanism is used for
xref:ha_manager_package_updates[Package Updates]). This prevents those resources
from being moved to other nodes. Instead, the CRM starts the resources after the
reboot on the same node.


Manual Resource Movement
^^^^^^^^^^^^^^^^^^^^^^^^

Last but not least, you can also manually move resources to other nodes, before
you shut down or restart a node. The advantage is that you have full control,
and you can decide if you want to use online migration or not.
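
For example, using the HA-aware migrate command from the management tasks
section (the resource and target node are placeholders):

----
# ha-manager migrate vm:100 node2
----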

NOTE: Please do not 'kill' services like `pve-ha-crm`, `pve-ha-lrm` or
`watchdog-mux`. They manage and use the watchdog, so this can result in an
immediate node reboot or even reset.


[[ha_manager_crs]]
Cluster Resource Scheduling
---------------------------

The scheduler mode controls how HA selects nodes for the recovery of a service
as well as for migrations that are triggered by a shutdown policy. The default
mode is `basic`; you can change it in `datacenter.cfg`:

----
crs: ha=static
----

The change will be in effect starting with the next manager round (after a few
seconds).

For each service that needs to be recovered or migrated, the scheduler
iteratively chooses the best node among the nodes with the highest priority in
the service's group.

NOTE: There are plans to add modes for (static and dynamic) load-balancing in
the future.

Basic
~~~~~

The number of active HA services on each node is used to choose a recovery node.

Static
~~~~~~

IMPORTANT: The static mode is still a technology preview.

Static usage information from HA services on each node is used to choose a
recovery node.

For this selection, each node in turn is considered as if the service was
already running on it, using CPU and memory usage from the associated guest
configuration. Then, for each such alternative, CPU and memory usage of all
nodes are considered, with memory being weighted much more, because it's a truly
limited resource. For both CPU and memory, the highest usage among nodes
(weighted more, as ideally no node should be overcommitted) and the average
usage of all nodes (to still be able to distinguish in case there already is a
more highly committed node) are considered.

IMPORTANT: The more services there are, the more possible combinations exist,
so it's currently not recommended to use this mode if you have thousands of
HA-managed services.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]