[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
High Availability
=================
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices have amplified that
dependency, because people can access the network any time from
anywhere. If you provide such services, it is very important that they
are available most of the time.

We can mathematically define availability as the ratio of (A) the
total time a service is capable of being used during a given interval
to (B) the length of the interval. It is normally expressed as a
percentage of uptime in a given year.

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99             |3.65 days
|99.9           |8.76 hours
|99.99          |52.56 minutes
|99.999         |5.26 minutes
|99.9999        |31.5 seconds
|99.99999       |3.15 seconds
|===========================================================

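These numbers follow directly from the definition above. A minimal
sketch of the computation, assuming the 365-day year used in the table:

----
# downtime per year for a given availability percentage (365-day year)
availability=99.99
awk -v a="$availability" 'BEGIN { printf "%.2f minutes\n", (100 - a)/100 * 365*24*60 }'
# prints: 52.56 minutes
----
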
There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a way
to detect errors and do failover. This is relatively easy if you just
want to serve read-only web pages. But in general this is complex, and
sometimes impossible because you cannot modify the software
yourself. The following solutions work without modifying the
software:

* Use reliable ``server'' components
+
NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies on the main boards
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also support setting up and using redundant storage and network
devices. So if one host fails, you can simply start those services on
another host within your cluster.

Even better, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to detect errors
and perform automatic failover.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. `ha-manager` then observes correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least duplicates the
costs. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those
additional costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.


Requirements
------------

You must meet the following requirements before you start with HA:

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* use reliable ``server'' components

* hardware watchdog - if not available we fall back to the
Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices


[[ha_manager_resources]]
Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type-specific ID, e.g.: `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as it was done with `rgmanager`. In
general, a HA enabled resource should not depend on other resources.

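For example, the following command would put the VM with ID 100 under
HA management (shown as an illustration; the files behind it are
described in the Configuration section below):

[source,bash]
ha-manager add vm:100
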
How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes how the CRM and the LRM work together.

To provide high availability, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM), which controls the services running on
the local node. It reads the requested states for its services from
the current manager status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM), which makes the cluster wide
decisions. It sends commands to the LRM, processes the results,
and moves resources to other nodes if something fails. The CRM also
handles node fencing.

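Both daemons run as ordinary system services, so you can verify that
they are active with standard tools, for example:

[source,bash]
systemctl status pve-ha-lrm pve-ha-crm
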
.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active exactly once and working.
As an LRM only executes actions when it holds its lock, we can mark a failed
node as fenced if we can acquire its lock. This lets us then recover any
failed HA services securely, without any interference from the now unknown
failed node. This is all supervised by the CRM, which currently holds the
manager master lock.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as idle state if no
service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock. This means a failure happened and quorum was lost.

After the LRM gets into the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory wise) services. Ensure that no congestion happens even in the
worst case, and lower the `max_worker` value if needed. On the contrary,
if you have a particularly powerful, high end setup you may also want to
increase it.

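As a sketch, lowering the limit could look like this, assuming the key
named above is accepted with the usual `key: value` syntax of
`/etc/pve/datacenter.cfg`:

----
# /etc/pve/datacenter.cfg (excerpt)
# allow at most 2 concurrent LRM workers, e.g. for a slow migration network
max_worker: 2
----
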
Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may
collect it and let its state machine act on the command's output.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, which is
also identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions are the `stop` and the `error` commands; these two do
not depend on the result produced and are executed always in the case of
the stopped state, and once in the case of the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
and also why something happens in the cluster. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is and
the same command for `pve-ha-crm` on the node which is the current master.

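For example, to follow both sides of an action:

----
# on the node where the service is (or was) running
journalctl -u pve-ha-lrm
# on the node which currently holds the CRM master role
journalctl -u pve-ha-crm
----
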
Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as idle state if no
service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock. This means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and to always enforce the requested state. For example, an
enabled service will be started if it's not running; if it crashes, it will
be started again. Thus the CRM dictates the actions the LRM needs to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out; this happens after 60 seconds.


Configuration
-------------

The HA stack is well integrated into the {pve} API. So, for example,
HA can be configured via the `ha-manager` command line interface, or
the {pve} web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.

All HA configuration files are within `/etc/pve/ha/`, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.


Resources
~~~~~~~~~

The resource configuration file `/etc/pve/ha/resources.cfg` stores
the list of resources managed by `ha-manager`. A resource configuration
inside that list looks like this:

----
<type>: <name>
        <property> <value>
        ...
----

It starts with a resource type followed by a resource specific name,
separated with a colon. Together this forms the HA resource ID, which is
used by all `ha-manager` commands to uniquely identify a resource
(example: `vm:100` or `ct:101`). The next lines contain additional
properties:

include::ha-resources-opts.adoc[]

Here is a real world example with one VM and one container. As you see,
the syntax of those files is really simple, so it is even possible to
read or edit those files using your favorite editor:

.Configuration Example (`/etc/pve/ha/resources.cfg`)
----
vm: 501
    state started
    max_relocate 2

ct: 102
# use default settings for everything
----

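The same resources could also be created through the command line
interface instead of editing the file; a minimal sketch, assuming
`ha-manager add` accepts the listed properties as options:

----
ha-manager add vm:501 -state started -max_relocate 2
ha-manager add ct:102
----
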

Groups
~~~~~~

The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:

----
group: <group>
        nodes <node_list>
        <property> <value>
        ...
----

include::ha-groups-opts.adoc[]

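A concrete definition, mirroring the `groupset` example from the Group
Settings section below (the group and node names are illustrative),
could look like this:

.Configuration Example (`/etc/pve/ha/groups.cfg`)
----
group: mygroup
        nodes node1:2,node2:1,node3:1,node4
----
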
Node Power Status
-----------------

If a node needs maintenance you should first migrate and/or relocate
all services which need to keep running to another node.
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop the LRM while it still has active services.

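Once no HA services are left on the node, stopping the two daemons is
ordinary service management, for example:

----
# only after all needed services were migrated away from this node
systemctl stop pve-ha-lrm
systemctl stop pve-ha-crm
----
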
Package Updates
---------------

When updating the ha-manager you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Upgrading one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all at once could leave you with a broken cluster state and is
generally not good practice.

Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from being touched by the cluster during the short time the LRM is
restarting. After that the LRM may safely close the watchdog during a restart.
Such a restart happens on an update and, as already stated, an active master
CRM is needed to acknowledge the requests from the LRM. If this is not the
case, the update process can take too long which, in the worst case, may
result in a watchdog reset.


[[ha_manager_fencing]]
Fencing
-------

What is Fencing
~~~~~~~~~~~~~~~

Fencing ensures that on a node failure the failed node is rendered
unable to do any damage, and that no resource runs twice when it gets
recovered from the failed node. This is a really important task, and one
of the base principles to make a system highly available.

If a node were not fenced, it would be in an unknown state where it may
still have access to shared resources; this is really dangerous!
Imagine that every network but the storage one broke. Now, while not
reachable from the public network, the VM still runs and writes to the
shared storage. If we did not fence the node and just started up this VM
on another node, we would get dangerous race conditions and atomicity
violations; the whole VM could be rendered unusable. The recovery could
also simply fail if the storage protects against multiple mounts, thus
defeating the purpose of HA.

How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example fence devices
which cut off the power from the node or disable its communication
completely.

Those are often quite expensive and bring additional critical components
into a system, because if they fail you cannot recover any service.

We thus wanted to integrate a simpler method into the HA Manager first,
namely self fencing with watchdogs.

Watchdogs have been widely used in critical and dependable systems since
the beginning of microcontrollers. They are often simple, independent
integrated circuits which programs can use to watch over them. After
opening the watchdog, a program must report to it periodically. If, for
whatever reason, it becomes unable to do so, the watchdog triggers a
reset of the whole server.

Server motherboards often already include such hardware watchdogs, but
these need to be configured. If no watchdog is available or configured,
we fall back to the Linux kernel softdog. While still reliable, it is not
independent of the server's hardware and thus has a lower reliability
than a hardware watchdog.

Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default all watchdog modules are blocked for security reasons, as they
are like a loaded gun if not correctly initialized.
If you have a hardware watchdog available, remove its kernel module from
the blacklist, load it with insmod, and restart the `watchdog-mux` service
or reboot the node.

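As a sketch for a board with an Intel TCO watchdog - the module name and
blacklist location are examples and depend on your hardware and setup:

----
# remove or comment out the module entry in the relevant
# modprobe.d blacklist file, then load the module (the text above
# mentions insmod; modprobe also resolves the module path)
modprobe iTCO_wdt
# restart the watchdog multiplexer so it picks up the new device
systemctl restart watchdog-mux
----
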
Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node failed and its fencing was successful, we start to recover
services to other available nodes and restart them there so that they
can provide service again.

The selection of the node on which the services get recovered is influenced
by the user's group settings, the currently active nodes, and their
respective active service count.
First we build a set out of the intersection between user-selected nodes
and available nodes. Then the subset of those nodes with the highest
priority gets chosen as possible nodes for recovery. We select the node
with the currently lowest active service count as the new node for the
service. That minimizes the possibility of an overload, which otherwise
could cause an unresponsive node and, as a result, a chain reaction of
node failures in the cluster.

[[ha_manager_groups]]
Groups
------

A group is a collection of cluster nodes which a service may be bound to.

Group Settings
~~~~~~~~~~~~~~

nodes::

List of group node members where a priority can be given to each node.
A service bound to this group will run on the nodes with the highest priority
available. If more nodes are in the highest priority class, the services will
get distributed to those nodes if not already there. The priorities have a
relative meaning only.
Example;;
You want to run all services from a group on `node1` if possible. If this node
is not available, you want them to run equally split between `node2` and
`node3`, and if those fail it should use `node4`.
To achieve this you could set the node list to:
[source,bash]
ha-manager groupset mygroup -nodes "node1:2,node2:1,node3:1,node4"

restricted::

Resources bound to this group may only run on nodes defined by the
group. If no group node member is available, the resource will be
placed in the stopped state.
Example;;
Let's say a service uses resources only available on `node1` and `node2`,
so we need to make sure that the HA manager does not use other nodes.
We need to create a 'restricted' group with said nodes:
[source,bash]
ha-manager groupset mygroup -nodes "node1,node2" -restricted

nofailback::

The resource won't automatically fail back when a more preferred node
(re)joins the cluster.
Examples;;
* You need to migrate a service to a node which currently doesn't have the
highest priority in the group. To tell the HA manager not to move this
service back instantly, set the 'nofailback' option and the service will
stay on the current node.

* A service was fenced and it got recovered to another node. The admin
repaired the node and brought it online again, but does not want the
recovered services to move straight back to the repaired node, as they
want to first investigate the failure cause and check that it runs stably.
They can use the 'nofailback' option to achieve this.


Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to start on
a node one or more times. It can be used to configure how often a restart
should be triggered on the same node and how often a service should be
relocated, so that it has a chance to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't
available on a quorate node anymore (e.g. due to network problems), but
still is on other nodes, the relocate policy allows the service to get
started nonetheless.

There are two service start recover policy settings which can be configured
specifically for each resource.

max_restart::

Maximum number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.

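Both settings can be written to the resource configuration shown earlier,
or set on the command line; a sketch, assuming `ha-manager set` accepts
them as options:

[source,bash]
ha-manager set vm:501 -max_restart 2 -max_relocate 2
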
Error Recovery
--------------

If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state the service won't get touched
by the HA stack anymore. To recover from this state you should follow
these steps:

* bring the resource back into a safe and consistent state (e.g.,
killing its process)

* disable the HA resource to place it in a stopped state

* fix the error which led to these failures

* *after* you fixed all errors you may enable the service again (see the
example after this list)

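The disable and enable steps correspond to `ha-manager` commands; for
example, with `vm:100` as a placeholder SID:

----
# place the failed service in the stopped state
ha-manager disable vm:100
# ...fix the underlying problem...
# re-enable the service only after all errors are fixed
ha-manager enable vm:100
----
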
[[ha_manager_service_operations]]
Service Operations
------------------

This is how the basic user-initiated service operations (via
`ha-manager`) work.

enable::

The service will be started by the LRM if not already running.

disable::

The service will be stopped by the LRM if running.

migrate/relocate::

The service will be relocated (live) to another node.

remove::

The service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

`start` and `stop` commands can be issued to the resource specific tools
(like `qm` or `pct`); they will forward the request to the
`ha-manager`, which then will execute the action and set the resulting
service state (enabled, disabled).

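For example, moving a service to another node from the command line (the
target node name is illustrative):

----
# request a move of the service to another node
ha-manager migrate vm:100 node2
# relocate does the same via stop and restart instead of a live migration
ha-manager relocate vm:100 node2
----
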
Service States
--------------

stopped::

Service is stopped (confirmed by the LRM). If detected running, it will
get stopped again.

request_stop::

Service should be stopped. Waiting for confirmation from the LRM.

started::

Service is active and the LRM should start it ASAP if not already running.
If the service fails and is detected to be not running, the LRM restarts it.

fence::

Wait for node fencing (service node is not inside the quorate cluster
partition).
As soon as the node gets fenced successfully, the service will be
recovered to another node, if possible.

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate the service (live) to another node.

error::

Service disabled because of LRM errors. Needs manual intervention.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]