[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
High Availability
=================
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices amplified that dependency,
because people can access the network at any time from anywhere. If
you provide such services, it is very important that they are
available most of the time.

We can mathematically define availability as the ratio of (A) the
total time a service is capable of being used during a given interval
to (B) the length of the interval. It is normally expressed as a
percentage of uptime in a given year.

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99 |3.65 days
|99.9 |8.76 hours
|99.99 |52.56 minutes
|99.999 |5.26 minutes
|99.9999 |31.5 seconds
|99.99999 |3.15 seconds
|===========================================================
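
For example, a year has 365 * 24 * 60 = 525,600 minutes, so 99.99%
availability permits (1 - 0.9999) * 525,600 = 52.56 minutes of
downtime per year - the value shown in the table above.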

There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a
way to detect errors and do failover. This is relatively easy if you
just want to serve read-only web pages. But in general this is
complex, and sometimes impossible, because you cannot modify the
software yourself. The following solutions work without modifying the
software:

* Use reliable ``server'' components
+
NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies on the main boards
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also make it easy to set up and use redundant storage and network
devices, so if one host fails, you can simply start those services on
another host within your cluster.

Even better, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to automatically
detect errors and handle failover.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure which resources (VMs, containers, ...) it should
manage. `ha-manager` then observes correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least doubles the
costs. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those
additional costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.


Requirements
------------

You must meet the following requirements before you start with HA:

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* use reliable ``server'' components

* hardware watchdog - if not available we fall back to the
Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices
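
You can check at any time whether the cluster currently has a reliable
quorum using the standard cluster tooling, for example (output
shortened):

----
# pvecm status
...
Quorate:          Yes
...
----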


[[ha_manager_resources]]
Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type specific ID, e.g.: `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, a HA managed resource should not depend on other resources.


Management Tasks
----------------

This section provides a short overview of common management tasks. The
first step is to enable HA for a resource. This is done by adding the
resource to the HA resource configuration. You can do this using the
GUI, or simply use the command line tool, for example:

----
# ha-manager add vm:100
----

The HA stack now tries to start the resource and keep it
running. Please note that you can configure the ``requested''
resource state. For example, you may want the HA stack to stop the
resource:

----
# ha-manager set vm:100 --state stopped
----

and start it again later:

----
# ha-manager set vm:100 --state started
----

You can also use the normal VM and container management commands. They
automatically forward the commands to the HA stack, so

----
# qm start 100
----

simply sets the requested state to `started`. The same applies to `qm
stop`, which sets the requested state to `stopped`.

NOTE: The HA stack works fully asynchronously and needs to communicate
with other cluster members. So it takes some seconds until you see
the result of such actions.

To view the current HA resource configuration use:

----
# ha-manager config
vm:100
    state stopped
----

And you can view the actual HA manager and resource state with:

----
# ha-manager status
quorum OK
master node1 (active, Wed Nov 23 11:07:23 2016)
lrm elsa (active, Wed Nov 23 11:07:19 2016)
service vm:100 (node1, started)
----

You can also initiate resource migration to other nodes:

----
# ha-manager migrate vm:100 node2
----

This uses online migration and tries to keep the VM running. Online
migration needs to transfer all used memory over the network, so it is
sometimes faster to stop the VM, then restart it on the new node. This
can be done using the `relocate` command:

----
# ha-manager relocate vm:100 node2
----

Finally, you can remove the resource from the HA configuration using
the following command:

----
# ha-manager remove vm:100
----

NOTE: This does not start or stop the resource.

All HA related tasks can also be done in the GUI, so there is no need
to use the command line at all.


How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes all involved daemons and how they work
together. To provide HA, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM), which controls the services running on
the local node. It reads the requested states for its services from
the current manager status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM), which makes the cluster wide
decisions. It sends commands to the LRM, processes the results,
and moves resources to other nodes if something fails. The CRM also
handles node fencing.


.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active once and working. As an
LRM only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This lets us then recover any failed
HA services securely without any interference from the now unknown failed node.
This all gets supervised by the CRM, which currently holds the manager master
lock.


Service States
~~~~~~~~~~~~~~

The CRM uses a service state enumeration to record the current service
state. We display this state on the GUI and you can query it using
the `ha-manager` command line tool:

----
# ha-manager status
quorum OK
master elsa (active, Mon Nov 21 07:23:29 2016)
lrm elsa (active, Mon Nov 21 07:23:22 2016)
service ct:100 (elsa, stopped)
service ct:102 (elsa, started)
service vm:501 (elsa, started)
----

Here is the list of possible states:

stopped::

Service is stopped (confirmed by the LRM). If the LRM detects a stopped
service is still running, it will stop it again.

request_stop::

Service should be stopped. The CRM waits for confirmation from the
LRM.

stopping::

Pending stop request. But the CRM did not get the request so far.

started::

Service is active, and the LRM should start it ASAP if not already
running. If the service fails and is detected to be not running, the
LRM restarts it
(see xref:ha_manager_start_failure_policy[Start Failure Policy]).

starting::

Pending start request. But the CRM has not got any confirmation from the
LRM that the service is running.

fence::

Wait for node fencing (the service node is not inside the quorate cluster
partition). As soon as the node gets fenced successfully, the service will
be recovered to another node, if possible
(see xref:ha_manager_fencing[Fencing]).

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon
(see xref:ha_manager_package_updates[Package Updates]).

ignored::

Act as if the service were not managed by HA at all.
Useful when full control over the service is temporarily desired,
without removing it from the HA configuration.

migrate::

Migrate the service (live) to another node.

error::

Service is disabled because of LRM errors. Needs manual intervention
(see xref:ha_manager_error_recovery[Error Recovery]).

queued::

Service is newly added, and the CRM has not seen it so far.

disabled::

Service is stopped and marked as `disabled`.
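
For example, to temporarily take full manual control of a resource
without removing it from the HA configuration, you can request the
`ignored` state (assuming your `ha-manager` version supports it):

----
# ha-manager set vm:100 --state ignored
----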


Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as idle state if no
service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock; this means a failure happened and quorum was lost.

After the LRM gets in the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers are running in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved for
the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory wise) services. Ensure that no congestion happens even in
the worst case, and lower the `max_worker` value if needed. On the
contrary, if you have a particularly powerful, high end setup you may
also want to increase it.

Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
it and let its state machine - respectively the command's output - act on it.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions are the `stop` and the `error` command;
those two do not depend on the result produced and are executed
always in the case of the stopped state and once in the case of
the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
and also why something happens in the cluster. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is and
the same command for the `pve-ha-crm` on the node which is the current master.
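
A minimal log inspection session could look like this (the time window
is just an example):

----
# journalctl -u pve-ha-lrm --since "1 hour ago"
# journalctl -u pve-ha-crm --since "1 hour ago"
----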

Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as idle state if no
service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock; this means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and try to always enforce the requested state. For example, a
service with the requested state 'started' will be started if it's not
already running. If it crashes, it will be automatically started again.
Thus the CRM dictates the actions the LRM needs to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out; this happens after 60 seconds.


Configuration
-------------

The HA stack is well integrated into the {pve} API. So, for example,
HA can be configured via the `ha-manager` command line interface, or
the {pve} web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.

All HA configuration files are within `/etc/pve/ha/`, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.
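
For example, an automation tool could read the current HA resource
list through the API with `pvesh` (a sketch; see the API viewer for
the exact paths and parameters):

----
# pvesh get /cluster/ha/resources
----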


[[ha_manager_resource_config]]
Resources
~~~~~~~~~

[thumbnail="screenshot/gui-ha-manager-status.png"]

The resource configuration file `/etc/pve/ha/resources.cfg` stores
the list of resources managed by `ha-manager`. A resource configuration
inside that list looks like this:

----
<type>: <name>
    <property> <value>
    ...
----

It starts with a resource type, followed by a resource specific name,
separated with a colon. Together this forms the HA resource ID, which is
used by all `ha-manager` commands to uniquely identify a resource
(example: `vm:100` or `ct:101`). The next lines contain additional
properties:

include::ha-resources-opts.adoc[]

Here is a real world example with one VM and one container. As you see,
the syntax of those files is really simple, so it is even possible to
read or edit those files using your favorite editor:

.Configuration Example (`/etc/pve/ha/resources.cfg`)
----
vm: 501
    state started
    max_relocate 2

ct: 102
# Note: use default settings for everything
----

[thumbnail="screenshot/gui-ha-manager-add-resource.png"]

The above config was generated using the `ha-manager` command line tool:

----
# ha-manager add vm:501 --state started --max_relocate 2
# ha-manager add ct:102
----


[[ha_manager_groups]]
Groups
~~~~~~

[thumbnail="screenshot/gui-ha-manager-groups-view.png"]

The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:

----
group: <group>
    nodes <node_list>
    <property> <value>
    ...
----

include::ha-groups-opts.adoc[]

[thumbnail="screenshot/gui-ha-manager-add-group.png"]

A common requirement is that a resource should run on a specific
node. Usually the resource is able to run on other nodes, so you can define
an unrestricted group with a single member:

----
# ha-manager groupadd prefer_node1 --nodes node1
----

For bigger clusters, it makes sense to define a more detailed failover
behavior. For example, you may want to run a set of services on
`node1` if possible. If `node1` is not available, you want to run them
equally split on `node2` and `node3`. If those nodes also fail, the
services should run on `node4`. To achieve this you could set the node
list to:

----
# ha-manager groupadd mygroup1 -nodes "node1:2,node2:1,node3:1,node4"
----

Another use case is if a resource uses other resources only available
on specific nodes, let's say `node1` and `node2`. We need to make sure
that the HA manager does not use other nodes, so we need to create a
restricted group with said nodes:

----
# ha-manager groupadd mygroup2 -nodes "node1,node2" -restricted
----

The above commands create the following group configuration file:

.Configuration Example (`/etc/pve/ha/groups.cfg`)
----
group: prefer_node1
    nodes node1

group: mygroup1
    nodes node2:1,node4,node1:2,node3:1

group: mygroup2
    nodes node2,node1
    restricted 1
----


The `nofailback` option is mostly useful to avoid unwanted resource
movements during administration tasks. For example, if you need to
migrate a service to a node which doesn't have the highest priority in
the group, you need to tell the HA manager to not move this service
instantly back by setting the `nofailback` option.

Another scenario is when a service was fenced and it got recovered to
another node. The admin tries to repair the fenced node and brings it
up online again to investigate the failure cause and check if it runs
stable again. Setting the `nofailback` flag prevents the recovered
services from moving straight back to the fenced node.
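
The flag can be set when creating (or editing) a group, for example
(hypothetical group name):

----
# ha-manager groupadd prefer_node1_nofb -nodes node1 -nofailback
----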


[[ha_manager_fencing]]
Fencing
-------

On node failures, fencing ensures that the erroneous node is
guaranteed to be offline. This is required to make sure that no
resource runs twice when it gets recovered on another node. This is a
really important task, because without it, it would not be possible to
recover a resource on another node.

If a node did not get fenced, it would be in an unknown state where
it may still have access to shared resources. This is really
dangerous! Imagine that every network but the storage one broke. Now,
while not reachable from the public network, the VM still runs and
writes to the shared storage.

If we then simply start up this VM on another node, we would get a
dangerous race condition, because we write from both nodes. Such a
condition can destroy all VM data, and the whole VM could be rendered
unusable. The recovery could also fail if the storage protects against
multiple mounts.


How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example, fence
devices which cut off the power from the node or disable their
communication completely. Those are often quite expensive and bring
additional critical components into a system, because if they fail you
cannot recover any service.

We thus wanted to integrate a simpler fencing method, which does not
require additional external hardware. This can be done using
watchdog timers.

.Possible Fencing Methods
- external power switches
- isolate nodes by disabling complete network traffic on the switch
- self fencing using watchdog timers

Watchdog timers have been widely used in critical and dependable systems
since the beginning of microcontrollers. They are often independent
and simple integrated circuits which are used to detect and recover
from computer malfunctions.

During normal operation, `ha-manager` regularly resets the watchdog
timer to prevent it from elapsing. If, due to a hardware fault or
program error, the computer fails to reset the watchdog, the timer
will elapse and trigger a reset of the whole server (reboot).

Recent server motherboards often include such hardware watchdogs, but
these need to be configured. If no watchdog is available or
configured, we fall back to the Linux Kernel 'softdog'. While still
reliable, it is not independent of the server's hardware, and thus has
a lower reliability than a hardware watchdog.


Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, all hardware watchdog modules are blocked for security
reasons. They are like a loaded gun if not correctly initialized. To
enable a hardware watchdog, you need to specify the module to load in
'/etc/default/pve-ha-manager', for example:

----
# select watchdog module (default is softdog)
WATCHDOG_MODULE=iTCO_wdt
----

This configuration is read by the 'watchdog-mux' service, which loads
the specified module at startup.
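
After the next reboot, you can verify that the configured module was
actually picked up, for example (module name as in the example above):

----
# lsmod | grep iTCO_wdt
----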


Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node failed and its fencing was successful, the CRM tries to
move services from the failed node to nodes which are still online.

The selection of nodes, on which those services get recovered, is
influenced by the resource `group` settings, the list of currently active
nodes, and their respective active service count.

The CRM first builds a set out of the intersection between user selected
nodes (from the `group` setting) and available nodes. It then chooses the
subset of nodes with the highest priority, and finally selects the node
with the lowest active service count. This minimizes the possibility
of an overloaded node.

CAUTION: On node failure, the CRM distributes services to the
remaining nodes. This increases the service count on those nodes, and
can lead to high load, especially on small clusters. Please design
your cluster so that it can handle such worst case scenarios.


[[ha_manager_start_failure_policy]]
Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to start on a
node one or more times. It can be used to configure how often a restart
should be triggered on the same node and how often a service should be
relocated, so that it has an attempt to be started on another node.
The aim of this policy is to circumvent temporary unavailability of shared
resources on a specific node. For example, if a shared storage isn't available
on a quorate node anymore, e.g. because of network problems, but still is on
other nodes, the relocate policy allows the service to get started nonetheless.

There are two service start recovery policy settings which can be configured
specifically for each resource.

max_restart::

Maximum number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-started without fixing the error, only the restart policy gets
repeated.
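
Both settings can be adjusted like any other resource property, for
example (the values are arbitrary):

----
# ha-manager set vm:100 --max_restart 2 --max_relocate 3
----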


[[ha_manager_error_recovery]]
Error Recovery
--------------

If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state, the service won't get touched
by the HA stack anymore. The only way out is disabling a service:

----
# ha-manager set vm:100 --state disabled
----

This can also be done in the web interface.

To recover from the error state you should do the following:

* bring the resource back into a safe and consistent state (e.g.:
kill its process if the service could not be stopped)

* disable the resource to remove the error flag

* fix the error which led to these failures

* *after* you fixed all errors you may request that the service starts again
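
A minimal recovery session for the example resource `vm:100` could
therefore look like this:

----
# ha-manager set vm:100 --state disabled
(fix the root cause, e.g. repair the underlying storage)
# ha-manager set vm:100 --state started
----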


[[ha_manager_package_updates]]
Package Updates
---------------

When updating the ha-manager, you should do one node after the other, never
all at once, for various reasons. First, while we test our software
thoroughly, a bug affecting your specific setup cannot totally be ruled out.
Upgrading one node after the other and checking the functionality of each node
after finishing the update helps to recover from eventual problems, while
updating all at once could leave you with a broken cluster state and is
generally not good practice.

Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For restarting,
the LRM makes a request to the CRM to freeze all its services. This prevents
them from being touched by the cluster during the short time the LRM is
restarting. After that, the LRM may safely close the watchdog during a
restart. Such a restart happens normally during a package update and, as
already stated, an active master CRM is needed to acknowledge the requests
from the LRM. If this is not the case, the update process can take too long
which, in the worst case, may result in a reset triggered by the watchdog.


Node Maintenance
----------------

It is sometimes necessary to shutdown or reboot a node to do
maintenance tasks, either to replace hardware or simply to install a
new kernel image.


Shutdown
~~~~~~~~

A shutdown ('poweroff') is usually done if the node is planned to stay
down for some time. The LRM stops all managed services in that
case. This means that other nodes will take over those services
afterwards.

NOTE: Recent hardware has large amounts of RAM. So we stop all
resources, then restart them to avoid online migration of all that
RAM. If you want to use online migration, you need to invoke that
manually before you shutdown the node.
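
For example, you can use the `migrate` command shown earlier to move a
service away while the node is still up:

----
# ha-manager migrate vm:100 node2
----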


Reboot
~~~~~~

Node reboots are initiated with the 'reboot' command. This is usually
done after installing a new kernel. Please note that this is different
from ``shutdown'', because the node immediately starts again.

The LRM tells the CRM that it wants to restart, and waits until the
CRM puts all resources into the `freeze` state (the same mechanism is used
for xref:ha_manager_package_updates[Package Updates]). This prevents
those resources from being moved to other nodes. Instead, the CRM starts
the resources after the reboot on the same node.


Manual Resource Movement
~~~~~~~~~~~~~~~~~~~~~~~~

Last but not least, you can also move resources manually to other
nodes before you shutdown or restart a node. The advantage is that you
have full control, and you can decide if you want to use online
migration or not.

NOTE: Please do not 'kill' services like `pve-ha-crm`, `pve-ha-lrm` or
`watchdog-mux`. They manage and use the watchdog, so this can result
in a node reboot.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]