[[chapter-ha-manager]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
High Availability
=================
include::attributes.txt[]
endif::manvolnum[]


Our modern society depends heavily on information provided by
computers over the network. Mobile devices have amplified that
dependency, because people can access the network at any time and
from anywhere. If you provide such services, it is very important
that they are available most of the time.

We can mathematically define availability as the ratio of (A) the
total time a service is capable of being used during a given interval
to (B) the length of the interval. It is normally expressed as a
percentage of uptime in a given year.

.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99             |3.65 days
|99.9           |8.76 hours
|99.99          |52.56 minutes
|99.999         |5.26 minutes
|99.9999        |31.5 seconds
|99.99999       |3.15 seconds
|===========================================================
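
The table entries follow directly from this definition. For example,
99.99% availability over a 365-day year allows at most:

 (1 - 0.9999) * 365 days * 24 * 60 minutes = 52.56 minutes of downtime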

There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a
way to detect errors and do failover. This is relatively easy if you
just want to serve read-only web pages. But in general this is
complex, and sometimes impossible because you cannot modify the
software yourself. The following solutions work without modifying the
software:

* Use reliable "server" components

NOTE: Computer components with the same functionality can have
varying reliability numbers, depending on the component quality. Most
vendors sell components with higher reliability as "server"
components - usually at a higher price.

* Eliminate single points of failure (redundant components)

- use an uninterruptible power supply (UPS)
- use redundant power supplies on the main boards
- use ECC-RAM
- use redundant network hardware
- use RAID for local storage
- use distributed, redundant storage for VM data

* Reduce downtime

- rapidly accessible administrators (24/7)
- availability of spare parts (other nodes in a {pve} cluster)
- automatic error detection ('ha-manager')
- automatic failover ('ha-manager')

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the "hardware" dependency. They
also support setting up and using redundant storage and network
devices, so if one host fails, you can simply start those services on
another host within your cluster.

Even better, {pve} provides a software stack called 'ha-manager',
which can do that automatically for you. It is able to automatically
detect errors and handle failover.

{pve} 'ha-manager' works like an "automated" administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. 'ha-manager' then observes correct functionality, and handles
service failover to another node in case of errors. 'ha-manager' can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least doubles the
costs. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those
additional costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. 'ha-manager' has typical error detection and failover
times of about 2 minutes, so you can get no more than 99.999%
availability.

Requirements
------------

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* hardware watchdog - if not available we fall back to the
Linux kernel software watchdog ('softdog')

* optional hardware fencing devices


Resources
---------

We call the primary management unit handled by 'ha-manager' a
resource. A resource (also called "service") is uniquely
identified by a service ID (SID), which consists of the resource type
and a type-specific ID, e.g.: 'vm:100'. That example would be a
resource of type 'vm' (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with 'rgmanager'. In
general, an HA enabled resource should not depend on other resources.
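
For example, to put the VM with ID 100 from above under HA control,
you could add it as a resource on the command line. This is only a
minimal sketch; the exact subcommands and their output depend on the
installed 'ha-manager' version:

----
# add VM 100 as an HA resource, then inspect the overall HA status
ha-manager add vm:100
ha-manager status
----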


How It Works
------------

This section provides a detailed description of the {PVE} HA-manager
internals. It describes how the CRM and the LRM work together.

To provide High Availability, two daemons run on each node:

'pve-ha-lrm'::

The local resource manager (LRM) controls the services running on
the local node.
It reads the requested states for its services from the current manager
status file and executes the respective commands.

'pve-ha-crm'::

The cluster resource manager (CRM) controls the cluster-wide
actions of the services, processes the LRM results, and includes the
state machine which controls the state of each service.

.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active once and working. As
an LRM only executes actions when it holds its lock, we can mark a failed
node as fenced if we can acquire its lock. This lets us then recover the
failed HA services securely, without the failed (but maybe still running)
LRM interfering. All this gets supervised by the CRM, which currently
holds the manager master lock.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager ('pve-ha-lrm') is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster-wide
locks are working.

It can be in three states:

* *wait for agent lock*: the LRM waits for our exclusive lock. This is
also used as the idle state if no service is configured
* *active*: the LRM holds its exclusive lock and has services configured
* *lost agent lock*: the LRM lost its lock; this means a failure happened
and quorum was lost.

After the LRM gets in the active state, it reads the manager status
file in '/etc/pve/ha/manager_status' and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers run in
parallel and are limited to a maximum of 4 by default. This default
setting may be changed through the datacenter configuration key
"max_worker". When finished, the worker process gets collected and its
result saved for the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of 4 maximal concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory wise) services. Ensure that no congestion happens even in
the worst case, and lower the "max_worker" value if needed. On the
contrary, if you have a particularly powerful, high-end setup you may
also want to increase it.

Each command requested by the CRM is uniquely identifiable by a UID.
When the worker finishes, its result will be processed and written to
the LRM status file '/etc/pve/nodes/<nodename>/lrm_status'. There the
CRM may collect it and let its state machine - respectively the
command's output - act on it.

The actions on each service between CRM and LRM are normally always
synced. This means that the CRM requests a state uniquely marked by a
UID; the LRM then executes this action *one time* and writes back the
result, which is also identifiable by the same UID. This is needed so
that the LRM does not execute an outdated command.
The only exceptions are the 'stop' and the 'error' commands;
those two do not depend on the result produced and are executed
always in the case of the stopped state and once in the case of
the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
happens in the cluster, and also why. Here it is important to see what
both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is and
the same command for 'pve-ha-crm' on the node which is the current
master.
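
For example, to follow what the HA stack did for a given service, you
could inspect both logs:

----
# on the node(s) where the service was running
journalctl -u pve-ha-lrm

# on the node which currently holds the CRM master role
journalctl -u pve-ha-crm
----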

Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager ('pve-ha-crm') starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

* *wait for agent lock*: the CRM waits for our exclusive lock. This is
also used as the idle state if no service is configured
* *active*: the CRM holds its exclusive lock and has services configured
* *lost agent lock*: the CRM lost its lock; this means a failure happened
and quorum was lost.

Its main task is to manage the services which are configured to be
highly available and try to always enforce the wanted state. For
example, an enabled service will be started if it is not running; if
it crashes, it will be started again. Thus it dictates the actions
the LRM needs to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out; this happens after 60 seconds.

Configuration
-------------

The HA stack is well integrated into the Proxmox VE API2. So, for
example, HA can be configured via 'ha-manager' or the PVE web
interface, which both provide an easy-to-use tool.

The resource configuration file can be found at
'/etc/pve/ha/resources.cfg', and the group configuration file at
'/etc/pve/ha/groups.cfg'. Use the provided tools to make changes;
there shouldn't be any need to edit them manually.
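
To give a rough idea of the format only, a 'resources.cfg' entry for
the VM 100 from the earlier examples may look like the following
sketch; the available properties depend on the installed version:

----
vm: 100
    state enabled
----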

Node Power Status
-----------------

If a node needs maintenance, you should first migrate and/or relocate
all services which are required to keep running to another node.
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop the LRM while it still has active
services.

Updates
~~~~~~~
When updating the ha-manager, you should do one node after the other,
never all at once. Further, you have to ensure that no service located
on the node is in the error state; a node with an erroneous service
cannot be upgraded, and if you try nonetheless it may even trigger a
node reset! When dealing with erroneous services, first check what
happened to them, then bring them into a secure state, and after that
disable or remove them from HA. Only after that may you start
upgrading a node's LRM and CRM.

Fencing
-------

What Is Fencing
~~~~~~~~~~~~~~~

Fencing ensures that, on a node failure, the failed node is rendered
unable to do any damage, and that no resource runs twice when it gets
recovered from the failed node.

Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default all watchdog modules are blocked for security reasons, as
they are like a loaded gun if not correctly initialized.
If you have a hardware watchdog available, remove its module from the
blacklist and restart the 'watchdog-mux' service.


Groups
------

A group is a collection of cluster nodes which a service may be bound to.

Group Settings
~~~~~~~~~~~~~~

nodes::

list of group node members

restricted::

resources bound to this group may only run on nodes defined by the
group. If no group node member is available, the resource will be
placed in the stopped state.

nofailback::

the resource won't automatically fail back when a more preferred node
(re)joins the cluster.
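
As an illustration only, a group definition in 'groups.cfg' may look
roughly like the following sketch; the group name 'mygroup' and the
node names are made up, and the exact syntax may differ between
versions:

----
group: mygroup
    nodes node1,node2
    restricted 0
    nofailback 0
----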


Recovery Policy
---------------

There are two service recovery policy settings which can be configured
specifically for each resource (see the example after the note below).

max_restart::

maximal number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

maximal number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.

NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.
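
For example, to allow two restarts and two relocation attempts for the
hypothetical resource 'vm:100', both values could be set via
'ha-manager'. This is only a sketch; check that the 'set' subcommand
of your version accepts these options:

----
ha-manager set vm:100 --max_restart 2 --max_relocate 2
----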

Error Recovery
--------------

If, after all tries, the service state could not be recovered, it gets
placed in an error state. In this state, the service won't get touched
by the HA stack anymore. To recover from this state you should follow
these steps (see the example sketch after the list):

* bring the resource back into a safe and consistent state (e.g.:
killing its process)

* disable the HA resource to place it in a stopped state

* fix the error which led to these failures

* *after* you have fixed all errors, you may enable the service again
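
Assuming the failed resource is the VM 100 from the earlier examples,
such a recovery could look roughly like the following sketch; adapt
the commands to your resource type and setup:

----
qm stop 100                  # bring the VM into a safe, consistent state
ha-manager disable vm:100    # place the HA resource in the stopped state
# ... find and fix the error which led to the failures ...
ha-manager enable vm:100     # re-enable the service only after the fix
----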


Service Operations
------------------

This is how the basic user-initiated service operations (via
'ha-manager') work (see the command examples after the list).

enable::

the service will be started by the LRM if not already running.

disable::

the service will be stopped by the LRM if running.

migrate/relocate::

the service will be relocated (live) to another node.

remove::

the service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

start and stop commands can be issued to the resource-specific tools
(like 'qm' or 'pct'); they will forward the request to the
'ha-manager', which will then execute the action and set the resulting
service state (enabled, disabled).
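
On the command line, these operations map to 'ha-manager' subcommands.
The following sketch assumes the resource 'vm:100' and a target node
named 'node2' (both placeholders); the exact syntax may differ between
versions:

----
ha-manager enable vm:100           # request the service to be started
ha-manager disable vm:100          # request the service to be stopped
ha-manager migrate vm:100 node2    # live-migrate the service to node2
ha-manager relocate vm:100 node2   # stop, move and restart it on node2
ha-manager remove vm:100           # remove it from HA management
----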


Service States
--------------

stopped::

Service is stopped (confirmed by LRM).

request_stop::

Service should be stopped. Waiting for confirmation from LRM.

started::

Service is active, and the LRM should start it ASAP if not already
running.

fence::

Wait for node fencing (service node is not inside the quorate cluster
partition).

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate service (live) to another node.

error::

Service disabled because of LRM errors. Needs manual intervention.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]