[[chapter-ha-manager]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
High Availability
=================
include::attributes.txt[]
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices amplified that dependency,
because people can access the network any time from anywhere. If you
provide such services, it is very important that they are available
most of the time.

We can mathematically define the availability as the ratio of (A) the
total time a service is capable of being used during a given interval
to (B) the length of the interval. It is normally expressed as a
percentage of uptime in a given year.

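For example, at 99.99% availability the permitted downtime per year
amounts to 0.0001 * 365 * 24 * 60 = 52.56 minutes, which matches the
corresponding row of the table below.
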
.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99             |3.65 days
|99.9           |8.76 hours
|99.99          |52.56 minutes
|99.999         |5.26 minutes
|99.9999        |31.5 seconds
|99.99999       |3.15 seconds
|===========================================================

There are several ways to increase availability:

* Eliminate single points of failure (redundant components)

- use an uninterruptible power supply (UPS)
- use redundant power supplies on the main boards
- use ECC-RAM
- use redundant network hardware
- use distributed, redundant storage

* Reduce downtime

- automatic error detection
- automatic failover

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the "hardware" dependency. It
is also easy to set up and use redundant storage and network devices,
so if one host fails, you can simply start those services on another
host within your cluster. Even better, 'ha-manager' is able to
automatically detect errors and perform automatic failover.

'ha-manager' handles management of user-defined cluster services. This
includes handling of user requests which may start, stop, relocate, or
migrate a service.
The cluster resource manager daemon also handles restarting and
relocating services to another node in the event of failures.

A service (also called resource) is uniquely identified by a service
ID (SID), which consists of the service type and a type specific ID,
e.g.: 'vm:100'. That example would be a service of type 'vm' (virtual
machine) with the VMID 100.
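
As a quick illustration, such a service can be put under HA control on
the command line by referencing its SID (this assumes a virtual
machine with VMID 100 already exists):

----
# add the VM as an HA managed resource
ha-manager add vm:100

# list the current HA status of all managed services
ha-manager status
----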

Requirements
------------

* at least three nodes

* shared storage

* hardware redundancy

* hardware watchdog - if not available we fall back to the
  Linux kernel softdog

How It Works
------------

This section provides a detailed description of the {pve} HA manager
internals. It describes how the CRM and the LRM work together.

To provide High Availability two daemons run on each node:

'pve-ha-lrm'::

The local resource manager (LRM), which controls the services running
on the local node. It reads the requested states for its services
from the current manager status file and executes the respective
commands.

'pve-ha-crm'::

The cluster resource manager (CRM), which controls the cluster wide
actions of the services, processes the LRM results, and includes the
state machine which controls the state of each service.
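
Both daemons run as regular system services, so a quick way to verify
that they are up on a node is to query their systemd units (assuming a
standard {pve} installation):

----
systemctl status pve-ha-lrm
systemctl status pve-ha-crm
----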

.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system
(pmxcfs). They are used to guarantee that each LRM is active and
working. As an LRM only executes actions when it holds its lock, we
can mark a failed node as fenced if we can acquire its lock. This
lets us then recover any failed HA services securely without the
failed (but maybe still running) LRM interfering. This all gets
supervised by the CRM, which currently holds the manager master lock.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager ('pve-ha-lrm') is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster wide
locks are working.

It can be in three states:

* *wait for agent lock*: the LRM waits for our exclusive lock. This is
  also used as idle state if no service is configured
* *active*: the LRM holds its exclusive lock and has services configured
* *lost agent lock*: the LRM lost its lock; this means a failure
  happened and quorum was lost.

After the LRM gets in the active state it reads the manager status
file in '/etc/pve/ha/manager_status' and determines the commands it
has to execute for the services it owns. For each command a worker
gets started; these workers run in parallel and are limited to a
maximum of 4 by default. This default setting may be changed through
the datacenter configuration key "max_worker".

.Maximal Concurrent Worker Adjustment Tips
[NOTE]
The default value of 4 maximal concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the
same time, which can lead to network congestion with slower networks
and/or big (memory wise) services. Ensure that no congestion happens
even in the worst case, and lower the "max_worker" value if needed.
On the contrary, if you have a particularly powerful, high end setup
you may also want to increase it.
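
For illustration, such a setting would live in the datacenter wide
configuration file, which uses a plain 'key: value' format (treat the
exact key name as an assumption and verify it for your {pve} version):

----
# /etc/pve/datacenter.cfg
# allow up to 8 concurrent LRM workers
max_worker: 8
----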

Each command requested by the CRM is uniquely identifiable by a UID.
When the worker finishes, its result will be processed and written in
the LRM status file '/etc/pve/nodes/<nodename>/lrm_status'. There the
CRM may collect it and let its state machine act on the command's
output.

The actions on each service between CRM and LRM are normally always
synced. This means that the CRM requests a state uniquely marked by a
UID; the LRM then executes the action *one time* and writes back the
result, which is also identifiable by the same UID. This is needed so
that the LRM does not execute an outdated command. The only
exceptions are the 'stop' and 'error' commands; these two do not
depend on the result produced, and are executed always in the case of
the stopped state and once in the case of the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand
what happens in the cluster, and also why. Here it is important to
see what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is and
the same command for 'pve-ha-crm' on the node which is the current
master.

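For example, to inspect the recent HA activity ('--since' is a
standard 'journalctl' option):

----
# LRM log on the node running the service
journalctl -u pve-ha-lrm --since "1 hour ago"

# CRM log on the node which is the current master
journalctl -u pve-ha-crm --since "1 hour ago"
----
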
Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager ('pve-ha-crm') starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

* *wait for agent lock*: the CRM waits for our exclusive lock. This is
  also used as idle state if no service is configured
* *active*: the CRM holds its exclusive lock and has services configured
* *lost agent lock*: the CRM lost its lock; this means a failure
  happened and quorum was lost.

Its main task is to manage the services which are configured to be
highly available and to always try to bring them into the wanted
state. For example, an enabled service will be started if it is not
running; if it crashes, it will be started again. Thus it dictates to
the LRM which actions to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the
services will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is
no quorum the node cannot reset the watchdog. This will trigger a
reboot after 60 seconds.

Configuration
-------------

The HA stack is well integrated into the Proxmox VE API2. So, for
example, HA can be configured via 'ha-manager' or the PVE web
interface, which both provide an easy to use tool.

The resource configuration file can be found at
'/etc/pve/ha/resources.cfg' and the group configuration file at
'/etc/pve/ha/groups.cfg'. Use the provided tools to make changes,
there shouldn't be any need to edit them manually.
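
To illustrate the format, a resource entry in
'/etc/pve/ha/resources.cfg' could look like the following sketch (the
'group' property refers to the group settings described below; treat
the exact syntax as an assumption and prefer the tools mentioned
above):

----
# /etc/pve/ha/resources.cfg
vm: 100
    group mygroup
----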

Node Power Status
-----------------

If a node needs maintenance you should first migrate and/or relocate
all services which need to keep running to another node. After that
you can stop the LRM and CRM services. But note that the watchdog
triggers if you stop the LRM while it still has active services.
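
A minimal sketch of such a maintenance workflow (assuming a service
'vm:100' and a spare node named 'node2'):

----
# move the service away first
ha-manager migrate vm:100 node2

# then stop the HA daemons on the node going into maintenance
systemctl stop pve-ha-lrm pve-ha-crm
----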

Fencing
-------

What Is Fencing
~~~~~~~~~~~~~~~

Fencing ensures that on a node failure the failed node is rendered
unable to do any damage, and that no resource runs twice when it gets
recovered from the failed node.

Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default all watchdog modules are blocked for security reasons, as
they are like a loaded gun if not correctly initialized. If you have
a hardware watchdog available, remove its module from the blacklist
and restart the 'watchdog-mux' service.
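
As an illustrative sketch (the module name depends on your hardware,
and the blacklist location may differ between versions):

----
# after removing the watchdog module, e.g. iTCO_wdt, from the
# blacklist under /etc/modprobe.d/, load it manually once
modprobe iTCO_wdt

# restart the watchdog multiplexer so it picks up the hardware device
systemctl restart watchdog-mux
----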


Resource/Service Agents
-----------------------

A resource (also called service) can be managed by the ha-manager.
Currently we support virtual machines and containers.

Groups
------

A group is a collection of cluster nodes which a service may be bound
to.

Group Settings
~~~~~~~~~~~~~~

nodes::

list of group node members

restricted::

resources bound to this group may only run on nodes defined by the
group. If no group node member is available the resource will be
placed in the stopped state.

nofailback::

the resource won't automatically fail back when a more preferred node
(re)joins the cluster.

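A group definition in '/etc/pve/ha/groups.cfg' could look like the
following sketch (the exact syntax is an assumption; use the provided
tools to create and edit groups):

----
# /etc/pve/ha/groups.cfg
group: mygroup
    nodes node1,node2,node3
    restricted 1
    nofailback 1
----
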
Recovery Policy
---------------

There are two service recovery policy settings which can be
configured specifically for each resource.

max_restart::

maximal number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

maximal number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on
the actual node. The default is set to one.

Note that the relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.
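
For example, these limits could be raised for a single resource via
the management tool (this assumes the options are exposed by
'ha-manager set' in your version):

----
# allow two restarts and two relocates for vm:100
ha-manager set vm:100 --max_restart 2 --max_relocate 2
----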
300 | ||
2b52e195 | 301 | Error Recovery |
22653ac8 DM |
302 | -------------- |
303 | ||
304 | If after all tries the service state could not be recovered it gets | |
305 | placed in an error state. In this state the service won't get touched | |
306 | by the HA stack anymore. To recover from this state you should follow | |
307 | these steps: | |
308 | ||
309 | * bring the resource back into an safe and consistent state (e.g: | |
310 | killing its process) | |
311 | ||
312 | * disable the ha resource to place it in an stopped state | |
313 | ||
314 | * fix the error which led to this failures | |
315 | ||
316 | * *after* you fixed all errors you may enable the service again | |
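
A minimal sketch of this recovery procedure for a VM service (VMID
100 is just an example):

----
# make sure the VM is really stopped
qm stop 100

# put the HA resource into the stopped state
ha-manager disable vm:100

# ...fix the underlying problem...

# re-enable the service once everything is resolved
ha-manager enable vm:100
----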


Service Operations
------------------

This is how the basic user-initiated service operations (via
'ha-manager') work.

enable::

the service will be started by the LRM if not already running.

disable::

the service will be stopped by the LRM if running.

migrate/relocate::

the service will be relocated (live) to another node.

remove::

the service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

start and stop commands can be issued to the resource specific tools
(like 'qm' or 'pct'); they will forward the request to 'ha-manager',
which will then execute the action and set the resulting service
state (enabled, disabled).
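
As a short illustration (again assuming a service 'vm:100' and a
target node 'node2'):

----
# start the service via the HA stack
ha-manager enable vm:100

# move it to another cluster member
ha-manager migrate vm:100 node2

# remove it from HA management; its current state is left untouched
ha-manager remove vm:100
----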


Service States
--------------

stopped::

Service is stopped (confirmed by LRM)

request_stop::

Service should be stopped. Waiting for confirmation from LRM.

started::

Service is active, and the LRM should start it ASAP if not already
running.

fence::

Wait for node fencing (service node is not inside quorate cluster
partition).

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate service (live) to other node.

error::

Service disabled because of LRM errors. Needs manual intervention.

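The current state of each managed service can be inspected at any
time with:

----
ha-manager status
----
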

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]