[[chapter-ha-manager]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
High Availability
=================
include::attributes.txt[]
endif::manvolnum[]

'ha-manager' handles management of user-defined cluster services. This
includes handling of user requests which may start, stop, relocate, or
migrate a service.
The cluster resource manager daemon also handles restarting and relocating
services to another node in the event of failures.

A service (also called resource) is uniquely identified by a service ID
(SID) which consists of the service type and a type-specific id, e.g.:
'vm:100'. That example would be a service of type vm (Virtual machine)
with the VMID 100.

Requirements
------------

* at least three nodes

* shared storage

* hardware redundancy

* hardware watchdog - if not available we fall back to the
 Linux kernel softdog

How It Works
------------

This section provides a detailed description of the {PVE} HA-manager
internals. It describes how the CRM and the LRM work together.

To provide High Availability two daemons run on each node:

'pve-ha-lrm'::

The local resource manager (LRM) controls the services running on
the local node.
It reads the requested states for its services from the current manager
status file and executes the respective commands.

'pve-ha-crm'::

The cluster resource manager (CRM) controls the cluster wide actions of
the services, processes the LRM results, and includes the state machine
which controls the state of each service.

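Both daemons ship as systemd services, so as a quick sanity check you can
query their state with standard systemd tooling (a minimal sketch, unit
names as above):

----
# verify that both HA daemons are running on this node
systemctl status pve-ha-lrm pve-ha-crm
----
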
.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active and working. As an LRM
only executes actions when it holds its lock, we can mark a failed node
as fenced if we can acquire its lock. This lets us then recover the failed
HA services securely without the failed (but maybe still running) LRM
interfering. This all gets supervised by the CRM which currently holds
the manager master lock.

Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager ('pve-ha-lrm') is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster wide
locks are working.

It can be in three states:

* *wait for agent lock*: the LRM waits for our exclusive lock. This is
  also used as idle state if no service is configured
* *active*: the LRM holds its exclusive lock and has services configured
* *lost agent lock*: the LRM lost its lock, this means a failure happened
  and quorum was lost.

After the LRM gets in the active state it reads the manager status
file in '/etc/pve/ha/manager_status' and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers run in parallel
and are limited to a maximum of 4 by default. This default setting
may be changed through the datacenter configuration key "max_worker".

.Maximal Concurrent Worker Adjustment Tips
[NOTE]
The default value of 4 maximal concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the same
time, which can lead to network congestion with slower networks and/or
big (memory wise) services. Ensure that no congestion happens even in the
worst case, and lower the "max_worker" value if needed. On the contrary,
if you have a particularly powerful, high end setup you may also want to
increase it.

Each command requested by the CRM is uniquely identifiable by a UID. When
the worker finishes, its result will be processed and written to the LRM
status file '/etc/pve/nodes/<nodename>/lrm_status'. There the CRM may
collect it and let its state machine - respectively the command's output -
act on it.

The actions on each service between CRM and LRM are normally always synced.
This means that the CRM requests a state uniquely marked by a UID, the LRM
then executes this action *one time* and writes back the result, also
identifiable by the same UID. This is needed so that the LRM does not
execute an outdated command.
The only exceptions are the 'stop' and the 'error' commands; these two do
not depend on the result produced and are always executed in the case of
the stopped state, and once in the case of the error state.

.Read the Logs
[NOTE]
The HA Stack logs every action it makes. This helps to understand what
and also why something happens in the cluster. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is and
the same command for the pve-ha-crm on the node which is the current master.

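For example, to review what was decided for a service (standard systemd
journal tooling, unit names as above):

----
# on the node where the service is (or was) running
journalctl -u pve-ha-lrm
# on the node which currently holds the master role
journalctl -u pve-ha-crm
----
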
Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager ('pve-ha-crm') starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

* *wait for agent lock*: the CRM waits for our exclusive lock. This is
  also used as idle state if no service is configured
* *active*: the CRM holds its exclusive lock and has services configured
* *lost agent lock*: the CRM lost its lock, this means a failure happened
  and quorum was lost.

Its main task is to manage the services which are configured to be highly
available and to always try to bring them into the wanted state. For
example, an enabled service will be started if it is not running; if it
crashes, it will be started again. Thus it dictates the wanted actions to
the LRM.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the services
will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum the node cannot reset the watchdog. This will trigger a reboot
after 60 seconds.

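To inspect which node currently holds the master role and in which state
each node's LRM and each managed service is, you can query the HA status
(a minimal sketch; the exact output format depends on your version):

----
# show quorum, master, LRM and service states
ha-manager status
----
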
Configuration
-------------

The HA stack is well integrated into the Proxmox VE API2. So, for
example, HA can be configured via 'ha-manager' or the PVE web
interface, both of which provide an easy to use tool.

The resource configuration file can be located at
'/etc/pve/ha/resources.cfg' and the group configuration file at
'/etc/pve/ha/groups.cfg'. Use the provided tools to make changes,
there shouldn't be any need to edit them manually.

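A minimal sketch of adding a resource with 'ha-manager' is shown below; the
resulting entry in '/etc/pve/ha/resources.cfg' is illustrative only and may
differ in detail between versions:

----
# put VM 100 under HA management
ha-manager add vm:100

# resulting excerpt of /etc/pve/ha/resources.cfg (illustrative)
vm: 100
        state enabled
----
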
Node Power Status
-----------------

If a node needs maintenance you should migrate and/or relocate all
services which are required to run always to another node first.
After that you can stop the LRM and CRM services. But note that the
watchdog triggers if you stop it with active services.

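A minimal sketch of such a maintenance procedure, assuming the 'migrate'
subcommand of your 'ha-manager' version and the systemd unit names used
above:

----
# move an HA managed service away before the maintenance
ha-manager migrate vm:100 node2

# once no HA service is active on this node anymore, stop the HA daemons
systemctl stop pve-ha-lrm pve-ha-crm
----
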
Fencing
-------

What Is Fencing
~~~~~~~~~~~~~~~

Fencing ensures that on a node failure the failed node is rendered
unable to do any damage and that no resource runs twice when it gets
recovered from the failed node.

Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default all watchdog modules are blocked for security reasons, as they
are like a loaded gun if not correctly initialized.
If you have a hardware watchdog available, remove its module from the
blacklist and restart the 'watchdog-mux' service.

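A minimal sketch, using an IPMI watchdog as an example; the exact blacklist
file and the module name depend on your hardware and installation:

----
# after removing/commenting the blacklist entry for your watchdog module,
# load it and restart the watchdog multiplexer
modprobe ipmi_watchdog
systemctl restart watchdog-mux
----
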
Resource/Service Agents
-----------------------

A resource (also called service) can be managed by the
ha-manager. Currently we support virtual machines and containers.

Groups
------

A group is a collection of cluster nodes which a service may be bound
to; an example of creating a group follows the settings list below.

Group Settings
~~~~~~~~~~~~~~

nodes::

list of group node members

restricted::

resources bound to this group may only run on nodes defined by the
group. If no group node member is available the resource will be
placed in the stopped state.

nofailback::

the resource won't automatically fail back when a more preferred node
(re)joins the cluster.

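A minimal sketch of creating such a group, assuming the 'groupadd'
subcommand and option names of your 'ha-manager' version:

----
# create a group limited to two nodes
ha-manager groupadd mygroup --nodes "node1,node2"
----
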
Recovery Policy
---------------

There are two service recovery policy settings which can be configured
specifically for each resource (see the example below).

max_restart::

maximal number of tries to restart a failed service on the actual
node. The default is set to one.

max_relocate::

maximal number of tries to relocate the service to a different node.
A relocate only happens after the max_restart value is exceeded on the
actual node. The default is set to one.

Note that the relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-enabled without fixing the error only the restart policy gets
repeated.

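A minimal sketch of adjusting both settings for a single resource, assuming
the 'set' subcommand of your 'ha-manager' version:

----
# allow two restarts on the current node and two relocation attempts
ha-manager set vm:100 --max_restart 2 --max_relocate 2
----
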
Error Recovery
--------------

If after all tries the service state could not be recovered, it gets
placed in an error state. In this state the service won't get touched
by the HA stack anymore. To recover from this state you should follow
these steps (a command sketch follows the list):

* bring the resource back into a safe and consistent state (e.g.
killing its process)

* disable the ha resource to place it in a stopped state

* fix the error which led to these failures

* *after* you fixed all errors you may enable the service again

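A minimal sketch of the disable/enable part of this procedure, assuming the
'disable' and 'enable' subcommands of your 'ha-manager' version:

----
# place the failed resource in the stopped state
ha-manager disable vm:100

# ...fix the underlying problem, then re-enable the service
ha-manager enable vm:100
----
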
Service Operations
------------------

This is how the basic user-initiated service operations (via
'ha-manager') work; example invocations follow the list.

enable::

the service will be started by the LRM if not already running.

disable::

the service will be stopped by the LRM if running.

migrate/relocate::

the service will be relocated (live) to another node.

remove::

the service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

start and stop commands can be issued to the resource specific tools
(like 'qm' or 'pct'), which will forward the request to the
'ha-manager'. It will then execute the action and set the resulting
service state (enabled, disabled).

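A few illustrative invocations, assuming the subcommand names listed above
match your installed 'ha-manager' version:

----
# request the service to be running
ha-manager enable vm:100

# live migrate it to another node
ha-manager migrate vm:100 node2

# remove it from HA management without touching its current state
ha-manager remove vm:100
----
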
Service States
--------------

stopped::

Service is stopped (confirmed by LRM).

request_stop::

Service should be stopped. Waiting for confirmation from LRM.

started::

Service is active and the LRM should start it ASAP if not already running.

fence::

Wait for node fencing (service node is not inside quorate cluster
partition).

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate service (live) to another node.

error::

Service disabled because of LRM errors. Needs manual intervention.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]