[[chapter-ha-manager]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
High Availability
=================
include::attributes.txt[]
endif::manvolnum[]

'ha-manager' handles management of user-defined cluster services. This
includes handling user requests to start, stop, disable, relocate, or
restart a service. The cluster resource manager daemon also handles
restarting and relocating services in the event of failures.

HOW IT WORKS
------------

The local resource manager ('pve-ha-lrm') is started as a daemon on
each node at system start and waits until the HA cluster is quorate
and locks are working. After initialization, the LRM determines which
services are enabled and starts them. The watchdog is also
initialized at this point.

The cluster resource manager ('pve-ha-crm') starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM, which handles cluster-wide actions like
migrations and failures.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the
services will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum the node cannot reset the watchdog. This will trigger a reboot
after 60 seconds.

CONFIGURATION
-------------

The HA stack is well integrated into the Proxmox VE API. So, for
example, HA can be configured via 'ha-manager' or the PVE web
interface, both of which provide an easy-to-use tool.

The resource configuration file is located at
'/etc/pve/ha/resources.cfg' and the group configuration file at
'/etc/pve/ha/groups.cfg'. Use the provided tools to make changes;
there shouldn't be any need to edit them manually.
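
For illustration, a minimal resource and group configuration might
look like the following. This is a sketch only -- the exact fields are
best written by the tools, and the VMID, 'mygroup', and the node names
are made up for this example:

----
# /etc/pve/ha/resources.cfg
vm: 100
	group mygroup

# /etc/pve/ha/groups.cfg
group: mygroup
	nodes node1,node2
----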

RESOURCES/SERVICES AGENTS
-------------------------

A resource (also called a service) can be managed by the
ha-manager. Currently we support virtual machines and containers.

GROUPS
------

A group is a collection of cluster nodes which a service may be bound to.

GROUP SETTINGS
~~~~~~~~~~~~~~

nodes::

list of group node members

restricted::

resources bound to this group may only run on nodes defined by the
group. If no group node member is available, the resource will be
placed in the stopped state.

nofailback::

the resource won't automatically fail back when a more preferred node
(re)joins the cluster.

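
A group entry in '/etc/pve/ha/groups.cfg' could, for example, combine
these settings as follows (the group name, node names, and priorities
are made up; a higher priority marks a more preferred node):

----
group: prefer_node1
	nodes node1:2,node2:1
	restricted 1
	nofailback 1
----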

RECOVERY POLICY
---------------

There are two service recovery policy settings which can be configured
specifically for each resource.

max_restart::

maximum number of attempts to restart a failed service on the current
node. The default is one.

max_relocate::

maximum number of attempts to relocate the service to a different
node. A relocation only happens after the max_restart value is
exceeded on the current node. The default is one.

Note that the relocate count resets to zero only after the service has
had at least one successful start. That means if a service is
re-enabled without fixing the error, only the restart policy gets
repeated.
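
Both limits can be adjusted per resource, for example like this
(assuming vm:100 is already managed and the 'set' subcommand is
available in your version):

----
# allow two restarts and two relocations before entering the error state
ha-manager set vm:100 --max_restart 2 --max_relocate 2
----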

ERROR RECOVERY
--------------

If the service state could not be recovered after all attempts, it
gets placed in an error state. In this state the service won't be
touched by the HA stack anymore. To recover from this state you should
follow these steps:

* bring the resource back into a safe and consistent state (e.g.:
killing its process)

* disable the HA resource to place it in a stopped state

* fix the error which led to these failures

* *after* you have fixed all errors you may enable the service again

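
As a concrete sketch, recovering the service vm:100 from the error
state could look like this (the VMID is made up):

----
# place the resource in the stopped state
ha-manager disable vm:100

# ... fix the underlying problem, then hand it back to the HA stack
ha-manager enable vm:100
----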

SERVICE OPERATIONS
------------------

This is how the basic user-initiated service operations (via
'ha-manager') work.

enable::

the service will be started by the LRM if not already running.

disable::

the service will be stopped by the LRM if running.

migrate/relocate::

the service will be relocated (live) to another node.

remove::

the service will be removed from the HA managed resource list. Its
current state will not be touched.

start/stop::

start and stop commands can be issued to the resource-specific tools
(like 'qm' or 'pct'); they will forward the request to the
'ha-manager', which then will execute the action and set the resulting
service state (enabled, disabled).

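
For example, a managed VM can be moved to another node like this (the
VMID and node name are made up):

----
# live migration, keeping the service running
ha-manager migrate vm:100 node2

# stop, move, then restart the service on the target node
ha-manager relocate vm:100 node2
----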

SERVICE STATES
--------------

stopped::

Service is stopped (confirmed by the LRM).

request_stop::

Service should be stopped. Waiting for confirmation from the LRM.

started::

Service is active, and the LRM should start it ASAP if not already
running.

fence::

Wait for node fencing (service node is not inside the quorate cluster
partition).

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon.

migrate::

Migrate service (live) to another node.

error::

Service disabled because of LRM errors. Needs manual intervention.

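
The current state of each managed service can be inspected at any time
with:

----
ha-manager status
----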

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]