[[chapter_ha_manager]]
ifdef::manvolnum[]
ha-manager(1)
=============
:pve-toplevel:

NAME
----

ha-manager - Proxmox VE HA Manager

SYNOPSIS
--------

include::ha-manager.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
High Availability
=================
:pve-toplevel:
endif::manvolnum[]

Our modern society depends heavily on information provided by
computers over the network. Mobile devices have amplified that
dependency, because people can access the network any time from
anywhere. If you provide such services, it is very important that
they are available most of the time.

We can mathematically define the availability as the ratio of (A),
the total time a service is capable of being used during a given
interval, to (B), the length of the interval. It is normally expressed
as a percentage of uptime in a given year.

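For example, a quick check against the table below: one year has
365 * 24 * 60 = 525,600 minutes, so 99.99% availability permits

----
(1 - 0.9999) * 525600 minutes = 52.56 minutes of downtime per year
----
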
.Availability - Downtime per Year
[width="60%",cols="<d,d",options="header"]
|===========================================================
|Availability % |Downtime per year
|99 |3.65 days
|99.9 |8.76 hours
|99.99 |52.56 minutes
|99.999 |5.26 minutes
|99.9999 |31.5 seconds
|99.99999 |3.15 seconds
|===========================================================

There are several ways to increase availability. The most elegant
solution is to rewrite your software, so that you can run it on
several hosts at the same time. The software itself needs to have a
way to detect errors and do failover. This is relatively easy if you
just want to serve read-only web pages. But in general this is
complex, and sometimes impossible, because you cannot modify the
software yourself. The following solutions work without modifying the
software:

* Use reliable ``server'' components
+
NOTE: Computer components with the same functionality can have varying
reliability numbers, depending on the component quality. Most vendors
sell components with higher reliability as ``server'' components -
usually at a higher price.

* Eliminate single points of failure (redundant components)
** use an uninterruptible power supply (UPS)
** use redundant power supplies on the main boards
** use ECC-RAM
** use redundant network hardware
** use RAID for local storage
** use distributed, redundant storage for VM data

* Reduce downtime
** rapidly accessible administrators (24/7)
** availability of spare parts (other nodes in a {pve} cluster)
** automatic error detection (provided by `ha-manager`)
** automatic failover (provided by `ha-manager`)

Virtualization environments like {pve} make it much easier to reach
high availability because they remove the ``hardware'' dependency. They
also make it easy to set up and use redundant storage and network
devices, so if one host fails, you can simply start those services on
another host within your cluster.

Even better, {pve} provides a software stack called `ha-manager`,
which can do that automatically for you. It is able to automatically
detect errors and do automatic failover.

{pve} `ha-manager` works like an ``automated'' administrator. First, you
configure what resources (VMs, containers, ...) it should
manage. `ha-manager` then observes correct functionality, and handles
service failover to another node in case of errors. `ha-manager` can
also handle normal user requests which may start, stop, relocate and
migrate a service.

But high availability comes at a price. High quality components are
more expensive, and making them redundant at least duplicates the
costs. Additional spare parts increase costs further. So you should
carefully calculate the benefits, and compare them with those
additional costs.

TIP: Increasing availability from 99% to 99.9% is relatively
simple. But increasing availability from 99.9999% to 99.99999% is very
hard and costly. `ha-manager` has typical error detection and failover
times of about 2 minutes, so a single failure per year already exceeds
the 31.5 second downtime budget of 99.9999% - you can get no more than
99.999% availability.


Requirements
------------

You must meet the following requirements before you start with HA:

* at least three cluster nodes (to get reliable quorum)

* shared storage for VMs and containers

* hardware redundancy (everywhere)

* use reliable ``server'' components

* hardware watchdog - if not available we fall back to the
Linux kernel software watchdog (`softdog`)

* optional hardware fencing devices


[[ha_manager_resources]]
Resources
---------

We call the primary management unit handled by `ha-manager` a
resource. A resource (also called ``service'') is uniquely
identified by a service ID (SID), which consists of the resource type
and a type-specific ID, e.g.: `vm:100`. That example would be a
resource of type `vm` (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and
containers. One basic idea here is that we can bundle related software
into such a VM or container, so there is no need to compose one big
service from other services, as was done with `rgmanager`. In
general, an HA managed resource should not depend on other resources.


Management Tasks
----------------

This section provides a short overview of common management tasks. The
first step is to enable HA for a resource. This is done by adding the
resource to the HA resource configuration. You can do this using the
GUI, or simply use the command line tool, for example:

----
# ha-manager add vm:100
----

The HA stack now tries to start the resource and keep it
running. Please note that you can configure the ``requested''
resource state. For example, you may want the HA stack to stop the
resource:

----
# ha-manager set vm:100 --state stopped
----

and start it again later:

----
# ha-manager set vm:100 --state started
----

You can also use the normal VM and container management commands. They
automatically forward the commands to the HA stack, so

----
# qm start 100
----

simply sets the requested state to `started`. The same applies to `qm
stop`, which sets the requested state to `stopped`.

NOTE: The HA stack works fully asynchronously and needs to communicate
with other cluster members, so it can take a few seconds until you see
the result of such actions.

To view the current HA resource configuration use:

----
# ha-manager config
vm:100
    state stopped
----

And you can view the actual HA manager and resource state with:

----
# ha-manager status
quorum OK
master node1 (active, Wed Nov 23 11:07:23 2016)
lrm elsa (active, Wed Nov 23 11:07:19 2016)
service vm:100 (node1, started)
----

You can also initiate resource migration to other nodes:

----
# ha-manager migrate vm:100 node2
----

This uses online migration and tries to keep the VM running. Online
migration needs to transfer all used memory over the network, so it is
sometimes faster to stop the VM, then restart it on the new node. This
can be done using the `relocate` command:

----
# ha-manager relocate vm:100 node2
----

Finally, you can remove the resource from the HA configuration using
the following command:

----
# ha-manager remove vm:100
----

NOTE: This does not start or stop the resource.

All HA related tasks can also be done in the GUI, so there is no need
to use the command line at all.


How It Works
------------

This section provides a detailed description of the {PVE} HA manager
internals. It describes all involved daemons and how they work
together. To provide HA, two daemons run on each node:

`pve-ha-lrm`::

The local resource manager (LRM), which controls the services running on
the local node. It reads the requested states for its services from
the current manager status file and executes the respective commands.

`pve-ha-crm`::

The cluster resource manager (CRM), which makes the cluster wide
decisions. It sends commands to the LRM, processes the results,
and moves resources to other nodes if something fails. The CRM also
handles node fencing.


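Both daemons run as ordinary system services, so you can quickly check
that they are up on a node with standard `systemctl` commands, for
example:

----
# systemctl status pve-ha-lrm
# systemctl status pve-ha-crm
----
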
.Locks in the LRM & CRM
[NOTE]
Locks are provided by our distributed configuration file system (pmxcfs).
They are used to guarantee that each LRM is active exactly once and
working. As an LRM only executes actions when it holds its lock, we
can mark a failed node as fenced if we can acquire its lock. This lets
us then recover any failed HA services securely, without any
interference from the now unknown failed node. This all gets
supervised by the CRM, which currently holds the manager master lock.


Service States
~~~~~~~~~~~~~~

The CRM uses a service state enumeration to record the current service
state. We display this state in the GUI and you can query it using
the `ha-manager` command line tool:

----
# ha-manager status
quorum OK
master elsa (active, Mon Nov 21 07:23:29 2016)
lrm elsa (active, Mon Nov 21 07:23:22 2016)
service ct:100 (elsa, stopped)
service ct:102 (elsa, started)
service vm:501 (elsa, started)
----

Here is the list of possible states:

stopped::

Service is stopped (confirmed by the LRM). If the LRM detects that a
stopped service is still running, it will stop it again.

request_stop::

Service should be stopped. The CRM waits for confirmation from the
LRM.

stopping::

A stop request is pending, but the CRM has not yet received
confirmation from the LRM that the service is stopped.

started::

Service is active, and the LRM should start it ASAP if not already
running. If the service fails and is detected to be not running, the
LRM restarts it
(see xref:ha_manager_start_failure_policy[Start Failure Policy]).

starting::

A start request is pending, but the CRM has not yet received
confirmation from the LRM that the service is running.

fence::

Wait for node fencing (the service's node is not inside the quorate
cluster partition). As soon as the node gets fenced successfully, the
service will be recovered to another node, if possible
(see xref:ha_manager_fencing[Fencing]).

freeze::

Do not touch the service state. We use this state while we reboot a
node, or when we restart the LRM daemon
(see xref:ha_manager_package_updates[Package Updates]).

migrate::

Migrate the service (live) to another node.

error::

Service is disabled because of LRM errors. Needs manual intervention
(see xref:ha_manager_error_recovery[Error Recovery]).

queued::

Service is newly added, and the CRM has not seen it so far.

disabled::

Service is stopped and marked as `disabled`.


Local Resource Manager
~~~~~~~~~~~~~~~~~~~~~~

The local resource manager (`pve-ha-lrm`) is started as a daemon on
boot and waits until the HA cluster is quorate and thus cluster wide
locks are working.

It can be in three states:

wait for agent lock::

The LRM waits for our exclusive lock. This is also used as the idle
state if no service is configured.

active::

The LRM holds its exclusive lock and has services configured.

lost agent lock::

The LRM lost its lock. This means a failure happened and quorum was
lost.

After the LRM gets into the active state, it reads the manager status
file in `/etc/pve/ha/manager_status` and determines the commands it
has to execute for the services it owns.
For each command a worker gets started; these workers run in
parallel and are limited to at most 4 by default. This default setting
may be changed through the datacenter configuration key `max_worker`.
When finished, the worker process gets collected and its result saved
for the CRM.

.Maximum Concurrent Worker Adjustment Tips
[NOTE]
The default value of at most 4 concurrent workers may be unsuited for
a specific setup. For example, 4 live migrations may happen at the
same time, which can lead to network congestion with slower networks
and/or big (memory wise) services. Ensure that no congestion happens
even in the worst case, and lower the `max_worker` value if
needed. On the contrary, if you have a particularly powerful, high end
setup you may also want to increase it.

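For example, to lower the limit to two concurrent workers, you could
set the `max_worker` key mentioned above in the datacenter
configuration (a sketch; adjust the value to your needs):

----
# /etc/pve/datacenter.cfg
max_worker: 2
----
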
Each command requested by the CRM is uniquely identifiable by a UID.
When the worker finishes, its result will be processed and written to
the LRM status file `/etc/pve/nodes/<nodename>/lrm_status`. There the
CRM may collect it and let its state machine - respective to the
command's output - act on it.

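Both status files live on the cluster file system, so for debugging
you can simply inspect them directly on any node, for example:

----
# cat /etc/pve/ha/manager_status
# cat /etc/pve/nodes/<nodename>/lrm_status
----
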
The actions on each service between CRM and LRM are normally always
synced. This means that the CRM requests a state uniquely marked by a
UID, and the LRM then executes this action *one time* and writes back
the result, which is also identifiable by the same UID. This is needed
so that the LRM does not execute an outdated command.
The only exceptions are the `stop` and the `error` commands;
these two do not depend on the result produced, and are executed
always in the case of the stopped state, and once in the case of
the error state.

.Read the Logs
[NOTE]
The HA stack logs every action it makes. This helps to understand what
happened in the cluster, and also why. Here it is important to see
what both daemons, the LRM and the CRM, did. You may use
`journalctl -u pve-ha-lrm` on the node(s) where the service is, and
the same command for `pve-ha-crm` on the node which is the current
master.

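For example, to follow both logs live on the relevant nodes (`-f` is
the standard `journalctl` follow option):

----
# journalctl -u pve-ha-lrm -f
# journalctl -u pve-ha-crm -f
----
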
Cluster Resource Manager
~~~~~~~~~~~~~~~~~~~~~~~~

The cluster resource manager (`pve-ha-crm`) starts on each node and
waits there for the manager lock, which can only be held by one node
at a time. The node which successfully acquires the manager lock gets
promoted to the CRM master.

It can be in three states:

wait for agent lock::

The CRM waits for our exclusive lock. This is also used as the idle
state if no service is configured.

active::

The CRM holds its exclusive lock and has services configured.

lost agent lock::

The CRM lost its lock. This means a failure happened and quorum was
lost.

Its main task is to manage the services which are configured to be
highly available and to try to always enforce the requested state. For
example, a service with the requested state 'started' will be started
if it is not already running. If it crashes, it will be automatically
started again. Thus the CRM dictates the actions which the LRM needs
to execute.

When a node leaves the cluster quorum, its state changes to unknown.
If the current CRM can then secure the failed node's lock, the
services will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster
quorum, the LRM waits for a new quorum to form. As long as there is no
quorum, the node cannot reset the watchdog. This will trigger a reboot
after the watchdog times out; this happens after 60 seconds.


Configuration
-------------

The HA stack is well integrated into the {pve} API. So, for example,
HA can be configured via the `ha-manager` command line interface, or
the {pve} web interface - both interfaces provide an easy way to
manage HA. Automation tools can use the API directly.

All HA configuration files are within `/etc/pve/ha/`, so they get
automatically distributed to the cluster nodes, and all nodes share
the same HA configuration.


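For example, you can list these files on any cluster node:

----
# ls /etc/pve/ha
----
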
[[ha_manager_resource_config]]
Resources
~~~~~~~~~

[thumbnail="gui-ha-manager-status.png"]


The resource configuration file `/etc/pve/ha/resources.cfg` stores
the list of resources managed by `ha-manager`. A resource configuration
inside that list looks like this:

----
<type>: <name>
        <property> <value>
        ...
----

It starts with a resource type, followed by a resource specific name,
separated by a colon. Together this forms the HA resource ID, which is
used by all `ha-manager` commands to uniquely identify a resource
(example: `vm:100` or `ct:101`). The next lines contain additional
properties:

include::ha-resources-opts.adoc[]

Here is a real world example with one VM and one container. As you see,
the syntax of those files is really simple, so it is even possible to
read or edit those files using your favorite editor:

.Configuration Example (`/etc/pve/ha/resources.cfg`)
----
vm: 501
    state started
    max_relocate 2

ct: 102
# Note: use default settings for everything
----

[thumbnail="gui-ha-manager-add-resource.png"]

The above config was generated using the `ha-manager` command line tool:

----
# ha-manager add vm:501 --state started --max_relocate 2
# ha-manager add ct:102
----


[[ha_manager_groups]]
Groups
~~~~~~

[thumbnail="gui-ha-manager-groups-view.png"]

The HA group configuration file `/etc/pve/ha/groups.cfg` is used to
define groups of cluster nodes. A resource can be restricted to run
only on the members of such a group. A group configuration looks like
this:

----
group: <group>
       nodes <node_list>
       <property> <value>
       ...
----

include::ha-groups-opts.adoc[]

[thumbnail="gui-ha-manager-add-group.png"]

A common requirement is that a resource should run on a specific
node. Usually the resource is able to run on other nodes, so you can
define an unrestricted group with a single member:

----
# ha-manager groupadd prefer_node1 --nodes node1
----

For bigger clusters, it makes sense to define a more detailed failover
behavior. For example, you may want to run a set of services on
`node1` if possible. If `node1` is not available, you want to run them
equally split on `node2` and `node3`. If those nodes also fail, the
services should run on `node4`. To achieve this you could set the node
list to:

----
# ha-manager groupadd mygroup1 -nodes "node1:2,node2:1,node3:1,node4"
----

Another use case is if a resource uses other resources only available
on specific nodes, let's say `node1` and `node2`. We need to make sure
that the HA manager does not use other nodes, so we need to create a
restricted group with said nodes:

----
# ha-manager groupadd mygroup2 -nodes "node1,node2" -restricted
----

The above commands created the following group configuration file:

.Configuration Example (`/etc/pve/ha/groups.cfg`)
----
group: prefer_node1
       nodes node1

group: mygroup1
       nodes node2:1,node4,node1:2,node3:1

group: mygroup2
       nodes node2,node1
       restricted 1
----


The `nofailback` option is mostly useful to avoid unwanted resource
movements during administration tasks. For example, if you need to
migrate a service to a node which doesn't have the highest priority
in the group, you need to tell the HA manager to not move this service
instantly back by setting the `nofailback` option.

Another scenario is when a service was fenced and it got recovered to
another node. The admin tries to repair the fenced node and brings it
back online to investigate the failure cause and check if it runs
stably again. Setting the `nofailback` flag prevents the recovered
services from moving straight back to the fenced node.


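For example, a sketch of a group with `nofailback` set (the group name
`mygroup3` is hypothetical; the flag is passed like the other group
options above):

----
# ha-manager groupadd mygroup3 -nodes "node1:2,node2" -nofailback
----
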
[[ha_manager_fencing]]
Fencing
-------

On node failures, fencing ensures that the erroneous node is
guaranteed to be offline. This is required to make sure that no
resource runs twice when it gets recovered on another node. This is a
really important task, because without it, it would not be possible to
recover a resource on another node.

If a node did not get fenced, it would be in an unknown state, where
it may still have access to shared resources. This is really
dangerous! Imagine that every network but the storage one broke. Now,
while not reachable from the public network, the VM still runs and
writes to the shared storage.

If we then simply start this VM on another node, we would get a
dangerous race condition, because we would write from both nodes. Such
a condition can destroy all VM data, and the whole VM could be
rendered unusable. The recovery could also fail if the storage
protects against multiple mounts.


How {pve} Fences
~~~~~~~~~~~~~~~~

There are different methods to fence a node, for example, fence
devices which cut off the power from the node or disable their
communication completely. Those are often quite expensive, and bring
additional critical components into a system, because if they fail you
cannot recover any service.

We thus wanted to integrate a simpler fencing method, which does not
require additional external hardware. This can be done using
watchdog timers.

.Possible Fencing Methods
- external power switches
- isolate nodes by disabling complete network traffic on the switch
- self fencing using watchdog timers

Watchdog timers have been widely used in critical and dependable
systems since the beginning of microcontrollers. They are often
independent and simple integrated circuits which are used to detect
and recover from computer malfunctions.

During normal operation, `ha-manager` regularly resets the watchdog
timer to prevent it from elapsing. If, due to a hardware fault or
program error, the computer fails to reset the watchdog, the timer
will elapse and trigger a reset of the whole server (reboot).

Recent server motherboards often include such hardware watchdogs, but
these need to be configured. If no watchdog is available or
configured, we fall back to the Linux kernel 'softdog'. While still
reliable, it is not independent of the server's hardware, and thus has
a lower reliability than a hardware watchdog.


Configure Hardware Watchdog
~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, all hardware watchdog modules are blocked for security
reasons. They are like a loaded gun if not correctly initialized. To
enable a hardware watchdog, you need to specify the module to load in
'/etc/default/pve-ha-manager', for example:

----
# select watchdog module (default is softdog)
WATCHDOG_MODULE=iTCO_wdt
----

This configuration is read by the 'watchdog-mux' service, which loads
the specified module at startup.


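After the next restart you can check whether the module was actually
loaded, for example (assuming the `iTCO_wdt` module from the example
above):

----
# lsmod | grep iTCO_wdt
----
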
Recover Fenced Services
~~~~~~~~~~~~~~~~~~~~~~~

After a node failed and its fencing was successful, the CRM tries to
move services from the failed node to nodes which are still online.

The selection of nodes, on which those services get recovered, is
influenced by the resource `group` settings, the list of currently active
nodes, and their respective active service count.

The CRM first builds a set out of the intersection between user selected
nodes (from the `group` setting) and available nodes. It then chooses
the subset of nodes with the highest priority, and finally selects
the node with the lowest active service count. This minimizes the
possibility of an overloaded node.

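For example, take the `mygroup1` group from the earlier Groups example
(`node1:2,node2:1,node3:1,node4`) and assume `node1` failed: the
available group members are `node2`, `node3` and `node4`; the highest
priority subset is `node2` and `node3` (priority 1, while `node4` has
no priority set); and of those two, the node currently running fewer
active HA services is selected.
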
CAUTION: On node failure, the CRM distributes services to the
remaining nodes. This increases the service count on those nodes, and
can lead to high load, especially on small clusters. Please design
your cluster so that it can handle such worst case scenarios.


[[ha_manager_start_failure_policy]]
Start Failure Policy
--------------------

The start failure policy comes into effect if a service failed to
start on a node one or more times. It can be used to configure how
often a restart should be triggered on the same node, and how often a
service should be relocated, so that it has a chance to be started on
another node.
The aim of this policy is to circumvent temporary unavailability of
shared resources on a specific node. For example, if a shared storage
isn't available on a quorate node anymore, for instance because of
network problems, but is still available on other nodes, the relocate
policy allows the service to be started nonetheless.

There are two service start recover policy settings which can be
configured specifically for each resource.

max_restart::

Maximum number of attempts to restart a failed service on the actual
node. The default is set to one.

max_relocate::

Maximum number of attempts to relocate the service to a different node.
A relocate only happens after the `max_restart` value is exceeded on the
actual node. The default is set to one.

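For example, to allow two restart attempts on the same node before a
single relocation attempt, you could set both options (a sketch,
analogous to the `ha-manager add` example shown earlier):

----
# ha-manager set vm:100 --max_restart 2 --max_relocate 1
----
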
NOTE: The relocate count state will only reset to zero when the
service had at least one successful start. That means if a service is
re-started without fixing the error, only the restart policy gets
repeated.


[[ha_manager_error_recovery]]
Error Recovery
--------------

If, after all attempts, the service state could not be recovered, it
gets placed in an error state. In this state, the service won't get
touched by the HA stack anymore. The only way out is disabling the
service:

----
# ha-manager set vm:100 --state disabled
----

This can also be done in the web interface.

To recover from the error state you should do the following:

* bring the resource back into a safe and consistent state (e.g.:
kill its process if the service could not be stopped)

* disable the resource to remove the error flag

* fix the error which led to these failures

* *after* you fixed all errors you may request that the service starts
again, as shown below


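For example, using the state command introduced earlier:

----
# ha-manager set vm:100 --state started
----
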
[[ha_manager_package_updates]]
Package Updates
---------------

When updating the ha-manager, you should do one node after the other,
never all at once, for various reasons. First, while we test our
software thoroughly, a bug affecting your specific setup cannot
totally be ruled out. Upgrading one node after the other and checking
the functionality of each node after finishing the update helps to
recover from eventual problems, while updating all at once could
leave you with a broken cluster state, and is generally not good
practice.

Also, the {pve} HA stack uses a request acknowledge protocol to perform
actions between the cluster and the local resource manager. For
restarting, the LRM makes a request to the CRM to freeze all its
services. This prevents them from being touched by the cluster during
the short time the LRM is restarting. After that, the LRM may safely
close the watchdog during a restart.
Such a restart normally happens during a package update and, as already
stated, an active master CRM is needed to acknowledge the requests from
the LRM. If this is not the case, the update process can take too long
which, in the worst case, may result in a watchdog reset.


Node Maintenance
----------------

It is sometimes necessary to shutdown or reboot a node to do
maintenance tasks, either to replace hardware or simply to install a
new kernel image.


Shutdown
~~~~~~~~

A shutdown ('poweroff') is usually done if the node is planned to stay
down for some time. The LRM stops all managed services in that
case. This means that other nodes will take over those services
afterwards.

NOTE: Recent hardware has large amounts of RAM. So we stop all
resources, then restart them, to avoid online migration of all that
RAM. If you want to use online migration, you need to invoke that
manually before you shutdown the node.


Reboot
~~~~~~

Node reboots are initiated with the 'reboot' command. This is usually
done after installing a new kernel. Please note that this is different
from ``shutdown'', because the node immediately starts again.

The LRM tells the CRM that it wants to restart, and waits until the
CRM puts all resources into the `freeze` state (the same mechanism is
used for xref:ha_manager_package_updates[Package Updates]). This
prevents those resources from being moved to other nodes. Instead,
the CRM starts the resources after the reboot on the same node.


Manual Resource Movement
~~~~~~~~~~~~~~~~~~~~~~~~

Last but not least, you can also move resources manually to other
nodes before you shutdown or restart a node. The advantage is that you
have full control, and you can decide if you want to use online
migration or not.

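For example, to move a VM away with online migration before taking its
node down (using the `migrate` command shown earlier):

----
# ha-manager migrate vm:100 node2
----
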
NOTE: Please do not 'kill' services like `pve-ha-crm`, `pve-ha-lrm` or
`watchdog-mux`. They manage and use the watchdog, so this can result
in a node reboot.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]
