The current HA manager has a number of drawbacks:

- no more development (Red Hat moved to pacemaker)

- highly depends on an old version of corosync

- complicated code (caused by the compatibility layer with the
  older cluster stack (cman))
In the future, we want to make HA easier for our users, and it should
be possible to move to the newest corosync, or even a totally different
cluster stack. So we want:
- the possibility to run with any distributed key/value store which provides
  some kind of locking with timeouts (zookeeper, consul, etcd, ...)

- self fencing using a Linux watchdog device

- implementation in Perl, so that we can use the PVE framework

- support only for simple resources like VMs

We dropped the idea to assemble complex, dependent services, because we think
this is already done with the VM abstraction.
== Cluster requirements ==

=== Cluster wide locks with timeouts ===

The cluster stack must provide cluster wide locks with timeouts.
The Proxmox 'pmxcfs' implements this on top of corosync.
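As a rough sketch of the required semantics (toy Python code, not the
'pmxcfs' implementation; the actual code is Perl, and all names here are
illustrative):

```python
import time

class LockStore:
    """Toy model of cluster wide locks with timeouts: a lock can be
    taken if it is free, expired, or already held by the requester."""

    def __init__(self, timeout=120):
        self.timeout = timeout      # lock lifetime in seconds
        self.locks = {}             # name -> (owner, expire_time)

    def acquire(self, name, owner, now=None):
        now = time.time() if now is None else now
        holder = self.locks.get(name)
        # free, expired, or our own lock: take (or renew) it
        if holder is None or holder[1] <= now or holder[0] == owner:
            self.locks[name] = (owner, now + self.timeout)
            return True
        return False

    def release(self, name, owner):
        if name in self.locks and self.locks[name][0] == owner:
            del self.locks[name]
```

The timeout is what makes the self-fencing scheme below work: a crashed
node's lock simply expires, so no manual cleanup is needed.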
=== Watchdog ===

We need a reliable watchdog mechanism which is able to provide hard
timeouts. It must be guaranteed that the node reboots within the
specified timeout if we do not update the watchdog. As far as we can
tell, neither systemd nor the standard watchdog(8) daemon provides such
guarantees.

We could use /dev/watchdog directly, but unfortunately this only
allows one user. We need to protect at least two daemons, so we write
our own watchdog daemon. This daemon works on /dev/watchdog, but
provides that service to several other daemons using a local socket.
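The core policy of such a multiplexer can be sketched as follows (Python
for illustration; socket and device I/O are omitted, and the names are
made up, not the real daemon's API):

```python
class WatchdogMux:
    """Several local daemons register over a socket; /dev/watchdog is
    only updated while every registered client sent a recent keep-alive."""

    def __init__(self, client_timeout=60):
        self.client_timeout = client_timeout
        self.clients = {}           # client id -> last keep-alive time

    def keepalive(self, client, now):
        self.clients[client] = now

    def disconnect(self, client):
        # an orderly disconnect stops protecting that client
        self.clients.pop(client, None)

    def may_update_watchdog(self, now):
        # update /dev/watchdog only if no protected client timed out;
        # otherwise the hardware timer expires and reboots the node
        return all(now - t <= self.client_timeout
                   for t in self.clients.values())
```

A single hung client is enough to stop the hardware watchdog updates, so
one misbehaving daemon still leads to a hard reboot of the node.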
=== Self fencing ===

A node needs to acquire a special 'ha_agent_${node}_lock' (one separate
lock for each node) before starting HA resources, and the node updates
the watchdog device once it gets that lock. If the node loses quorum,
or is unable to get the 'ha_agent_${node}_lock', the watchdog is no
longer updated. The node can release the lock if there are no running
HA resources.

This makes sure that the node holds the 'ha_agent_${node}_lock' as
long as there are running services on that node.

The HA manager can assume that the watchdog triggered a reboot when it
is able to acquire the 'ha_agent_${node}_lock' for that node.
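One iteration of this scheme reduces to a small decision table (sketched
in Python; a hypothetical helper, not the real Perl code):

```python
def self_fence_step(quorate, holds_lock, running_services):
    """Return (update_watchdog, keep_lock) for one agent iteration."""
    if not (quorate and holds_lock):
        # no quorum or no 'ha_agent_${node}_lock': stop updating the
        # watchdog, so the node self-fences when the timeout expires
        return (False, False)
    # keep the lock exactly as long as services are running here
    return (True, bool(running_services))
```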
=== Problems with "two_node" Clusters ===

This corosync option depends on a fence race condition, and only
works with reliable HW fence devices.

The above 'self fencing' algorithm does not work if you use this option!
== Testing requirements ==

We want to be able to simulate an HA cluster using a GUI. This makes it easier
to learn how the system behaves. We also need a way to run regression tests.
= Implementation details =

== Cluster Resource Manager (class PVE::HA::CRM) ==

The Cluster Resource Manager (CRM) daemon runs on each node, but
locking makes sure only one CRM daemon acts in the 'master' role. That
'master' daemon reads the service configuration file, and requests new
service states by writing the global 'manager_status'. That data
structure is read by the Local Resource Manager, which performs the
real work (start/stop/migrate services).
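What the master publishes can be sketched like this (Python for
illustration; the 'state' field and its values are assumptions, the real
service configuration is richer):

```python
def crm_requested_states(service_config):
    """Map each configured service to the state the CRM requests from
    the LRMs via the global 'manager_status'."""
    return {sid: ('request_stop' if cfg.get('state') == 'disabled'
                  else 'started')
            for sid, cfg in service_config.items()}
```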
=== Service Relocation ===

Some services, like QEMU virtual machines, support live migration,
so the LRM can migrate those services without stopping them (CRM
service state 'migrate').

Most other service types require the service to be stopped, and then
restarted at the other node. Stopped services are moved by the CRM
(usually by simply changing the service configuration).
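The three relocation cases above can be summarized in a small helper
(hypothetical Python sketch, not the real Perl logic):

```python
def relocation_plan(live_migratable, running):
    """How a service changes node: running, live-capable services get
    CRM state 'migrate'; other running services are stopped and
    restarted on the target; stopped services just have their
    configuration moved by the CRM."""
    if running and live_migratable:     # e.g. a QEMU virtual machine
        return 'migrate'
    if running:
        return 'stop_then_start'
    return 'move_config'
```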
=== Possible CRM Service States ===

stopped: Service is stopped (confirmed by LRM).

request_stop: Service should be stopped. Waiting for
confirmation from the LRM.

started: Service is active and the LRM should start it ASAP.

fence: Wait for node fencing (service node is not inside the
quorate cluster partition).

freeze: Do not touch. We use this state while we reboot a node,
or when we restart the LRM daemon.

migrate: Migrate (live) service to another node.

error: Service disabled because of LRM errors.
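The state set above can be modeled as a transition table; note that the
transitions shown here are plausible examples only, not the real CRM
rules (Python for illustration):

```python
# The CRM service states, with example transitions (assumptions,
# not taken from the actual Perl implementation).
ALLOWED = {
    'stopped':      {'started'},
    'request_stop': {'stopped'},
    'started':      {'request_stop', 'migrate', 'fence', 'freeze', 'error'},
    'fence':        {'started', 'stopped'},
    'freeze':       {'started', 'stopped'},
    'migrate':      {'started', 'error'},
    'error':        {'stopped'},
}

def can_transition(current, requested):
    """Check whether a service may move between two CRM states."""
    return requested in ALLOWED.get(current, set())
```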
== Local Resource Manager (class PVE::HA::LRM) ==

The Local Resource Manager (LRM) daemon runs on each node, and
performs service commands (start/stop/migrate) for services assigned
to the local node. It should be mentioned that each LRM holds a
cluster wide 'ha_agent_${node}_lock' lock, and the CRM is not allowed
to assign the service to another node while the LRM holds that lock.

The LRM reads the requested service state from 'manager_status', and
tries to bring the local service into that state. The actual service
status is written back to 'service_${node}_status', and can be
read by the CRM.
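One LRM round then amounts to diffing requested against actual states
(sketched in Python; an illustrative mapping, not the real Perl
dispatch):

```python
def lrm_commands(requested, local):
    """Derive the commands needed to bring each local service from its
    actual state to the CRM-requested state."""
    cmds = []
    for sid, want in sorted(requested.items()):
        have = local.get(sid, 'stopped')
        if want == 'started' and have == 'stopped':
            cmds.append((sid, 'start'))
        elif want in ('stopped', 'request_stop') and have == 'started':
            cmds.append((sid, 'stop'))
        elif want == 'migrate' and have == 'started':
            cmds.append((sid, 'migrate'))
    return cmds
```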
== Pluggable Interface for cluster environment (class PVE::HA::Env) ==

This class defines an interface to the actual cluster environment:

* get node membership and quorum information

* get/release cluster wide locks

* read/write cluster wide status files

We have plugins for several different environments:

* PVE::HA::Sim::TestEnv: the regression test environment

* PVE::HA::Sim::RTEnv: the graphical simulator

* PVE::HA::Env::PVE2: the real Proxmox VE cluster
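Translated loosely into Python for illustration, the interface looks
roughly like this (the real interface is a Perl class; the method names
are assumptions derived from the bullet list above):

```python
from abc import ABC, abstractmethod

class HAEnv(ABC):
    """Abstract cluster environment; each plugin (test, simulator,
    real PVE cluster) provides a concrete implementation."""

    @abstractmethod
    def quorate(self):
        """Return node membership / quorum information."""

    @abstractmethod
    def acquire_lock(self, name):
        """Try to get a cluster wide lock with timeout."""

    @abstractmethod
    def release_lock(self, name):
        """Release a previously acquired cluster wide lock."""

    @abstractmethod
    def read_status(self, name):
        """Read a cluster wide status file."""

    @abstractmethod
    def write_status(self, name, data):
        """Write a cluster wide status file."""
```

Keeping the CRM and LRM logic behind such an interface is what allows
the same code to run against the regression test environment, the
graphical simulator, and the real cluster.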