wrap possibly problematic cfs_read_file calls in eval
Wrap the calls to the cfs_read_file method, which may now also die
if there was a grave problem reading the file, in eval in all
methods used by the HA services.
The ones only used by API calls or CLI helpers are not wrapped, as
errors can be handled more gracefully there (i.e., no watchdog is
running). Further, this is intended as a temporary workaround until
we handle such an exception explicitly in the services - which is a
somewhat bigger change, so let's just go back to the old behavior
for now.
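A minimal sketch of the wrapping pattern, assuming a caller that can
live with an empty configuration as fallback (the concrete fallback
value differs per method):

    use PVE::Cluster;

    # read a cluster-wide config file, but never die on a read problem
    my $read_or_default = sub {
        my ($filename) = @_;
        my $conf = eval { PVE::Cluster::cfs_read_file($filename) };
        if (my $err = $@) {
            warn "unable to read '$filename' - $err";
            $conf = {}; # old behavior: treat the file as empty for now
        }
        return $conf;
    };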
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 24 Jan 2017 17:37:23 +0000 (18:37 +0100)]
do not show a service as queued if not configured
The check whether a service is configured now takes precedence over
the check whether a service is already processed by the manager.
This fixes a bug where a service could be shown as queued even if it
was meant to be ignored.
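A hypothetical sketch of the resulting check order ($resources,
$manager_status and $service are illustrative names, not the
committed code):

    # "is the service configured at all?" takes precedence ...
    if (!defined($resources->{$sid})) {
        next; # not (or no longer) HA managed - never report it as queued
    } elsif (!defined($manager_status->{service_status}->{$sid})) {
        # ... over "has the manager already picked it up?"
        $service->{state} = 'queued';
    }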
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 24 Jan 2017 17:37:22 +0000 (18:37 +0100)]
add ignore state for resources
In this state the resource will not get touched by us; all commands
(like start/stop/migrate) go directly to the VM/CT itself and not
through the HA stack.
The resource will not get recovered if its node fails.
Achieve that by simply removing the respective service from the
manager_status service status hash if it is in the ignored state.
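Roughly, the manager-side part looks like this sketch (hash names
are assumed from the description above):

    # drop ignored services from the manager's service status, so the HA
    # stack neither touches them nor recovers them when their node fails
    foreach my $sid (keys %{$manager_status->{service_status}}) {
        my $req_state = $resource_config->{$sid}->{state} // '';
        delete $manager_status->{service_status}->{$sid}
            if $req_state eq 'ignored';
    }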
Also add the state to the test and simulator hardware.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 11 Oct 2017 13:10:19 +0000 (15:10 +0200)]
lrm: crm: show interest in pve-api-update trigger
This ensures that the LRM and CRM services get reloaded when the
pve-api-update trigger gets activated.
Important, as we directly use Perl API modules from qemu-server,
pve-container and pve-common, and really want to avoid running
outdated, possibly problematic or deprecated code.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 11 Oct 2017 13:10:18 +0000 (15:10 +0200)]
lrm.service: do not timeout on stop
We must shut down all services when stopping the LRM for a host
shutdown; this can take longer than 95 seconds and should not get
interrupted, to ensure a graceful poweroff.
The watchdog is still active until all services are stopped, so we
are still safe from a freeze or equivalent failure.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Philip Abernethy [Thu, 14 Sep 2017 12:39:33 +0000 (14:39 +0200)]
fix #1347: let postfix fill in FQDN in fence mails
Using the nodename in $mailto is not correct and can lead to mails
not being forwarded in restrictive mail server configurations.
Also change $mailfrom to 'root' instead of 'root@localhost', which
results in postfix appending the proper FQDN there, too. As a result
the Delivered-to header reads something like 'root@host.domain.tld'
instead of 'root@localhost', which is much more informative and more
consistent.
Reviewed-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 23 Aug 2017 08:15:49 +0000 (10:15 +0200)]
fix #1073: do not count backup-suspended VMs as running
When a stopped VM managed by HA was backed up, the HA stack
continuously tried to shut it down, as check_running only reports
whether a PID for the VM exists.
As the VM was locked the shutdown attempts were blocked, but a lot
of annoying messages and spawned tasks still piled up during the
backup period.
As querying the VM status through the VM monitor is not cheap, first
check whether the VM is locked with the backup lock - the config is
cached, so this is quite cheap - and only then query the VM's status
over QMP and check whether the VM is in the 'prelaunch' state.
This state is only set if KVM was started with the `-S` option and
has not yet resumed guest operation.
Some performance results; I repeated each check 1000 times, the
first number is the total time spent just on the check, the second
is the time per single check:
old check (vm runs): 87.117 ms/total => 87.117 us/loop
new check (runs, no backup): 107.744 ms/total => 107.744 us/loop
new check (runs, backup): 760.337 ms/total => 760.337 us/loop
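The check order described above, as a rough sketch (function and
module names are illustrative, they may not match the committed code
exactly):

    use PVE::QemuConfig;
    use PVE::QemuServer;

    sub vm_is_backup_suspended {
        my ($vmid) = @_;

        # cheap: the config is cached, so look at the lock property first
        my $conf = PVE::QemuConfig->load_config($vmid);
        return 0 if !defined($conf->{lock}) || $conf->{lock} ne 'backup';

        # only now do the more expensive QMP query
        my $status = eval {
            PVE::QemuServer::vm_qmp_command($vmid, { execute => 'query-status' });
        };

        # 'prelaunch' means KVM was started with -S and never resumed the guest
        return ($status && $status->{status} eq 'prelaunch') ? 1 : 0;
    }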
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 23 May 2017 12:35:38 +0000 (14:35 +0200)]
explicitly sync journal when disabling watchdog updates
Without syncing, the journal could lose logs for a small interval
(ca. 10-60 seconds), but these last seconds are really interesting
when analyzing the cause of a triggered watchdog.
Also, without this the
> "client did not stop watchdog - disable watchdog updates"
message often wasn't flushed to persistent storage, so some users had
a hard time figuring out why the machine reset.
Use the '--sync' switch of journalctl which - to quote its man page -
"guarantees that any log messages written before its invocation are
safely stored on disk at the time it returns."
Use execl to call `journalctl --sync` in a child process; do not
bother with error checks or recovery, as we will be reset anyway.
This is just a best-effort attempt to log the situation more
consistently; if it fails we cannot really do anything anyhow.
We call the function at two points:
a) if we exit with active connections: here the watchdog will be
   triggered soon and we want to ensure that this is logged.
b) if a client closes the connection without sending the magic close
   byte: here the watchdog would trigger while we hang in epoll at
   the beginning of the loop, so sync the log here as well.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 26 May 2017 15:56:11 +0000 (17:56 +0200)]
always queue service stop if node shuts down
Commit 61ae38eb6fc5ab351fb61f2323776819e20538b7, which ensured that
services get frozen on a node reboot, had a side effect where running
services did not get gracefully shut down on node reboot.
This may lead to data loss, as the services then get hard killed, or
it may even prevent a node reboot because a storage cannot get
unmounted while a service still accesses it.
This commit addresses the issue but does not change the behavior of
the freeze logic for now; we should evaluate whether a freeze really
makes sense here, or at least make it configurable.
The changed regression test is a result of the fact that we did not
adapt the is_node_shutdown command to the correct behavior in the
problematic commit. The simulation environment returned true every
time a node shut down (reboot and poweroff), while the real-world
environment only returned true if a poweroff happened, not on a
reboot.
Now the simulation acts the same way as the real environment.
Further, I moved the simulation implementation to the base class so
that both the simulator and the regression test system behave the
same.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 24 Jan 2017 16:54:03 +0000 (17:54 +0100)]
Resource/API: abort early if resource in error state
If a service is in the error state, the only state change command
that makes sense is setting the 'disabled' request state.
Thus abort early on all other commands to enhance the user
experience.
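As a sketch, the early abort amounts to a guard like this (variable
names are assumptions):

    if ($current_state eq 'error' && $req_state ne 'disabled') {
        die "service '$sid' is in an error state and needs manual intervention; "
            . "only requesting the 'disabled' state is allowed\n";
    }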
Thomas Lamprecht [Thu, 19 Jan 2017 12:32:47 +0000 (13:32 +0100)]
sim: allow new service request states via the GUI
Change the old enabled/disabled GTK "Switch" element to a ComboBox
and add all possible service states, so we can better simulate the
real-world behaviour with its new states.
As we no longer need to map the boolean switch value to our states,
we can drop the set_service_state method from the RTHardware class
and use the one from the Hardware base class instead.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 19 Jan 2017 12:32:46 +0000 (13:32 +0100)]
factor out and unify sim_hardware_cmd
Most things done by sim_hardware_cmd are already abstracted and
available in both the TestHardware and the RTHardware class.
Abstract out the CRM and LRM control to allow unifying both classes'
sim_hardware_cmd.
As over the last year mostly the regression test system's
TestHardware class saw new features, use it as the base.
We now return the current status out of the locked context; this
allows updating the simulator's GUI outside of the locked context.
This change increases the power of the HA Simulator, but the newly
possible actions still must be implemented in its GUI. This will be
done in future patches.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 19 Jan 2017 12:32:45 +0000 (13:32 +0100)]
sim: allocate HA Env only once per service and node
Do not allocate the HA Environment every time we fork a new CRM or
LRM, but once at the start of the Simulator for all nodes.
This can be done as the Env does not save any state and thus can be
reused; we do this in the TestHardware class as well.
Making the behavior of both Hardware classes more similar allows us
to refactor out some common code in the following commits.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 12 Jan 2017 14:51:59 +0000 (15:51 +0100)]
Status: factor out new service state calculation
Factor out the new "fast feedback for the user" service state
calculation and use it in the HA Simulator as well, to provide the
same feedback as in the real world.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 12 Jan 2017 14:51:57 +0000 (15:51 +0100)]
sim: improve canceling the migrate dialog
We could only cancel the migrate dialog by pressing ESC or - if the
used window manager supports it - by pressing the window's "X" button
in the window border.
Pressing ESC caused a warning, because the result of the dialog was
then a string containing the signal name; as we checked for an
integer to see whether the "Ok" button was pressed, Perl warned us:
> Argument "closed" isn't numeric in int at ...
Improve this by adding a cancel button and by switching the button
return values from integers to strings, which can be compared in a
more general way.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 12 Jan 2017 14:51:56 +0000 (15:51 +0100)]
sim: set migrate dialog transient to parent
This allows window managers to e.g. keep the dialog on top of the
main window, or center the dialog over the main window.
This also fixes a warning that the dialog had no transient parent.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 12 Jan 2017 14:51:54 +0000 (15:51 +0100)]
ensure test/sim.pl always uses the currently developed code
sim.pl suggested it was a Perl script, but it was actually a bash
script which called ../pve-ha-simulator.
It set the include directory to '..', as it is intended to use the
currently developed code, not the pve-ha-simulator installed on the
system.
This did not work correctly, as pve-ha-simulator has a
use lib '/global/path/to/HA/Simulator'
directive.
Create a small Perl script which runs the RTHardware instead.
Changes compared to the pve-ha-simulator script include that we fall
back to the 'simtest' directory if none is given.
Also the 'batch' option does not exist here; use the ha-tester
script if you want that behavior.
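The new test/sim.pl then boils down to roughly the following sketch
(the constructor arguments are an assumption):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use lib '..'; # always prefer the code of this working tree

    use PVE::HA::Sim::RTHardware;

    my $testdir = shift // 'simtest'; # fall back to 'simtest' if no dir is given

    my $hardware = PVE::HA::Sim::RTHardware->new($testdir);
    $hardware->run();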
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 20 Dec 2016 08:33:44 +0000 (09:33 +0100)]
is_node_shutdown: check for correct systemd targets
shutdown.target is active every time the node shuts down, be it
reboot, poweroff, halt or kexec.
As we want to return true only when the node powers down without a
restart afterwards, this was wrong.
Match only poweroff.target and halt.target, the two systemd targets
which cause a node shutdown without a reboot.
Also enhance the regular expression so that we do not falsely match
when a target merely includes poweroff.target in its name, e.g.
not-a-poweroff.target
Also pass the 'full' flag to systemctl to ensure that target names
do not get ellipsized or cut off.
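A hedged sketch of the matching logic; the exact systemctl invocation
and regular expression are assumptions based on the description
above:

    use PVE::Tools;

    sub is_node_shutdown {
        my $shutdown = 0;

        my $parser = sub {
            my ($line) = @_;
            # only a queued poweroff.target or halt.target means "no reboot follows";
            # requiring surrounding whitespace avoids matching not-a-poweroff.target
            $shutdown = 1 if $line =~ m/\s(?:poweroff|halt)\.target\s/;
        };

        # --full keeps systemctl from ellipsizing the unit names
        eval {
            PVE::Tools::run_command(['systemctl', '--full', 'list-jobs'],
                outfunc => $parser);
        };

        return $shutdown;
    }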
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 21 Dec 2016 15:44:39 +0000 (16:44 +0100)]
Simulator: fix scrolling to end of cluster log view
As the cursor may be moved by the user, even if invisible, scrolling
to its position to reach the end of the TextView is potentially
flawed.
Connect a signal to the 'size-allocate' event of the TextView element
instead. This event gets triggered on all size changes of the
TextView.
In the callback, use the ScrolledWindow methods to correctly scroll
to the end of its child (the TextView).
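In Gtk3 Perl terms the wiring looks roughly like this sketch,
assuming $swindow is the Gtk3::ScrolledWindow that wraps the
Gtk3::TextView $textview:

    $textview->signal_connect('size-allocate' => sub {
        # scroll the ScrolledWindow, not the TextView, to the very bottom
        my $adj = $swindow->get_vadjustment();
        $adj->set_value($adj->get_upper() - $adj->get_page_size());
    });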
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 21 Dec 2016 15:44:38 +0000 (16:44 +0100)]
Simulator: do not use cursor position to insert log
While the cursor is set invisible, it can still be moved by clicking
somewhere in the log window. As we used insert_at_cursor() to append
text, assuming that the cursor is always at the end, this resulted in
erroneous behavior where new log text got inserted at arbitrary
positions.
Use get_end_iter() to get the real end of the text buffer instead.
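A minimal sketch, assuming $buffer is the log view's Gtk3::TextBuffer
and $text is the line to append:

    # append at the real end of the buffer, independent of the (user-movable) cursor
    my $end_iter = $buffer->get_end_iter();
    $buffer->insert($end_iter, $text);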
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 18 Nov 2016 15:53:01 +0000 (16:53 +0100)]
rename request state 'enabled' to 'started'
With the new 'stopped' state there was possible confusion about what
each state does. Rename 'enabled' to 'started' to address this.
'enabled' can still be used for backward compatibility and will be
removed in the next major release.
The TestHardware no longer recognizes the "service enable" command;
the regression tests which made use of it were ported to the
"service started" command.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 15 Nov 2016 11:11:22 +0000 (12:11 +0100)]
skip transition to 'started' state if we won't stay in it
If we added a resource with the disabled or stopped state, we went
through the started state before changing to the request_stop state.
Skip the started state entirely and go directly to the request_stop
state.
This is only a small behavioral change: we never wrote to the
manager_status that we were in the started state but changed directly
to request_stop, so the LRM cannot be affected by this change.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 28 Oct 2016 08:39:38 +0000 (10:39 +0200)]
API/Status: avoid using HA Environment
Using the environment here is just overhead, as it calls functions
we could also call directly, and this code only gets called from the
PVE2 environment, not from the test/simulator environment.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 28 Oct 2016 08:39:37 +0000 (10:39 +0200)]
factor out resource config check and default set code
Factor the code which does basic checks and sets default settings
out from the PVE2 environment class into the Config class, as the
read_and_check_resources_config method.
We can reuse this in the HA API status call.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 15 Sep 2016 08:45:16 +0000 (10:45 +0200)]
change service state to error if no recovery node is available
Otherwise we will retry endlessly if the service has no other
possible node where it can run, e.g. if it is restricted.
This avoids various problems; especially if a service is restricted
to just one node, we could never get the service out of the fence
state again without manually hacking the manager status.
Add a regression test for this.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 15 Sep 2016 08:45:15 +0000 (10:45 +0200)]
cleanup backup & mounted locks after recovery (fixes #1100)
This cleans up the backup and mounted locks after a service is
recovered from a failed node, otherwise it may not start if a locked
action occurred during the node failure.
We allow deletion of backup and mounted locks, as doing so is safe
when the node which hosted the locked service is now fenced.
We do not allow snapshot lock deletion, as this needs more manual
cleanup; also, snapshots are normally triggered manually.
Further, ignore migration locks for now; we should rethink that, but
as migration is a manually triggered action it should be OK not to
auto-delete its lock for now.
We cannot remove locks via the remove_lock method provided by
PVE::AbstractConfig, as this method is well behaved and does not
allow removing locks from VMs/CTs located on another node. We also
do not want to adapt this method to allow arbitrary lock removal,
independent of which node the config is located on, as this could
invite misuse in the future. After all, one of our base principles is
that the node owns its VMs/CTs (and their configs) and only the owner
itself may change the status of a VM/CT.
The HA manager needs to be able to change the state of services
when a node failed and is also allowed to do so, but only if the
node is fenced and we need to recover a service from it.
So we (re)implement the remove lock functionality in the resource
plugins.
We call that only if a node was fenced, and only *before* stealing
the service. After all, our implication for removing a lock is that
the owner (the node) is fenced. After stealing the service we already
changed the owner, and the new owner is *not* fenced, thus our
implication does not hold anymore - the new owner may already do some
stuff with the service (config changes, etc.).
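Sketched in terms of a VM resource plugin this looks roughly as
follows; the method signature and helper names are assumptions:

    # called only for services whose owning node was fenced, and only
    # *before* the service is stolen from that node
    sub remove_locks {
        my ($self, $haenv, $id, $locks_to_remove, $service_node) = @_;

        my $conf = PVE::QemuConfig->load_config($id, $service_node);
        my $lock = $conf->{lock};

        if ($lock && grep { $_ eq $lock } @$locks_to_remove) {
            delete $conf->{lock};
            PVE::QemuConfig->write_config($id, $conf);
            return $lock; # tell the caller which lock type we removed
        }

        return undef;
    }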
Add the respective log.expect output from the added test to enable
regression testing of this issue.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 14 Sep 2016 09:29:43 +0000 (11:29 +0200)]
add possibility to simulate locks from services
In the real PVE2 environment locks can be left over if a node fails
during a locked action, like for example a backup.
A leftover lock may hinder the service from starting on another node;
allow our test environment to simulate such events.
Also add two regression tests without the expected log files.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 14 Sep 2016 09:29:42 +0000 (11:29 +0200)]
remove state transition from error to fence state
Remove the possible transition from the error to the fence state.
The error state is an end state and mustn't be left by some automatic
action, only by manual intervention!
This also allows us later on to place a service which is not
recoverable from the fence state into the error state, without
generating an endless loop of state changes.
Add a regression test for a failed node with a service in the error
state.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 14 Sep 2016 11:50:02 +0000 (13:50 +0200)]
don't regression test when building the simulator package
We already ran the tests when building the HA manager package, so if
those did not fail, the ones for the HA simulator package will
certainly not fail either; do not waste time running them again.
This has the small drawback that if anyone just builds the simulator
package through `make simdeb`, the tests won't be run at all for the
built package; but this is seldom the case and can be worked around -
i.e. just use the 'all' make target or run the tests manually.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 12 Sep 2016 09:28:17 +0000 (11:28 +0200)]
fix race condition on slow resource commands in started state
When we fixed the dangling state machine - where one command request
from the CRM could result in multiple executions of said command by
the LRM - by ensuring in the LRM that UID-identified actions only get
started once per UID (except the stop command), we introduced a bug
which can result in a lost LRM result from a failed service start.
The reason for this is that we generated a new UID for the started
state on every CRM turn, so that a service gets restarted if it
crashes. But as we do this without checking whether the LRM has
finished our last request, we may lose the result of that last
request.
As an example consider the following timeline of events:
1. The CRM requests a start of service 'ct:100'
2. The LRM starts processing this request, which takes a bit longer
3. Before the LRM worker finishes, the CRM does an iteration and
   generates a new UID for this service
4. The LRM worker finishes but cannot write back its result, as the
   UID doesn't exist anymore in the manager's service status.
5. The CRM gets another round and generates a new UID for 'ct:100'
6. The cycle begins again, and the LRM always throws away its last
   result as the CRM wrongfully generated a new command
This loss of the result is problematic if it was an erroneous one,
because then it results in a malfunction of the failure restart and
relocate policies.
Fix this by checking in the CRM whether the last command was
processed by the LRM, i.e. simply check whether a $lrm_result exists.
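Conceptually the fix is a small guard in the CRM's 'started' state
handling (a sketch; field and helper names are assumptions):

    # only hand out a new command UID once the LRM has reported back the
    # result of the previous one - otherwise a slow (and possibly failed)
    # start would be silently lost
    if ($lrm_res) {
        $sd->{uid} = compute_new_uuid($sd->{state});
    }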
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 18 Jul 2016 09:17:49 +0000 (11:17 +0200)]
relocate policy: try to avoid already failed nodes
In the event that the cluster remains stable from the HA viewpoint
(i.e. no LRM failures, no service changes) and a service with a start
error relocate policy configured fails to start, it will cycle back
and forth between two nodes, even if other untried nodes (where the
service might well be startable) are available.
The reason for this behavior is that if more than one node is
possible, we select the one with the lowest active service count to
minimize the cluster impact a bit.
As the start of a service failed on a node a short time ago, it will
probably also fail on the next try (e.g. the storage is offline),
whereas an untried node may have the chance to fully start the
service, which is our goal.
Fix that by excluding the already tried nodes from the top priority
node list in 'select_service_node' if there are other possible nodes
to try; we do that by giving select_service_node an array of the
already tried nodes, which then get deleted from the selected top
priority group.
If there is no node left after that, we retry on the current node and
hope that another node becomes available.
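The filtering step, as a sketch (hash and array names are assumed):

    # drop nodes where the start already failed from the top priority group ...
    delete $top_pri_group->{$_} for @$tried_nodes;

    # ... but if nothing is left, retry on the current node and hope that
    # another node becomes available later
    $top_pri_group->{$current_node} = 1 if !scalar(keys %$top_pri_group);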
While not ideal, this situation is most probably caused by the user,
as our default start error relocation setting is one relocate try.
If all tries fail we place the service in the error state; the
tried-nodes entry gets cleaned up once a user triggers an error
recovery by disabling the service, so the information about the tried
nodes stays in the manager status until then.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 18 Jul 2016 09:17:48 +0000 (11:17 +0200)]
Manager: record tried node on relocation policy
Instead of counting up an integer on each failed start trial, record
the already tried nodes. We can then use the size of the tried-nodes
array as the 'try count' and thus achieve the same behaviour as with
the 'relocate_trial' hash earlier.
Log the tried nodes after the service started, or if it could not be
started at all, so an admin can follow the behaviour and investigate
the reason for the failure on a specific node.
This also prepares us for a more intelligent recovery node selection,
as we can skip already tried nodes in the current recovery cycle.
Thomas Lamprecht [Mon, 18 Jul 2016 09:17:47 +0000 (11:17 +0200)]
cleanup manager status on start
Clean up the manager state in a general way if we get promoted to
manager. This saves us code: instead of having to check all
deprecated entries and delete them one by one, we just keep the state
parts needed to take over as master without losing any result from
the manager status, and delete the rest.
The kept state includes the following keys:
* service status: as it may contain unprocessed results
* manager_node: this is set only once, right before this cleanup, so
  do not delete it.
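As a sketch; the key names follow the description above and may
differ from the actual manager status layout:

    # keep only what is needed to take over as master, drop everything else
    my $old_status = $haenv->read_manager_status();
    my $new_status = {
        service_status => $old_status->{service_status}, # may hold unprocessed results
        manager_node   => $old_status->{manager_node},   # set once, right before this cleanup
    };
    $haenv->write_manager_status($new_status);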
Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 14 Jul 2016 12:41:53 +0000 (14:41 +0200)]
FenceConfig: assert that device priorities are unique
Each device priority must be unique to guarantee a deterministic
execution order of the configured devices.
Otherwise we cannot decide in which order we should write the devices
to the config file.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 14 Jul 2016 12:41:50 +0000 (14:41 +0200)]
add parser self check regression test
This test does a read -> write -> read cycle and then checks whether
the config generated from the first read and the config generated
from the second read are (semantically) equivalent.
This mainly tests the FenceConfig::write_config method; the
parse_config method is already covered by the existing config
regression tests.
To compare both configs we add a deep object compare method; it can
compare objects with arbitrarily nested data structures of the types
hash, array and scalar.
It performs a depth-first search with some early abort checks on the
roots of subtrees (i.e. are the length and type the same).
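A self-contained sketch of such a compare helper (not the committed
implementation):

    sub deep_equals {
        my ($x, $y) = @_;

        return 0 if ref($x) ne ref($y); # early abort: different types

        if (ref($x) eq 'HASH') {
            return 0 if scalar(keys %$x) != scalar(keys %$y); # early abort: size
            foreach my $k (keys %$x) {
                return 0 if !exists $y->{$k} || !deep_equals($x->{$k}, $y->{$k});
            }
            return 1;
        } elsif (ref($x) eq 'ARRAY') {
            return 0 if scalar(@$x) != scalar(@$y); # early abort: length
            for my $i (0 .. $#$x) {
                return 0 if !deep_equals($x->[$i], $y->[$i]);
            }
            return 1;
        }

        # plain scalars (or both undef)
        return 1 if !defined($x) && !defined($y);
        return 0 if !defined($x) || !defined($y);
        return $x eq $y ? 1 : 0;
    }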
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 30 Jun 2016 10:57:39 +0000 (12:57 +0200)]
Fence: rewrite class
A rather big cleanup of parts I already had in a separate branch,
plus new changes which arose from further testing and evaluating the
current code.
The class is currently not used, thus the rewrite should not be
problematic; I also tested it with my HW fencing branch and it still
works as before.
Changes:
* add a constructor and save state in the class object instead of
  global vars
* check_jobs is now split up into collect_finished_workers (which may
  later be factored out, as the LRM uses a similar worker
  implementation) and check_worker_results
* rename the start_fencing method to run_fence_jobs; it checks the
  active workers to see whether the node already has a fence job
  deployed, or else starts fencing for this node
* fix bugs in process_fencing where, if multiple parallel devices
  succeeded at once, only one was counted
* restrict waitpid to the fence_job worker pids (instead of -1),
  otherwise we could get some pretty nasty side effects
* remove 'reset_hard', 'reset' and 'bailout'; they are replaced by
  kill_and_cleanup_jobs
* some comment and formatting changes
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 15 Jun 2016 15:59:01 +0000 (17:59 +0200)]
send email on fence failure and success
Fencing is something which should not happen often in the real world
and mostly has a really bad cause; thus send an email to
root@localhost when starting to fence a node and again on success, to
inform the cluster admin of said failures so they can check the
hardware and cluster status as soon as possible.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>