pmxcfs: only exit parent when successfully started
Since systemd expects the parent to exit only once the service is
actually started, we need to wait for the child to reach the point
where it starts the fuse loop, and only then signal the parent to
write the pid file and exit.
Without this we had an issue where the ExecStartPost hook (which
runs pvecm updatecerts) did not run reliably, even though it is
necessary to set up the nodes/ dir in /etc/pve and generate the ssl
certificates.
This could also affect any service which has an
After=pve-cluster
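The handshake can be sketched as follows. This is an illustrative Python version of the C logic in pmxcfs; the function name, pipe protocol and pid file handling are made up for the example. The parent only finishes once the child has reported readiness:

```python
import os

def start_daemon(run_service, pidfile):
    # Illustrative sketch: the parent blocks on a pipe until the child
    # signals readiness, then writes the pid file; only then may it exit.
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # child: signal readiness right before entering the main (fuse) loop
        os.close(r)
        os.write(w, b"1")
        os.close(w)
        try:
            run_service()
        finally:
            os._exit(0)
    # parent: wait for the readiness byte before declaring the start done
    os.close(w)
    ok = os.read(r, 1) == b"1"
    os.close(r)
    if ok:
        with open(pidfile, "w") as f:
            f.write(str(pid))
    return pid if ok else None
```

With Type=forking, systemd treats the parent's exit as "service is up", so an ExecStartPost hook only runs after the daemon is actually serving.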
Thomas Lamprecht [Tue, 27 Mar 2018 06:08:37 +0000 (08:08 +0200)]
API/Cluster: autoflush STDOUT for join and create
We're in a forked worker here, so STDOUT isn't connected to a
(pseudo)TTY directly, and perl only flushes when its internal buffer
is full.
Ensure each line gets flushed out to the API client in use to give
immediate feedback about the operation.
For example, our WebUI's Task Viewer won't show anything for quite a
bit of time without this; you may even get logged out before the
flush from the perl side happens, which is simply bad UX.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
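In Perl this is `STDOUT->autoflush(1)` (i.e. `$| = 1`) in the worker; a rough Python equivalent of the per-line flushing, with a made-up helper name:

```python
import sys

def report(msg):
    # Write and flush each progress line immediately so the API client
    # (e.g. the task viewer) sees it right away, even though stdout is a
    # pipe in a forked worker and would otherwise be block-buffered.
    sys.stdout.write(msg + "\n")
    sys.stdout.flush()
    return msg
```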
Thomas Lamprecht [Thu, 29 Mar 2018 09:06:08 +0000 (11:06 +0200)]
pvecm join: also default to resolved IP with use_ssh param
We already switched to this behaviour in pvecm create and pvecm join
(with API) but did not change it for the case where a user requests
to use the old method to join with --use_ssh.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
cluster join: ensure updatecerts gets called on quorate cluster
We moved the start of pve-cluster together with the one of corosync
earlier, before the quorate check.
This meant that the 'pvecm updatecerts --silent' we call from the
pve-cluster.service through ExecStartPost exited early, as we did
not have quorum yet.
So factor the respective code out to the Cluster perl module and
call this function manually after we reached quorum.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If we are not part of a cluster we do not need to worry about other
members messing with the config. But there may be local contenders,
e.g., two automation script instances started in parallel by mistake
or two admin (sessions) which start a create or join cluster request
at the same time.
Reuse the local flock for this purpose.
lock_file silences an exception but does not alter it, so we die if
$@ is set, to ensure a worker gets to know that something bad
happened.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
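The guard amounts to roughly the following, sketched in Python with fcntl.flock (the Perl code uses the lock_file helper; names here are illustrative):

```python
import fcntl
import os

def with_local_lock(lockfile, code):
    # Serialize create/join against other local contenders with an
    # exclusive flock on a node-local file. Exceptions raised by code()
    # propagate to the caller - the Perl helper stores them in $@, which
    # is why the patch re-dies when $@ is set.
    fd = os.open(lockfile, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)  # blocks until the contender is done
        return code()
    finally:
        os.close(fd)  # closing the fd also releases the lock
```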
api/cluster: add endpoint to GET cluster join information
Returns all information relevant for joining this cluster through
the currently connected node, securely over the API: address,
fingerprint, totem config section and (not directly needed, but
possibly useful) the cluster configuration digest.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 12 Dec 2017 16:15:56 +0000 (17:15 +0100)]
api/cluster: create cluster in forked worker
Creating a cluster may take a bit longer: we need to gather random
data for the corosync authkey, restart services and such.
As we're now exposed in the API, the 30 second response limit from
pveproxy is a big reason to do this. As a bonus we also get a task
log entry.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 27 Nov 2017 11:53:46 +0000 (12:53 +0100)]
pvecm add: use API by default to join cluster
Default to using the API for the add node procedure.
But allow the user to manually fall back to the legacy SSH method.
Also fall back if the API detects a peer which is not up to date;
this is done by checking for the 501 HTTP_NOT_IMPLEMENTED response
code.
This fallback could be removed in a later major release, e.g. 6.0.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 27 Nov 2017 09:55:14 +0000 (10:55 +0100)]
api/cluster: add join endpoint
Add an endpoint to the API which allows joining an existing PVE
cluster by using only the API instead of the CLI tools (pvecm).
Use a worker, as this operation may take longer than 30 seconds.
With the worker we also get a task log entry/window for a UI for
free, allowing us to give better feedback.
The join helper will be reused by the CLI handler in a later patch.
It is based on that handler's behaviour, but swaps out the ssh parts
with API calls.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 26 Jan 2018 10:40:29 +0000 (11:40 +0100)]
node add: factor out local joining steps
Factor out the code which finishes the join to a cluster on the
joinee side, after a cluster member approved the join request and
supplied us with the necessary information.
Will be used by API and the SSH join code paths.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 23 Nov 2017 13:29:37 +0000 (14:29 +0100)]
node add: factor out checks for joining
Factor out the code which checks if the node can join another
cluster. It will be used by the new API endpoint to join a cluster
but also stays in the CLIHandler as we keep the old legacy SSH method
for a bit.
This is not a completely 1:1 move, I changed:
* &$error(...) to $error->(...)
* removing a few empty lines, where code was so spread out that those
lines resulted in the opposite of what they intended, i.e., less
readability
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 22 Jan 2018 09:52:12 +0000 (10:52 +0100)]
avoid harmful '<>' pattern, explicitly read from STDIN
Fixes problems in CLIHandler using the code pattern:
while (my $line = <>) {
...
}
For why this only causes problems _now_, let's first look at how <>
behaves:
"The null filehandle <> is special: [...] Input from <> comes either
from standard input, or from each file listed on the command line.
Here's how it works: the first time <> is evaluated, the @ARGV array
is checked, and if it is empty, $ARGV[0] is set to "-" , which when
opened gives you standard input. The @ARGV array is then processed
as a list of filenames." - 'perldoc perlop'
Recent changes in the CLIHandler code changed how we modify @ARGV.
Earlier we assumed that the first argument must be the command and
thus shifted it out of @ARGV; now we can have multiple levels of
(sub)commands. With this change we do not shift anything out of
@ARGV, but go through the arguments until we reach the final command
and copy the rest of @ARGV, as we know that this must be the
command's arguments.
For '<>' this means that @ARGV was still fully populated and perl
tried to open each element as a file, which naturally failed.
Thus the change in pve-common only exposed this 'dangerous' code
pattern.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
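Python's fileinput module has the same argv-fallback behaviour as Perl's <>; the fix corresponds to reading explicitly from standard input, roughly as follows (helper name is made up):

```python
import sys

def read_lines(stream=None):
    # Read explicitly from standard input instead of using a magic
    # "stdin or the files named in the remaining arguments" source
    # (Perl's <>, Python's fileinput.input()): with command arguments
    # still present, the magic form would try to open each of them as a
    # file and fail.
    stream = sys.stdin if stream is None else stream
    return [line.rstrip("\n") for line in stream]
```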
Thomas Lamprecht [Fri, 22 Dec 2017 13:34:32 +0000 (14:34 +0100)]
add pmxcfs restart detection heuristic for IPCC
Allow clean pmxcfs restarts to be fully transparent for the IPCC
using perl stack above.
A restart of pmxcfs invalidates the connection cache; we only set
the cached connection to NULL in this case and the next call to
ipcc_send_rec will connect anew.
Further, such a restart may need quite a bit of time (seconds).
Thus write a status file to flag a possible restart when terminating
from pmxcfs. Delete this flag file once we're up and ready again.
Error case handling is described further below.
If a new connection fails and this flag file exists, then retry
connecting for a certain period (for now five seconds).
If a cached connection fails, always retry once, as every pmxcfs
restart invalidates the cached connection even if pmxcfs is already
fully up and ready again; then also follow the connection polling
heuristic if the restart flag exists, as new connections do.
We use the monotonic clock to avoid problems if the (system) time
changes and to keep things as easy as possible.
We delete the flag file if an IPCC call could not connect within the
grace period, but only if the file is still the same, i.e., nobody
else has deleted and recreated it in the meantime (e.g. a second cfs
restart).
This guarantees that IPCC calls try this heuristic only for a
limited time (5 seconds until the first failed one) if the cfs does
not start again.
Further, as the flag resides in /run/... - which is always a tmpfs
(thus in memory and thus cleared upon reboot) - we do not run into
leftover flag files on a node reset, e.g. one done by the HA
watchdog for self-fencing.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
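The client-side polling heuristic can be sketched like this (Python for readability; the real implementation lives in the C IPCC code, and the helper names here are made up):

```python
import os
import time

def connect_with_retry(try_connect, flag_file, grace=5.0):
    # Retry connecting for a grace period while the restart flag exists.
    # time.monotonic() is used so system clock changes cannot extend or
    # shorten the window.
    deadline = time.monotonic() + grace
    while True:
        conn = try_connect()
        if conn is not None:
            return conn
        if not os.path.exists(flag_file) or time.monotonic() >= deadline:
            return None  # no restart pending, or grace period exhausted
        time.sleep(0.2)  # give pmxcfs a moment to come back up
```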
Thomas Lamprecht [Wed, 20 Dec 2017 09:35:21 +0000 (10:35 +0100)]
postinst: stop LRM before CRM in workaround
May help in a single node HA cluster, which, while not usable for
real HA, can still be found in the wild and makes sense as an
"ensure my VM/CT stays started" manager.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
ensure problematic ha service is stopped during update
Add a postinst file which stops the ha services, if running, before
it configures pve-cluster, and starts them again, if enabled.
Do this only if the version installed before the upgrade is <=
2.0-3.
dpkg-query has both Version and Config-Version; Version is the new,
already unpacked version at this point, so we need to check both to
catch all cases.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
datacenter.cfg write: retransform migration property to string
We use parse_property_string in the parser to make life easier for
code working with the migration format, but we did not retransform
it back when writing datacenter.cfg.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This controlled whether we use reload-or-restart or try-reload-or-restart.
They differ in the following way:
> reload-or-restart - Reload one or more units if possible, otherwise
> start or restart
>
> try-reload-or-restart - Reload one or more units if possible,
> otherwise (re)start if active
Under PVE we normally need a running ssh for a node/cluster to work;
there is no case where it should be stopped, especially not for this
method, which is normally called when setting up or joining a
cluster.
So always use 'reload-or-restart'.
Thomas Lamprecht [Thu, 23 Nov 2017 11:12:05 +0000 (12:12 +0100)]
pvecm: add/delete: local lock & avoid problems on first node addition
cfs_lock is per node, thus we had a possibility for a node addition
race if the process was started on the same node (e.g. by a
script/ansible/...).
So always request a local lock first; once that is acquired, check
how many members currently reside in the cluster and then decide if
we can directly execute the code (single node cluster = no
contenders) or must hold the lock.
One may think that there remains a race when adding a node to a
single node cluster, i.e., once the node is added it may itself be
for another joining node. But this cannot happen as we only tell the
joining node that it could be added once we already *have* added it
locally.
Besides the defense against a race if two users execute a node
addition on the same node at the same time, this also addresses an
issue where the cluster lock could not be removed after writing the
corosync conf, as pmxcfs and corosync triggered a config reload and
added the new node, which itself did not yet know that it was
accepted in the cluster. Thus, the former single node cluster expects
two nodes but has only one for now, until the other node pulled the
config and authkey and started up its cluster stack.
That resulted in a failing removal of the corosync lock, thus adding
another node did not work until this lock timed out (~2 minutes).
While node additions are often separated by more than a 2 minute
interval, deployment helpers (or fast admins, for that matter) may
trigger this easily.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 17 Nov 2017 14:04:21 +0000 (15:04 +0100)]
buildsys: autogen debug package and cleanup unnecessary rules
don't do manually what the deb helpers do automatically and better.
Autogenerate the debug package; it now includes only the debug
symbols without effectively duplicating all executables and
libraries.
In the same step add an install file which installs our sysctl
settings. This is done together as it allows skipping some
intermediate steps; the change is also not too big, so it should be
possible to see what's going on.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 17 Nov 2017 14:02:21 +0000 (15:02 +0100)]
pve-cluster service: remove $DAEMON_OPTS and environment file
"We use it to pass $DAEMON_OPTS variable to pmxcfs, which is empty per
default.
pmxcfs accepts following options at the moment:
> Usage:
> pmxcfs [OPTION...]
>
> Help Options:
> -h, --help Show help options
>
> Application Options:
> -d, --debug Turn on debug messages
> -f, --foreground Do not daemonize server
> -l, --local Force local mode (ignore corosync.conf,
> force quorum)
"help" can be safely ignored, as can "foreground" - it would break
the service as the Type is forking and thus systemd would expect that
the starting process returns rather quickly and kill it after the
timeout thinking the start failed when this is set.
Then there is "debug", while this is quite safe to use I do not
expect to find it in the wild. 1) it *spams* the logs in such a heavy
manner that it's useless for a user 2) if wished it can be
enabled/disable on the fly by writing 1/0 to /etc/pve/.debug (which
is what we tell people in docs, et al. anyway) So this parameter is
also quite safe to ignore, IMO.
Then there is "local", now this can be enabled but is such evil
and wrong so that anybody setting it *permanently*, deserves to
be saved by not allowing him to do so. If somebody uses this we
should hope that he did out of not knowing better and is actually
thankful when he learns what this meant."
https://pve.proxmox.com/pipermail/pve-devel/2017-November/029527.html
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 20 Nov 2017 07:42:46 +0000 (08:42 +0100)]
fix #1566: do not setup ssh config in updatecerts call
pvecm updatecerts gets called on each pve-cluster.service start,
thus at least on each node boot and on each pve-cluster update.
updatecerts contained a call to setup_sshd_config, which ensured
that the sshd_config parameter 'PermitRootLogin' gets set to yes,
with the intent that this is needed for a working cluster.
But the now more common and more secure options 'prohibit-password'
and 'without-password' are also OK for a cluster to work properly.
This call was added by 6c0e95b3 without clear indication why; our
installer already enforces this setting, as do a cluster create and
a join to a cluster.
To allow a user to use the more secure setting, remove the call from
updatecerts again; then it only needs to be changed after cluster
create/add operations, on one node only.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
fix #1559: pmxcfs: add missing lock when dumping .rrd
Adding our standard mutex for protecting cfs_status from multiple
conflicting changes solves two things. First, it now protects
cfs_status as it was changed here and secondly, it protects the
global rrd_dump_buf and rrd_dump_last helper variables from
inconsistent access and a chance of a double free.
Fixes: #1559
Reported-by: Tobias Böhm <tb@robhost.de>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 10 Nov 2017 09:24:24 +0000 (10:24 +0100)]
cfs_lock: add missing trailing newline
When we do not instantly get the lock we print a respective message
to stderr. This also shows up in the task logs, and if it's the last
message before a 'Task OK' the UI gets confused and shows the task
as erroneous.
Keep the message, as it's good feedback for the user to see why an
op seems to do nothing; simply add a trailing newline.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We take the left-over timeout returned from alarm() and then
sleep for a second, so when continuing the alarm timeout we
need to subtract that second for consistency.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
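In Python, where signal.alarm() behaves like the Perl/C alarm(), the adjustment looks roughly like this (function name is made up):

```python
import signal
import time

def sleep_within_alarm():
    # Pause the pending alarm, sleep one second, then re-arm the alarm
    # with that second subtracted so the overall timeout stays the same.
    remaining = signal.alarm(0)  # cancel; returns seconds that were left
    time.sleep(1)
    if remaining:
        signal.alarm(max(remaining - 1, 1))
    return remaining
```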
cfs_lock: swap checks for specific errors with $got_lock
We checked if a specific error was set or, respectively, not set to
know if we got the lock or not.
The check of whether we may unlock again was negated and thus could
lead to problems in specific - rather unlikely - cases.
Instead, use the $got_lock variable added by the previous patch,
which only gets set when we really got the lock.
While refactoring for the new variable, set the $noerr parameter of
check_cfs_quorum() as we do not want to die here.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
cfs_lock: address race where alarm triggers with lock accquired
As mkdir can possibly hang forever we need to enforce a timeout on
it. But this was done in such a way that a small time window existed
where the lock could be acquired successfully while the alarm still
triggered, leaving around an unused lock for 120 seconds.
Wrap only the mkdir call itself in an alarm and save its result
directly in a $got_lock variable, this minimizes the window as far as
possible from the perl side.
This is also easier to track for humans reading the code and should
cope better with code changes, e.g., it does not break just because
an error message typo got corrected a few lines above.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
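A sketch of the narrowed window in Python (mkdir-based locking with an alarm-style timeout; names are illustrative, the actual code is Perl):

```python
import os
import signal

class _Timeout(Exception):
    pass

def try_lock_mkdir(path, timeout=10):
    # Enforce the timeout only around the mkdir call itself and record
    # success in got_lock right away, so the window in which the alarm
    # can fire although we already hold the lock is as small as possible.
    got_lock = False
    def on_alarm(signum, frame):
        raise _Timeout()
    old = signal.signal(signal.SIGALRM, on_alarm)
    signal.alarm(timeout)
    try:
        os.mkdir(path)   # may block, e.g. on a hanging cluster filesystem
        got_lock = True
    except (FileExistsError, _Timeout):
        pass
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, old)
    return got_lock
```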
cluster: improve error handling when reading files
When querying file contents via IPC we return undef if the
file does not exist, but also on any other error. This is
potentially problematic as the ipcc_send_rec() xs function
returns undef on actual errors as well, while setting $!
(errno).
It's better to die in cases other than ENOENT. Before this,
pvesr would assume an empty replication config and an empty
vm list if pmxcfs wasn't running, which could then clear out
the entire local replication state file.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
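The distinction boils down to the following pattern, sketched in Python (the real code checks $! after the ipcc_send_rec XS call; names here are illustrative):

```python
import errno

def get_config_file(path, ipcc_get):
    # "File does not exist" is a normal answer and maps to None; any
    # other IPC failure must propagate, otherwise a caller could mistake
    # a dead pmxcfs for an empty replication config or vm list.
    try:
        return ipcc_get(path)
    except OSError as err:
        if err.errno == errno.ENOENT:
            return None
        raise
```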
cluster: cfs_update: option to die rather than warn
It can be useful to know whether we actually have an empty
vm list or whether the last cfs_update call simply failed.
Previously this only warned.
This way we can avoid a nasty type of race condition. For
instance in pvesr where it's possible that the vm list query
fails while everything else worked (eg. if the pmxcfs was
just starting up, or died between the queries), in which
case it would assume there are no guests and the
purge-old-states step would clear out the entire local state
file.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Thomas Lamprecht [Thu, 21 Sep 2017 12:08:00 +0000 (14:08 +0200)]
cfs-func-plug: use RW lock for safe cached data access
fuse may spawn multiple threads if there are concurrent accesses.
Our virtual files, e.g. ".members", ".rrd", are registered over our
"func" cfs plug which is a bit special.
For each unique virtual file there exists a single cfs_plug_func_t
instance, shared between all threads.
As we directly operated on members of this structure without
locking, parallel accesses raced with each other.
This could result in quite visible problems like a crash after a
double free (Bug 1504) or in less noticeable effects where one thread
may read from an inconsistent, or already freed memory region.
Add a Reader/Writer lock to efficiently address this problem.
Other plugs implement more functions and use a mutex to ensure
consistency and thus do not have this problem.
Fixes: #1504
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
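The pmxcfs fix uses a pthread rwlock in C; the semantics can be sketched with a minimal readers-writer lock in Python (illustrative only - this simple variant can starve writers):

```python
import threading

class RWLock:
    # Minimal readers-writer lock: many concurrent readers, but writers
    # get exclusive access, mirroring the reader/writer lock added
    # around the shared cfs_plug_func_t data.
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```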
Thomas Lamprecht [Wed, 20 Sep 2017 13:11:02 +0000 (15:11 +0200)]
corosync config parser: move to hash format
The old parser itself was simple and easy, but resulted in quite a
bit of headache when changing corosync config sections, especially
if multiple section levels should be touched.
Move to a more practical internal format which represents the
corosync configuration in a hash.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
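A toy Python parser illustrating the hash-shaped result (the real parser is Perl; because section names like 'node' may repeat, child sections are kept here as an ordered list of (name, section) pairs rather than plain hash keys):

```python
def parse_section(lines):
    # lines is a shared iterator so nested sections consume from the
    # same stream; each section is {'values': {...}, 'children': [...]}.
    values, children = {}, []
    for raw in lines:
        line = raw.strip()
        if not line:
            continue
        if line == "}":
            break
        if line.endswith("{"):
            name = line[:-1].strip()
            children.append((name, parse_section(lines)))
        else:
            key, _, val = line.partition(":")
            values[key.strip()] = val.strip()
    return {"values": values, "children": children}

SAMPLE = """\
totem {
  version: 2
  cluster_name: demo
}
nodelist {
  node {
    ring0_addr: 10.0.0.1
  }
  node {
    ring0_addr: 10.0.0.2
  }
}
"""
root = parse_section(iter(SAMPLE.splitlines()))
```

With this shape, editing a section means mutating a nested dict and re-serializing, instead of splicing text lines.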
blowfish, 3des and arcfour are not enabled by default on the
server side anyway.
on most hardware, AES is about 3 times faster than Chacha20
because of hardware accelerated AES, hence the changed order
of preference compared to the default.