ensure problematic ha service is stopped during update
Add a postinst file which stops the ha services, if running, before
pve-cluster gets configured, and starts them again afterwards, if enabled.
Do this only if the version installed before the upgrade is <= 2.0-3
dpkg-query reports both Version and Config-Version. At this point
Version is already the newly unpacked version, so we need to check
both fields to catch all cases.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
datacenter.cfg write: retransform migration property to string
We use parse_property_string in the parser to make life easier for
code working with the migration format, but we did not transform it
back when writing datacenter.cfg.
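A minimal sketch of the writer-side counterpart, assuming a
$migration_format variable holding the same format description the
parser uses:

    # turn the parsed migration hash back into its property-string
    # form before the configuration gets serialized again
    if (ref($cfg->{migration}) eq 'HASH') {
        $cfg->{migration} = PVE::JSONSchema::print_property_string(
            $cfg->{migration}, $migration_format);
    }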
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This controlled whether we use reload-or-restart or try-reload-or-restart.
They differ in the following way:
> reload-or-restart - Reload one or more units if possible, otherwise
> start or restart
>
> try-reload-or-restart - Reload one or more units if possible,
> otherwise (re)start if active
Under PVE a running ssh daemon is normally needed for a node/cluster
to work; there is no case where it should stay stopped, especially
not for this method, which is normally called when setting up or
joining a cluster.
So always use 'reload-or-restart'.
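The call then becomes unconditional; roughly (service unit name
assumed):

    # a stopped ssh is never desired on a cluster node, so always
    # start it if needed instead of only reloading when active
    PVE::Tools::run_command(['systemctl', 'reload-or-restart', 'sshd']);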
Thomas Lamprecht [Thu, 23 Nov 2017 11:12:05 +0000 (12:12 +0100)]
pvecm: add/delete: local lock & avoid problems on first node addition
cfs_lock is per node, thus we had a possible node addition race if
the process was started on the same node (e.g. by a
script/ansible/...).
So always request a local lock first; if that is acquired, check how
many members currently reside in the cluster and then decide if we
can directly execute the code (single node cluster = no contenders)
or must also hold the cluster-wide lock.
One may think that there remains a race when adding a node to a
single node cluster, i.e., once the node is added it may itself
become a target for another joining node. But this cannot happen, as
we only tell the joining node that it can be added once we already
*have* added it locally.
Besides defending against a race when two users execute a node
addition on the same node at the same time, this also addresses an
issue where the cluster lock could not be removed after writing the
corosync conf: pmxcfs and corosync triggered a config reload and
added the new node, which itself did not yet know that it was
accepted into the cluster. Thus, the former single node cluster
expects two nodes but has only one for now, until the other node
pulled the config and authkey and started up its cluster stack.
That resulted in a failing removal of the corosync lock, so adding
another node did not work until this lock timed out (~2 minutes).
While node additions are often separated by intervals of more than 2
minutes, deployment helpers (or fast admins, for that matter) may
trigger this easily.
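A sketch of the resulting locking order; the local lock path and the
member query are illustrative assumptions, not the exact code:

    # serialize additions started on this node with a local lock first
    PVE::Tools::lock_file('/var/lock/pvecm.lock', 10, sub {
        my $nodelist = PVE::Cluster::get_nodelist(); # current members
        if (scalar(@$nodelist) > 1) {
            # others could contend: take the cluster-wide lock as well
            PVE::Cluster::cfs_lock_file('corosync.conf', 10, $code);
            die $@ if $@;
        } else {
            $code->(); # single node cluster: no contenders possible
        }
    });
    die $@ if $@;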
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 17 Nov 2017 14:04:21 +0000 (15:04 +0100)]
buildsys: autogen debug package and cleanup unnecessary rules
don't do manually what the deb helpers do automatically and better.
Autogenerate the debug package; it now includes only the debug
symbols, without effectively duplicating all executables and
libraries.
In the same step, add an install file which installs our sysctl
settings. This is done together as it allows skipping some
intermediate steps, and the change is not too big, so it should
still be possible to see what's going on.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 17 Nov 2017 14:02:21 +0000 (15:02 +0100)]
pve-cluster service: remove $DAEMON_OPTS and environment file
"We use it to pass $DAEMON_OPTS variable to pmxcfs, which is empty per
default.
pmxcfs accepts following options at the moment:
> Usage:
> pmxcfs [OPTION...]
>
> Help Options:
> -h, --help Show help options
>
> Application Options:
> -d, --debug Turn on debug messages
> -f, --foreground Do not daemonize server
> -l, --local Force local mode (ignore corosync.conf,
> force quorum)
"help" can be safely ignored, as can "foreground" - it would break
the service as the Type is forking and thus systemd would expect that
the starting process returns rather quickly and kill it after the
timeout thinking the start failed when this is set.
Then there is "debug", while this is quite safe to use I do not
expect to find it in the wild. 1) it *spams* the logs in such a heavy
manner that it's useless for a user 2) if wished it can be
enabled/disable on the fly by writing 1/0 to /etc/pve/.debug (which
is what we tell people in docs, et al. anyway) So this parameter is
also quite safe to ignore, IMO.
Then there is "local", now this can be enabled but is such evil
and wrong so that anybody setting it *permanently*, deserves to
be saved by not allowing him to do so. If somebody uses this we
should hope that he did out of not knowing better and is actually
thankful when he learns what this meant."
https://pve.proxmox.com/pipermail/pve-devel/2017-November/029527.html
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 20 Nov 2017 07:42:46 +0000 (08:42 +0100)]
fix #1566: do not setup ssh config in updatecerts call
pvecm updatecerts gets called on each pve-cluster.service start,
thus at least on each node boot and on each pve-cluster update.
updatecerts contained a call to setup_sshd_config, which ensured that
the sshd_config parameter 'PermitRootLogin' gets set to yes, with the
intent that this is needed for a working cluster.
But the now more common and secure options 'prohibit-password' and
'without-password' are also OK for a cluster to work properly.
This change was added by 6c0e95b3 without clear indication why; our
installer enforces this setting already, as does a cluster create and
a join to a cluster.
To allow a user to use the more secure settings, remove the call from
updatecerts again; this way the setting only needs to be changed
after cluster create/add operations, and on one node only.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
fix #1559: pmxcfs: add missing lock when dumping .rrd
Adding our standard mutex for protecting cfs_status from multiple
conflicting changes solves two things: first, it now protects
cfs_status as it gets changed here; second, it protects the global
rrd_dump_buf and rrd_dump_last helper variables from inconsistent
access and a possible double free.
Fixes: #1559
Reported-by: Tobias Böhm <tb@robhost.de>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 10 Nov 2017 09:24:24 +0000 (10:24 +0100)]
cfs_lock: add missing trailing newline
When we do not instantly get the lock we print a respective message
to stderr. This also shows up in the task logs, and if it's the last
message before a 'Task OK', the UI gets confused and shows the task
as erroneous.
Keep the message, as it's good feedback for the user to see why an
operation seems to do nothing; simply add a trailing newline.
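The fixed call is a one-liner; message wording assumed for
illustration:

    print STDERR "trying to acquire cfs lock '$lockid' ...\n";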
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We take the left-over timeout returned from alarm() and then
sleep for a second, so when continuing the alarm timeout we
need to subtract that second for consistency.
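As a sketch:

    my $left = alarm(0); # pause the outer alarm, keep its remaining time
    sleep(1);
    # continue with one second less, as we just spent it sleeping
    alarm($left - 1) if $left > 1;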
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
cfs_lock: swap checks for specific errors with $got_lock
We checked if a specific error was set or, respectively, not set to
know whether we got the lock.
The check whether we may unlock again was negated and thus could lead
to problems in specific, rather unlikely, cases.
Use the $got_lock variable added by the previous patch instead, which
only gets set when we really got the lock.
While refactoring for the new variable, set the $noerr parameter of
check_cfs_quorum(), as we do not want to die here.
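The unlock path then only acts if the lock is really ours, e.g.:

    # remove the lock directory only if we actually created it
    rmdir($filename) if $got_lock;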
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
cfs_lock: address race where alarm triggers with lock acquired
As mkdir can possibly hang forever we need to enforce a timeout on
it. But this was done in such a way that a small time window existed
where the lock could be acquired successfully while the alarm still
triggered, leaving behind an unused lock for 120 seconds.
Wrap only the mkdir call itself in an alarm and save its result
directly in a $got_lock variable; this minimizes the window as far as
possible from the perl side.
This is also easier to follow for humans reading the code and should
cope better with code changes, e.g., it does not break just because
an error message typo got corrected a few lines above.
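A sketch of the narrowed alarm scope, simplified from the actual
code:

    my $got_lock = 0;
    eval {
        local $SIG{ALRM} = sub { die "got lock request timeout\n"; };
        alarm($timeout);
        $got_lock = mkdir($filename); # the only call that may hang
        alarm(0);                     # disarm right after mkdir returns
    };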
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
cluster: improve error handling when reading files
When querying file contents via IPC we return undef if the
file does not exist, but also on any other error. This is
potentially problematic as the ipcc_send_rec() xs function
returns undef on actual errors as well, while setting $!
(errno).
It's better to die in cases other than ENOENT. Before this,
pvesr would assume an empty replication config and an empty
vm list if pmxcfs wasn't running, which could then clear out
the entire local replication state file.
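The resulting error handling, sketched with placeholder message id
and payload:

    use POSIX qw(ENOENT);

    my $res = PVE::IPCC::ipcc_send_rec($msgid, $bindata);
    if (!defined($res) && $! != 0) {
        return undef if $! == ENOENT; # missing file is still fine
        die "$path: $!\n"; # any other IPC error is fatal now
    }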
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
cluster: cfs_update: option to die rather than warn
It can be useful to know whether we actually have an empty
vm list or whether the last cfs_update call simply failed.
Previously this only warned.
This way we can avoid a nasty type of race condition. For
instance in pvesr where it's possible that the vm list query
fails while everything else worked (eg. if the pmxcfs was
just starting up, or died between the queries), in which
case it would assume there are no guests and the
purge-old-states step would clear out the entire local state
file.
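The shape of the change, as a sketch:

    sub cfs_update {
        my ($fail) = @_;
        eval {
            # ... query vmlist, cluster status, ... via IPC ...
        };
        if (my $err = $@) {
            $vmlist = {}; # do not keep possibly stale data around
            die $err if $fail; # callers like pvesr want a hard error
            warn $err; # old behavior: just warn and continue
        }
    }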
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Thomas Lamprecht [Thu, 21 Sep 2017 12:08:00 +0000 (14:08 +0200)]
cfs-func-plug: use RW lock for safe cached data access
fuse may spawn multiple threads if there are concurrent accesses.
Our virtual files, e.g. ".members", ".rrd", are registered over our
"func" cfs plug which is a bit special.
For each unique virtual file there exists a single cfs_plug_func_t
instance, shared between all threads.
As we operated directly, and unlocked, on members of this structure,
parallel accesses raced with each other.
This could result in quite visible problems like a crash after a
double free (Bug 1504), or in less noticeable effects where one
thread may read from an inconsistent or already freed memory region.
Add a Reader/Writer lock to efficiently address this problem.
Other plugs implement more functions and use a mutex to ensure
consistency and thus do not have this problem.
Fixes: #1504
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 20 Sep 2017 13:11:02 +0000 (15:11 +0200)]
corosync config parser: move to hash format
The old parser itself was simple and easy, but resulted in quite a
bit of headache when changing corosync config sections, especially if
multiple section levels should be touched.
Move to a more practical internal format which represents the
corosync configuration in a hash.
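Illustrative only, the rough shape of such a format (keys and values
made up for the example):

    my $conf = {
        main => {
            totem => {
                version => 2,
                cluster_name => 'example',
            },
            nodelist => {
                node => [ # sections that repeat become arrays
                    { name => 'node1', nodeid => 1, ring0_addr => '192.0.2.1' },
                    { name => 'node2', nodeid => 2, ring0_addr => '192.0.2.2' },
                ],
            },
        },
    };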
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
blowfish, 3des and arcfour are not enabled by default on the
server side anyway.
On most hardware, AES is about 3 times faster than ChaCha20
because of hardware-accelerated AES, hence the changed order
of preference compared to the default.
Thomas Lamprecht [Fri, 18 Aug 2017 09:21:18 +0000 (11:21 +0200)]
limit tasklist to the maximal pmxcfs status entry size
We tried to limit the size of the tasklist by including non-running
tasks only if we have fewer than 25 entries. A reason, among others,
was that a single status entry in the cfs_status.kvhash is limited to
32 KiB.
The "max. 25 entry" heuristic assumes that entries are small, which
is also the norm. But on failed tasks, e.g. a Qemu VM with a
problematic command line, is far longer than the usual task entry.
This led to a situation where the last 25 task were bigger than
32KiB, so the ipcc call to the pmxcfs failed with EFBIG.
This aborted then every new task run with fork_worker, and could
render a node partially unusable until "/var/log/pve/tasks/active"
got truncated.
You should see soon a "ipcc_send_rec failed: File too large"
After this all new task fail, even if they could succeed. pvestatd
also fails to broadcast the tasklist now. To get out of this do:
To address this, check the length of the serialized list and remove
elements from its end until it no longer exceeds the size limit.
Currently running tasks and chronologically newer ones get
prioritized.
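A sketch of the trimming loop; $max_entry_size stands in for the real
limit:

    use JSON;

    my $raw = encode_json($tasklist);
    # drop entries from the end (finished, chronologically older
    # tasks) until the list fits into one pmxcfs status entry
    while (length($raw) > $max_entry_size && scalar(@$tasklist)) {
        pop @$tasklist;
        $raw = encode_json($tasklist);
    }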
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Dominik Csapak [Mon, 7 Aug 2017 14:04:24 +0000 (16:04 +0200)]
fix #1472: fix rrd file path
upstream rrd-tools changed the syntax for the perl binding; we now
have to supply '-' as the path, despite what the documentation says
(it says to supply an empty path, which is what we did).
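Sketched with the graph call as an example; argument list elided:

    use RRDs;

    # '-' makes rrdtool return the image in memory under the 'image'
    # key instead of writing it to a file path
    my $res = RRDs::graphv('-', @graph_args);
    if (my $err = RRDs::error()) {
        die "rrd graph failed: $err\n";
    }
    my $png = $res->{image};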
Thomas Lamprecht [Wed, 12 Jul 2017 09:53:16 +0000 (11:53 +0200)]
ssh_merge_known_hosts: also add entry if current sshkey does not match
this ensures that our current valid SSH key gets added even if
another key for the same hostname already exists for some reason.
The code path which handles hashed host names has had this behavior
since the beginning, so let the new non-hashed code act the same way.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
ssh_merge_known_hosts: address auth failure problem
On node addition we create two entries in the cluster-wide known_host
file with our public host key, one with the pve-localhost bound IP
address and one with our nodename.
SSH always lowercases hostnames or their aliases before comparing
them to the known host entry. This is allowed as per RFC 1035,
Section "2.3.3 Character Case" [1].
No problems are caused by this if known_host entries are not hashed,
as both the original value and the now specified value can be
compared canonically in a case-insensitive manner.
But if a known_host entry is hashed, we have no access to its
original plain text value and thus cannot do a case-insensitive
comparison anymore. SSH therefore expects that the original value was
transformed to lowercase before hashing. We did not follow this
convention when we added node keys to the cluster's known_host file,
as we kept the case. This resulted in problems when a user set up
nodes with names containing uppercase letters. [2]
Instead of transforming everything to lowercase on hashing, let's
omit hashing known_host entries completely.
To explain why this can be done safely, without security
implications, we need to state the reason why hashing those entries
would gain some security in the first place: it wants to prevent
information leakage in the case a local account gets taken over by an
attacker. If not hashed, the attacker could use the known_host file
to see which other hosts the user connected to.
This could "just" leak information on what a user does, but could
also make it easier to attack the listed hosts too - e.g. if the user
had an unprotected SSH key which those hosts trust. As there are
other ways to get a list of hosts a user connected to
(.bash_history, monitoring outgoing traffic, ...), hashing known_host
entries itself provides just a small hurdle of obfuscation once an
account got taken over. And this is the case for a normal,
unprivileged user account.
In the case of PVE, hashing the used known_host file brings
absolutely *no* advantage. First, the affected known_host file is
located under /etc/pve/priv where only root has read access. Thus, an
attacker would need to take over root to get the known_hosts in the
first place - and if he did take over root, all hope is lost one way
or another.
Even if known_host was world readable, hashing would not do much.
As an attacker would know that the node IPs are among the entries, he
could use /etc/network/interfaces to get the subnet of interest and
just brute-force all entries until he got all node IPs; he would
normally only need to iterate through 8-16 bits in an IPv4 network.
This could be simplified even further by just port scanning the range
for an open port 8006 to find all PVE nodes in a subnet.
Further, /etc/hosts (world readable) often provides the information
which hashing known_hosts tries to hide, as does /etc/pve/.members
(readable by: www-data, root).
So, to summarize: while for an unprivileged user it may add a slight
defense against an information leak, it really doesn't for a PVE
system's root/cluster members - all the information it tries to hide
is accessible in various other ways.
Add new entries in plain text, and add checks whether entries are
already there for the plain-text case too. Further, use lowercase
comparison as openssh does.
If hashed entries are already present, still allow them, but ensure
that a lowercased version gets saved to avoid authentication
failures.
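The plain-text duplicate check then compares case-insensitively,
along these lines (variable names illustrative):

    foreach my $line (@known_hosts_lines) {
        my ($hosts, $key) = split(/\s+/, $line, 2);
        # openssh lowercases names before matching, so do the same
        $found = 1 if grep { lc($_) eq lc($nodename) } split(/,/, $hosts);
    }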
Thomas Lamprecht [Mon, 26 Jun 2017 12:10:57 +0000 (14:10 +0200)]
add simple corosync config parser self check
Each test reads and parses a config, "writes" it again and then
re-parses it. Then both the parsed hash structures and the raw
configs get compared.
This is cheap and should catch simple regressions in either the
parser or the writer, as currently we have no safety net ensuring
that modifications to either one don't cause regressions.
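One such round trip could look like this; the parse/write helper
names are assumptions:

    use Test::More;

    my $raw1  = PVE::Tools::file_get_contents($cfg_fn);
    my $conf1 = parse_conf_raw($raw1);  # assumed parser entry point
    my $raw2  = write_conf_raw($conf1); # assumed writer entry point
    my $conf2 = parse_conf_raw($raw2);

    is_deeply($conf1, $conf2, "parsed hash structures match");
    is(write_conf_raw($conf2), $raw2, "raw configs match");
    done_testing();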
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 13 Jun 2017 07:25:33 +0000 (09:25 +0200)]
factor out corosync methods to own module
PVE::Cluster is already quite big; the corosync part is ~250 lines of
1900 total. Further, the corosync part is only needed in a few
specialised places (API2/ClusterConfig and CLI/pvecm).
This speaks for factoring this part out into a separate perl module,
as most modules which use Cluster load the corosync parts for no
reason. Further, cluster handling through the API may add even more
corosync related methods.
Create a new Corosync perl module and move all relevant methods over.
Method names lost the 'corosync_' prefix; it is not really needed
anymore as they already live in the 'Corosync' namespace.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Cluster.pm: add get_ssh_info and ssh_info_to_command
To get a node's address info, optionally inside a specified network
(e.g. the migration_network), and a standardized way to create an SSH
command from this info.
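Usage might then look like this (exact signatures assumed):

    # address info for $node, preferring the given migration network
    my $info = PVE::Cluster::get_ssh_info($node, $migration_network);
    # array ref holding the ssh executable plus our standard options
    my $cmd = PVE::Cluster::ssh_info_to_command($info);
    push @$cmd, 'hostname'; # the remote command to execute
    PVE::Tools::run_command($cmd);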
pvecm add: fix #1369 - re-allow using hostnames for ringX_addr
If a user passed a hostname as ring0_addr or ring1_addr, the check_ip
check failed, as it implicitly assumed IPs even though we allow a
general address (i.e. IP or hostname) as the format for those
properties.
remote_node_ip: use same return signature for both branches
We have two return statements in the remote_node_ip submethod: one
checks if we are in list context and adapts the returned values
accordingly, while the other just returned a list, independent of
the context.
Adapt the second one and check the context there as well.
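The adapted branch boils down to honoring the caller's context:

    # both return paths now check the context
    return wantarray ? ($ip, $family) : $ip;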
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>