Daniel Berteaud [Wed, 18 Sep 2019 15:15:46 +0000 (17:15 +0200)]
Parse volname where needed
The common ZFSPlugin was missing volume name parsing
in a few places. This was not a problem for standard
volumes, but broke functionality (like resize,
snapshot, rollback) with linked clones, as the name of
the zvol must be extracted from the entry in the config
(removing the base-X-disk-Y prefix)
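For illustration, a minimal Python sketch of the kind of parsing involved (the plugin itself is Perl; parse_zvol_name and the exact volname layout here are assumptions for the example):

    import re

    def parse_zvol_name(volname):
        # linked clones are stored as 'base-<id>-disk-<n>/vm-<id>-disk-<n>';
        # only the part after the slash names the actual zvol
        m = re.match(r'^base-\d+-disk-\d+/(.+)$', volname)
        return m.group(1) if m else volname

    assert parse_zvol_name('base-100-disk-0/vm-101-disk-0') == 'vm-101-disk-0'
    assert parse_zvol_name('vm-100-disk-0') == 'vm-100-disk-0'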
Signed-off-by: Daniel Berteaud <daniel@firewall-services.com>
Tim Marx [Fri, 20 Sep 2019 09:12:46 +0000 (11:12 +0200)]
change file_size_info sub to use qemu-img info json decoding
Using the json output, as suggested by Thomas, we now die if the decoding
fails and, if not, all return values are set to the corresponding decoded
values. That should prevent any unforeseen null size values, except when
qemu-img info itself reports one, which we then consider valid.
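A rough Python sketch of the approach (the actual sub is Perl; the returned keys are illustrative):

    import json
    import subprocess

    def file_size_info(path, timeout=10):
        out = subprocess.run(
            ['qemu-img', 'info', '--output=json', path],
            capture_output=True, check=True, timeout=timeout,
        ).stdout
        info = json.loads(out)  # raises ValueError if decoding fails -> 'die'
        return {
            'size': info['virtual-size'],     # bytes
            'used': info.get('actual-size'),  # may be absent for some formats
            'format': info['format'],
        }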
The patch uses the value from the field 'stored' if it is available.
In Ceph 14.2.2 the storage calculation changed to a per-pool basis. This
introduced an additional field 'stored' that holds the amount of data
that has been written to the pool, while the field 'used' now holds the
data after replication for the pool.
The new calculation will be used only if all OSDs are running with the
on-disk format introduced by Ceph 14.2.2.
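Schematically, the selection logic could look like this (a sketch assuming pool stats decoded from `ceph df --format json`; field names as reported by recent Ceph releases):

    def pool_used_bytes(pool_stats):
        # prefer the pre-replication 'stored' value (Ceph >= 14.2.2),
        # fall back to the post-replication value on older clusters
        if 'stored' in pool_stats:
            return pool_stats['stored']
        return pool_stats['bytes_used']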
and actually do that not just when creating zvols, but also when
activating them. this should fix a range of issues/races that sometimes
occurred on bootup, snapshot rollback or similar operations.
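A hypothetical Python sketch of such a wait-for-device step (the helper name, the udev nudge and the timeout are all assumptions for the example):

    import os, subprocess, time

    def wait_for_zvol(zvol, timeout=10):
        dev = '/dev/zvol/' + zvol
        # nudge udev in case the device node has not been created yet
        subprocess.run(['udevadm', 'trigger', '--subsystem-match=block'])
        subprocess.run(['udevadm', 'settle', '--timeout=%d' % timeout])
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(dev):
                return
            time.sleep(0.1)
        raise TimeoutError('timeout waiting for device ' + dev)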
Previously, the web GUI timed out when removing content (e.g. backup) took
too long. Doing the main part of the API DELETE call in a fork_worker solves
this.
since list_volumes is only supposed to be called with filtered content
types, this should ensure that get_subdir is only called for plugins
that have a defined 'path' property, like the old code in
PVE::Storage::template_list did.
checking '$server:$subdir' is too strict to work in all circumstances,
e.g. adding/removing a monitor would mean that it is not the same
anymore; the same goes for adding/removing the ports from the config.
check only if the subdir is the same and if it is a cephfs; this way,
it still returns true if someone changes the config
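As a sketch, the relaxed check could be done by inspecting /proc/mounts (Python for illustration; the real code is Perl):

    def cephfs_is_mounted(mountpoint, subdir):
        with open('/proc/mounts') as f:
            for line in f:
                source, mnt, fstype = line.split()[:3]
                if mnt != mountpoint or fstype != 'ceph':
                    continue
                # source looks like 'mon1,mon2:/subdir'; compare only the
                # part after the colon so monitor/port changes don't matter
                return source.split(':', 1)[-1] == subdir
        return False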
Dominik Csapak [Thu, 27 Jun 2019 08:43:10 +0000 (10:43 +0200)]
CephConfig: do not always interpret '; ' as a comment
; is the beginning of a comment, but in some configuration settings
it is also valid syntax, e.g. for mon_host it is a valid
separator for hosts (sigh ...)
only treat a line as a comment when it starts with a ';'
since we remove all comments anyway any time we write the ceph conf,
it should not really matter for our users
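In sketch form (Python for illustration):

    def strip_comment(line):
        # treat ';' (or '#') as a comment only at the start of the line;
        # values like mon_host may contain ';' as a legitimate separator
        if line.lstrip().startswith((';', '#')):
            return ''
        return line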
Thomas Lamprecht [Mon, 17 Jun 2019 09:51:27 +0000 (11:51 +0200)]
followup: free_image: just check if a file in general exists
don't require any specific file type: if something is here which can
be requested for deletion over API/CLI, it either comes from an API/CLI
operation and should thus be OK to delete, or a user caused the
creation of the special file, in which case it either works and all is
good, or the user gets notified, as we check whether unlink succeeded anyway
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Alwin Antreich [Wed, 12 Jun 2019 10:32:05 +0000 (12:32 +0200)]
cephfs: Exclude _netdev when mounting with fuse
Since ceph-fuse is called directly in the CephFS storage plugin, and it
cannot process the _netdev option, mounting the CephFS storage fails
when fuse is set in the storage.cfg.
This patch moves the _netdev option into the else branch of the 'is
fuse set' check, so _netdev is only added when the CephFS kernel client
mounts the storage.
It seems _netdev is not needed for the fuse mount anyway, as the
connection is closed once the fuse process gets killed on shutdown.
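A simplified sketch of the resulting option handling (auth options omitted; Python for illustration):

    def build_mount_cmd(fuse, mon_host, subdir, mountpoint):
        if fuse:
            # ceph-fuse cannot process _netdev, so don't pass it here
            return ['ceph-fuse', '-m', mon_host, '-r', subdir, mountpoint]
        # the kernel client gets _netdev as before
        return ['mount', '-t', 'ceph', mon_host + ':' + subdir,
                mountpoint, '-o', '_netdev']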
Stefan Reiter [Thu, 13 Jun 2019 09:33:28 +0000 (11:33 +0200)]
fix #1427: Set file mode on uploaded templates/ISOs
simply chmod the temp file to the "correct" permission mode before
copying, so that all users with access to the directory can read the
file, mirroring the behavior one gets from an apl_download call.
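Roughly (Python for illustration; 0644 is the assumed "correct" mode):

    import os, shutil

    def install_upload(tmp_path, dest_path):
        os.chmod(tmp_path, 0o644)   # world-readable, owner-writable
        shutil.copy(tmp_path, dest_path)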
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
as already announced over two months ago[0], remove the unofficial
SheepDog plugin completely. Besides never being fully supported in
Proxmox VE, one of its main developers and an ex-maintainer declared it
abandoned[1], so let's just remove it; git allows us to resurrect it at
any time if a miracle happens anyway.
Dominik Csapak [Tue, 4 Jun 2019 10:35:23 +0000 (12:35 +0200)]
Diskmanage: add append_partition sub
we will use this for adding a partition to a disk when using a device
for a ceph osd db/wal which already has partitions on it
first we search for the highest partition number, then add the partition
and search for the resulting device (we cannot simply append the
number, e.g. from /dev/nvme0n1 we get /dev/nvme0n1pX)
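A sketch of the procedure (Python for illustration; the real code searches for the resulting device instead of constructing the name, and the lsblk/sgdisk invocations are assumptions for the example):

    import os, subprocess

    def append_partition(dev, size_mb):
        base = os.path.basename(dev)
        names = subprocess.run(['lsblk', '-ln', '-o', 'NAME', dev],
                               capture_output=True, text=True,
                               check=True).stdout.split()
        # find the highest existing partition number
        nums = [int(n[len(base):].lstrip('p')) for n in names
                if n != base and n[len(base):].lstrip('p').isdigit()]
        part = max(nums, default=0) + 1
        subprocess.run(['sgdisk', '-n', '%d:0:+%dM' % (part, size_mb), dev],
                       check=True)
        # /dev/sdb -> /dev/sdb3, but /dev/nvme0n1 -> /dev/nvme0n1p3
        sep = 'p' if dev[-1].isdigit() else ''
        return dev + sep + str(part)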
Dominik Csapak [Wed, 29 May 2019 13:48:06 +0000 (15:48 +0200)]
Diskmanage: detect osds/journals/etc. created with ceph-volume
ceph-volume creates osds/journal/etc. on LVM instead of partitions,
so to detect them, we have to parse the lv_tags of the LVs and
match them with the underlying device
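Schematically (Python for illustration; the tag names are those set by ceph-volume):

    import subprocess

    def find_ceph_lvs():
        out = subprocess.run(
            ['lvs', '--noheadings', '--separator', ';',
             '-o', 'lv_name,vg_name,devices,lv_tags'],
            capture_output=True, text=True, check=True).stdout
        osds = []
        for line in out.splitlines():
            lv, vg, devices, tags = [f.strip() for f in line.split(';', 3)]
            tagmap = dict(t.split('=', 1) for t in tags.split(',') if '=' in t)
            if 'ceph.osd_id' in tagmap:
                osds.append({
                    'osd_id': tagmap['ceph.osd_id'],
                    'lv': vg + '/' + lv,
                    # 'devices' looks like '/dev/sdb(0)'; keep the device name
                    'device': devices.split('(', 1)[0],
                })
        return osds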
Dominik Csapak [Wed, 29 May 2019 13:48:05 +0000 (15:48 +0200)]
allow disk with LVM for journal devices
With this, users can select disks with LVM on them for journal/db devices,
which will be necessary for Ceph Nautilus, since journals/db/wal will
then be put on an LV.
of course, when creating an osd, we have to detect whether that is ok
(probably based on the vg name on it)
Mira Limbeck [Mon, 29 Apr 2019 13:00:38 +0000 (15:00 +0200)]
map_volume: fall back to 'path'
Adds a fallback to 'Plugin::path' in the default implementation of
'map_volume' to simplify the common case of calling 'map_volume',
followed by a defined-check and a call to 'path' if the result is
undefined. The path is now always returned if the plugin in question
does not override 'map_volume'.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Thomas Lamprecht [Wed, 17 Apr 2019 15:00:20 +0000 (15:00 +0000)]
zpool: handle race with other zpool imports
The underlying issue is that a zpool can get imported only once, so
we first check if it's in `zpool list`, and thus imported, and only
if it does not show up there do we try to import it.
But, this can race with either:
* a parallel running activate_storage call, through CLI/API/daemon
* a zpool import from an admin (a bit unlikely, but hey, that's the
thing with race conditions ;))
So refactor the "is pool imported" check into a closure, and call it
additionally if the import failed, silencing the error if the pool
is now listed, and thus imported. This makes it a little bit nicer to
read too, IMO.
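The retry logic, sketched in Python (the '-d /dev/disk/by-id/' search path is an assumption):

    import subprocess

    def activate_zpool(pool):
        def imported():
            return subprocess.run(['zpool', 'list', pool],
                                  capture_output=True).returncode == 0

        if imported():
            return
        res = subprocess.run(
            ['zpool', 'import', '-d', '/dev/disk/by-id/', pool],
            capture_output=True, text=True)
        # the import may lose a race against a parallel activation;
        # if the pool shows up now, silence the error
        if res.returncode != 0 and not imported():
            raise RuntimeError('could not import zpool %s: %s'
                               % (pool, res.stderr))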
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fix #318: Delete vzdump log when deleting a backup
Vzdump log files were not deleted when a backup was deleted.
Consequently, the folder continuously filled with .log files.
Now they get deleted after the backup is removed.
Since zfsutils are not a hard dependency of our stack it is possible to not have
`zpool` available.
Checking for the existence of `zpool` before calling it suppresses spurious
warnings in the logs (e.g. when creating Ceph OSDs or accessing the 'Disk'
tab in the GUI).
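For example (Python for illustration):

    import shutil, subprocess

    def list_zpools():
        # zfsutils is not a hard dependency, so `zpool` may be absent
        if shutil.which('zpool') is None:
            return ''
        return subprocess.run(['zpool', 'list', '-H'],
                              capture_output=True, text=True).stdout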
1) checking that the empty storage list is treated correctly (only override
and datacenter config limit considered)
2) checking that the empty storage list is treated correctly (as with 1).
3) checking that undef can be passed as one element of the storage list (it is
ignored)
If one of the storages passed in $storage_list was not defined,
get_bandwidth_limit died (see [0] for an occurrence of this).
This patch changes the behavior to ignore undef as a storage instead.
zfs: don't generate/update cachefile on pool import
during storage activation.
for pools that don't get imported at boot (e.g. because their vdevs are
not available when zfs-import-*.service runs) it is fatal to include
them in the cachefile, for those that do get imported at boot this code
should never run anyway as they are already imported.
in any case, a fallback to import without cachefile is the safe variant.
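Sketch of such an import (Python for illustration; the device search path is an assumption):

    import subprocess

    def import_without_cachefile(pool):
        # 'cachefile=none' keeps the pool out of /etc/zfs/zpool.cache,
        # so an activation-time import is not recorded for the next boot
        subprocess.run(['zpool', 'import', '-d', '/dev/disk/by-id/',
                        '-o', 'cachefile=none', pool], check=True)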
Mira Limbeck [Fri, 29 Mar 2019 09:13:10 +0000 (10:13 +0100)]
workaround zfs create -V error for unaligned sizes
fixes the 'cannot create 'nvme/foo': volume size must be a multiple of
volume block size' error by always rounding the size up to the next 1M
boundary. this is a workaround until
https://github.com/zfsonlinux/zfs/issues/8541 is solved.
the current manpage says 128k is the maximum blocksize, but a local test
showed that values up to 1M are allowed. it might be possible to
increase it even further (see f1512ee61).
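The rounding itself is simple integer arithmetic, e.g. with the size in KiB:

    def round_up_to_1m(size_kb):
        # round up to the next 1 MiB (1024 KiB) boundary
        return ((size_kb + 1023) // 1024) * 1024

    assert round_up_to_1m(1)    == 1024
    assert round_up_to_1m(1024) == 1024
    assert round_up_to_1m(1025) == 2048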
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Christian Ebner [Mon, 4 Mar 2019 15:47:06 +0000 (16:47 +0100)]
fix #585: remove leftover disks/directory after VM creation failed
When trying to create a qcow2 disk image with a size larger than what is
available on the storage, creation will fail.
As qemu-img does not clean up the partial disk image afterwards, it needs
to be deleted explicitly. Further, the vmid folder is cleaned up once it is
empty.
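Schematically (Python for illustration; the real code is Perl):

    import os, subprocess

    def create_qcow2(path, size):
        try:
            subprocess.run(['qemu-img', 'create', '-f', 'qcow2', path, size],
                           check=True)
        except subprocess.CalledProcessError:
            # qemu-img may leave a partial image behind on failure
            if os.path.exists(path):
                os.unlink(path)
            vmid_dir = os.path.dirname(path)
            if not os.listdir(vmid_dir):   # remove the vmid folder if empty
                os.rmdir(vmid_dir)
            raise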
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>