git.proxmox.com Git - pve-storage.git/log
Fabian Ebner [Wed, 10 Feb 2021 10:18:42 +0000 (11:18 +0100)]
Diskmanage: replace check for zpool binary with a function and mock it

so the test still works when it's not installed.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Thomas Lamprecht [Fri, 19 Feb 2021 14:21:16 +0000 (15:21 +0100)]
zpool: activate: move mount check out and make program flow easier

Early return when the mounted heuristic returns true; that allows
getting rid of an indentation level.

Moving the heuristic out makes the activate method smaller and easier
to grasp.

Best viewed with ignoring whitespace changes (`git show -w`).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 19 Feb 2021 14:06:20 +0000 (15:06 +0100)]
zpool: activate: don't eval procfs read, if it fails it should be fatal

Highly unlikely to fail in our setups; the most realistic case is
procfs not being mounted at /proc, which breaks much else anyway and
is a requirement.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 19 Feb 2021 14:04:53 +0000 (15:04 +0100)]
zpool: activate: drop intermediate state variable, return directly

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 19 Feb 2021 14:01:36 +0000 (15:01 +0100)]
zpool: avoid wrong mount-decoding of dataset

this was mistakenly done because the procfs code uses it, and it was
assumed we need to decode this too to get both into the same
encoding-space and thus a correct comparison.

But only procfs has that encoding; we don't have it for pool values
in the storage config, so we must not decode that value, as doing so
could potentially break things.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Stoiko Ivanov [Fri, 19 Feb 2021 12:45:44 +0000 (13:45 +0100)]
zfspoolplugin: check if imported before importing

This commit is a small performance optimization of the previous one:
`zpool list` is cheaper than `zpool import -d /dev..` (the latter
unconditionally scans the disks in the provided directory for ZFS
signatures).

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Stoiko Ivanov [Fri, 19 Feb 2021 12:45:43 +0000 (13:45 +0100)]
zfspoolplugin: check if mounted instead of imported

This patch addresses an issue we recently saw on a production machine:
* after booting a ZFS pool failed to get imported (due to an empty
  /etc/zfs/zpool.cache)
* pvestatd/guest-startall eventually tried to import the pool
* the pool was imported, yet the datasets of the pool remained
  not mounted

A bit of debugging showed that `zpool import <poolname>` is not
atomic, in fact it does fork+exec `mount` with appropriate parameters.
If an import ran longer than the hardcoded timeout of 15s, it could
happen that the pool got imported, but the zpool command (and its
forks) got terminated due to timing out.

Reproducing this is straightforward by setting (drastic) bw+iops
limits on a guest's disk (which contains a zpool) - e.g.:
`qm set 100 -scsi1 wd:vm-100-disk-1,iops_rd=10,iops_rd_max=20,\
iops_wr=15,iops_wr_max=20,mbps_rd=10,mbps_rd_max=15,mbps_wr=10,\
mbps_wr_max=15`
afterwards, running `timeout 15 zpool import <poolname>` resulted in
that situation in the guest on my machine.

The patch changes the check in activate_storage for the ZFSPoolPlugin,
to check if any dataset below the 'pool' (which can also be a sub-dataset)
is mounted by parsing /proc/mounts:
* this is cheaper than running `zfs get` or `zpool list`
* it catches a properly imported and mounted pool in case the
  root-dataset has 'canmount' set to off (or noauto), as long
  as any dataset below is mounted
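The check described above can be sketched in Python (an illustration only - the actual implementation is Perl inside the ZFSPoolPlugin, and the function name and interface here are made up):

```python
def pool_is_mounted(pool, mounts_text):
    """Return True if any dataset at or below 'pool' is mounted.

    'mounts_text' is the contents of /proc/mounts; 'pool' may itself
    be a sub-dataset, e.g. 'rpool/data'.
    """
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts fields: source mountpoint fstype options dump pass
        if len(fields) < 3 or fields[2] != 'zfs':
            continue
        source = fields[0]  # the dataset backing the mount
        if source == pool or source.startswith(pool + '/'):
            return True
    return False
```

Note that /proc/mounts octal-escapes special characters in the source field, which real code has to account for.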

After trying to import the pool, we also run `zfs mount -a` (in case
another check of /proc/mounts fails).

Potential for regression:
* running `zfs mount -a` is problematic, if a dataset is manually
  umounted after booting (without setting 'canmount')
* a pool without any mounted dataset (no mountpoint property set and
  only zvols) - will result in repeated calls to `zfs mount -a`

both of the above seem unlikely and should not occur, if using our
tooling.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Stoiko Ivanov [Fri, 19 Feb 2021 12:45:42 +0000 (13:45 +0100)]
zfspoolplugin: activate_storage: minor cleanup

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Thomas Lamprecht [Tue, 9 Feb 2021 11:14:00 +0000 (12:14 +0100)]
bump version to 6.3-6

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Thu, 4 Feb 2021 08:04:26 +0000 (09:04 +0100)]
NFS: avoid using obsolete rpcinfo option

as suggested in the man page.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Tue, 9 Feb 2021 08:04:32 +0000 (09:04 +0100)]
Diskmanage: also set type for partitions

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Fri, 15 Jan 2021 10:58:05 +0000 (11:58 +0100)]
remove lock from is_base_and_used check

and squash the __no_lock-variant into it.

This lock is not broad enough, because for a caller that plans to do or not do
some storage operation based on the result of the check, the following could
happen:
1. volume_is_base_and_used is called and the result is used to enter a branch
2. situation on the storage changes in the meantime
3. the branch chosen in 1. might not be the one that should be taken anymore

This means that callers are responsible for locking, and luckily the existing
callers do use their own locks already:
1. vdisk_free used the __no_lock-variant with a broader lock also covering
   the free operation.
2. vdisk_clone is not a caller, but is relevant, and it does lock the storage.
3. the calls during VM migration and VM destruction happen in the context of a
   locked VM config. Because the clone operation also locks the VM config, it
   cannot happen that a linked clone is created while the template VM is
   migrated away or destroyed, or vice versa. And even if that were the case,
   the base disk would not be freed, because of what vdisk_free/vdisk_clone do.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:27 +0000 (12:45 +0100)]
Diskmanage: also include partitions with get_disks if flag is set

and have a parent key for partitions, to be able to see the associated disk in
the result without having to rely on naming heuristics (just adding a number at
the end doesn't work for NVMes).

The disk's usage will not be based on the partitions' usage if the flag is set,
but will simply be 'partitions'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:26 +0000 (12:45 +0100)]
Diskmanage: save OSD information for individual partitions

in preparation for including partitions in get_disks()

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:25 +0000 (12:45 +0100)]
Diskmanage: introduce ceph info helper

so it can be re-used for partitions.

Also changes the regular expression in get_ceph_volume_info to match the full
device/partition name the LV is on. Not only is this needed for partitions,
especially if there are multiple partitions with an OSD, but it also fixes
handling of NVMe devices with an OSD as a side effect. Previously those were
not detected here, because of the digits in the name, e.g. /dev/nvme0n1.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:24 +0000 (12:45 +0100)]
Diskmanage: also detect BIOS boot, EFI and ZFS reserved type partitions

as they are relevant to most PVE setups.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:23 +0000 (12:45 +0100)]
Diskmanage: introduce usage helper

Note that this is a slight behavior change, because now the first
partition's usage which is not simply 'partition' will become the disk's
usage. Previously, if any partition was 'mounted', it would become the disk's
usage, then 'LVM', 'ZFS', etc.

A partition's usage defaults to 'partition' if nothing more specific can be
found, and is never treated as unused for now.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:22 +0000 (12:45 +0100)]
Diskmanage: collect partitions in hash

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:21 +0000 (12:45 +0100)]
Diskmanage: introduce get_sysdir_size helper

to be used for partitions as well.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:20 +0000 (12:45 +0100)]
Diskmanage: also check for filesystem type when determining usage

Like this, a non-ZFS filesystem living on a whole disk will also be detected
when it is not mounted.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:19 +0000 (12:45 +0100)]
Diskmanage: refactor and rename get_parttype_info

in preparation for also querying the file system type from lsblk. Note that the
result now also includes devices without a parttype, so a definedness check in
get_devices_by_partuuid is needed. This will be useful when the whole device
contains a filesystem.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:18 +0000 (12:45 +0100)]
Diskmanage: replace closure with direct hash access

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Tue, 26 Jan 2021 11:45:17 +0000 (12:45 +0100)]
Disks: return correct journal disk candidates

Previously any GPT initialized disk without an osdid (i.e. equal to -1) would
be included in the list of journal disk candidates, for example a ZFS disk. But
the OSD creation API call will fail for those. To fix it, re-use the condition
from the corresponding check in that API call (in PVE/API2/Ceph/OSD.pm).
Now, included disks are unused disks, those with usage 'partitions' and GPT, and
those with usage 'LVM'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Dominik Csapak [Wed, 2 Dec 2020 09:21:05 +0000 (10:21 +0100)]
api: storage/config: use extract_sensitive_params from tools

we have a more general version there

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Fabian Ebner [Wed, 27 Jan 2021 12:57:36 +0000 (13:57 +0100)]
mark PBS storages as shared

Like this, the property will get added when parsing the storage configuration
and PBS storages will correctly show up as shared storages in API results.

AFAICT the only affected PBS operation is free_image via vdisk_free, which will
now be protected by a cluster-wide lock, and that shouldn't hurt.

Another issue this fixes, which is the reason this patch exists, was reported
in the forum[0]. The free space from PBS storages was counted once for each node
that had access to the storage.

[0]: https://forum.proxmox.com/threads/pve-6-3-the-storage-size-was-displayed-incorrectly.83136/

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Dominic Jäger [Wed, 13 Jan 2021 12:19:54 +0000 (13:19 +0100)]
lvm: Fix #3159: Show RAID LVs as storage content

LVM RAID logical volumes (including mirrors) can be valid disk images, so they
should show up in storage content listings (for example pvesm list).

Including LV types is safer than excluding, especially because of possible
additional types in the future.

Co-developed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
Alwin Antreich [Wed, 16 Dec 2020 11:59:04 +0000 (12:59 +0100)]
fix: check connection for nfs v4 only server

The check_connection is done by querying the exports of the NFS server
in question. With NFS v4 those exports aren't listed anymore, since NFS
v4 employs a pseudo-filesystem starting from root (/).

rpcinfo allows querying for the existence of an NFS v4 service.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Thomas Lamprecht [Tue, 26 Jan 2021 17:37:38 +0000 (18:37 +0100)]
bump version to 6.3-5

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Dominik Csapak [Thu, 21 Jan 2021 15:47:59 +0000 (16:47 +0100)]
add workaround for zfs rollback bug

As described in the ZFS bug https://github.com/openzfs/zfs/issues/10931,
the kernel keeps around cached data from mmaps after a rollback, leaving
invalid data in files that were allegedly rolled back.

To work around this (until a real fix comes along), we unmount the subvol,
invalidating the kernel cache anyway.

The dataset gets mounted again on the next 'activate_volume'.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Fabian Grünbichler [Mon, 25 Jan 2021 08:10:38 +0000 (09:10 +0100)]
drop absolute udevadm path

the compat symlink from bin to sbin has been dropped with bullseye, and
we rely on PATH being set properly in our daemons/CLI tools anyway.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Thomas Lamprecht [Tue, 26 Jan 2021 13:04:15 +0000 (14:04 +0100)]
debian/control: use https in homepage link

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Dominik Csapak [Fri, 11 Dec 2020 13:52:41 +0000 (14:52 +0100)]
Diskmanage: extend wearout detection for SAS disk

for some controllers/disks the line is
Percentage used endurance indicator: x%

so extend the regex for that possibility.
We even had a test-case for SAS but did not notice we could extract
that info from there...

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Dominik Csapak [Fri, 11 Dec 2020 13:52:40 +0000 (14:52 +0100)]
tests: add ssd sas disk

copied from the test 'sas' with rotational set to 0;
this then has the type 'ssd', rpm: 0, and health: 'OK'

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Fabian Ebner [Tue, 15 Dec 2020 10:59:29 +0000 (11:59 +0100)]
fix #3199: by fixing usage of strftime

In a very early version I wanted to parse the date from the backup
name, and when switching to using the ctime and localtime() instead,
I forgot to update the usage of strftime.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 15 Dec 2020 11:58:20 +0000 (12:58 +0100)]
drbd: comment that the builtin plugin is deprecated

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 14 Dec 2020 20:58:24 +0000 (21:58 +0100)]
d/control: update

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 14 Dec 2020 20:53:13 +0000 (21:53 +0100)]
d/compat: update to 10

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 14 Dec 2020 15:15:07 +0000 (16:15 +0100)]
bump version to 6.3-4

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Mon, 14 Dec 2020 15:03:17 +0000 (15:03 +0000)]
prune mark: correctly keep track of already included backups

This needs to happen in a separate loop, because some time intervals are not
subsets of others, e.g. weeks and months. Previously, with a daily backup
schedule, having:
* a backup on Sun, 06 Dec 2020 kept by keep-daily
* a backup on Sun, 29 Nov 2020 kept by keep-weekly
would lead to the backup on Mon, 30 Nov 2020 being selected for keep-monthly,
because the iteration had not yet reached the backup on Sun, 29 Nov 2020 that
would mark November as being covered.
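The corrected logic can be sketched in Python (an illustration of the idea, not the actual Perl code; all names here are made up): for each keep-option, a separate first loop records which periods are already covered by previously kept backups, before any new ones are selected.

```python
from datetime import date

def mark_kept(backups, keep, period_of, kept):
    """Keep up to 'keep' of the newest backups, one per not-yet-covered period.

    The separate first loop is the point of the fix: it records periods
    already covered by backups kept for *other* criteria before selecting.
    """
    covered = {period_of(d) for d in kept}
    selected = 0
    for d in backups:  # newest first
        if selected >= keep:
            break
        period = period_of(d)
        if period not in covered:
            covered.add(period)
            kept.add(d)
            selected += 1

day = lambda d: d.toordinal()
week = lambda d: d.isocalendar()[:2]  # ISO (year, week)
month = lambda d: (d.year, d.month)

# the example from the message above, newest first:
backups = [date(2020, 12, 6), date(2020, 11, 30), date(2020, 11, 29)]
kept = set()
mark_kept(backups, 1, day, kept)    # keep-daily:  Sun, 06 Dec
mark_kept(backups, 1, week, kept)   # keep-weekly: Sun, 29 Nov (06 Dec covers its ISO week)
mark_kept(backups, 1, month, kept)  # keep-monthly: Nov and Dec already covered
```

Without the `covered` pre-pass, the monthly iteration would pick Mon, 30 Nov before ever seeing the weekly-kept backup on Sun, 29 Nov.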

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Thomas Lamprecht [Mon, 7 Dec 2020 15:13:04 +0000 (16:13 +0100)]
nfs and cifs: implement backup notes helper

reuse the one from DirPlugin by directing the call to it, but with
the actual $class. This should stay stable, as we provide an ABI and
try to always use $class->helpers.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 7 Dec 2020 15:10:07 +0000 (16:10 +0100)]
api: content/backup: handle deletion of notes

Previous to this, we did not call the plugin's update_volume_notes at
all in the case where a user deleted the textarea, which results in
passing a falsy value ('').

Also adapt the currently sole implementation to delete the notes field
in the undef or '' value case. This can be done safely, as we default
to returning an empty string if no notes file exists.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 7 Dec 2020 15:07:36 +0000 (16:07 +0100)]
dir plugin: code cleanup

mostly re-ordering to improve statement grouping and avoiding the
need for an intermediate variable

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 3 Dec 2020 16:24:59 +0000 (17:24 +0100)]
bump version to 6.3-3

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 3 Dec 2020 15:55:23 +0000 (16:55 +0100)]
d/control: bump versioned dependency for libpve-common-perl

so the new get_repository helper for PBS is available

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Dominik Csapak [Thu, 3 Dec 2020 11:43:39 +0000 (12:43 +0100)]
PBSPlugin: use get_repository from PVE::PBSClient

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Wolfgang Bumiller [Thu, 3 Dec 2020 13:03:43 +0000 (14:03 +0100)]
pbs: fix token auth with PVE::APIClient

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Thomas Lamprecht [Wed, 2 Dec 2020 10:31:03 +0000 (11:31 +0100)]
api: scan: note that USB is deprecated

It now got moved to /nodes/<node>/hardware/usb as envisioned[0]; this
allows sunsetting the usb scan API endpoint here and dropping it with 7.0.

[0]: https://lists.proxmox.com/pipermail/pve-devel/2018-November/034694.html

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 1 Dec 2020 18:27:54 +0000 (19:27 +0100)]
bump version to 6.3-2

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 1 Dec 2020 18:21:43 +0000 (19:21 +0100)]
api/cli: add pbs scan endpoint and command

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 1 Dec 2020 18:20:44 +0000 (19:20 +0100)]
pvesm: also map the password param for new style cifs scan command

not only for the old deprecated alias

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 1 Dec 2020 16:48:21 +0000 (17:48 +0100)]
api: scan: move over index and usb scan from manager

Add the missing pieces allowing pve-manager to just point the
/nodes/<node>/scan api directory at this module, dropping its
duplicated copy.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 1 Dec 2020 17:08:58 +0000 (18:08 +0100)]
api: scan cifs: port over NT_STATUS filter from pve-manager

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 1 Dec 2020 16:15:35 +0000 (17:15 +0100)]
factor out scan CLI definition to real API module

we have a 1:1 copy of that code in pve-manager's PVE::API2::Scan,
which we can avoid by using a common module from the pvesm CLI and
the API.

This is the first basic step of dropping the code duplication in
pve-manager.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 1 Dec 2020 18:18:05 +0000 (19:18 +0100)]
pbs: activate storage: fully validate if storage config works

improves UX of on_update and on_add hooks *a lot*.

This is a bit more expensive than the TCP ping, or even just an
unauthenticated ping, but not as bad as a full datastore status - as
this only reads the datastore config file (which is normally in page
cache anyway).

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 1 Dec 2020 18:15:49 +0000 (19:15 +0100)]
pbs: add scan datastore helper

for use in both the scan API and the on_add/on_update hooks

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 1 Dec 2020 08:53:38 +0000 (09:53 +0100)]
pbs: reuse pve apiclient for api connect helper

it is flexible enough to easily do so, and should do well until we
actually have cheap native bindings (e.g., through Wolfgang's rust
perlmod magic).

Make it a private helper, we do *not* want to expose it directly for
now.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Fri, 27 Nov 2020 09:35:44 +0000 (10:35 +0100)]
plugin: hooks: add explicit returns

to avoid returning something unexpected. Finish what
afeda182566292be15413d9b874720876eac14c9 already started for all the other
plugins. At least for ZFS's on_add_hook this is necessary (adding a ZFS storage
currently fails, as reported here [0]), but it cannot hurt
in the other places either, as the only hooks we currently expect to return
something are PBS's on_add_hook and on_update_hook.

[0]: https://forum.proxmox.com/threads/gui-add-zfs-storage-verification-failed-400-config-type-check-object-failed.79734/

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Thomas Lamprecht [Fri, 27 Nov 2020 09:45:06 +0000 (10:45 +0100)]
nfs: code cleanup

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 24 Nov 2020 22:20:45 +0000 (23:20 +0100)]
bump version to 6.3-1

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 24 Nov 2020 22:17:57 +0000 (23:17 +0100)]
api: content: pass encrypted status for PBS backups

Prefer the fingerprint; fall back to checking the file's crypt-mode.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 24 Nov 2020 21:09:38 +0000 (22:09 +0100)]
pbs add/update: save fingerprint in storage config

fall back to the old truthy "1" if not available

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 24 Nov 2020 21:09:15 +0000 (22:09 +0100)]
pbs add/update: do basic key value validation

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 24 Nov 2020 21:05:21 +0000 (22:05 +0100)]
pbs: autogen key: rename old one if existing

It could be debated that this has some security implications and that
deletion is safer, but key deletion is a pretty hairy thing.

Should be documented, and people just should use delete instead of
autogen if they want to "destroy" a key.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 24 Nov 2020 15:06:03 +0000 (16:06 +0100)]
bump version to 6.2-12

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Dominik Csapak [Tue, 24 Nov 2020 09:09:33 +0000 (10:09 +0100)]
Storage/PBSPlugin: implement get/update_volume_notes for pbs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Dominik Csapak [Tue, 24 Nov 2020 09:09:32 +0000 (10:09 +0100)]
Storage/Plugin: add get/update_volume_comment and implement for dir

and add the appropriate API call to set and get the comment.
We need to bump APIVER for this and can bump APIAGE, since
we only use it in this new call, which can work with the default
implementation.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Dominik Csapak [Tue, 24 Nov 2020 09:09:31 +0000 (10:09 +0100)]
api2/storage/content: change to volume_size_info and add return properties

'file_size_info' only works for directory based storages, while
'volume_size_info' should work for all

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Dominik Csapak [Tue, 24 Nov 2020 09:09:30 +0000 (10:09 +0100)]
rename comment to notes

so that we are more consistent with pbs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Thomas Lamprecht [Mon, 23 Nov 2020 14:58:34 +0000 (15:58 +0100)]
bump version to 6.2-11

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Mon, 23 Nov 2020 12:33:09 +0000 (13:33 +0100)]
convert maxfiles to prune_backups when reading the storage configuration

If there are already prune options configured, simply delete the maxfiles
setting. Having both set is invalid from vzdump's perspective anyway, and any
backup job on such a storage would have failed, meaning a user would've noticed.

If there are no prune options, translate the maxfiles value to keep-last,
except for maxfiles being zero (=unlimited), in which case we use keep-all.

If both are not set, don't set anything, so:
1. Storages don't suddenly have retention options set.
2. People relying on vzdump defaults can still use those.
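The conversion rules above can be sketched in Python (the actual code is Perl; the key names follow the storage config, but the helper and the dict-based interface are hypothetical):

```python
def convert_maxfiles(scfg):
    """Translate a legacy 'maxfiles' setting into 'prune-backups' (sketch).

    'scfg' is a dict standing in for one parsed storage section.
    """
    maxfiles = scfg.pop('maxfiles', None)
    if maxfiles is None:
        return scfg  # neither option set: leave retention to vzdump defaults
    if 'prune-backups' in scfg:
        return scfg  # prune options win; the invalid maxfiles is just dropped
    if maxfiles == 0:
        scfg['prune-backups'] = {'keep-all': 1}  # maxfiles=0 meant unlimited
    else:
        scfg['prune-backups'] = {'keep-last': maxfiles}
    return scfg
```

For example, `convert_maxfiles({'maxfiles': 0})` yields `{'prune-backups': {'keep-all': 1}}`.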

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Mon, 23 Nov 2020 12:33:08 +0000 (13:33 +0100)]
prune: introduce keep-all option

useful to have an alternative to the old maxfiles = 0. There has to
be a way for vzdump to distinguish between:
1. use the /etc/vzdump.conf default (when no options are configured for the storage)
2. use no limit (when keep-all=1)

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Thomas Lamprecht [Mon, 23 Nov 2020 14:26:14 +0000 (15:26 +0100)]
perlcritic: don't modify $_ in list functions, use for

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 23 Nov 2020 14:13:33 +0000 (15:13 +0100)]
perlcritic: avoid conditional variable declaration

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 23 Nov 2020 14:12:46 +0000 (15:12 +0100)]
plugin: hooks: avoid that method param count gets returned

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Thu, 19 Nov 2020 10:29:53 +0000 (11:29 +0100)]
fix volume activation for ZFS subvols

When using the path to request properties, and no ZFS file system is mounted
at that path, ZFS will fall back to the parent filesystem:

> # zfs unmount myzpool/subvol-172-disk-0
> # zfs get mounted /myzpool/subvol-172-disk-0
> NAME     PROPERTY  VALUE    SOURCE
> myzpool  mounted   yes      -
> # zfs get mounted myzpool/subvol-172-disk-0
> NAME                       PROPERTY  VALUE    SOURCE
> myzpool/subvol-172-disk-0  mounted   no       -

Thus, we cannot use the path and need to use the dataset directly.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Thomas Lamprecht [Mon, 16 Nov 2020 17:25:26 +0000 (18:25 +0100)]
d/control: bump versioned dependency on libpve-common-perl

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Dominic Jäger [Wed, 28 Oct 2020 10:04:54 +0000 (11:04 +0100)]
lvmthin: Match snapshot remove regex to allowed names

We allow snapshot names that match pve-configid, but during qm destroy we so
far have not removed all snapshots that match pve-configid. For example, the
name x-y was allowed, but the resulting snap_vm-105-disk-0_x-y was not removed.
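A minimal Python illustration of the mismatch (the patterns here are simplified stand-ins for the real ones in the plugin, not the actual regexes): a snapshot name containing '-' passes the allowed-name check but escapes the old removal pattern.

```python
import re

# simplified stand-ins, NOT the actual patterns from the plugin:
allowed_name = re.compile(r'^[a-zA-Z][a-zA-Z0-9_-]*$')           # pve-configid-like
old_remove = re.compile(r'^snap_vm-105-disk-0_([a-zA-Z0-9]+)$')  # no '-' allowed
new_remove = re.compile(r'^snap_vm-105-disk-0_([a-zA-Z][a-zA-Z0-9_-]*)$')

name = 'x-y'
lv = 'snap_vm-105-disk-0_' + name
assert allowed_name.match(name)      # the snapshot name was accepted
assert old_remove.match(lv) is None  # ...but the LV escaped removal
assert new_remove.match(lv)          # the widened pattern matches it
```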

Reported-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
Fabian Ebner [Fri, 13 Nov 2020 13:08:55 +0000 (14:08 +0100)]
prune: allow having all prune options zero/missing

This is basically necessary for the GUI's prune widget, because we want to
pass along all options equal to zero when all the number fields are cleared.
And it's more similar to how it's done in PBS now.

Bumped the APIAGE and APIVER, in case some external plugin needs to adapt to
the now less restrictive schema for 'prune-backups'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Fri, 13 Nov 2020 13:08:54 +0000 (14:08 +0100)]
prune mark: keep all if all prune options are zero/missing

as an additional safety measure. And add some tests.
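Sketched in Python (the option names match the prune settings; the helper itself is made up for illustration): "keep everything" applies exactly when every option is zero or missing.

```python
PRUNE_KEYS = ('keep-last', 'keep-hourly', 'keep-daily',
              'keep-weekly', 'keep-monthly', 'keep-yearly')

def keeps_everything(opts):
    """True if all prune options are zero or missing, i.e. nothing is pruned."""
    return not any(opts.get(key) for key in PRUNE_KEYS)
```

For example, `keeps_everything({'keep-daily': 0})` is true, while setting any option to a positive value makes it false.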

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Fabian Ebner [Fri, 13 Nov 2020 13:08:53 +0000 (14:08 +0100)]
don't pass along keep-options equal to zero to PBS

In PBS, zero is not allowed for these options.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Thomas Lamprecht [Thu, 12 Nov 2020 17:05:26 +0000 (18:05 +0100)]
pbs: add/update: return enc. key, if newly set or auto-generated

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 12 Nov 2020 17:01:41 +0000 (18:01 +0100)]
api: storage create/update: return parts of the configuration

First, doing such things can make client work slightly easier, as the
submitted values do not need to be made available in any callback
handling the response.

But the actual reason for doing this now is, that this is a
preparatory step for allowing the user to download/print/.. an
autogenerated PBS client encryption key.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
3 years agoavoid unnecessary try at over-optimization
Thomas Lamprecht [Thu, 12 Nov 2020 16:31:16 +0000 (17:31 +0100)]
avoid unnecessary try at over-optimization

That was a lot of code and hash-map touching just to avoid one extra
stat, whose result was probably in the page cache anyway, for the case
that a backup has a comment.

That case is rather unlikely - comments are normally added for the
occasional explicit backup (e.g., before a major upgrade, before a
configuration change in that guest, ...) - so it is not worth the
relatively complicated effort of making that sub harder to read and
maintain.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
3 years agostorage: get subdirectory files: read .comment files for comments
Dominik Csapak [Thu, 12 Nov 2020 15:26:02 +0000 (16:26 +0100)]
storage: get subdirectory files: read .comment files for comments

we have no way of setting them yet via the API, but we can read them
now

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
3 years agoapi: content listing: add comment and verification fields
Dominik Csapak [Thu, 12 Nov 2020 15:26:01 +0000 (16:26 +0100)]
api: content listing: add comment and verification fields

for now only for PBS, since we do not have such info elsewhere

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
3 years agopbs: autogen encryption key: bubble up error message
Thomas Lamprecht [Thu, 12 Nov 2020 10:49:01 +0000 (11:49 +0100)]
pbs: autogen encryption key: bubble up error message

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
3 years agoapi/config: fix indentation
Thomas Lamprecht [Wed, 11 Nov 2020 08:35:53 +0000 (09:35 +0100)]
api/config: fix indentation

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
3 years agobump version to 6.2-10
Thomas Lamprecht [Tue, 10 Nov 2020 18:03:07 +0000 (19:03 +0100)]
bump version to 6.2-10

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
3 years agofix #3030: always activate volumes in storage_migrate
Fabian Ebner [Fri, 6 Nov 2020 14:30:55 +0000 (15:30 +0100)]
fix #3030: always activate volumes in storage_migrate

AFAICT the snapshot activation is not necessary for our plugins at the moment,
but it doesn't really hurt and might be relevant in the future or for external
plugins.

Deactivating volumes is up to the caller, because for example, for replication
on a running guest, we obviously don't want to deactivate volumes.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
3 years agoadd check for fsfreeze before snapshot
Stoiko Ivanov [Fri, 6 Nov 2020 14:19:40 +0000 (15:19 +0100)]
add check for fsfreeze before snapshot

In order to take a snapshot of a container volume, which can be mounted
read-only with RBD, the volume needs to be frozen (fsfreeze (8)) before taking
the snapshot.

This commit adds helpers to determine if the FIFREEZE ioctl needs to be called
for the volume.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
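For reference, the FIFREEZE/FITHAW request numbers from <linux/fs.h> (`_IOWR('X', 119, int)` and `_IOWR('X', 120, int)`) can be derived like this (Python sketch of the Linux ioctl encoding; the plugin itself works in Perl):

```python
# Linux ioctl request encoding (asm-generic/ioctl.h):
# dir (2 bits) | size (14 bits) | type (8 bits) | nr (8 bits)
_IOC_WRITE, _IOC_READ = 1, 2

def _IOWR(type_chr, nr, size):
    return (((_IOC_READ | _IOC_WRITE) << 30) | (size << 16)
            | (ord(type_chr) << 8) | nr)

FIFREEZE = _IOWR('X', 119, 4)  # freeze a mounted filesystem
FITHAW = _IOWR('X', 120, 4)    # thaw it again

print(hex(FIFREEZE))  # 0xc0045877
```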
3 years agofix typo in comment
Stoiko Ivanov [Fri, 6 Nov 2020 14:19:39 +0000 (15:19 +0100)]
fix typo in comment

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
3 years agoDiskmanage: Use S.M.A.R.T. attributes for SSDs wearout lookup
Jan-Jonas Sämann [Fri, 30 Oct 2020 03:57:22 +0000 (04:57 +0100)]
Diskmanage: Use S.M.A.R.T. attributes for SSDs wearout lookup

This replaces a locally maintained hardware map in
get_wear_leveling_info() with the commonly used register names from
smartmontools. Smartmontools maintains a labeled register database that
covers the majority of drives (including versions). The current lookup
produces false estimates; this approach hopefully provides more reliable
data.

Signed-off-by: Jan-Jonas Sämann <sprinterfreak@binary-kitchen.de>
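The lookup-by-name idea can be sketched like this (Python for illustration; the attribute labels below are common smartctl names, not an exhaustive list, and real `smartctl -A` output varies per drive):

```python
# Illustrative sketch: find the wearout value by well-known smartctl
# attribute names instead of a per-vendor attribute-ID table.
WEAROUT_ATTRS = {'Media_Wearout_Indicator', 'Wear_Leveling_Count',
                 'Percent_Lifetime_Remain', 'SSD_Life_Left'}

def wear_leveling(smart_output):
    for line in smart_output.splitlines():
        fields = line.split()
        # smartctl -A rows: ID# ATTRIBUTE_NAME FLAG VALUE WORST ...
        if len(fields) >= 4 and fields[1] in WEAROUT_ATTRS:
            return int(fields[3])  # normalized VALUE column
    return None

sample = "177 Wear_Leveling_Count 0x0013 099 099 000 Pre-fail Always - 1"
print(wear_leveling(sample))  # 99
```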
3 years agoUpdate disk_tests/ssd_smart/sde data
Jan-Jonas Sämann [Fri, 30 Oct 2020 03:57:21 +0000 (04:57 +0100)]
Update disk_tests/ssd_smart/sde data

Provides recent test data for disk_tests/ssd_smart/sde_smart. The
previous data was created using an older smartmontools version, which
did not yet support the drive and therefore had bogus attribute mapping.

Signed-off-by: Jan-Jonas Sämann <sprinterfreak@binary-kitchen.de>
3 years agofix #1452: also log stderr of remote command with insecure storage migration
Fabian Ebner [Thu, 1 Oct 2020 08:11:36 +0000 (10:11 +0200)]
fix #1452: also log stderr of remote command with insecure storage migration

Commit 8fe00d99449b7c80e81ab3c9826625a4fcd89aa4 already
introduced the necessary logging for the secure code path,
so presumably the bug was already fixed for most people.

Delay the potential die for the send command so we are able to log the
output+error from the receive command. This way we also see e.g.
'volume ... already exists' instead of just 'broken pipe'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
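The deferred-error flow can be sketched as follows (Python for illustration; the helper callables are hypothetical stand-ins for the real run_command plumbing):

```python
# Sketch of "delay the die": a failure of the send side is remembered,
# the receive side's output/errors are logged first, and only then is
# the send error raised - so the log shows the telling message (e.g.
# 'volume ... already exists') instead of only 'broken pipe'.
def migrate(run_send, read_receiver_output, log):
    send_err = None
    try:
        run_send()
    except Exception as err:
        send_err = err                  # remember, but do not die yet
    log(read_receiver_output())         # receiver output/errors first
    if send_err is not None:
        raise send_err
```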
3 years agoavoid output of zfs get command on volume import
Fabian Ebner [Thu, 1 Oct 2020 08:11:35 +0000 (10:11 +0200)]
avoid output of zfs get command on volume import

The quiet flag takes care of both the error and the success case.
Without this, there are lines like:
myzpool/vm-4352-disk-0@__replicate_4352-7_1601538554__ name myzpool/vm-4352-disk-0@__replicate_4352-7_1601538554__ -
in the log if the dataset exists, and this information is
already present in more readable form.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
3 years agofix #3097: cifs, nfs: increase connection check timeout to 10s
Thomas Lamprecht [Tue, 27 Oct 2020 06:03:17 +0000 (07:03 +0100)]
fix #3097: cifs, nfs: increase connection check timeout to 10s

We already have the ZFS pool plugin as precedent for using 10s, and on
networks with remote off-site storage one can see 200 - 300 ms of RTT
latency, which means that with a protocol needing multiple rounds of
communication one can easily exceed 2 s without the network being
broken.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
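The arithmetic behind the larger timeout (illustrative numbers only; the actual round-trip count depends on the protocol):

```python
# With ~250 ms RTT and, say, 8 protocol round trips, a perfectly
# healthy connection already needs ~2 s - right at the old limit.
rtt = 0.25        # seconds, plausible for remote off-site storage
round_trips = 8   # assumption; NFS/CIFS mounts need several exchanges

print(rtt * round_trips)  # 2.0
```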
3 years agobump version to 6.2-9
Thomas Lamprecht [Tue, 13 Oct 2020 09:14:10 +0000 (11:14 +0200)]
bump version to 6.2-9

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
3 years agoLIO: drop unused statements
Stoiko Ivanov [Mon, 12 Oct 2020 15:34:58 +0000 (17:34 +0200)]
LIO: drop unused statements

minor cleanup of left-over/unused statements.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
3 years agoLIO: untaint values read from remote config
Stoiko Ivanov [Mon, 12 Oct 2020 15:34:57 +0000 (17:34 +0200)]
LIO: untaint values read from remote config

The LIO backend for ZFS over iSCSI fetches the json-config periodically from
the target.
This patch reduces the stored config values to those which are actually used
and additionally untaints the values read from the remote host's config-file.

Since the LUN index is used in calls to targetcli on the remote host (via
run_command), untainting prevents the call to crash when run with '-T'.

Tested by creating a zfs over iscsi backed VM, starting it, adding disks,
resizing disks, removing disks, creating snapshots, rolling back to a snapshot.

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
3 years agoZFSPlugin: untaint lun number
Stoiko Ivanov [Fri, 9 Oct 2020 15:13:44 +0000 (17:13 +0200)]
ZFSPlugin: untaint lun number

ZFS over iSCSI fetches information about the disk images via ssh, thus
the obtained data is tainted (perlsec (1)).

Since pvedaemon runs with '-T' enabled, trying to start a VM via the GUI/API failed,
while it still worked via `qm` or `pvesh`.

The issue surfaced after commit cb9db10c1a9855cf40ff13e81f9dd97d6a9b2698 in
pve-common ('run_command: improve performance for logging and long lines'),
and results from concatenating the original (tainted) buffer to a variable,
instead of a captured subgroup.

Untainting the value in ZFSPlugin should not cause any regressions, since the
other 3 target providers already match on '\d+' when retrieving the
lun number.

Reported via pve-user [0].

Reproduced and tested by setting up a LIO target (on top of a virtual PVE),
adding it as storage, and trying to start a guest (with a disk on the
ZFS over iSCSI storage) with `perl -T /usr/sbin/qm start $vmid`.

[0] https://lists.proxmox.com/pipermail/pve-user/2020-October/172055.html

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
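Perl's taint mode only clears data that has passed through a regex capture; the idiom used for the lun number corresponds to this validate-by-capture sketch (in Python for illustration, since the plugins themselves are Perl):

```python
import re

def untaint_lun(value):
    # Analogue of the Perl idiom  ($lun) = $lun =~ /^(\d+)$/ or die:
    # accept only a pure digit string from the remote host's output.
    match = re.fullmatch(r'(\d+)', value.strip())
    if match is None:
        raise ValueError(f'illegal lun number: {value!r}')
    return int(match.group(1))

print(untaint_lun(' 3\n'))  # 3
```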