git.proxmox.com Git - pve-storage.git/log
pve-storage.git
19 months ago bump version to 7.2-10
Thomas Lamprecht [Thu, 29 Sep 2022 12:33:18 +0000 (14:33 +0200)]
bump version to 7.2-10

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
19 months ago (remote) export: check and untaint format
Fabian Grünbichler [Wed, 28 Sep 2022 12:50:59 +0000 (14:50 +0200)]
(remote) export: check and untaint format

this format comes from the remote cluster, so it might not be supported
on the source side - checking whether it's known (as an additional
safeguard) and untainting it (to avoid open3 failure) is required.
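
A minimal sketch of such a check-and-untaint step (the format list and
sub name are illustrative, not the actual patch):

    use strict;
    use warnings;

    my @known_formats = qw(raw qcow2 vmdk subvol);

    sub check_and_untaint_format {
        my ($format) = @_;
        my $match = join('|', map { quotemeta } @known_formats);
        # a successful capture returns an untainted copy of the string
        die "unsupported format '$format'\n" if $format !~ /^($match)$/;
        return $1;
    }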

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
 [ T: squashed in canonical perl array ref access ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
19 months ago api: disk SMART: fix details for deprecated return value comment
Thomas Lamprecht [Fri, 23 Sep 2022 09:59:33 +0000 (11:59 +0200)]
api: disk SMART: fix details for deprecated return value comment

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
19 months ago disk manage: module wide code/style cleanup
Thomas Lamprecht [Fri, 23 Sep 2022 09:54:41 +0000 (11:54 +0200)]
disk manage: module wide code/style cleanup

fixing some issues reported by perlcritic along the way.

cutting down 70 lines, often even improving readability along the way.
I tried to recheck and be conservative, so there shouldn't be any
regressions, but it's still Perl after all...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
19 months ago fix #4165: disk: SMART: add normalized field
Matthias Heiserer [Thu, 21 Jul 2022 10:45:58 +0000 (12:45 +0200)]
fix #4165: disk: SMART: add normalized field

This makes it consistent with the naming scheme in PBS/GUI.
Keep the old 'value' field for API stability reasons and remove it in
the next major version.

Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
19 months ago api: remove duplicate variable
Fabian Grünbichler [Tue, 20 Sep 2022 08:50:12 +0000 (10:50 +0200)]
api: remove duplicate variable

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
19 months ago bump version to 7.2-9
Fabian Grünbichler [Tue, 20 Sep 2022 07:20:14 +0000 (09:20 +0200)]
bump version to 7.2-9

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
20 months ago disks: allow add_storage for already configured local storage
Aaron Lauterer [Fri, 19 Aug 2022 15:01:21 +0000 (17:01 +0200)]
disks: allow add_storage for already configured local storage

One of the smaller annoyances, especially for less experienced users,
is the fact that, when creating a local storage (ZFS, LVM (thin), dir)
in a cluster, one can only leave the "Add Storage" option enabled the
first time.

On any following node, this option needed to be disabled and the new
node manually added to the list of nodes for that storage.

This patch changes the behavior. If a storage of the same name already
exists, it will verify that necessary parameters match the already
existing one.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
In case there is no nodes list, nothing else needs to be done, and the
GUI will stop showing the question mark for the configured, but until
then nonexistent, local storage.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
20 months ago disks: die if storage name is already in use
Aaron Lauterer [Fri, 19 Aug 2022 15:01:20 +0000 (17:01 +0200)]
disks: die if storage name is already in use

If a storage of that type and name already exists (LVM, zpool, ...) but
we do not have a Proxmox VE Storage config for it, it is possible that
the creation will fail midway due to checks done by the underlying
storage layer itself. This in turn can lead to disks that are already
partitioned. Users would need to clean this up themselves.

By adding checks early on, not only checking against the PVE storage
config, but against the actual storage type itself, we can die early
enough, before we touch any disk.

For ZFS, the logic to gather pool data is moved into its own function to
be called from the index API endpoint and the check in the create
endpoint.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
20 months ago diskmanage: add mounted_paths
Aaron Lauterer [Fri, 19 Aug 2022 15:01:19 +0000 (17:01 +0200)]
diskmanage: add mounted_paths

returns a list of mounted paths with the backing devices

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
20 months ago RBD plugin: librados connect: increase timeout when in worker
Fiona Ebner [Fri, 2 Sep 2022 07:33:07 +0000 (09:33 +0200)]
RBD plugin: librados connect: increase timeout when in worker

The default timeout in PVE/RADOS.pm is 5 seconds, but this is not
always enough for external clusters under load. Workers can and should
take their time to not fail here too quickly.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
20 months ago RBD plugin: librados connect: pass along options
Fiona Ebner [Fri, 2 Sep 2022 07:33:06 +0000 (09:33 +0200)]
RBD plugin: librados connect: pass along options

In preparation for increasing the timeout for workers. Neither of the
existing callers of librados_connect() currently uses the parameter.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
20 months ago RBD plugin: path: conditionalize get_rbd_dev_path() call
Fiona Ebner [Wed, 31 Aug 2022 08:50:54 +0000 (10:50 +0200)]
RBD plugin: path: conditionalize get_rbd_dev_path() call

The return value of get_rbd_dev_path() is only used when $scfg->{krbd}
evaluates to true and the function shouldn't have any side effects
that are needed later, so the call can be avoided otherwise.

This also saves a RADOS connection and command with configurations for
external clusters with krbd disabled.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
21 months ago fix #4189: pbs: bump list_volumes timeout to 2mins
Wolfgang Bumiller [Wed, 17 Aug 2022 10:32:37 +0000 (12:32 +0200)]
fix #4189: pbs: bump list_volumes timeout to 2mins

When switching this from calling the external binary to
using the Perl API client, the timeout got reduced to 7
seconds, which is definitely insufficient for larger stores.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
21 months ago bump version to 7.2-8
Fabian Grünbichler [Tue, 16 Aug 2022 11:56:56 +0000 (13:56 +0200)]
bump version to 7.2-8

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
21 months ago pbs: die if master key is missing
Fabian Grünbichler [Tue, 16 Aug 2022 11:55:43 +0000 (13:55 +0200)]
pbs: die if master key is missing

while the resulting backups are encrypted, they would not be restorable
using (only) the master key if the original PVE system is lost.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
21 months ago pbs: warn about missing, but configured master key
Fabian Grünbichler [Tue, 16 Aug 2022 10:33:54 +0000 (12:33 +0200)]
pbs: warn about missing, but configured master key

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
21 months ago pbs: detect mismatch of encryption settings and key
Fabian Grünbichler [Tue, 16 Aug 2022 10:33:53 +0000 (12:33 +0200)]
pbs: detect mismatch of encryption settings and key

if the key file doesn't exist (anymore), but the storage.cfg references
one, die on commands that should use encryption instead of falling back
to plain-text operations.
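
Roughly, the guard looks like this (a sketch; the option name and key
file location are assumptions, not the shipped code):

    use strict;
    use warnings;

    sub assert_encryption_key_usable {
        my ($scfg, $storeid) = @_;
        my $keyfile = "/etc/pve/priv/storage/$storeid.enc"; # assumed path
        if ($scfg->{'encryption-key'} && !-f $keyfile) {
            # configured but missing -> die instead of silently
            # falling back to plain-text operations
            die "encryption key configured for '$storeid', but key file is missing\n";
        }
    }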

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
22 months ago bump version to 7.2-7
Thomas Lamprecht [Fri, 15 Jul 2022 11:36:39 +0000 (13:36 +0200)]
bump version to 7.2-7

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
22 months ago pbs: fix namespace handling in list_volumes
Fabian Ebner [Fri, 15 Jul 2022 10:47:51 +0000 (12:47 +0200)]
pbs: fix namespace handling in list_volumes

Before af07f67 ("pbs: use vmid parameter in list_snapshots") the
namespace was set via do_raw_client_command, but now it needs to be
set explicitly here.

Fixes: af07f67 ("pbs: use vmid parameter in list_snapshots")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
22 months ago bump version to 7.2-6
Thomas Lamprecht [Thu, 14 Jul 2022 11:47:14 +0000 (13:47 +0200)]
bump version to 7.2-6

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
22 months ago pbs: use vmid parameter in list_snapshots
Wolfgang Bumiller [Thu, 14 Jul 2022 11:24:10 +0000 (13:24 +0200)]
pbs: use vmid parameter in list_snapshots

Particularly for operations such as pruning backups after a
scheduled backup, we do not want to list the entire
store.

(pbs_api_connect is moved up unmodified)

Note that the 'snapshots' CLI command only takes a full
group, but the API does allow specifying a backup-id without
a backup-type!

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
22 months ago BTRFSPlugin: reuse DirPlugin update/get_volume_attribute
Dominik Csapak [Thu, 2 Jun 2022 08:52:14 +0000 (10:52 +0200)]
BTRFSPlugin: reuse DirPlugin update/get_volume_attribute

this allows setting notes+protected for backups on btrfs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
22 months ago DirPlugin: update_volume_attribute: don't use update_volume_notes
Dominik Csapak [Thu, 2 Jun 2022 08:52:13 +0000 (10:52 +0200)]
DirPlugin: update_volume_attribute: don't use update_volume_notes

by refactoring it into a helper and using that.
With this, we can omit 'update_volume_notes' in subclasses.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
23 months ago bump version to 7.2-5
Wolfgang Bumiller [Wed, 15 Jun 2022 08:54:55 +0000 (10:54 +0200)]
bump version to 7.2-5

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
23 months ago fixup tests
Wolfgang Bumiller [Wed, 15 Jun 2022 08:49:11 +0000 (10:49 +0200)]
fixup tests

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
23 months ago diskmanage: only set mounted property for mounted devices
Wolfgang Bumiller [Wed, 15 Jun 2022 08:42:20 +0000 (10:42 +0200)]
diskmanage: only set mounted property for mounted devices

instead of setting an empty string for devices that are not mounted

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
23 months ago Added a LOG_EXT constant as a counterpart to NOTES_EXT
Daniel Tschlatscher [Tue, 14 Jun 2022 09:00:13 +0000 (11:00 +0200)]
Added a LOG_EXT constant as a counterpart to NOTES_EXT

and refactored usages of .log and .notes to use them.
In some parts of the test case code, I had to introduce new variables
to shorten lines and not exceed the 100-column line limit.

Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
23 months ago Switched to using log_warn of PVE::RESTEnvironment
Daniel Tschlatscher [Tue, 14 Jun 2022 09:00:12 +0000 (11:00 +0200)]
Switched to using log_warn of PVE::RESTEnvironment

Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
23 months ago Adapted unlink calls for archive files in case of ENOENT
Daniel Tschlatscher [Tue, 14 Jun 2022 09:00:11 +0000 (11:00 +0200)]
Adapted unlink calls for archive files in case of ENOENT

This improves handling when two archive removal calls race against
each other, where formerly one of them would encounter an error. Now
both finish successfully.
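
The pattern boils down to treating ENOENT as success (sketch):

    use strict;
    use warnings;
    use POSIX qw(ENOENT);

    sub unlink_ignore_enoent {
        my ($path) = @_;
        return if unlink($path);  # we removed it
        return if $! == ENOENT;   # a concurrent call already removed it
        die "removing '$path' failed: $!\n";
    }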

Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
23 months ago fix #3972: Remove the .notes file when a backup is deleted
Daniel Tschlatscher [Tue, 14 Jun 2022 09:00:10 +0000 (11:00 +0200)]
fix #3972: Remove the .notes file when a backup is deleted

When a VM or Container backup was deleted, the .notes file was not
removed; therefore, over time the dump folder would get polluted with
notes for backups that no longer existed. As backup names contain a
timestamp, and as the notes cannot be reused because of this, I think
it is safe to just delete them, like we do with the .log file.

Furthermore, I moved the deletion of the log and notes files into a
new function called "archive_auxiliaries_remove". Additionally, the
archive_info object now returns one more field containing the name of
the notes file. The test cases have to be adapted to expect this new
value, as the package will not compile otherwise.

Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
23 months ago api2: disks: add mounted boolean field
Hannes Laimer [Wed, 8 Jun 2022 07:09:59 +0000 (07:09 +0000)]
api2: disks: add mounted boolean field

... and remove '(mounted)' from usage string

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
23 months ago rbd: get_rbd_dev_path: return /dev/rbd path only if cluster matches
Aaron Lauterer [Mon, 23 May 2022 10:54:25 +0000 (12:54 +0200)]
rbd: get_rbd_dev_path: return /dev/rbd path only if cluster matches

The changes in cfe46e2d4a97a83f1bbe6ad656e6416399309ba2 did not catch
all situations.
In the case of a guest having 2 disk images with the same name on a
pool with the same name, but in two different Ceph clusters, we still
had issues when starting it. The first disk got mapped as expected.
The second disk did not get mapped, because we returned the old $path
"/dev/rbd/<pool>/<image>", which already existed from the first
disk.

In the case that only the "old" /dev/rbd path exists and we do not have
the /dev/rbd-pve/<cluster>/... path available, we now check if the
cluster fsid used by that rbd device matches the one we expect. If it
does, then we are in the situation that the image has been mapped before
the new rbd-pve udev rule was introduced. If it does not, then we have
the situation of an ambiguous mapping in /dev/rbd and return the
$pve_path.
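
A sketch of that disambiguation; the sysfs attribute path is an
assumption here, not taken from the patch:

    use strict;
    use warnings;
    use PVE::Tools;

    sub rbd_dev_matches_cluster {
        my ($rbd_id, $expected_fsid) = @_;
        # ASSUMPTION: mapped rbd devices expose their cluster fsid via sysfs
        my $fsid = PVE::Tools::file_read_firstline(
            "/sys/bus/rbd/devices/$rbd_id/cluster_fsid");
        return defined($fsid) && $fsid eq $expected_fsid;
    }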

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
23 months ago rbd: fix #4060 show data-pool usage when configured
Aaron Lauterer [Wed, 18 May 2022 09:04:54 +0000 (11:04 +0200)]
rbd: fix #4060 show data-pool usage when configured

When a data-pool is configured, use it for status infos. The 'data-pool'
config option is used to mark the erasure coded pool, while the 'pool'
will be the replicated pool holding metadata such as the omap.

This means the 'pool' will only use a small amount of space, and people
are interested in how much they can store in the erasure coded pool
anyway.

Therefore this patch reorders the assignment of the used pool name by
availability of the scfg parameters: data-pool -> pool -> fallback 'rbd'
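
The reordering boils down to a defined-or chain (sketch with
illustrative pool names):

    # with a data-pool configured, status should report the usage of
    # the erasure coded pool instead of the small metadata pool
    my $scfg = { pool => 'rbd-metadata', 'data-pool' => 'rbd-data' };
    my $pool = $scfg->{'data-pool'} // $scfg->{pool} // 'rbd';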

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2 years ago bump version to 7.2-4
Thomas Lamprecht [Fri, 13 May 2022 12:27:28 +0000 (14:27 +0200)]
bump version to 7.2-4

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago rbd: warn if no stats for a pool could be gathered
Stoiko Ivanov [Tue, 3 May 2022 11:31:40 +0000 (13:31 +0200)]
rbd: warn if no stats for a pool could be gathered

happens in case of a mistyped poolname, and the new message should be
more helpful than:
`Use of uninitialized value $free in addition (+) at \
/usr/share/perl5/PVE/Storage/RBDPlugin.pm line 64`

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2 years ago rbd: add fallback default poolname 'rbd' to status
Stoiko Ivanov [Tue, 3 May 2022 11:31:39 +0000 (13:31 +0200)]
rbd: add fallback default poolname 'rbd' to status

the fallback to a default pool name of 'rbd' was introduced in:
1440604a4b072b88cc1e4f8bbae4511b50d1d68e
and worked for the status command, because it used the `rados_cmd`
sub.

This fallback was lost with the changes in:
41aacc6cdeea9b0c8007cbfb280acf827932c3d6

leading to confusing errors:
`Use of uninitialized value in string eq at \
/usr/share/perl5/PVE/Storage/RBDPlugin.pm line 633`
(e.g. in the journal from pvestatd)

Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
2 years ago d/control: bump versioned dependency for proxmox-backup-client
Thomas Lamprecht [Fri, 13 May 2022 12:06:42 +0000 (14:06 +0200)]
d/control: bump versioned dependency for proxmox-backup-client

for the s/backup-ns/ns/ change

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago pbs: backup-ns parameter was renamed to ns
Wolfgang Bumiller [Fri, 13 May 2022 11:07:49 +0000 (13:07 +0200)]
pbs: backup-ns parameter was renamed to ns

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2 years ago bump version to 7.2-3
Thomas Lamprecht [Thu, 12 May 2022 12:49:01 +0000 (14:49 +0200)]
bump version to 7.2-3

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago d/control: bump versioned dependency for proxmox-backup-client
Thomas Lamprecht [Thu, 12 May 2022 12:57:00 +0000 (14:57 +0200)]
d/control: bump versioned dependency for proxmox-backup-client

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago d/control: bump versioned dependency for pve-common
Thomas Lamprecht [Thu, 12 May 2022 12:45:17 +0000 (14:45 +0200)]
d/control: bump versioned dependency for pve-common

for namespace support

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago pbs: namespace support
Wolfgang Bumiller [Thu, 12 May 2022 08:44:53 +0000 (10:44 +0200)]
pbs: namespace support

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2 years ago bump version to 7.2-2
Thomas Lamprecht [Thu, 28 Apr 2022 16:20:02 +0000 (18:20 +0200)]
bump version to 7.2-2

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago rbd: get path: allow fake override of fsid in scfg for some regression tests
Thomas Lamprecht [Thu, 28 Apr 2022 16:17:56 +0000 (18:17 +0200)]
rbd: get path: allow fake override of fsid in scfg for some regression tests

to avoid calls into RADOS connect that trigger 'RPCEnv not
initialized' breakage in regression tests, but wouldn't really work
otherwise either.

In the future, the RBD $scfg could actually support this (or a
similarly named) property, to save it on storage addition and then
avoid frequent mon commands.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago bump version to 7.2-1
Thomas Lamprecht [Thu, 28 Apr 2022 15:38:22 +0000 (17:38 +0200)]
bump version to 7.2-1

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago rbd: unmap volume after rename
Fabian Ebner [Thu, 28 Apr 2022 08:47:18 +0000 (10:47 +0200)]
rbd: unmap volume after rename

When krbd is used, subsequent removal after an operation
involving a rename could fail with
> librbd::image::PreRemoveRequest: 0x559b7506a470 \
> check_image_watchers: image has watchers - not removing
because the old mapping was still present.

For both operations with a rename, the owning guest should be offline,
but even if it weren't, unmap simply fails when the volume is in-use.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago rbd: drop get_kernel_device_path
Fabian Grünbichler [Wed, 27 Apr 2022 11:03:16 +0000 (13:03 +0200)]
rbd: drop get_kernel_device_path

it only redirected to get_rbd_dev_path with the same signature, and
both are private subs.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago rbd: reduce number of stats in likely path
Fabian Grünbichler [Wed, 27 Apr 2022 11:01:42 +0000 (13:01 +0200)]
rbd: reduce number of stats in likely path

the new udev rule is expected to be in place and active; switching the
checks around means 1 instead of 2 stat() calls in this rather hot code path.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago rbd: fix #3969: add rbd dev paths with cluster info
Aaron Lauterer [Wed, 13 Apr 2022 09:43:22 +0000 (11:43 +0200)]
rbd: fix #3969: add rbd dev paths with cluster info

By adding our own customized rbd udev rules and ceph-rbdnamer we can
create device paths that include the cluster fsid and avoid any
ambiguity if the same pool and namespace combination is used in
different clusters we connect to.

Additionally to the '/dev/rbd/<pool>/...' paths we now have
'/dev/rbd-pve/<cluster fsid>/<pool>/...' paths.

The other half of the patch makes use of the new device paths in the RBD
plugin.

The new 'get_rbd_dev_path' method returns the full device path. In
case the image has been mapped before the rbd-pve udev rule was
installed, it returns the old path.

The cluster fsid is read from the 'ceph.conf' file in the case of a
hyperconverged setup. In the case of an external Ceph cluster we need to
fetch it via a rados api call.
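
The resulting path selection is essentially (sketch; namespace
handling and the fsid check from later patches are elided):

    sub get_rbd_dev_path_sketch {
        my ($fsid, $pool, $image) = @_;
        my $pve_path = "/dev/rbd-pve/$fsid/$pool/$image";
        my $old_path = "/dev/rbd/$pool/$image";
        # prefer the cluster-qualified path, fall back to the legacy
        # one for images mapped before the new udev rule existed
        return -e $pve_path ? $pve_path : $old_path;
    }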

Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2 years ago storage plugins: en/decode volume notes as UTF-8
Dominik Csapak [Wed, 9 Mar 2022 08:21:28 +0000 (09:21 +0100)]
storage plugins: en/decode volume notes as UTF-8

When writing into the file, explicitly utf8 encode it, and then try
to utf8 decode it on read.

If the notes are not valid utf8, we assume they were iso-8859 encoded
and return as is.

Technically this is a breaking change, since there are ISO-8859
comments that would successfully decode as UTF-8. For example, the
byte sequence "C2 A9" would be "Â©" in ISO-8859, but would decode to "©".

From what I can tell though, this is rather unlikely to happen for
"real world" notes, because the first byte would be in the range of
C0-F7 (which are mostly language-dependent characters like "Â") and
the following bytes would have to be in the range of 80-BF, which are
only special characters like "£" (or undefined).
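
In Perl terms, the read side is roughly (sketch using the core Encode
module):

    use strict;
    use warnings;
    use Encode;

    sub decode_notes {
        my ($raw) = @_;
        # try strict UTF-8 first, croaking on invalid sequences
        my $notes = eval {
            Encode::decode('UTF-8', $raw, Encode::FB_CROAK | Encode::LEAVE_SRC)
        };
        # not valid UTF-8 -> assume legacy ISO-8859 content, return as-is
        return $@ ? $raw : $notes;
    }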

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2 years ago zfs pool: bump non-worker timeout default to 10s
Thomas Lamprecht [Tue, 26 Apr 2022 13:25:38 +0000 (15:25 +0200)]
zfs pool: bump non-worker timeout default to 10s

With the 30s hard limit we got for sync API calls, a 10s default
still leaves enough room for answering and other stuff.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago fix #3803: ZFSPoolPlugin: zfs_request: increase minimum timeout in worker
Dominik Csapak [Thu, 23 Dec 2021 12:06:22 +0000 (13:06 +0100)]
fix #3803: ZFSPoolPlugin: zfs_request: increase minimum timeout in worker

Since most zfs operations can take a while (under certain conditions),
increase the minimum timeout for zfs_request in workers to 5 minutes.

We cannot increase the timeouts in synchronous api calls, since they are
hard limited to 30 seconds, but in worker we do not have such limits.

The existing default timeout does not change (60 minutes in a worker,
5 seconds otherwise), but all zfs_requests with a set timeout
(< 5 minutes) will use the increased 5 minutes in a worker.
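
The effective-timeout selection is roughly (sketch; names are
illustrative):

    use strict;
    use warnings;

    sub zfs_effective_timeout {
        my ($timeout, $in_worker) = @_;
        $timeout //= $in_worker ? 60 * 60 : 5;  # existing defaults
        # in a worker, never go below 5 minutes
        $timeout = 5 * 60 if $in_worker && $timeout < 5 * 60;
        return $timeout;
    }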

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2 years ago ceph config: minor code cleanup & comment
Thomas Lamprecht [Tue, 26 Apr 2022 10:47:54 +0000 (12:47 +0200)]
ceph config: minor code cleanup & comment

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago some code style refactoring/cleanup
Thomas Lamprecht [Fri, 22 Apr 2022 12:30:01 +0000 (14:30 +0200)]
some code style refactoring/cleanup

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago bump version to 7.1-2
Fabian Grünbichler [Wed, 6 Apr 2022 11:30:21 +0000 (13:30 +0200)]
bump version to 7.1-2

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago disks: zfs: code indentation/style improvements
Thomas Lamprecht [Wed, 6 Apr 2022 10:56:43 +0000 (12:56 +0200)]
disks: zfs: code indentation/style improvements

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago plugins: allow limiting the number of protected backups per guest
Fabian Ebner [Tue, 29 Mar 2022 12:53:13 +0000 (14:53 +0200)]
plugins: allow limiting the number of protected backups per guest

The ability to mark backups as protected broke the implicit assumption
in vzdump that remove=1, together with the current number of backups
being at the limit (i.e. the sum of all keep options), will result in
a backup being removed.

Introduce a new storage property 'max-protected-backups' to limit the
number of protected backups per guest. Use 5 as a default value, as it
should cover most use cases, while still not having too big of a
potential overhead in many scenarios.

For external plugins that do not return the backup subtype in
list_volumes, all protected backups with the same ID will count
towards the limit.

An alternative would be to count the protected backups when pruning.
While that would avoid the need for a new property, it would break the
current semantics of protected backups being ignored for pruning. It
also would be less flexible, e.g. for PBS, it can make sense to have
both keep-all=1 and a limit for the number of protected snapshots on
the PVE side.
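
As a storage property, this could look like the following sketch (the
exact description and constraints are assumptions):

    my $properties = {
        'max-protected-backups' => {
            description => "Maximal number of protected backups per guest.",
            type => 'integer',
            minimum => -1,   # assumption: -1 meaning unlimited
            default => 5,
            optional => 1,
        },
    };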

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago api: file restore: use check_volume_access to restrict content type
Fabian Ebner [Wed, 30 Mar 2022 10:24:33 +0000 (12:24 +0200)]
api: file restore: use check_volume_access to restrict content type

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago pvesm: extract config: add content type check
Fabian Ebner [Wed, 30 Mar 2022 10:24:32 +0000 (12:24 +0200)]
pvesm: extract config: add content type check

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago check volume access: add content type parameter
Fabian Ebner [Wed, 30 Mar 2022 10:24:31 +0000 (12:24 +0200)]
check volume access: add content type parameter

Adding such a check here avoids the need to parse at the call sites in
some cases.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago check volume access: allow for images/rootdir if user has VM.Config.Disk
Fabian Ebner [Wed, 30 Mar 2022 10:24:30 +0000 (12:24 +0200)]
check volume access: allow for images/rootdir if user has VM.Config.Disk

Listing guest images should not require Datastore.Allocate in this
case. In preparation for adding disk import to the GUI.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago check volume access: always allow with Datastore.Allocate privilege
Fabian Ebner [Wed, 30 Mar 2022 10:24:29 +0000 (12:24 +0200)]
check volume access: always allow with Datastore.Allocate privilege

Such users are supposed to be administrators of the storage, but
previously, access to backups was not allowed when not also having
VM.Backup.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago pvesm: extract config: check for VM.Backup privilege
Fabian Ebner [Wed, 30 Mar 2022 10:24:28 +0000 (12:24 +0200)]
pvesm: extract config: check for VM.Backup privilege

In preparation for having check_volume_access() always allow access
for users with the Datastore.Allocate privilege, so as to not
automatically give all such users permission to extract the config too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago list volumes: also return backup type for backups
Fabian Ebner [Thu, 16 Dec 2021 12:12:23 +0000 (13:12 +0100)]
list volumes: also return backup type for backups

Otherwise, there is no storage-agnostic way to filter by backup group.

Call it subtype, to not confuse it with content type, and to be able
to re-use it for other content types than backup, if the need ever
arises.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago cifs: check connection: bubble up NT_STATUS_LOGON_FAILURE
Fabian Ebner [Mon, 15 Nov 2021 12:37:56 +0000 (13:37 +0100)]
cifs: check connection: bubble up NT_STATUS_LOGON_FAILURE

in the same manner as NT_STATUS_ACCESS_DENIED. It can be assumed to be
a configuration error, so avoid showing the generic "storage <storeid>
is not online". Reported in the community forum:
https://forum.proxmox.com/threads/storage-is-not-online-cifs.99201/post-428858
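
The check boils down to matching the smbclient output for both status
codes (sketch):

    use strict;
    use warnings;

    sub check_cifs_connection_output {
        my ($storeid, $out) = @_;
        # bubble up authentication failures as configuration errors
        # instead of the generic "storage '$storeid' is not online"
        die "storage '$storeid': $1\n"
            if $out =~ m/(NT_STATUS_ACCESS_DENIED|NT_STATUS_LOGON_FAILURE)/;
    }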

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago activate storage: improve error when check_connection dies
Fabian Ebner [Mon, 15 Nov 2021 12:37:55 +0000 (13:37 +0100)]
activate storage: improve error when check_connection dies

by making sure the storage ID is part of the error. This can happen
for (at least) CIFS, and GlusterFS with a local server.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago storage/plugin: factoring out regex for backup extension re
Lorenz Stechauner [Fri, 22 Oct 2021 12:23:11 +0000 (14:23 +0200)]
storage/plugin: factoring out regex for backup extension re

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2 years ago storage: rename REs for iso and vztmpl extensions
Lorenz Stechauner [Fri, 22 Oct 2021 12:23:10 +0000 (14:23 +0200)]
storage: rename REs for iso and vztmpl extensions

these changes make it clearer how many capture groups each
RE includes.

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2 years ago zfs: volume import: use correct format for renaming
Fabian Ebner [Thu, 3 Mar 2022 12:31:21 +0000 (13:31 +0100)]
zfs: volume import: use correct format for renaming

Previously, the transport format (which currently is always 'zfs') was
passed in, resulting in subvol disks not being renamed correctly.

Fixes: a97d3ee ("Introduce allow_rename parameter for pvesm import and storage_migrate")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2 years ago file_size_info: cast 'size' and 'used' to integer
Mira Limbeck [Fri, 18 Feb 2022 08:58:27 +0000 (09:58 +0100)]
file_size_info: cast 'size' and 'used' to integer

`qemu-img info --output=json` returns the size and used values as integers in
the JSON format, but the regex match converts them to strings.
As we know they only contain digits, we can simply cast them back to integers
after the regex.

The API requires them to be integers.
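
The fix is a plain numeric cast after the regex match (sketch):

    use strict;
    use warnings;

    # shortened example output of `qemu-img info --output=json`
    my $json = '{ "virtual-size": 34359738368, "actual-size": 1508966400 }';
    my ($size, $used) =
        $json =~ m/"virtual-size":\s*(\d+).*?"actual-size":\s*(\d+)/s;
    # regex captures are strings in Perl; cast so the API returns integers
    ($size, $used) = (int($size), int($used));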

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago fix #3894: cast 'size' and 'used' to integer
Mira Limbeck [Fri, 18 Feb 2022 08:58:26 +0000 (09:58 +0100)]
fix #3894: cast 'size' and 'used' to integer

Perl's automatic conversion can lead to integers being converted to
strings, for example by matching it in a regex.

To make sure we always return an integer in the API call, add an
explicit cast to integer.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago add volume_import/export_start helpers
Fabian Grünbichler [Wed, 9 Feb 2022 13:07:50 +0000 (14:07 +0100)]
add volume_import/export_start helpers

exposing the two halves of a storage migration for usage across
cluster boundaries.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago storage_migrate: pull out import/export_prepare
Fabian Grünbichler [Wed, 9 Feb 2022 13:07:49 +0000 (14:07 +0100)]
storage_migrate: pull out import/export_prepare

for re-use with remote migration, where import and export happen on
different clusters connected via a websocket instead of SSH tunnel.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago storage_migrate_snapshot: skip for btrfs without snapshots
Fabian Grünbichler [Wed, 9 Feb 2022 13:07:48 +0000 (14:07 +0100)]
storage_migrate_snapshot: skip for btrfs without snapshots

this allows migrating from btrfs to other raw+size accepting storages,
provided no snapshots exist.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago bump version to 7.1-1
Thomas Lamprecht [Fri, 4 Feb 2022 17:08:09 +0000 (18:08 +0100)]
bump version to 7.1-1

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago rbd: followup code style cleanups
Thomas Lamprecht [Fri, 4 Feb 2022 17:04:31 +0000 (18:04 +0100)]
rbd: followup code style cleanups

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago fix #1816: rbd: add support for erasure coded ec pools
Aaron Lauterer [Fri, 28 Jan 2022 11:22:41 +0000 (12:22 +0100)]
fix #1816: rbd: add support for erasure coded ec pools

The first step is to allocate rbd images correctly.

The metadata objects still need to be stored in a replicated pool, but
by providing the --data-pool parameter on image creation, we can place
the data objects on the erasure coded (EC) pool.
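
On the CLI level this corresponds to something like (illustrative pool
and image names):

    use strict;
    use warnings;
    use PVE::Tools;

    # metadata stays in the replicated pool, data objects go to the EC pool
    PVE::Tools::run_command([
        'rbd', 'create',
        '--pool', 'rbd-metadata',
        '--data-pool', 'rbd-ec-data',
        '--size', '4096',   # MiB
        'vm-100-disk-0',
    ]);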

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2 years ago storage_migrate: pull out snapshot decision
Fabian Grünbichler [Thu, 3 Feb 2022 12:41:41 +0000 (13:41 +0100)]
storage_migrate: pull out snapshot decision

into new top-level helper for re-use with remote migration.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago volname_for_storage: parse volname before calling
Fabian Grünbichler [Thu, 3 Feb 2022 12:41:40 +0000 (13:41 +0100)]
volname_for_storage: parse volname before calling

to allow reusing this with remote migration, where parsing of the source
volid has to happen on the source node, but this call has to happen on
the target node.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago CephConfig: ensure newline in $secret and $cephfs_secret parameter
Aaron Lauterer [Mon, 24 Jan 2022 15:11:53 +0000 (16:11 +0100)]
CephConfig: ensure newline in $secret and $cephfs_secret parameter

Ensure that the user provided $secret ends in a newline. Otherwise we
will have Input/output errors from rados_connect.

For consistency and possible future proofing, also add a newline to
CephFS secrets.
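
The fix itself is a one-liner per secret (sketch):

    my $secret = 'AQBd1r9iAAAAABAAqGkHLRmYt5uqzAOlZUfcDA=='; # example key
    # rados_connect gives Input/output errors without the trailing newline
    $secret .= "\n" if $secret !~ m/\n$/;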

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago zfs: use -r parameter when listing snapshots
Fabian Ebner [Mon, 10 Jan 2022 11:50:44 +0000 (12:50 +0100)]
zfs: use -r parameter when listing snapshots

Some versions of ZFS do not automatically display the child snapshots
when '-t snapshot' is used, but require '-r' to be present
additionally[1]. And in general, it's cleaner to specify the flag
explicitly.

Because of that, commit ac5c1af led to a regression[0] in the context
of ZFS over iSCSI with zfs_get_sorted_snapshot_list. Fix it, by adding
a -r flag again.

The volume_snapshot_info function is currently only used in the
context of replication and that requires a local ZFS pool, but it
would be affected by the same issue if it is ever used in the context
of ZFS over iSCSI, so also add -r there.
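
Schematically, the listing then becomes (a sketch using
PVE::Tools::run_command; the output columns are illustrative):

    use strict;
    use warnings;
    use PVE::Tools;

    my $vname = 'rpool/data/vm-100-disk-0'; # example dataset
    my @snapshots;
    PVE::Tools::run_command(
        # -r makes child snapshots show up on all ZFS versions
        ['zfs', 'list', '-r', '-t', 'snapshot', '-H', '-p',
         '-o', 'name,creation', $vname],
        outfunc => sub { push @snapshots, $_[0] },
    );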

[0]: https://forum.proxmox.com/threads/102683/
[1]: https://forum.proxmox.com/threads/102683/post-442577

Fixes: 8c20d8a ("plugin: add volume_snapshot_info function")
Fixes: ac5c1af ("zfspool: add zfs_get_sorted_snapshot_list helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago lvm thin: add missing newline to error message
Fabian Ebner [Thu, 18 Nov 2021 10:17:22 +0000 (11:17 +0100)]
lvm thin: add missing newline to error message

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago pbs: update attribute: cleaner error message if not supported
Fabian Ebner [Fri, 12 Nov 2021 14:29:42 +0000 (15:29 +0100)]
pbs: update attribute: cleaner error message if not supported

Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago bump version to 7.0-15
Fabian Grünbichler [Wed, 10 Nov 2021 13:25:26 +0000 (14:25 +0100)]
bump version to 7.0-15

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago lvm thin: don't assume that a thin pool and its volumes are active
Fabian Ebner [Fri, 5 Nov 2021 10:29:45 +0000 (11:29 +0100)]
lvm thin: don't assume that a thin pool and its volumes are active

There are cases where autoactivation can fail, as reported in the
community forum [0]. And it could also be that a volume was
deactivated by something outside of our control.

It doesn't seem strictly necessary to activate the thin pool itself
(creating/removing/activating LVs within the pool still works if it's
not active), but it does not report usage information as long as
neither the pool nor any of its LVs are active. Activate the pool for
that, for being able to use the flag in status(), and it should also
serve as a good indicator that there's a problem with the pool if it
can't be activated.

Before activating, check the (cached) lv_state from lvm_list_volumes.
It's necessary to update the cache in activate_storage, because the
flag is re-used in status(). Also update it for other (de)activations
to be more future-proof.

[0]: https://forum.proxmox.com/threads/local-lvm-not-available-after-kernel-update-on-pve-7.97406
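
The activation step is roughly the following sketch; the real code
consults the lv state cached by lvm_list_volumes:

    use strict;
    use warnings;
    use PVE::Tools;

    sub activate_thin_pool {
        my ($vg, $pool, $cached_active) = @_;
        return if $cached_active; # per the lvm_list_volumes cache
        PVE::Tools::run_command(
            ['lvchange', '-ay', "$vg/$pool"],
            errmsg => "activating thin pool '$vg/$pool' failed",
        );
    }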

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago lvm thin: status: code cleanup
Fabian Ebner [Fri, 5 Nov 2021 10:29:44 +0000 (11:29 +0100)]
lvm thin: status: code cleanup

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago api: disks: delete: add flag for cleaning up storage config
Fabian Ebner [Mon, 25 Oct 2021 13:47:49 +0000 (15:47 +0200)]
api: disks: delete: add flag for cleaning up storage config

Update node restrictions to reflect that the storage is not available
anymore on the particular node. If the storage was only configured for
that node, remove it altogether.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
slight style fixup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago api: disks: delete: add flag for wiping disks
Fabian Ebner [Mon, 25 Oct 2021 13:47:48 +0000 (15:47 +0200)]
api: disks: delete: add flag for wiping disks

For ZFS and directory storages, clean up the whole disk when the
layout is as usual to avoid left-overs.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago diskmanage: add helper for udev workaround
Fabian Ebner [Mon, 25 Oct 2021 13:47:47 +0000 (15:47 +0200)]
diskmanage: add helper for udev workaround

to avoid duplication. Current callers pass along at least one device,
but anticipate future callers that might call with the empty list. Do
nothing in that case, rather than triggering everything.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs
Fabian Ebner [Mon, 25 Oct 2021 13:47:45 +0000 (15:47 +0200)]
api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago api: list thin pools: add volume group to properties
Fabian Ebner [Mon, 25 Oct 2021 13:47:46 +0000 (15:47 +0200)]
api: list thin pools: add volume group to properties

So that DELETE can be called using only information from GET.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago LVM: add lvm_destroy_volume_group
Fabian Ebner [Mon, 25 Oct 2021 13:47:44 +0000 (15:47 +0200)]
LVM: add lvm_destroy_volume_group

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago bump version to 7.0-14
Fabian Grünbichler [Tue, 9 Nov 2021 15:13:13 +0000 (16:13 +0100)]
bump version to 7.0-14

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago add disk rename feature
Aaron Lauterer [Tue, 9 Nov 2021 14:55:32 +0000 (15:55 +0100)]
add disk rename feature

Functionality has been added for the following storage types:

* directory ones, based on the default implementation:
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph

A new feature `rename` has been introduced to mark which storage
plugin supports the feature.

Version API and AGE have been bumped.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
the intention of this feature is to support the following use-cases:
- reassign a volume from one owning guest to another (which usually
  entails a rename, since the owning vmid is encoded in the volume name)
- rename a volume (e.g., to use a more meaningful name instead of the
  auto-assigned ...-disk-123)

only the former is implemented at the caller side in
qemu-server/pve-container for now, but since the lower-level feature is
basically the same for both, we can take advantage of the storage plugin
API bump now to get the building block for this future feature in place
already.
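
Callers can gate on the new feature via the existing
volume_has_feature entry point, roughly:

    use strict;
    use warnings;
    use PVE::Storage;

    my $cfg = PVE::Storage::config();
    my $volid = 'local-lvm:vm-100-disk-0'; # example volume
    die "storage does not support the 'rename' feature\n"
        if !PVE::Storage::volume_has_feature($cfg, 'rename', $volid);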

adapt ApiChangelog change to fix conflicts and added more detail above

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago api changelog: add volume attributes change
Fabian Grünbichler [Tue, 9 Nov 2021 11:29:48 +0000 (12:29 +0100)]
api changelog: add volume attributes change

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago pbs: integrate support for protected
Fabian Ebner [Thu, 30 Sep 2021 11:42:10 +0000 (13:42 +0200)]
pbs: integrate support for protected

free_image doesn't need to check for protection, because that will
happen on the server.

Getting/updating notes has also been refactored to re-use the code
for the PBS api calls.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
add missing b-d and depend on libposix-strptime-perl

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago prune: mark renamed and protected backups differently
Fabian Ebner [Thu, 30 Sep 2021 11:42:09 +0000 (13:42 +0200)]
prune: mark renamed and protected backups differently

While it makes no difference for pruning itself, protected backups are
additionally protected against removal. Avoid the potential to confuse
the two. Also update the description for the API return value and add
an enum constraint.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago fix #3307: make it possible to set protection for backups
Fabian Ebner [Thu, 30 Sep 2021 11:42:08 +0000 (13:42 +0200)]
fix #3307: make it possible to set protection for backups

A protected backup is not removed by free_image and ignored when
pruning.

The protection_file_path function is introduced in Storage.pm, so that
it can also be used by vzdump itself and in archive_remove.

For pruning, renamed backups already behaved similarly to how protected
backups will, but there are a few reasons to not just use that for
implementing the new feature:
1. It wouldn't protect against removal.
2. It would make it necessary to rename notes and log files too.
3. It wouldn't naturally extend to other volumes if that's needed.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago prune mark: preserve additional information for the keep-all case
Fabian Ebner [Thu, 30 Sep 2021 11:42:07 +0000 (13:42 +0200)]
prune mark: preserve additional information for the keep-all case

Currently, that additional information is whether an entry is already
marked as 'protected'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>