git.proxmox.com Git - pve-storage.git/log
pve-storage.git
2 years ago: fix #3803: ZFSPoolPlugin: zfs_request: increase minimum timeout in worker
Dominik Csapak [Thu, 23 Dec 2021 12:06:22 +0000 (13:06 +0100)]
fix #3803: ZFSPoolPlugin: zfs_request: increase minimum timeout in worker

Since most zfs operations can take a while (under certain conditions),
increase the minimum timeout for zfs_request in workers to 5 minutes.

We cannot increase the timeouts in synchronous API calls, since those
are hard-limited to 30 seconds, but in workers we do not have such
limits.

The existing defaults do not change (60 minutes in a worker, 5 seconds
otherwise), but all zfs_requests with an explicitly set timeout below
5 minutes will use the increased 5-minute minimum in a worker.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
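The bump amounts to a simple clamp. A hypothetical Python sketch (the real code is Perl in ZFSPoolPlugin.pm; the function name and defaults layout here are invented for illustration):

```python
def effective_timeout(requested, in_worker):
    """Return the timeout (seconds) to use for a zfs command (hypothetical sketch)."""
    if requested is None:
        # defaults stay as before: 60 minutes in a worker, 5 seconds otherwise
        return 60 * 60 if in_worker else 5
    if in_worker:
        # explicit timeouts below 5 minutes are raised to 5 minutes in a worker
        return max(requested, 5 * 60)
    # synchronous API calls are hard-limited to 30 seconds upstream anyway
    return requested
```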
2 years ago: ceph config: minor code cleanup & comment
Thomas Lamprecht [Tue, 26 Apr 2022 10:47:54 +0000 (12:47 +0200)]
ceph config: minor code cleanup & comment

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago: some code style refactoring/cleanup
Thomas Lamprecht [Fri, 22 Apr 2022 12:30:01 +0000 (14:30 +0200)]
some code style refactoring/cleanup

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago: bump version to 7.1-2
Fabian Grünbichler [Wed, 6 Apr 2022 11:30:21 +0000 (13:30 +0200)]
bump version to 7.1-2

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: disks: zfs: code indentation/style improvements
Thomas Lamprecht [Wed, 6 Apr 2022 10:56:43 +0000 (12:56 +0200)]
disks: zfs: code indentation/style improvements

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago: plugins: allow limiting the number of protected backups per guest
Fabian Ebner [Tue, 29 Mar 2022 12:53:13 +0000 (14:53 +0200)]
plugins: allow limiting the number of protected backups per guest

The ability to mark backups as protected broke vzdump's implicit
assumption that remove=1, with the current number of backups already
at the limit (i.e. the sum of all keep options), will result in a
backup being removed.

Introduce a new storage property 'max-protected-backups' to limit the
number of protected backups per guest. Use 5 as a default value, as it
should cover most use cases, while still not having too big of a
potential overhead in many scenarios.

For external plugins that do not return the backup subtype in
list_volumes, all protected backups with the same ID will count
towards the limit.

An alternative would be to count the protected backups when pruning.
While that would avoid the need for a new property, it would break the
current semantics of protected backups being ignored for pruning. It
also would be less flexible, e.g. for PBS, it can make sense to have
both keep-all=1 and a limit for the number of protected snapshots on
the PVE side.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
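A minimal sketch of the per-guest limit check (hypothetical Python; the real implementation lives in the Perl plugins and the volume-hash keys here are assumptions):

```python
def protected_backup_count(volumes, vmid):
    """Count protected backups owned by a guest (hypothetical helper)."""
    return sum(1 for v in volumes if v.get("vmid") == vmid and v.get("protected"))

def can_protect_another(volumes, vmid, max_protected=5):
    # 'max-protected-backups' defaults to 5 per guest
    return protected_backup_count(volumes, vmid) < max_protected
```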
2 years ago: api: file restore: use check_volume_access to restrict content type
Fabian Ebner [Wed, 30 Mar 2022 10:24:33 +0000 (12:24 +0200)]
api: file restore: use check_volume_access to restrict content type

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: pvesm: extract config: add content type check
Fabian Ebner [Wed, 30 Mar 2022 10:24:32 +0000 (12:24 +0200)]
pvesm: extract config: add content type check

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: check volume access: add content type parameter
Fabian Ebner [Wed, 30 Mar 2022 10:24:31 +0000 (12:24 +0200)]
check volume access: add content type parameter

Adding such a check here avoids the need to parse at the call sites in
some cases.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: check volume access: allow for images/rootdir if user has VM.Config.Disk
Fabian Ebner [Wed, 30 Mar 2022 10:24:30 +0000 (12:24 +0200)]
check volume access: allow for images/rootdir if user has VM.Config.Disk

Listing guest images should not require Datastore.Allocate in this
case. In preparation for adding disk import to the GUI.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: check volume access: always allow with Datastore.Allocate privilege
Fabian Ebner [Wed, 30 Mar 2022 10:24:29 +0000 (12:24 +0200)]
check volume access: always allow with Datastore.Allocate privilege

Such users are supposed to be administrators of the storage, but
previously, access to backups was not allowed when not also having
VM.Backup.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: pvesm: extract config: check for VM.Backup privilege
Fabian Ebner [Wed, 30 Mar 2022 10:24:28 +0000 (12:24 +0200)]
pvesm: extract config: check for VM.Backup privilege

In preparation for having check_volume_access() always allow access
for users with the Datastore.Allocate privilege, so as not to
automatically give all such users permission to extract the config
too.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: list volumes: also return backup type for backups
Fabian Ebner [Thu, 16 Dec 2021 12:12:23 +0000 (13:12 +0100)]
list volumes: also return backup type for backups

Otherwise, there is no storage-agnostic way to filter by backup group.

Call it subtype, to not confuse it with content type, and to be able
to re-use it for other content types than backup, if the need ever
arises.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: cifs: check connection: bubble up NT_STATUS_LOGON_FAILURE
Fabian Ebner [Mon, 15 Nov 2021 12:37:56 +0000 (13:37 +0100)]
cifs: check connection: bubble up NT_STATUS_LOGON_FAILURE

in the same manner as NT_STATUS_ACCESS_DENIED. It can be assumed to be
a configuration error, so avoid showing the generic "storage <storeid>
is not online". Reported in the community forum:
https://forum.proxmox.com/threads/storage-is-not-online-cifs.99201/post-428858

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: activate storage: improve error when check_connection dies
Fabian Ebner [Mon, 15 Nov 2021 12:37:55 +0000 (13:37 +0100)]
activate storage: improve error when check_connection dies

by making sure the storage ID is part of the error. This can happen
for (at least) CIFS, and GlusterFS with local server.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: storage/plugin: factoring out regex for backup extension re
Lorenz Stechauner [Fri, 22 Oct 2021 12:23:11 +0000 (14:23 +0200)]
storage/plugin: factoring out regex for backup extension re

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2 years ago: storage: rename REs for iso and vztmpl extensions
Lorenz Stechauner [Fri, 22 Oct 2021 12:23:10 +0000 (14:23 +0200)]
storage: rename REs for iso and vztmpl extensions

these changes make it clearer how many capture groups each RE
includes.

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2 years ago: zfs: volume import: use correct format for renaming
Fabian Ebner [Thu, 3 Mar 2022 12:31:21 +0000 (13:31 +0100)]
zfs: volume import: use correct format for renaming

Previously, the transport format (which currently is always 'zfs') was
passed in, resulting in subvol disks not being renamed correctly.

Fixes: a97d3ee ("Introduce allow_rename parameter for pvesm import and storage_migrate")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
2 years ago: file_size_info: cast 'size' and 'used' to integer
Mira Limbeck [Fri, 18 Feb 2022 08:58:27 +0000 (09:58 +0100)]
file_size_info: cast 'size' and 'used' to integer

`qemu-img info --output=json` returns the size and used values as integers in
the JSON format, but the regex match converts them to strings.
As we know they only contain digits, we can simply cast them back to integers
after the regex.

The API requires them to be integers.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
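The symptom is easy to reproduce in any language where a regex capture is a string. A Python illustration of the type mismatch (not the Perl fix itself; the sample line is made up):

```python
import json
import re

# a size captured from textual output is a string, even if it is all digits
line = "virtual size: 10 GiB (10737418240 bytes)"
size = re.search(r"\((\d+) bytes\)", line).group(1)

assert json.dumps({"size": size}) == '{"size": "10737418240"}'     # string: violates the API schema
assert json.dumps({"size": int(size)}) == '{"size": 10737418240}'  # explicit cast restores the integer
```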
2 years ago: fix #3894: cast 'size' and 'used' to integer
Mira Limbeck [Fri, 18 Feb 2022 08:58:26 +0000 (09:58 +0100)]
fix #3894: cast 'size' and 'used' to integer

Perl's automatic conversion can lead to integers being converted to
strings, for example by matching it in a regex.

To make sure we always return an integer in the API call, add an
explicit cast to integer.

Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: add volume_import/export_start helpers
Fabian Grünbichler [Wed, 9 Feb 2022 13:07:50 +0000 (14:07 +0100)]
add volume_import/export_start helpers

exposing the two halves of a storage migration for usage across
cluster boundaries.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: storage_migrate: pull out import/export_prepare
Fabian Grünbichler [Wed, 9 Feb 2022 13:07:49 +0000 (14:07 +0100)]
storage_migrate: pull out import/export_prepare

for re-use with remote migration, where import and export happen on
different clusters connected via a websocket instead of SSH tunnel.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: storage_migrate_snapshot: skip for btrfs without snapshots
Fabian Grünbichler [Wed, 9 Feb 2022 13:07:48 +0000 (14:07 +0100)]
storage_migrate_snapshot: skip for btrfs without snapshots

this allows migrating from btrfs to other raw+size accepting storages,
provided no snapshots exist.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: bump version to 7.1-1
Thomas Lamprecht [Fri, 4 Feb 2022 17:08:09 +0000 (18:08 +0100)]
bump version to 7.1-1

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago: rbd: followup code style cleanups
Thomas Lamprecht [Fri, 4 Feb 2022 17:04:31 +0000 (18:04 +0100)]
rbd: followup code style cleanups

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago: fix #1816: rbd: add support for erasure coded ec pools
Aaron Lauterer [Fri, 28 Jan 2022 11:22:41 +0000 (12:22 +0100)]
fix #1816: rbd: add support for erasure coded ec pools

The first step is to allocate rbd images correctly.

The metadata objects still need to be stored in a replicated pool, but
by providing the --data-pool parameter on image creation, we can place
the data objects on the erasure coded (EC) pool.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2 years ago: storage_migrate: pull out snapshot decision
Fabian Grünbichler [Thu, 3 Feb 2022 12:41:41 +0000 (13:41 +0100)]
storage_migrate: pull out snapshot decision

into new top-level helper for re-use with remote migration.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: volname_for_storage: parse volname before calling
Fabian Grünbichler [Thu, 3 Feb 2022 12:41:40 +0000 (13:41 +0100)]
volname_for_storage: parse volname before calling

to allow reusing this with remote migration, where parsing of the source
volid has to happen on the source node, but this call has to happen on
the target node.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: CephConfig: ensure newline in $secret and $cephfs_secret parameter
Aaron Lauterer [Mon, 24 Jan 2022 15:11:53 +0000 (16:11 +0100)]
CephConfig: ensure newline in $secret and $cephfs_secret parameter

Ensure that the user provided $secret ends in a newline. Otherwise we
will have Input/output errors from rados_connect.

For consistency and possible future proofing, also add a newline to
CephFS secrets.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
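The guard itself is trivial. A hypothetical Python equivalent of the Perl check (function name invented):

```python
def ensure_trailing_newline(secret):
    # rados_connect fails with Input/output errors if the keyring
    # content does not end in a newline
    return secret if secret.endswith("\n") else secret + "\n"
```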
2 years ago: zfs: use -r parameter when listing snapshots
Fabian Ebner [Mon, 10 Jan 2022 11:50:44 +0000 (12:50 +0100)]
zfs: use -r parameter when listing snapshots

Some versions of ZFS do not automatically display the child snapshots
when '-t snapshot' is used, but require '-r' to be present
additionally[1]. And in general, it's cleaner to specify the flag
explicitly.

Because of that, commit ac5c1af led to a regression[0] in the context
of ZFS over iSCSI with zfs_get_sorted_snapshot_list. Fix it by adding
the -r flag again.

The volume_snapshot_info function is currently only used in the
context of replication and that requires a local ZFS pool, but it
would be affected by the same issue if it is ever used in the context
of ZFS over iSCSI, so also add -r there.

[0]: https://forum.proxmox.com/threads/102683/
[1]: https://forum.proxmox.com/threads/102683/post-442577

Fixes: 8c20d8a ("plugin: add volume_snapshot_info function")
Fixes: ac5c1af ("zfspool: add zfs_get_sorted_snapshot_list helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
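A sketch of how the listing command could be assembled (hypothetical Python helper; the `zfs list` flags are real, but sorting by creation is an assumption based on the helper's name):

```python
def snapshot_list_cmd(dataset):
    """Build a 'zfs list' command covering all snapshots of a dataset."""
    # -H: no header, -p: parseable numbers, -s creation: sort ascending
    # by creation time; -r is required because some ZFS versions do not
    # show child snapshots with '-t snapshot' alone
    return ["zfs", "list", "-r", "-t", "snapshot", "-Hp",
            "-o", "name,creation", "-s", "creation", dataset]
```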
2 years ago: lvm thin: add missing newline to error message
Fabian Ebner [Thu, 18 Nov 2021 10:17:22 +0000 (11:17 +0100)]
lvm thin: add missing newline to error message

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: pbs: update attribute: cleaner error message if not supported
Fabian Ebner [Fri, 12 Nov 2021 14:29:42 +0000 (15:29 +0100)]
pbs: update attribute: cleaner error message if not supported

Reported-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: bump version to 7.0-15
Fabian Grünbichler [Wed, 10 Nov 2021 13:25:26 +0000 (14:25 +0100)]
bump version to 7.0-15

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: lvm thin: don't assume that a thin pool and its volumes are active
Fabian Ebner [Fri, 5 Nov 2021 10:29:45 +0000 (11:29 +0100)]
lvm thin: don't assume that a thin pool and its volumes are active

There are cases where autoactivation can fail, as reported in the
community forum [0]. And it could also be that a volume was
deactivated by something outside of our control.

It doesn't seem strictly necessary to activate the thin pool itself
(creating/removing/activating LVs within the pool still works if it's
not active), but usage information is not reported as long as neither
the pool nor any of its LVs are active. Activate the pool so that this
information is available and the flag can be re-used in status(); a
pool that cannot be activated should also serve as a good indicator
that something is wrong with it.

Before activating, check the (cached) lv_state from lvm_list_volumes.
It's necessary to update the cache in activate_storage, because the
flag is re-used in status(). Also update it for other (de)activations
to be more future-proof.

[0]: https://forum.proxmox.com/threads/local-lvm-not-available-after-kernel-update-on-pve-7.97406

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
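Sketch of the activation step (hypothetical Python command builder; the actual code is Perl in LvmThinPlugin.pm and the cached-state shape here is an assumption):

```python
def activation_cmds(vg, thinpool, lv_states):
    """Return lvchange commands for the pool and any inactive LVs.

    lv_states maps LV name -> bool (active?), as cached from
    lvm_list_volumes in this sketch.
    """
    cmds = []
    if not lv_states.get(thinpool, False):
        # activate the pool itself so usage information gets reported
        cmds.append(["lvchange", "-ay", f"{vg}/{thinpool}"])
    cmds += [["lvchange", "-ay", f"{vg}/{lv}"]
             for lv, active in sorted(lv_states.items())
             if lv != thinpool and not active]
    return cmds
```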
2 years ago: lvm thin: status: code cleanup
Fabian Ebner [Fri, 5 Nov 2021 10:29:44 +0000 (11:29 +0100)]
lvm thin: status: code cleanup

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: api: disks: delete: add flag for cleaning up storage config
Fabian Ebner [Mon, 25 Oct 2021 13:47:49 +0000 (15:47 +0200)]
api: disks: delete: add flag for cleaning up storage config

Update node restrictions to reflect that the storage is not available
anymore on the particular node. If the storage was only configured for
that node, remove it altogether.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
slight style fixup

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: api: disks: delete: add flag for wiping disks
Fabian Ebner [Mon, 25 Oct 2021 13:47:48 +0000 (15:47 +0200)]
api: disks: delete: add flag for wiping disks

For ZFS and directory storages, clean up the whole disk when the
layout is as usual to avoid left-overs.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: diskmanage: add helper for udev workaround
Fabian Ebner [Mon, 25 Oct 2021 13:47:47 +0000 (15:47 +0200)]
diskmanage: add helper for udev workaround

to avoid duplication. Current callers always pass at least one
device, but anticipate future callers that might pass an empty list;
do nothing in that case, rather than triggering everything.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
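The helper's contract could look like this (hypothetical Python; `udevadm trigger` does accept device paths, but the function name is invented):

```python
def udev_trigger_cmd(devices):
    """Build the 'udevadm trigger' call for the changed devices."""
    if not devices:
        # 'udevadm trigger' without arguments would re-trigger every
        # device, so explicitly do nothing for an empty list
        return None
    return ["udevadm", "trigger", *devices]
```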
2 years ago: api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs
Fabian Ebner [Mon, 25 Oct 2021 13:47:45 +0000 (15:47 +0200)]
api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: api: list thin pools: add volume group to properties
Fabian Ebner [Mon, 25 Oct 2021 13:47:46 +0000 (15:47 +0200)]
api: list thin pools: add volume group to properties

So that DELETE can be called using only information from GET.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: LVM: add lvm_destroy_volume_group
Fabian Ebner [Mon, 25 Oct 2021 13:47:44 +0000 (15:47 +0200)]
LVM: add lvm_destroy_volume_group

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: bump version to 7.0-14
Fabian Grünbichler [Tue, 9 Nov 2021 15:13:13 +0000 (16:13 +0100)]
bump version to 7.0-14

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: add disk rename feature
Aaron Lauterer [Tue, 9 Nov 2021 14:55:32 +0000 (15:55 +0100)]
add disk rename feature

Functionality has been added for the following storage types:

* directory ones, based on the default implementation:
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph

A new feature `rename` has been introduced to mark which storage
plugin supports the feature.

Version API and AGE have been bumped.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
the intention of this feature is to support the following use-cases:
- reassign a volume from one owning guest to another (which usually
  entails a rename, since the owning vmid is encoded in the volume name)
- rename a volume (e.g., to use a more meaningful name instead of the
  auto-assigned ...-disk-123)

only the former is implemented at the caller side in
qemu-server/pve-container for now, but since the lower-level feature is
basically the same for both, we can take advantage of the storage plugin
API bump now to get the building block for this future feature in place
already.

adapted the ApiChangelog change to fix conflicts and added more detail above

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: api changelog: add volume attributes change
Fabian Grünbichler [Tue, 9 Nov 2021 11:29:48 +0000 (12:29 +0100)]
api changelog: add volume attributes change

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: pbs: integrate support for protected
Fabian Ebner [Thu, 30 Sep 2021 11:42:10 +0000 (13:42 +0200)]
pbs: integrate support for protected

free_image doesn't need to check for protection, because that will
happen on the server.

Getting/updating notes has also been refactored to re-use the code
for the PBS api calls.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
add missing b-d and depend on libposix-strptime-perl

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
2 years ago: prune: mark renamed and protected backups differently
Fabian Ebner [Thu, 30 Sep 2021 11:42:09 +0000 (13:42 +0200)]
prune: mark renamed and protected backups differently

While it makes no difference for pruning itself, protected backups are
additionally protected against removal. Avoid the potential to confuse
the two. Also update the description for the API return value and add
an enum constraint.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: fix #3307: make it possible to set protection for backups
Fabian Ebner [Thu, 30 Sep 2021 11:42:08 +0000 (13:42 +0200)]
fix #3307: make it possible to set protection for backups

A protected backup is not removed by free_image and ignored when
pruning.

The protection_file_path function is introduced in Storage.pm, so that
it can also be used by vzdump itself and in archive_remove.

For pruning, renamed backups already behaved similarly to how protected
backups will, but there are a few reasons to not just use that for
implementing the new feature:
1. It wouldn't protect against removal.
2. It would make it necessary to rename notes and log files too.
3. It wouldn't naturally extend to other volumes if that's needed.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
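How "ignored when pruning" differs from a plain 'keep' can be sketched like this (hypothetical Python handling only a keep-last option; the real prune logic handles all keep options):

```python
def prune_mark(backups, keep_last):
    """Mark newest-first; 'protected' entries never count and never get removed."""
    kept = 0
    for b in sorted(backups, key=lambda b: b["ctime"], reverse=True):
        if b.get("protected"):
            b["mark"] = "protected"   # ignored for pruning, safe from removal
        elif kept < keep_last:
            b["mark"] = "keep"
            kept += 1
        else:
            b["mark"] = "remove"
    return backups
```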
2 years ago: prune mark: preserve additional information for the keep-all case
Fabian Ebner [Thu, 30 Sep 2021 11:42:07 +0000 (13:42 +0200)]
prune mark: preserve additional information for the keep-all case

Previously, the keep-all case marked every entry as 'keep', losing the
information if an entry was already marked as 'protected'.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: add generalized functions to manage volume attributes
Fabian Ebner [Thu, 30 Sep 2021 11:42:06 +0000 (13:42 +0200)]
add generalized functions to manage volume attributes

replacing the ones for handling notes. To ensure backwards
compatibility with external plugins, all plugins that do not just call
another implementation need to call $class->{get, update}_volume_notes
when the attribute is 'notes' to catch any derived implementations.

This is mainly done to avoid the need to add new methods every time a
new attribute is added.

Not adding a timeout parameter like the notes functions have, because
it was not used and can still be added if it ever is needed in the
future.

For get_volume_attribute, undef will indicate that the attribute is
not supported. This makes it possible to distinguish "not supported"
from "error getting the attribute", which is useful when the attribute
is important for an operation. For example, free_image checking for
protection (introduced in a later patch) can abort if getting the
'protected' attribute fails.

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: dir plugin: get notes: return undef if notes are not supported
Fabian Ebner [Thu, 30 Sep 2021 11:42:05 +0000 (13:42 +0200)]
dir plugin: get notes: return undef if notes are not supported

This avoids showing empty notes in the result of the content/{volid}
API call for volumes that do not even support notes. It's also in
preparation for the proposed get_volume_attribute generalization,
which expects undef to be returned when an attribute is not supported.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: dir plugin: update notes: don't fail if file is already removed
Fabian Ebner [Thu, 30 Sep 2021 11:42:04 +0000 (13:42 +0200)]
dir plugin: update notes: don't fail if file is already removed

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: bump APIVER and APIAGE
Fabian Ebner [Tue, 19 Oct 2021 07:54:52 +0000 (09:54 +0200)]
bump APIVER and APIAGE

Added blockers parameter to volume_rollback_is_possible.
Replaced volume_snapshot_list with volume_snapshot_info.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: plugin: remove volume_snapshot_list
Fabian Ebner [Tue, 19 Oct 2021 07:54:51 +0000 (09:54 +0200)]
plugin: remove volume_snapshot_list

which was only used by replication, but now replication uses
volume_snapshot_info instead.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: plugin: add volume_snapshot_info function
Fabian Ebner [Tue, 19 Oct 2021 07:54:50 +0000 (09:54 +0200)]
plugin: add volume_snapshot_info function

which allows for better choices of common replication snapshots.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: test: zfspool: extend some rollback is possible tests with new blockers parameter
Fabian Ebner [Thu, 12 Aug 2021 11:01:02 +0000 (13:01 +0200)]
test: zfspool: extend some rollback is possible tests with new blockers parameter

and fix a few typos.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: zfspool: add blockers parameter to volume_rollback_is_possible
Fabian Ebner [Thu, 12 Aug 2021 11:01:01 +0000 (13:01 +0200)]
zfspool: add blockers parameter to volume_rollback_is_possible

useful for rollback, so that only the required replication snapshots
can be removed, and it's possible to abort early without deleting any
replication snapshots if there are other non-replication snapshots
blocking rollback.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
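The idea, sketched in Python (hypothetical; the real code inspects the actual ZFS snapshot list):

```python
def rollback_is_possible(snapshots, target, blockers=None):
    """snapshots are ordered oldest -> newest, as ZFS created them.

    Rolling back to anything but the newest snapshot would destroy the
    snapshots taken after it; those are reported via 'blockers'.
    """
    newer = snapshots[snapshots.index(target) + 1:]
    if blockers is not None:
        blockers.extend(newer)
    return not newer
```

A replication-aware caller can then abort early if `blockers` contains anything that is not a replication snapshot.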
2 years ago: zfspool: add zfs_get_sorted_snapshot_list helper
Fabian Ebner [Thu, 12 Aug 2021 11:01:00 +0000 (13:01 +0200)]
zfspool: add zfs_get_sorted_snapshot_list helper

replacing the current zfs_get_latest_snapshot. For
volume_snapshot_list, ignore errors as before.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: cephfs: add support for multiple ceph filesystems
Dominik Csapak [Mon, 25 Oct 2021 14:01:27 +0000 (16:01 +0200)]
cephfs: add support for multiple ceph filesystems

by optionally saving the name of the cephfs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2 years ago: rbd plugin: free image: use actual command in error message
Fabian Ebner [Wed, 27 Oct 2021 12:46:20 +0000 (14:46 +0200)]
rbd plugin: free image: use actual command in error message

For linked clones, the base name was included, which is confusing.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: bump version to 7.0-13
Thomas Lamprecht [Thu, 14 Oct 2021 09:22:55 +0000 (11:22 +0200)]
bump version to 7.0-13

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago: test: also pass format for backing base image
Thomas Lamprecht [Thu, 14 Oct 2021 09:17:40 +0000 (11:17 +0200)]
test: also pass format for backing base image

mirroring what commit 9177cc2eda87c9a8f85a5ba73fa5f8e45cdd44de did
for the plugin system already; with QEMU 6.1 this is now a hard
requirement.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago: fix #3580: plugins: make preallocation mode selectable for qcow2 and raw images
Lorenz Stechauner [Tue, 12 Oct 2021 12:32:32 +0000 (14:32 +0200)]
fix #3580: plugins: make preallocation mode selectable for qcow2 and raw images

the plugins for file based storages
 * BTRFS
 * CIFS
 * Dir
 * Glusterfs
 * NFS
now allow the option 'preallocation'.

'preallocation' can have five values:
 * default
 * off
 * metadata
 * falloc
 * full
see the `qemu-img` man page for what these mean exactly. [0]

the default value was chosen to be
 * qcow2: metadata (as previously)
 * raw: off

when using 'metadata' as preallocation mode, for raw images 'off'
is used.

[0] https://qemu.readthedocs.io/en/latest/system/images.html#disk-image-file-formats

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
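The mode selection reduces to a few lines. A hypothetical Python sketch (the qemu-img `preallocation` option itself is real, see [0]; the helper name is invented):

```python
def preallocation_opt(fmt, mode=None):
    """Resolve the configured preallocation mode to a qemu-img -o option."""
    if mode in (None, "default"):
        mode = "metadata" if fmt == "qcow2" else "off"
    if fmt == "raw" and mode == "metadata":
        mode = "off"   # raw images have no metadata to preallocate
    return f"preallocation={mode}"
```

The resulting option would be passed along as in `qemu-img create -f qcow2 -o preallocation=metadata …`.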
2 years ago: ct templates: support zstd compressed archives
Thomas Lamprecht [Wed, 13 Oct 2021 07:06:51 +0000 (09:06 +0200)]
ct templates: support zstd compressed archives

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago: api: disks: create: set correct partition type
Fabian Ebner [Wed, 6 Oct 2021 09:18:46 +0000 (11:18 +0200)]
api: disks: create: set correct partition type

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: partially fix #2285: api: disks: allow partitions for creation paths
Fabian Ebner [Wed, 6 Oct 2021 09:18:45 +0000 (11:18 +0200)]
partially fix #2285: api: disks: allow partitions for creation paths

The calls for directory and ZFS need slight adaptations. Except for
those, the only thing that needs to be done is support partitions in
the disk_is_used helper.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: api: disks: initgpt: explicitly abort for partitions
Fabian Ebner [Wed, 6 Oct 2021 09:18:44 +0000 (11:18 +0200)]
api: disks: initgpt: explicitly abort for partitions

In preparation to extend disk_is_used to support partitions. Without
this new check, initgpt would also allow partitions once disk_is_used
supports partitions, which is not desirable.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: diskmanage: don't set usage for unused partitions
Fabian Ebner [Wed, 6 Oct 2021 09:18:43 +0000 (11:18 +0200)]
diskmanage: don't set usage for unused partitions

The disk type is already 'partition' so there's no additional
information here. And it would need to serve as a code-word for
unused partitions. The cleaner approach is to not set the usage.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: diskmanage: wipe blockdev: also change partition type
Fabian Ebner [Wed, 6 Oct 2021 09:18:42 +0000 (11:18 +0200)]
diskmanage: wipe blockdev: also change partition type

when called with a partition. Since get_disks uses the partition type
(among other things) to detect LVM and ZFS volumes, such volumes would
still be seen as in-use after wiping. Thus, also change the partition
type and simply use 0x83 "Linux filesystem".

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: diskmanage: add change_parttype and is_partition helpers
Fabian Ebner [Wed, 6 Oct 2021 09:18:41 +0000 (11:18 +0200)]
diskmanage: add change_parttype and is_partition helpers

For change_parttype, only GPT-partitioned disks are supported, as I
didn't see an option for sgdisk to make it also work with
MBR-partitioned disks. And while sfdisk could be used instead (or
additionally) it would be a new dependency, and AFAICS require some
conversion of partition type GUIDs to MBR types on our part.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
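A sketch of the helper pair (hypothetical Python; sgdisk's `--typecode` syntax is real, and for GPT the commit's 0x83 "Linux filesystem" corresponds to sgdisk type code 8300; the simplified device-name regex is an assumption):

```python
import re

def is_partition(devpath):
    # e.g. /dev/sda3 or /dev/nvme0n1p2, but not /dev/sda or /dev/nvme0n1
    # (simplified pattern for illustration only)
    return re.match(r"^/dev/(?:[a-z]+\d+|nvme\d+n\d+p\d+)$", devpath) is not None

def change_parttype_cmd(disk, partnum, parttype="8300"):
    # sgdisk only understands GPT partition tables
    return ["sgdisk", f"--typecode={partnum}:{parttype}", disk]
```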
2 years ago: btrfs: free image: only remove snapshots for current subvol
Fabian Ebner [Mon, 13 Sep 2021 09:01:42 +0000 (11:01 +0200)]
btrfs: free image: only remove snapshots for current subvol

instead of all in the same directory.

Reported in the community forum:
https://forum.proxmox.com/threads/error-could-not-statfs-no-such-file-or-directory.96057/

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years ago: bump version to 7.0-12
Thomas Lamprecht [Tue, 5 Oct 2021 04:25:08 +0000 (06:25 +0200)]
bump version to 7.0-12

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago: import: don't check for 1K aligned size
Stefan Reiter [Mon, 4 Oct 2021 15:29:19 +0000 (17:29 +0200)]
import: don't check for 1K aligned size

TPM state disks on directory storages may have completely unaligned
sizes, this check doesn't make sense for them.

This appears to just be a (weak) safeguard and not serve an actual
functional purpose, so simply get rid of it to allow migration of TPM
state.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2 years ago: d/control: break libpve-http-server-perl (<< 4.0-3)
Thomas Lamprecht [Mon, 4 Oct 2021 08:23:31 +0000 (10:23 +0200)]
d/control: break libpve-http-server-perl (<< 4.0-3)

commit e4d56f096ed28761d6b9a9e348be0fc682928040 removes a `sleep 1`
hack for the removal of a tempfile, which earlier happened in
pve-http-server but which we now do ourselves.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years ago: status: fix tmpfile cleanup
Lorenz Stechauner [Tue, 31 Aug 2021 10:16:32 +0000 (12:16 +0200)]
status: fix tmpfile cleanup

$tmpfilename already gets unlinked after executing the cmd.

furthermore, because this is a local file, it is wrong to delete
it via the ssh command on a remote node.

small change: added \n to the error message.

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2 years ago: fix #3505: status: add checksum and algorithm to file upload
Lorenz Stechauner [Tue, 31 Aug 2021 10:16:31 +0000 (12:16 +0200)]
fix #3505: status: add checksum and algorithm to file upload

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2 years ago: status: remove sleep(1) in file upload
Lorenz Stechauner [Tue, 31 Aug 2021 10:16:30 +0000 (12:16 +0200)]
status: remove sleep(1) in file upload

this racy sleep(1) is only there for legacy reasons: because
we don't use apache anymore and only emulate its behaviour
regarding removing temp files, this is under our own control
now and so we can improve this whole situation.

this change requires a pve-http-server version in which the
tmpfile no longer gets removed automatically.

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2 years ago: diskmanage: allow passing partitions to get_disks
Fabian Ebner [Tue, 28 Sep 2021 11:39:48 +0000 (13:39 +0200)]
diskmanage: allow passing partitions to get_disks

Requires that the $include_partitions parameter is set too, which:
1. Makes sense, because the partition won't be included in the result
   otherwise.
2. Ensures backwards compatibility for existing callers that don't
   use $include_partitions. No existing callers use both $disks and
   $include_partitions at the same time, so nothing learns to
   "support" partitions by accident.

Moving the strip_dev helper to the top, so it can be used everywhere.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
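A simplified sketch of the gating described above (the function name and the partition probe are made up; the real code inspects sysfs and udev):

```perl
use strict;
use warnings;

# Explicitly passed partitions are only honored when
# $include_partitions is set, so existing callers keep their behavior.
my %is_partition = map { $_ => 1 } qw(sda1 sdb2);  # stand-in probe

sub filter_devs {
    my ($devs, $include_partitions) = @_;
    return [ grep { $include_partitions || !$is_partition{$_} } @$devs ];
}

my $without = filter_devs([qw(sda sda1)], 0);  # partition filtered out
my $with    = filter_devs([qw(sda sda1)], 1);  # partition kept
print scalar(@$without), " ", scalar(@$with), "\n";
```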
2 years agodiskmanage: allow partitions for get_udev_info
Fabian Ebner [Tue, 28 Sep 2021 11:39:47 +0000 (13:39 +0200)]
diskmanage: allow partitions for get_udev_info

both existing callers only call this with non-partitions currently, so
the change should be backwards compatible.

In preparation to enable ZFS creation on top of partitions (where the
udev info is used to get the stable by-id path of a device).

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years agoapi: disk: work around udev bug to ensure its database is updated
Fabian Ebner [Tue, 28 Sep 2021 11:39:42 +0000 (13:39 +0200)]
api: disk: work around udev bug to ensure its database is updated

There is a udev bug [0] which can ultimately lead to the udev database
for certain devices not being actively updated. Determining whether a
disk is used or not in get_disks() (in part) relies upon lsblk, which
queries the udev database. Ensure the information is updated by
manually calling 'udevadm trigger' for the changed devices.

It's most important for the 'directory' API path, as mounting depends
on the '/dev/disk/by-uuid'-symlink to be generated.

[0]: https://github.com/systemd/systemd/issues/18525

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
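The workaround boils down to the following command fragment (the device name is a placeholder), which forces udev to re-process the device and wait until its database and the `/dev/disk/by-uuid` symlinks are up to date:

```shell
# re-run udev rules for the changed device and wait for completion
udevadm trigger /dev/sdX
udevadm settle --timeout=10
```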
2 years agoapi: disks: create: re-check disk after fork/lock
Fabian Ebner [Tue, 28 Sep 2021 11:39:41 +0000 (13:39 +0200)]
api: disks: create: re-check disk after fork/lock

Because then it might not be unused anymore. If there really is a
race, this prevents e.g. sgdisk creating a partition on a device
already in use by LVM or LVM destroying a partitioned device.

For ZFS, also get the latest udev info once inside the worker.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
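The pattern is a classic check-then-recheck around the fork/lock; a toy sketch, where the `%in_use` hash stands in for the real device scan:

```perl
use strict;
use warnings;

my %in_use;  # stand-in for the real "is this disk used?" probe

sub assert_unused {
    my ($dev) = @_;
    die "device '$dev' is already in use\n" if $in_use{$dev};
}

assert_unused('/dev/sdx');           # first check, before fork/lock
$in_use{'/dev/sdx'} = 1;             # meanwhile, another task grabs the disk
eval { assert_unused('/dev/sdx') };  # re-check inside the worker
print $@ ? "race caught\n" : "race missed\n";
```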
2 years agobtrfs: call free_image correctly
Fabian Ebner [Mon, 20 Sep 2021 10:23:02 +0000 (12:23 +0200)]
btrfs: call free_image correctly

Currently, 'PVE::Storage::DirPlugin' is implicitly passed along as
$class, which means that if the base class's free_image calls another
method (e.g.  filesystem_path) then the DirPlugin's method will be
used, rather than the one from BTRFSPlugin. Change it so that $class
itself is passed along.

See also commit 279d9de5108f6fc6f2d31f77f1b41d6dc7a67cb9 for context,
where the approach in this patch was suggested.

Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years agocifs: negotiates the highest SMB2+ version supported by default
Thomas Lamprecht [Tue, 14 Sep 2021 14:23:23 +0000 (16:23 +0200)]
cifs: negotiates the highest SMB2+ version supported by default

instead of hardcoding it to a potentially outdated value.

For `smbclient` we only set the max-protocol version, and that could
only be smb2 or smb3 (no finer granularity) anyhow, so this was not
really correct.

Nowadays the kernel dropped SMB1 and tries to go for SMB2.1 or higher
by default, depending on what client and server support. SMB2.1 is
Windows 7/2008R2 - both EOL for quite a while now, so OK as the
default lower boundary.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years agocifs: allow "3" and "default" for version
Thomas Lamprecht [Tue, 14 Sep 2021 12:28:04 +0000 (14:28 +0200)]
cifs: allow "3" and "default" for version

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years agofix #3609: cifs: add support to SMB 3.11
Moayad Almalat [Mon, 13 Sep 2021 12:15:55 +0000 (14:15 +0200)]
fix #3609: cifs: add support to SMB 3.11

Added support for SMB version 3.11. When `min protocol = SMB3_11`
is set in smb.conf, the CIFS mount fails with the following error:
```
CIFS VFS: cifs_mount failed w/return code = -95
```
Add an optional version value to allow mounting with `vers=3.11`.

Signed-off-by: Moayad Almalat <m.almalat@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
[ Thomas: move text from cover letter to commit message &
  add S-o-b ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
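With the new option, the resulting mount effectively becomes the following (share, mountpoint and credentials are placeholders):

```shell
mount -t cifs //server/share /mnt/pve/mystorage \
    -o vers=3.11,username=guest
```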
2 years agofix #3610: properly build ZFS detail tree
Fabian Ebner [Fri, 10 Sep 2021 11:45:35 +0000 (13:45 +0200)]
fix #3610: properly build ZFS detail tree

Previously, top-level vdevs like log or special were wrongly added as
children of the previous outer vdev instead of the root.

Fix it by also showing the vdev with the same name as the pool and
start counting from level 1 (the pool itself serves as the root and
should be the only one with level 0). This results in the same kind
of structure as in PBS and (except for the root) zpool status itself.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
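A toy version of the corrected tree building, with precomputed indentation levels (the real code derives the levels from `zpool status` output):

```perl
use strict;
use warnings;

# The pool itself is the root at level 0; each child attaches to the
# most recent entry one level up, so `logs` lands under the root,
# not under `mirror-0`.
my @entries = (
    [0, 'rpool'], [1, 'mirror-0'], [2, 'sda'], [2, 'sdb'],
    [1, 'logs'],  [2, 'sdc'],
);

my (%children, @last_at_level);
for my $e (@entries) {
    my ($level, $name) = @$e;
    $last_at_level[$level] = $name;
    push @{ $children{ $last_at_level[$level - 1] } }, $name if $level > 0;
}

print join(',', @{ $children{'rpool'} }), "\n";
```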
2 years agofix prune-backups validation (again)
Fabian Ebner [Fri, 10 Sep 2021 09:37:18 +0000 (11:37 +0200)]
fix prune-backups validation (again)

Commit a000e26ce7d30ba2b938402164a9a15e66dd123e caused a test failure
in pve-manager, because now 'keep-all=0' is not thrown out upon
validation anymore. Fix the issue the commit addressed differently,
by simply creating a copy of the (shallow) hash first, and using
the logic from before the commit.

Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
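The fix in a nutshell, sketched with a hypothetical validator (the real validation does more than dropping zero-valued keep options):

```perl
use strict;
use warnings;

# validate on a shallow copy, so deleting zero-valued keep options
# stays invisible to the caller
sub validate_prune_backups {
    my ($opts) = @_;
    my $res = { %$opts };  # shallow copy is enough; values are plain scalars
    for my $k (keys %$res) {
        delete $res->{$k} if !$res->{$k};  # e.g. keep-last=0
    }
    return $res;
}

my $opts = { 'keep-all' => 1, 'keep-last' => 0 };
my $validated = validate_prune_backups($opts);
print exists $opts->{'keep-last'} ? "caller intact\n" : "caller mutated\n";
```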
2 years agoprune {validate, mark}: preserve input parameter
Fabian Ebner [Thu, 9 Sep 2021 09:58:01 +0000 (11:58 +0200)]
prune {validate, mark}: preserve input parameter

While the current way to detect settings like { 'keep-last' => 0 } is
concise, it's also wrong, because the delete operation is visible
to the caller. This resulted in e.g.
    # $hash is { 'keep-all' => 1 }
    my $s = print_property_string($hash, 'prune-backups');
    # $hash is now {}, $s is 'keep-all=1'
because validation is called in print_property_string. The same issue
is present when calling prune_mark_backup_group.

Because validation complains when keep-all and something else are set,
this shouldn't have caused any real issues, besides vzdump with
keep-all=1 wrongly taking the removal path, but without any settings,
so not removing anything:
    INFO: prune older backups with retention:
    INFO: pruned 0 backup(s)

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years agobtrfs: style: add missing semicolon
Fabian Ebner [Wed, 8 Sep 2021 11:26:51 +0000 (13:26 +0200)]
btrfs: style: add missing semicolon

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
2 years agobtrfs: avoid undef warnings with format
Fabian Ebner [Wed, 8 Sep 2021 11:26:50 +0000 (13:26 +0200)]
btrfs: avoid undef warnings with format

which is only set by parse_volname when the volume is a VM or
container image, but not for other content types.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
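The guard amounts to checking definedness before comparing (the function name is illustrative):

```perl
use strict;
use warnings;

# $format is only set by parse_volname for VM/CT images; for ISOs,
# backups etc. it is undef, so guard before the string comparison to
# avoid "Use of uninitialized value" warnings.
sub is_subvol {
    my ($format) = @_;
    return defined($format) && $format eq 'subvol';
}

print is_subvol(undef)    ? "subvol" : "not subvol", "\n";
print is_subvol('subvol') ? "subvol" : "not subvol", "\n";
```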
2 years agobump version to 7.0-11
Thomas Lamprecht [Mon, 6 Sep 2021 06:40:39 +0000 (08:40 +0200)]
bump version to 7.0-11

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years agoapi: followup style/comment improvements
Thomas Lamprecht [Mon, 6 Sep 2021 06:32:15 +0000 (08:32 +0200)]
api: followup style/comment improvements

try to comment the why, not the what; the what is already described
well enough by the code here.

Also, we want to go up to 100 columns of text width if it improves
readability, which for post-ifs it most often does.

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years agostatus: move unlink from http-server to endpoint
Lorenz Stechauner [Tue, 31 Aug 2021 10:16:29 +0000 (12:16 +0200)]
status: move unlink from http-server to endpoint

this is the first step, in which the temporary file is removed by
the worker itself rather than by the http server.

Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
2 years agobtrfs: fix calling alloc_image from DirPlugin
Thomas Lamprecht [Mon, 6 Sep 2021 06:25:45 +0000 (08:25 +0200)]
btrfs: fix calling alloc_image from DirPlugin

similar to commit 279d9de5108f6fc6f2d31f77f1b41d6dc7a67cb9

This calling style is pretty dangerous in general for such plugin
systems...

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years agoCeph: add keyring parameter for external clusters
Aaron Lauterer [Thu, 26 Aug 2021 10:03:32 +0000 (12:03 +0200)]
Ceph: add keyring parameter for external clusters

By adding the keyring for RBD storage or the secret for CephFS ones, it
is possible to add an external Ceph cluster with only one API call.

Previously the keyring / secret file needed to be placed in
/etc/pve/priv/ceph/$storeID.{keyring,secret} manually.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
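An external RBD cluster can then be added in one go, along the lines of the following (storage ID, monitor addresses and keyring path are placeholders):

```shell
pvesm add rbd ext-ceph \
    --monhost "192.0.2.1 192.0.2.2 192.0.2.3" \
    --content images \
    --keyring /root/ext-ceph.keyring
```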
2 years agoCephConfig: add optional $secret parameter
Aaron Lauterer [Thu, 26 Aug 2021 10:03:31 +0000 (12:03 +0200)]
CephConfig: add optional $secret parameter

This allows us to manually pass the used RBD keyring or CephFS secret.
Useful mostly when adding external Ceph clusters where we have no other
means to fetch them.

I renamed the previous $secret to $cephfs_secret to be able to use
$secret as a parameter.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
2 years agozfs: fix unmount request
Fabian Ebner [Thu, 5 Aug 2021 08:33:43 +0000 (10:33 +0200)]
zfs: fix unmount request

by not dying when the dataset is already unmounted. Can be triggered
for a container by doing two rollbacks in a row.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
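The shape of the fix, with the command runner injected so the sketch stays self-contained (the real code goes through zfs_request and run_command):

```perl
use strict;
use warnings;

# treat "not currently mounted" as success instead of dying
sub zfs_unmount {
    my ($dataset, $run) = @_;  # $run: injected runner returning an error string
    my $err = $run->('unmount', $dataset);
    return if !$err;
    return if $err =~ /not currently mounted/;  # already done, fine
    die $err;
}

# a dataset unmounted by a previous rollback no longer aborts:
zfs_unmount('rpool/data/subvol-100-disk-0', sub {
    return "cannot unmount '$_[1]': not currently mounted\n";
});
print "rollback can continue\n";
```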
2 years agobuildsys: change upload dist to bullseye
Thomas Lamprecht [Fri, 30 Jul 2021 13:37:13 +0000 (15:37 +0200)]
buildsys: change upload dist to bullseye

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years agobump version to 7.0-10
Thomas Lamprecht [Fri, 30 Jul 2021 13:23:19 +0000 (15:23 +0200)]
bump version to 7.0-10

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 years agoapi: disks: allow zstd compression for zfs pools
Dominik Csapak [Fri, 30 Jul 2021 11:34:15 +0000 (13:34 +0200)]
api: disks: allow zstd compression for zfs pools

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
2 years agofix #3555: BTRFS: call DirPlugin's free_image correctly
Hannes Laimer [Fri, 30 Jul 2021 11:04:55 +0000 (13:04 +0200)]
fix #3555: BTRFS: call DirPlugin's free_image correctly

The method is only inherited by the DirPlugin module from the base
Plugin, so we do not have it available there through a static module
function call using ::, but only through a class method call.

Other fix options would have been:

  PVE::Storage::Plugin::free_image(@_);

or:
  $class->SUPER::free_image($storeid, ...);

Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ Thomas: add some background to the commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>