prune: mark renamed and protected backups differently
While it makes no difference for pruning itself, protected backups are
additionally protected against removal. Avoid the potential to confuse
the two. Also update the description for the API return value and add
an enum constraint.
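In code, the distinction might look roughly like this (a sketch; the field names and the renamed-backup detection are assumptions, only the two mark values come from the commit):

```perl
# Hypothetical sketch: give renamed and protected backups distinct marks
# when classifying prune candidates, instead of reusing one value for both.
foreach my $prune_entry (@$prune_list) {
    if ($prune_entry->{protected}) {
        $prune_entry->{mark} = 'protected'; # also guarded against removal
    } elsif (!defined($prune_entry->{ctime})) {
        $prune_entry->{mark} = 'renamed';   # ignored by pruning only
    }
}
```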
fix #3307: make it possible to set protection for backups
A protected backup is not removed by free_image and ignored when
pruning.
The protection_file_path function is introduced in Storage.pm, so that
it can also be used by vzdump itself and in archive_remove.
For pruning, renamed backups already behaved similarly to how
protected backups will, but there are a few reasons not to just use
that for implementing the new feature:
1. It wouldn't protect against removal.
2. It would make it necessary to rename notes and log files too.
3. It wouldn't naturally extend to other volumes if that's needed.
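A minimal sketch of the marker-file approach, assuming a '.protected' suffix (the helper name protection_file_path is from the commit; its body and the usage are assumptions):

```perl
# Storage.pm (sketch): map a backup volume's path to its protection marker.
sub protection_file_path {
    my ($path) = @_;
    return "${path}.protected";
}

# Usage sketch, e.g. in archive_remove or free_image:
die "backup is protected, refusing removal\n"
    if -e protection_file_path($archive_path);
```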
add generalized functions to manage volume attributes
replacing the ones for handling notes. To ensure backwards
compatibility with external plugins, all plugins that do not just call
another implementation need to call $class->{get, update}_volume_notes
when the attribute is 'notes' to catch any derived implementations.
This is mainly done to avoid the need to add new methods every time a
new attribute is added.
Not adding a timeout parameter like the notes functions have, because
it was not used and can still be added later if it is ever needed.
For get_volume_attribute, undef will indicate that the attribute is
not supported. This makes it possible to distinguish "not supported"
from "error getting the attribute", which is useful when the attribute
is important for an operation. For example, free_image checking for
protection (introduced in a later patch) can abort if getting the
'protected' attribute fails.
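A sketch of how the 'notes' dispatch for backwards compatibility might look in a plugin (the parameter order is an assumption):

```perl
# Plugin.pm (sketch): generalized getter that routes 'notes' through the
# old method, so external plugins overriding get_volume_notes keep working.
sub get_volume_attribute {
    my ($class, $scfg, $storeid, $volname, $attribute) = @_;

    if ($attribute eq 'notes') {
        return $class->get_volume_notes($scfg, $storeid, $volname);
    }

    return undef; # attribute not supported by this plugin
}
```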
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
dir plugin: get notes: return undef if notes are not supported
This avoids showing empty notes in the result of the content/{volid}
API call for volumes that do not even support notes. It's also in
preparation for the proposed get_volume_attribute generalization,
which expects undef to be returned when an attribute is not supported.
Fabian Ebner [Thu, 12 Aug 2021 11:01:01 +0000 (13:01 +0200)]
zfspool: add blockers parameter to volume_snapshot_is_possible
useful for rollback, so that only the required replication snapshots
can be removed, and it's possible to abort early without deleting any
replication snapshots if there are other non-replication snapshots
blocking rollback.
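A usage sketch, assuming a boolean return and that $blockers is an optional array reference that gets filled with the names of blocking snapshots:

```perl
# Rollback caller (sketch): learn which snapshots block the rollback,
# so replication snapshots are only removed when that actually helps.
my $blockers = [];
my $ok = $class->volume_snapshot_is_possible($scfg, $volname, $snap, $blockers);
if (!$ok && scalar(@$blockers)) {
    die "rollback blocked by non-replication snapshots: "
        . join(', ', @$blockers) . "\n";
}
```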
Fabian Ebner [Wed, 6 Oct 2021 09:18:45 +0000 (11:18 +0200)]
partially fix #2285: api: disks: allow partitions for creation paths
The calls for directory and ZFS need slight adaptations. Except for
those, the only thing that needs to be done is support partitions in
the disk_is_used helper.
Fabian Ebner [Wed, 6 Oct 2021 09:18:44 +0000 (11:18 +0200)]
api: disks: initgpt: explicitly abort for partitions
In preparation to extend disk_is_used to support partitions. Without
this new check, initgpt would also allow partitions once disk_is_used
supports partitions, which is not desirable.
Fabian Ebner [Wed, 6 Oct 2021 09:18:43 +0000 (11:18 +0200)]
diskmanage: don't set usage for unused partitions
The disk type is already 'partition' so there's no additional
information here. And it would need to serve as a code-word for
unused partitions. The cleaner approach is to not set the usage.
Fabian Ebner [Wed, 6 Oct 2021 09:18:42 +0000 (11:18 +0200)]
diskmanage: wipe blockdev: also change partition type
when called with a partition. Since get_disks uses the partition type
(among other things) to detect LVM and ZFS volumes, such volumes would
still be seen as in-use after wiping. Thus, also change the partition
type and simply use 0x83 "Linux filesystem".
Fabian Ebner [Wed, 6 Oct 2021 09:18:41 +0000 (11:18 +0200)]
diskmanage: add change_parttype and is_partition helpers
For change_parttype, only GPT-partitioned disks are supported, as I
didn't see an option for sgdisk to make it also work with
MBR-partitioned disks. And while sfdisk could be used instead (or
additionally) it would be a new dependency, and AFAICS require some
conversion of partition type GUIDs to MBR types on our part.
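A sketch of what change_parttype might look like with sgdisk (the partition-path parsing and the surrounding code are assumptions):

```perl
use PVE::Tools qw(run_command);

# Diskmanage (sketch): change the type of one GPT partition via sgdisk,
# e.g. to "8300" (Linux filesystem). The partition-path parsing here is
# simplified; real devices like nvme0n1p1 need more care.
sub change_parttype {
    my ($partpath, $parttype) = @_;

    my ($disk, $partnum) = $partpath =~ m|^(/dev/[a-z]+)(\d+)$|
        or die "unable to parse partition path '$partpath'\n";

    run_command(['sgdisk', "-t${partnum}:${parttype}", $disk]);
}
```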
commit e4d56f096ed28761d6b9a9e348be0fc682928040 removes a `sleep 1`
hack for the removal of a tempfile, which earlier happened in the
pve-http-server but is something we do ourselves now.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this racy sleep(1) is only there for legacy reasons: because
we don't use apache anymore and only emulate its behaviour
regarding removing temp files, this is under our own control
now and so we can improve this whole situation.
this change requires a pve-http-server version in which the
tmpfile is not removed automatically anymore.
Requires that the $include_partitions parameter is set too, which:
1. Makes sense, because the partition won't be included in the result
otherwise.
2. Ensures backwards compatibility for existing callers that don't
use $include_partitions. No existing callers use both $disks and
$include_partitions at the same time, so nothing learns to
"support" partitions by accident.
Moving the strip_dev helper to the top, so it can be used everywhere.
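A call-site sketch of that interplay (the get_disks parameter order is an assumption):

```perl
# Sketch: query one partition; without $include_partitions set, it would
# not be part of the result at all.
my $info = PVE::Diskmanage::get_disks('sdb1', 1, 1); # ($disks, $nosmart, $include_partitions)
```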
api: disk: work around udev bug to ensure its database is updated
There is a udev bug [0] which can ultimately lead to the udev database
for certain devices not being actively updated. Determining whether a
disk is used or not in get_disks() (in part) relies upon lsblk, which
queries the udev database. Ensure the information is updated by
manually calling 'udevadm trigger' for the changed devices.
It's most important for the 'directory' API path, as mounting depends
on the '/dev/disk/by-uuid' symlink being generated.
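A minimal sketch of such a manual trigger (the helper name and placement are assumptions):

```perl
use PVE::Tools qw(run_command);

# Sketch: re-trigger udev events for the changed devices and wait for the
# event queue to drain, so lsblk (which reads the udev db) sees current state.
sub udevadm_trigger {
    my (@devs) = @_;
    run_command(['udevadm', 'trigger', @devs]);
    run_command(['udevadm', 'settle']);
}
```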
Because then it might not be unused anymore. If there really is a
race, this prevents e.g. sgdisk creating a partition on a device
already in use by LVM or LVM destroying a partitioned device.
For ZFS, also get the latest udev info once inside the worker.
Currently, 'PVE::Storage::DirPlugin' is implicitly passed along as
$class, which means that if the base class's free_image calls another
method (e.g. filesystem_path) then the DirPlugin's method will be
used, rather than the one from BTRFSPlugin. Change it so that $class
itself is passed along.
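In code, the difference is roughly the following (the free_image argument list shown is an assumption):

```perl
# Before (sketch): a method call makes DirPlugin the invocant, so methods
# invoked inside free_image resolve against DirPlugin.
PVE::Storage::DirPlugin->free_image($storeid, $scfg, $volname, $isBase, $format);

# After (sketch): call the sub directly and pass the original $class along,
# so e.g. filesystem_path resolves to BTRFSPlugin's override.
PVE::Storage::DirPlugin::free_image($class, $storeid, $scfg, $volname, $isBase, $format);
```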
Thomas Lamprecht [Tue, 14 Sep 2021 14:23:23 +0000 (16:23 +0200)]
cifs: negotiate the highest SMB2+ version supported by default
instead of hardcoding it to a potentially outdated value.
For `smbclient` we only set the max protocol version, and that could
only be smb2 or smb3 (no finer granularity) anyhow, so this was not
really correct.
Nowadays the kernel dropped SMB1 and tries to go for SMB2.1 or higher
by default, depending on what client and server support. SMB2.1 is
Windows 7/2008R2 - both EOL for quite a while now, so OK as a default
lower boundary.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Added support for the SMB version SMB3_11. When `min protocol =
SMB3_11` is set in the smb.conf, the CIFS mount fails with the
following error:
```
CIFS VFS: cifs_mount failed w/return code = -95
```
added an optional option to use `vers=3.11`
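With that option set, the mount roughly becomes (sketch; the storage option name and the surrounding plumbing are assumptions):

```perl
use PVE::Tools qw(run_command);

# CIFS mount (sketch): hand the requested SMB dialect to the kernel client.
my @cmd = ('/bin/mount', '-t', 'cifs', "//$server/$share", $mountpoint);
push @cmd, '-o', 'vers=3.11'
    if defined($scfg->{smbversion}) && $scfg->{smbversion} eq '3.11';
run_command(\@cmd, errmsg => 'mount error');
```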
Signed-off-by: Moayad Almalat <m.almalat@proxmox.com>
Tested-by: Fabian Ebner <f.ebner@proxmox.com>
[ Thomas: move text from cover letter to commit message & add S-o-b ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Previously, top-level vdevs like log or special were wrongly added as
children of the previous outer vdev instead of the root.
Fix it by also showing the vdev with the same name as the pool and
by starting the count at level 1 (the pool itself serves as the root
and should be the only one with level 0). This results in the same
kind of structure as in PBS and (except for the root) zpool status
itself.
Commit a000e26ce7d30ba2b938402164a9a15e66dd123e caused a test failure
in pve-manager, because now 'keep-all=0' is not thrown out upon
validation anymore. Fix the issue the commit addressed differently,
by simply creating a copy of the (shallow) hash first, and using
the logic from before the commit.
Reported-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
While the current way to detect settings like { 'keep-last' => 0 } is
concise, it's also wrong, because the delete operation is visible
to the caller. This resulted in e.g.
# $hash is { 'keep-all' => 1 }
my $s = print_property_string($hash, 'prune-backups');
# $hash is now {}, $s is 'keep-all=1'
because validation is called in print_property_string. The same issue
is present when calling prune_mark_backup_group.
Because validation complains when keep-all and something else is set,
this shouldn't have caused any real issues, besides vzdump with
keep-all=1 wrongly taking the removal path, but without any settings,
so not removing anything:
INFO: prune older backups with retention:
INFO: pruned 0 backup(s)
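The fix mentioned above boils down to not mutating the caller's hash (a sketch, with assumed key names):

```perl
# Sketch: drop zero-valued keep options from a shallow copy, so
# print_property_string leaves the caller's hash untouched.
my $options = { %$hash }; # shallow copy
foreach my $opt (keys %$options) {
    delete $options->{$opt}
        if $opt =~ m/^keep-/ && defined($options->{$opt}) && !$options->{$opt};
}
```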
Aaron Lauterer [Thu, 26 Aug 2021 10:03:31 +0000 (12:03 +0200)]
CephConfig: add optional $secret parameter
This allows us to manually pass the used RBD keyring or CephFS secret.
Useful mostly when adding external Ceph clusters where we have no other
means to fetch them.
I renamed the previous $secret to $cephfs_secret to be able to use
$secret as parameter.
The method is only derived in the DirPlugin module from the base
Plugin, so we do not have it available there through a static module
method call using ::, but only through a class dereference.
Other fix options would have been:
PVE::Storage::Plugin::free_image(@_);
or:
$class->SUPER::free_image($storeid, ...);
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
[ Thomas: add some background to the commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
after an error while copying the file to its destination, the local
path of the destination was unlinked in every case, even when the
destination was copied to via scp.
extract backup config: less precise matching for broken pipe detection
Extracting the config for zstd compressed vma files was broken:
Failed to extract config from VMA archive: zstd: error 70 : Write
error : cannot write decoded block : Broken pipe (500)
since the error message changed and wouldn't match anymore.
With PVE 7.0 we use upstream's lvm2 packages, which seem to detect
'more' signatures (and refuse creating LVs when they are present).
This prevents creating new disks on LVM (thick) storages, as reported
on pve-user [0].
Adding -Wy to wipe signatures, and --yes (to actually wipe them
instead of prompting) fixes the aborted lvcreate.
Adding it only to LVMPlugin and not to the lvcreate calls in
LvmThinPlugin, since I assume (and my quick tests confirm) that thin
pools are not affected by this issue.
Tested on a virtual test setup with an LVM storage on a (virtual)
iSCSI target and a local lvmthin storage.
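The resulting call is roughly the following (a sketch; the surrounding argument handling is an assumption):

```perl
use PVE::Tools qw(run_command);

# LVMPlugin (sketch): wipe detected signatures (-Wy) and auto-confirm the
# prompt (--yes) so lvcreate no longer aborts on leftover signatures.
my $cmd = ['/sbin/lvcreate', '-aly', '-Wy', '--yes',
    '--size', "${size}k", '--name', $name, $vg];
run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
```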
uses common function PVE::Tools::download_file_from_url to download
iso files.
Only users with permissions `Sys.Audit` and `Sys.Modify` on `/` are
permitted to perform this action. This restriction is due to the fact
that the download function is able to download files from internal
networks (which are not visible/accessible from outside).
Users with these permissions have the means to alter the node's
(network) config anyway, so this does not create any further security
risk.
Thomas Lamprecht [Wed, 23 Jun 2021 18:22:50 +0000 (20:22 +0200)]
btrfs: drop qcow2 and vmdk for now
the web-interface always prefers qcow2 once that is in the list,
itself a bug on its own, as the preferred format from the backend
should be preferred too. But still, vmdk support should not be
extended; we can only cope with it in a limited way anyway, and both
can always get enabled again easily later if there's actual user
request for it. Disabling is never that easy, at least if one cares
about backward compat.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The import methods (volume_import, volume_import_formats):
These additionally get the '$snapshot' parameter which is
already present on the export side as an informational piece
to know which of the snapshots is the *current* one.
This parameter is inserted *in the middle* of the current
parameters, so the import & export format methods now have
the same signatures.
The current "disk" state will be set to this snapshot.
This, too, is required for our btrfs implementation.
`volume_import_formats` can obviously not make much
*use* of this parameter, but it'll still be useful to know
that the information is actually available in the import
call, so its presence will be checked in the btrfs
implementation.
Currently this is intended to be used for btrfs send/recv
support, which in theory could also get additional metadata
similar to how we do the "tar+size" format, however, we
currently only really use this within this repository in
storage_migrate() which has this information readily
available anyway.
On the export side (volume_export, volume_export_formats):
The `$with_snapshots` option is now "defined" to be an
ordered array of snapshots to include, as a hint for
storages which need this. (As of the next commit this is
only btrfs, and only when also specifying a base snapshot,
which is a case we can currently not run into except on the
command line interface.)
The current providers of the `with_snapshot` option will
still treat it as a boolean (since eg. for ZFS you cannot
really "skip" snapshots AFAIK).
This is mainly intended for storages which do not have a
strong association between snapshots and the originals, or
an ordering (eg. btrfs and lvm-thin allow creating
arbitrary snapshot trees, and with btrfs you can even
create a "circular" connection between subvolumes; also, we
could maybe consider reflink-based copies on xfs as
snapshots in the future?)
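A storage consuming the new hint could distinguish the two shapes like this (sketch):

```perl
# Export side (sketch): accept both shapes of $with_snapshots.
my @snapshots;
if (ref($with_snapshots) eq 'ARRAY') {
    @snapshots = @$with_snapshots;  # ordered hint, e.g. consumed by btrfs
} elsif ($with_snapshots) {
    # boolean semantics: include all snapshots, ordering up to the storage
}
```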
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This is mostly the same as a directory storage, with 2 major
differences:
* 'subvol' volumes are actual btrfs subvolumes and therefore
allow snapshots
* 'raw' files are placed *into* a subvolume and therefore
also allow snapshots, the raw file for volume
`btrstore:100/vm-100-disk-1.raw` can be found under
`$path/images/100/vm-100-disk-1/disk.raw`
* in both cases, snapshots add an '@name' suffix to the
subvolume's directory name, so snapshot 'foo' of the above
would be found under
`$path/images/100/vm-100-disk-1@foo/disk.raw`
or for format "subvol":
`$path/images/100/subvol-100-disk-1.subvol@foo`
Note that qgroups aren't included in btrfs-send streams,
therefore for now we will only be using *unsized* subvolumes
for containers and place a regular raw+ext4 file for sized
containers.
We could extend the import/export stream format to include
the information at the front (similar to how we do the
"tar+size" format, but we need to include the size of all
the contained snapshots as well, since they can technically
change). (But before enabling quotas we should do some
performance testing on bigger file systems with multiple
snapshots as there are quite a few reports of the fs slowing
down considerably in such scenarios).
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
One test had to be adapted because it tested obsolete code. Namely,
it expected vztmpl to only end with .tar.gz, but the new regex also
includes .tar.xz; there is nothing against allowing .tar.xz files as
vztmpl files.
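The extended match is presumably a one-character change in the suffix class, roughly (an assumption, not the verbatim regex):

```perl
# Accept both gzip- and xz-compressed container templates (sketch).
my ($name) = $volname =~ m!^vztmpl/([^/]+\.tar\.[gx]z)$!
    or die "unexpected vztmpl name '$volname'\n";
```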
Stoiko Ivanov [Tue, 22 Jun 2021 16:39:54 +0000 (18:39 +0200)]
plugins: untaint volume_size_info returns
the size returned by volume_size_info is used for creating the new
destination image in PVE::QemuServer::clone_disk (and probably
elsewhere). In certain cases the return values are tainted - they are
obtained by a run_command call and depending on the format and length
of the parsed output can still have their tainted attribute.
One example of a tainted return has been reported in our
community-forum:
https://forum.proxmox.com/threads/cannot-clone-vm-or-move-disk-with-more-than-13-snapshots.89628/
A qcow2 image with 13 snapshots generates an output > 4k in length
from `qemu-img info --output=json`, which in turn causes the output to
be considered tainted.
This patch untaints the returns where applicable. The other
storage-plugins are not affected:
* LVMPlugin returns a single number and a newline (thus gets untainted
by run_command)
* RBDPlugin untaints the complete json before decoding
* ZFSPoolPlugin and ISCSIDirectPlugin explicitly untaint their
returns.
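Untainting in Perl is a pattern-match capture; a sketch (the JSON key is from qemu-img's output, the surrounding names are assumptions):

```perl
# Sketch: a successful match against an explicit pattern launders the
# tainted size parsed from `qemu-img info --output=json`.
my ($size) = ($json->{'virtual-size'} // '') =~ /^(\d+)$/
    or die "could not untaint size '$json->{'virtual-size'}'\n";
```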
Fabian Ebner [Fri, 18 Jun 2021 10:59:35 +0000 (12:59 +0200)]
vdisk_list: only scan storages with the correct content type(s)
The enabled check in the lower loop is now redundant and can be removed.
If storeid is provided, initialize the result hash accordingly, mainly for
backwards compatibility (needed by a caller in pve-manager's Ceph/Pools.pm and
the migration code in pve-container and qemu-server), but it also is less
surprising in general.
Remaining vdisk_list users that do not specify a content type are:
1. pve-manager's Pool/Ceph.pm, but the content type for RBD can only be
rootdir and images, so the storage is scanned (if enabled, same as
before).
2. pve-container migration
3. qemu-server migration
For the latter two, it's planned to enforce content type, so the change is fine
too.
This also means that for iscsi(direct) plugins with content type 'none',
i.e. "use LUNs directly", the list of images is not returned anymore, but
that was rather a bug anyways, as they're not virtual disks then:
0.0.0.scsi-36001405b8f2772e13a04b8e9390db13d
All of the remaining callers not using content types (see above) are fine with
that change too.
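The filtering described above might look roughly like this (a sketch; variable names and the exact check are assumptions):

```perl
# vdisk_list (sketch): skip storages that can't contain the requested
# content type instead of scanning them and filtering afterwards.
foreach my $sid (keys %{$cfg->{ids}}) {
    next if $storeid && $storeid ne $sid;
    next if !PVE::Storage::storage_check_enabled($cfg, $sid, undef, 1);
    next if $ctype && !$cfg->{ids}->{$sid}->{content}->{$ctype};
    # ... collect images from $sid ...
}
```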
Fabian Ebner [Tue, 4 May 2021 09:52:54 +0000 (11:52 +0200)]
lvm: volume import: handle worker returned by free_image
only affects LVM storages with 'saferemove 1' where the import fails at a rather
advanced stage. Previously in such cases, the renamed (by free_image) volume
del-vm-XYZ-disk-N would be left over.
Fabian Ebner [Tue, 4 May 2021 09:52:53 +0000 (11:52 +0200)]
pbs: free image: explicitly return undef
Storage.pm's vdisk_free interprets truthy return values as worker subs, so be
explicit about returning undef here. Not an issue at the moment, because
run_client_command already returns undef, but better be safe than sorry.
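A sketch of the explicit-return convention (the PBS-side command and argument details are assumptions):

```perl
# PBSPlugin (sketch): make the "no cleanup worker" case explicit.
sub free_image {
    my ($class, $storeid, $scfg, $volname, $isBase) = @_;

    # delete the backup on the PBS side (argument details are assumptions)
    run_client_command($scfg, $storeid, 'forget', [$volname]);

    # vdisk_free treats any truthy return value as a worker sub:
    return undef;
}
```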