Thomas Lamprecht [Thu, 16 Mar 2023 10:46:47 +0000 (11:46 +0100)]
config: ensure definedness for iterating pending & snapshot volumes
While it works as is, autovivification can be a real PITA, so this
should make it more robust and might even avoid the odd warning about
accessing undef values in the logs.
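To illustrate the pitfall, a minimal sketch (not the actual helper code):
> use strict; use warnings;
> my $conf = {};
> # unguarded: merely iterating autovivifies $conf->{pending} = {}
> for my $opt (keys %{ $conf->{pending} }) { print "$opt\n"; }
> # guarded with '//': $conf stays untouched
> for my $opt (keys %{ $conf->{pending} // {} }) { print "$opt\n"; }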
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fiona Ebner [Wed, 15 Mar 2023 14:44:22 +0000 (15:44 +0100)]
fix #4572: config: also update volume IDs in pending section
The method is intended to be used in cases where the volumes actually
got renamed (e.g. migration). Thus, updating the volume IDs should of
course also be done for pending changes to avoid changes referring to
now non-existent volumes or even the wrong existing volume.
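Conceptually, something like this sketch (purely illustrative; $conf and
$volid_map are assumed names, not the real method or its signature):
> # apply a volid rename map to one section, then also to 'pending'
> my $apply_map = sub {
>     my ($section, $map) = @_;
>     for my $key (keys %$section) {
>         next if ref($section->{$key}); # only scalar values reference volumes
>         $section->{$key} =~ s/\Q$_\E/$map->{$_}/ for keys %$map;
>     }
> };
> $apply_map->($conf, $volid_map);
> $apply_map->($conf->{pending}, $volid_map) if $conf->{pending};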
Thomas Lamprecht [Mon, 21 Nov 2022 07:09:24 +0000 (08:09 +0100)]
tag helpers: add get_unique_tags method for filtering out duplicates
tags must be unique; allow the user some control over how unique (case
sensitive or not) and honor the ordering settings (even if I doubt any
production setup wants to spend time and $$$ on cautiously reordering
all tags of their dozens to hundreds of virtual guests..)
Have some duplicate code to avoid checking too much in the loop itself,
as frequent branches can be more expensive.
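The deduplication idea in a nutshell (sketch with assumed variable names):
> my (%seen, @unique);
> for my $tag (@tags) {
>     my $key = $case_sensitive ? $tag : lc($tag);
>     push @unique, $tag if !$seen{$key}++; # first occurrence wins, order kept
> }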
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Dominik Csapak [Wed, 16 Nov 2022 15:48:00 +0000 (16:48 +0100)]
GuestHelpers: add tag related helpers
'get_allowed_tags':
returns the allowed tags for the given user
'assert_tag_permissions':
helper to check permissions for tag setting/updating/deleting,
for both container and qemu-server
gets the list of allowed tags from the DataCenterConfig and the current
user permissions, and checks for each tag that is added/removed whether
the user has permission to modify it
'normal' tags require 'VM.Config.Options' on '/vms/<vmid>', but tags
that are not generally allowed (i.e. restricted via 'user-tag-access' or
'privileged-tags' in the datacenter.cfg) require 'Sys.Modify' on '/'
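Per tag, the check boils down to something like this sketch
($rpcenv->check() is the usual permission primitive; the surrounding
names are illustrative):
> for my $tag (@changed_tags) {
>     if ($allowed_tags->{$tag}) {
>         # freely usable tag: config privilege on the guest suffices
>         $rpcenv->check($authuser, "/vms/$vmid", ['VM.Config.Options']);
>     } else {
>         # restricted or privileged tag: needs Sys.Modify on /
>         $rpcenv->check($authuser, '/', ['Sys.Modify']);
>     }
> }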
Thomas Lamprecht [Sat, 12 Nov 2022 15:25:34 +0000 (16:25 +0100)]
vzdump: handle new jobs.cfg when removing VMIDs from backup jobs
we use the relatively new SectionConfig functionality that allows
parsing/writing unknown config types; that way we can directly use the
available base job plugin for vzdump jobs and update only those,
keeping the other jobs untouched.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fiona Ebner [Mon, 3 Oct 2022 13:52:05 +0000 (15:52 +0200)]
vzdump: add 'performance' property string as a setting
Initially, to be used for tuning backup performance with QEMU.
A few users reported IO-related issues during backup after upgrading
to PVE 7.x, and using a modified QEMU build with max-workers reduced to
8 instead of 16 helped them [0].
Also generalizes the way vzdump property strings are handled, for
easier extension in the future.
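Usage sketch once available:
> # in /etc/vzdump.conf
> performance: max-workers=8
> # or per invocation
> vzdump 100 --performance max-workers=8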
replication: avoid "expected snapshot missing" warning when irrelevant
Only print it when there is a snapshot that would've been removed
without the safeguard. Mostly relevant when a new volume is added to
an already replicated guest.
Fixes replication tests in pve-manager.
Fixes: c0b2948 ("replication: prepare: safeguard against removal if expected snapshot is missing")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Fiona Ebner [Mon, 13 Jun 2022 10:29:59 +0000 (12:29 +0200)]
replication: prepare: safeguard against removal if expected snapshot is missing
Such a check would also have prevented the issue in 1aa4d84
("ReplicationState: purge state from non local vms") and other
scenarios where state and disk state are inconsistent with regard to
the last_sync snapshot.
AFAICT, all existing callers intending to remove all snapshots use
last_sync=1, so changing the behavior for other (non-zero) values
should be fine.
Fiona Ebner [Mon, 13 Jun 2022 10:29:58 +0000 (12:29 +0200)]
replication: also consider storages from replication state upon removal
This prevents left-over volume(s) in the following situation:
1. replication with volumes on different storages A and B
2. remove all volumes on storage B from the guest configuration
3. schedule full removal before the next normal replication runs
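Conceptually (hedged sketch; the state field name is an assumption on
my part):
> # consider storages from both the guest config and the job state
> my %storeids = map { $_ => 1 } (
>     @config_storeids,                    # referenced by the configuration
>     @{ $state->{storeid_list} // [] },   # remembered in the replication state
> );
> my @to_cleanup = sort keys %storeids;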
Fiona Ebner [Mon, 13 Jun 2022 10:29:57 +0000 (12:29 +0200)]
replication: rename last_snapshots to local_snapshots
because prepare() was changed in 8d1cd44 ("partially fix #3111:
replication: be less picky when selecting incremental base") to return
all local snapshots.
Dominik Csapak [Fri, 3 Jun 2022 07:16:30 +0000 (09:16 +0200)]
ReplicationState: deterministically order replication jobs
if we have multiple jobs for the same vmid with the same schedule,
the last_sync, next_sync and vmid will always be the same, so the order
depends on the order of the $jobs hash (which is random; thanks perl).
To have a fixed order, take the jobid into consideration as well.
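A sketch of such an ordering (field names assumed):
> my @ordered = sort {
>     $a->{next_sync} <=> $b->{next_sync}
>         || $a->{vmid} <=> $b->{vmid}
>         || $a->{jobid} cmp $b->{jobid} # deterministic final tie-breaker
> } values %$jobs;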
Dominik Csapak [Fri, 3 Jun 2022 07:16:29 +0000 (09:16 +0200)]
ReplicationState: purge state from non local vms
when running replication, we don't want to keep replication states for
non-local vms. Normally this would not be a problem, since on migration,
we transfer the states anyway, but when the ha-manager steals a vm, it
cannot do that. In that case, having an old state lying around is
harmful, since the code does not expect the state to be out-of-sync
with the actual snapshots on disk.
One such problem is the following:
Replicate vm 100 from node A to nodes B and C, and activate HA. When
node A dies, it will be relocated to e.g. node B and start replicating
from there. If node B now had an old state lying around for its sync to
node C, it might delete the common base snapshots of B and C and then
cannot sync again.
Deleting the state for all non local guests fixes that issue, since it
always starts fresh, and the potentially existing old state cannot be
valid anyway since we just relocated the vm here (from a dead node).
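The purge can be pictured like this (sketch; PVE::Cluster::get_vmlist()
exists, the rest is illustrative):
> my $vmlist = PVE::Cluster::get_vmlist();
> for my $vmid (keys %$state) {
>     my $entry = $vmlist->{ids}->{$vmid};
>     # drop state for guests that are not (or no longer) on this node
>     delete $state->{$vmid} if !$entry || $entry->{node} ne $nodename;
> }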
vzdump: schema: add 'notes-template' and 'protected' properties
In command_line(), notes are printed, quoted, but otherwise as is,
which is a bit ugly for multi-line notes. But it is part of the
commandline, so print it.
print snapshot tree: reduce indentation delta per level
previous:
> `-> foo 2021-05-28 12:59:36 no-description
>     `-> bar 2021-06-18 12:44:48 no-description
>         `-> current You are here!
now:
> `-> foo 2021-05-28 12:59:36 no-description
>  `-> bar 2021-06-18 12:44:48 no-description
>   `-> current You are here!
So it requires less space, allowing deeper snapshot trees to still be
displayed nicely, and looks even better while doing that - though the
latter may be subjective.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Thu, 13 Jan 2022 11:04:01 +0000 (12:04 +0100)]
config: activate affected storages for snapshot operations
For snapshot creation, the storage for the vmstate file is activated
via vdisk_alloc when the state file is created.
Do not activate the volumes themselves, as that has unnecessary side
effects (e.g. waiting for zvol device link for ZFS, mapping the volume
for RBD). If a storage can only do snapshot operations on a volume
that has been activated, it needs to activate the volume itself.
The actual implementation will be in the plugins, to be able to skip
CD-ROM drives, bind-mounts, etc.
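A hedged sketch of the storage-activation part (foreach_volume,
parse_volume_id and activate_storage exist; the exact wiring is
illustrative):
> my %storeids;
> $class->foreach_volume($conf, sub {
>     my ($key, $volume) = @_;
>     # noerr: non-storage entries (cdrom 'none', bind-mounts) yield undef
>     my ($storeid) = PVE::Storage::parse_volume_id($volume->{volid}, 1);
>     $storeids{$storeid} = 1 if defined($storeid);
> });
> PVE::Storage::activate_storage($storecfg, $_) for sort keys %storeids;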
Thomas Lamprecht [Tue, 30 Nov 2021 07:13:13 +0000 (08:13 +0100)]
snapshot prepare: log on parent-cycle deletion
for new clones this should not happen anyway, as the API calls now
clean such parent references up, but for old ones it can still happen,
so log to inform.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Fri, 26 Nov 2021 10:52:30 +0000 (11:52 +0100)]
replication: update last_sync before removing old replication snapshots
If pvesr was terminated after finishing with the new sync and after
removing old replication snapshots, but before it could write the new
state, the next replication would fail. It would wrongly interpret the
actual last replication snapshot as stale, remove it, and (if no other
snapshots are present) attempt a full sync, which would fail.
Reported in the community forum [0], this was brought to light by the
new pvescheduler before it learned graceful reload.
It's not possible to simply preserve a last remaining snapshot in
prepare(), because prepare() is also used for valid removals. Instead,
update last_sync early enough. Stale snapshots will still be removed
on the next run if there are any.
Fabian Ebner [Tue, 19 Oct 2021 07:54:55 +0000 (09:54 +0200)]
replication: find common snapshot: use additional information
which is now available from the storage back-end.
The benefits are:
1. Ability to detect different snapshots even if they have the same
name. Rather hard to reach, but for example with:
Snapshots A -> B -> C -> __replicationXYZ
Remove B, rollback to C (causes __replicationXYZ to be removed),
create a new snapshot B. Previously, B was selected as replication
base, but it didn't match on source and target. Now, C is correctly
selected.
2. Smaller delta in some cases by not preferring replication snapshots
over config snapshots, but using the most recent possible one from
both types of snapshots.
3. Less code complexity for snapshot selection.
If the remote side is old, it's not possible to detect mismatch of
distinct snapshots with the same name, but the timestamps from the
local side can still be used.
Still limit to our snapshots (config and replication), because we
don't have guarantees for other snapshots (could be deleted in the
meantime, name might not fit import/export regex, etc.).
Fabian Ebner [Thu, 12 Aug 2021 11:01:11 +0000 (13:01 +0200)]
fix #3111: config: snapshot delete: check if replication still needs it
and abort if it does and --force is not specified.
After rollback, the rollback snapshot might still be needed as the
base for incremental replication, because rollback removes (blocking)
replication snapshots.
It's not enough to limit the check to the most recent snapshot,
because new snapshots might've been created between rollback and
remove.
It's not enough to limit the check to snapshots without a parent (i.e.
in case of ZFS, the oldest), because some volumes might've been added
only after that, meaning the oldest snapshot is not an incremental
replication base for them.
Fabian Ebner [Thu, 12 Aug 2021 11:01:10 +0000 (13:01 +0200)]
partially fix #3111: replication: be less picky when selecting incremental base
After rollback, it might be necessary to start the replication from an
earlier, possibly non-replication, snapshot, because the replication
snapshot might have been removed from the source node. Previously,
replication could only recover in case the current parent snapshot was
already replicated.
To get into the bad situation (with no replication happening between
the steps):
1. have existing replication
2. take new snapshot
3. rollback to that snapshot
In case the partial fix to only remove blocking replication snapshots
for rollback was already applied, an additional step is necessary to
get into the bad situation:
4. take a second new snapshot
Since non-replication snapshots, for which no timestamp is readily
available, are now also included, it is necessary to filter them out
when probing for replication snapshots.
If no common replication snapshot is present, iterate backwards
through the config snapshots.
The changes are backwards compatible:
If one side is running the old code, and the other the new code,
the fact that one of the two prepare() calls does not return the
new additional snapshot candidates, means that no new match is
possible. Since the new prepare() returns a superset, no previously
possible match is now impossible.
The branch with @desc_sorted_snap is now taken more often, but
it can still only be taken when the volume exists on the remote side
(and has snapshots). In such cases, it is safe to die if no
incremental base snapshot can be found, because a full sync would not
be possible.
Fabian Ebner [Thu, 12 Aug 2021 11:01:07 +0000 (13:01 +0200)]
partially fix #3111: further improve removing replication snapshots
by using the new $blocker parameter. No longer remove all replication
snapshots from affected volumes unconditionally, but check first if
all blocking snapshots are replication snapshots. If they are, remove
them and proceed with rollback. If they are not, die without removing
any.
For backwards compatibility, it's still necessary to remove all
replication snapshots if $blockers is not available.
Get the replicatable volumes from the snapshot config rather than the
current config. And filter those volumes further to those that will
actually be rolled back.
Previously, a volume that only had replication snapshots (e.g. because
it was added after the snapshot was taken, or the vmstate volume)
would lose them. Then, on the next replication run, such a volume
would lead to an error, because replication tried to do a full sync,
but the target volume still exists.
This is not a complete fix. It is still possible to run into problems:
- by removing the last (non-replication) snapshots after a rollback
before replication can run once.
- by creating a snapshot and making a rollback before replication can
run once.
The list of volumes is not required to be sorted for prepare(), but it
is now sorted by how foreach_volume() iterates, so no longer random.
Fabian Ebner [Mon, 15 Feb 2021 12:24:59 +0000 (13:24 +0100)]
vzdump: mailto: use email-or-username-list format
because it is a more complete pattern. Also, 'mailto' was a '-list' format in
PVE 6.2 and earlier, so this also fixes whitespace-related backwards
compatibility. In particular, this fixes creating a backup job in the GUI
without setting an address, which passes along ''.
For example,
> vzdump 153 --mailto " ,,,admin@proxmox.com;;; developer@proxmox.com , ; "
was valid and worked in PVE 6.2.
Fabian Ebner [Tue, 24 Nov 2020 11:27:28 +0000 (12:27 +0100)]
vzdump: correctly handle prune-backups option in commandline and cron config
Previously only the hash reference was printed instead of the property string.
It's also necessary to parse the property string when reading the cron config.
which led to current and pending/delete values being returned
separately, and being misinterpreted by the web interface (and probably
other clients as well).
One case where a config value should be skipped, instead of parsed and
added, is when the value is not scalar. This is the case for the raw lxc
keys (e.g. lxc.init.cmd, lxc.apparmor.profile), which get added as an
array to the 'lxc' key.
This patch reintroduces the skipping of non-scalar values when parsing
the config, but not for the pending values.
From a short look through the commit history the sanity checks were in place
since 2014 (introduced in qemu-server for handling pending configuration
changes), and their removal did not cause any other regressions.
To my knowledge only the raw lxc config keys are parsed into a non-scalar
value.
Tested by adding a 'lxc.init.cmd' key to a container config.
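The skip itself is tiny (illustrative sketch):
> for my $key (keys %$conf) {
>     my $value = $conf->{$key};
>     next if ref($value); # e.g. the 'lxc' key holds an array of raw lxc lines
>     # ... scalar values get parsed and added as before ...
> }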
Thomas Lamprecht [Wed, 20 May 2020 15:00:55 +0000 (17:00 +0200)]
fix config_with_pending_array for falsy current values
one could have a config with:
> acpi: 0
and a pending deletion for that to restore the default value of 1.
The config_with_pending_array method then pushed the key twice: once,
correctly, in the loop iterating the config itself, and once via the
pending delete hash, which is normally only for options not yet
referenced in the config at all. Here the check was on "truthiness",
not definedness; fix that.
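The gist as a minimal sketch (not the actual method):
> # wrong: a falsy-but-defined value like 'acpi: 0' looks absent
> push @pending_delete, $opt if !$conf->{$opt};
> # correct: only treat the option as absent if it is really undefined
> push @pending_delete, $opt if !defined($conf->{$opt});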
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>