Fabian Ebner [Tue, 1 Jun 2021 06:43:06 +0000 (08:43 +0200)]
vm status: force int where appropriate
to avoid potential problems with stringified numbers in JavaScript and
elsewhere.
The vmid was not always an integer as the return schema expects, namely
when there was an opt_vmid argument, because the 'ne' comparison then
coerced the vmid to be a string.
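A minimal sketch of the Perl behaviour involved (illustrative only, not
the actual code; JSON stands in for whatever serializer ends up encoding
the API result):

    use JSON;

    my $vmid = 100;
    my $opt_vmid = '200';

    # 'ne' compares as strings; merely using $vmid in string context
    # makes JSON encoders emit it as a string from then on
    my $differs = $vmid ne $opt_vmid;

    print encode_json({ vmid => $vmid }), "\n";      # {"vmid":"100"}
    print encode_json({ vmid => int($vmid) }), "\n"; # {"vmid":100}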
avoid setting lun number for drives when pvscsi controller is used
Reported in the community forum[0].
In QEMU's hw/scsi/vmw_pvscsi.c in the SCSIBusInfo struct, the max_lun property
is set to 0. This means that in our stack, one cannot have multiple disks and
use 'scsihw: pvscsi' currently, as kvm would fail with
bad scsi device lun: 1
Instead of increasing the lun number, increase the scsi-id, as we already do for
lsi.* (in hw/scsi/lsi53c895a.c the max_lun property is also 0).
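A minimal sketch of the addressing logic (variable names assumed, not the
actual qemu-server code):

    my $scsihw = 'pvscsi';  # from the VM config
    my $drive_index = 1;    # second SCSI disk

    my ($scsi_id, $lun) = (0, $drive_index);
    if ($scsihw =~ m/^(lsi|pvscsi)/) {
        # these controllers report max_lun = 0, so give each disk its
        # own target (scsi-id) instead of a LUN
        ($scsi_id, $lun) = ($drive_index, 0);
    }
    my $device = "scsi-hd,bus=scsihw0.0,scsi-id=$scsi_id,lun=$lun";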
Dominik Csapak [Wed, 16 Jun 2021 13:09:33 +0000 (15:09 +0200)]
fix #3329: turn on cache=writeback for efidisks on rbd
on slower ceph clusters, the write pattern of the ovmf booting process
slows down the boot of the vm, so we turn on caching by default
it seems no other storage (until now) behaves like this. If one does in
the future, we can still add it too, or add a 'cache' property for
the efidisk
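A sketch of the idea using PVE::Storage helpers (the surrounding
variable names are assumptions):

    my ($storeid) = PVE::Storage::parse_volume_id($efidisk->{file}, 1);
    if ($storeid) {
        my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
        # default to writeback on RBD only; everything else keeps
        # its current default
        $efidisk->{cache} = 'writeback'
            if !$efidisk->{cache} && $scfg->{type} eq 'rbd';
    }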
Fabian Ebner [Fri, 4 Jun 2021 13:49:29 +0000 (15:49 +0200)]
scan volids: remove superfluous parameter
The only caller that didn't use 'images' was removed as part of the migration
refactoring in commit 62a4c963b824c923a4fc82a48c81d0f63ebaddae, so this is not
even a breaking change as the 'PVE 7' comment might've suggested.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Reviewed-by: Stefan Reiter <s.reiter@proxmox.com>
Stefan Reiter [Thu, 27 May 2021 10:27:51 +0000 (12:27 +0200)]
qm: assume correct VNC setup in 'vncproxy', disallow passwordless
The QMP 'change' command is no longer available since QEMU 6.0, so this
cannot work anymore. Instead of replacing it, we can simply remove it:
The 'if' branch would only set the VNC socket path anew and enable
password mode, which is always set and enabled on startup already.
The 'else' branch was intended for certificate login (?), which
according to the FIXME comment is long gone anyway - simply forbid
'vncproxy' without the PVE ticket environment variable set.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
for IDE and SATA, setting the whole drive to read-only mode is not
possible. Skip the readonly flag for such drives as a workaround until
we find a better solution.
Previously, this would return the size of scsi0 even though the VM would
first boot from cdrom/network.
When editing the boot order with such a legacy config, we remove the
'bootdisk' property and replace the legacy notation with an explicit
order, but then we would only search the first disk for the size.
Restore that behaviour by iterating over all disks in the boot
order property string until we get one that is not a cdrom
and has a size.
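A sketch of that iteration (assuming the new 'order=' notation and the
drive helpers from PVE::QemuServer::Drive):

    my $bootorder = $conf->{boot} =~ s/^order=//r;  # e.g. 'ide2;scsi0;net0'

    my $bootsize;
    for my $dev (PVE::Tools::split_list($bootorder)) {
        my $drive = PVE::QemuServer::Drive::parse_drive($dev, $conf->{$dev});
        next if !$drive || PVE::QemuServer::Drive::drive_is_cdrom($drive);
        next if !$drive->{size};
        $bootsize = $drive->{size};  # first non-cdrom disk with a size wins
        last;
    }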
Thomas Lamprecht [Thu, 29 Apr 2021 12:27:41 +0000 (14:27 +0200)]
Revert "migration: do not set default speed limit"
The default was changed for QEMU 5.2, so while it is no longer 32 MiB/s,
it is still 128 MiB/s, which I did not notice on my 1 Gbps (or < 125
MiB/s) setup. For users with links faster than one gigabit it now did
some limiting - so set up a very high limit, so that even 100G should
not max this out.
The variable is only ever used for calculating the average speed of the
memory migration, but it was already set before disk mirroring. Since the
disk sizes are not included in the calculation, this resulted in (very)
wrong values.
Thomas Lamprecht [Mon, 19 Apr 2021 20:01:05 +0000 (22:01 +0200)]
migration: keep log rate steady if polling gets more frequent
Either we're done in a few seconds anyway, or if the VM dirties lots
of pages we need quite a bit of time, and then it does not help to
output roughly the same status 10 times a second...
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 19 Apr 2021 19:54:33 +0000 (21:54 +0200)]
migration: rework logging to be more human friendly, less spammy
* use render_bytes where possible, to print units that are quick to
read and grasp
* xbzrle is only interesting if pages/bytes are actually sent using
it, so only log in that case
* log if the VM dirties more than we send
* log the current speed we get from QEMU
In general, fewer lines are logged and huge integers are avoided.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Fri, 29 Jan 2021 15:11:43 +0000 (16:11 +0100)]
migration: move finishing block jobs to phase2 for better/uniform error handling
avoids the possibility of dying during phase3_cleanup, and instead of
needing to duplicate the cleanup ourselves, we benefit from phase2_cleanup
doing so.
The duplicate cleanup was also very incomplete: it didn't stop the remote
kvm process (leading to 'VM already running' when trying to migrate again
afterwards) even though it removed its disks, and it didn't unlock the
config, close the tunnel, or cancel the block-dirty bitmaps.
Since migrate_cancel should do nothing after the (non-storage) migrate process
has completed, even that cleanup step is fine here.
Since phase3 is empty at the moment, the order of operations is still the same.
Also add a test that would have complained about finish_tunnel not being
called before this patch. That test also checks that local disks are not
removed before the block jobs are finished.
Fabian Ebner [Fri, 29 Jan 2021 15:11:39 +0000 (16:11 +0100)]
migration: cleanup_remotedisks: simplify and include more disks
Namely, those migrated with storage_migrate by using the information from
volume_map. Call cleanup_remotedisks in phase1_cleanup as well, because
that's where we end up if sync_offline_local_volumes fails, and some disks
might already have been transferred successfully. Note that the local
disks are still present, so this is fine.
Fabian Ebner [Fri, 29 Jan 2021 15:11:38 +0000 (16:11 +0100)]
migration: simplify removal of local volumes and get rid of self->{volumes}
This also changes the behavior to remove the local copies of offline migrated
volumes only after the migration has finished successfully (this is relevant
for mixed settings, e.g. online migration with unused/vmstate disks).
local_volumes contains both the volumes previously in $self->{volumes}
and the volumes in $self->{online_local_volumes}, and hence is the place
to look for which volumes we need to remove. Of course, replicated
volumes still need to be skipped.
Fabian Ebner [Fri, 29 Jan 2021 15:11:35 +0000 (16:11 +0100)]
migration: fix calculation of bandwidth limit for non-disk migration
The case with:
1. no generic 'migration' limit from the storage plugin
2. a migrate_speed limit in the VM config
was broken. It would assign 0 to migrate_speed when picking the minimum
value and then fall back to the default value. Fix it by checking whether
bwlimit is 0 before picking the minimum.
Also, make it a bit more readable by avoiding the trick of //-assigning bwlimit
before the units match up and relying on getting back the original bwlimit value
as the minimum. Instead, only ||-assign after the units match up and don't rely
on other things.
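A sketch of the fixed precedence (names assumed; the storage limit is in
KiB/s while migrate_speed is configured in MB/s, hence matching the units
first):

    my $bwlimit = PVE::Storage::get_bandwidth_limit(
        'migration', [$storeid], $opt_bwlimit);

    my $migrate_speed = $conf->{migrate_speed} // 0;
    $migrate_speed *= 1024;  # MB/s -> KiB/s

    if ($bwlimit && $migrate_speed) {
        # both limits are set: the lower one wins
        $migrate_speed = $bwlimit < $migrate_speed ? $bwlimit : $migrate_speed;
    } else {
        # at most one is set (0 means 'no limit'): take the non-zero one
        $migrate_speed ||= $bwlimit;
    }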
Fabian Ebner [Fri, 29 Jan 2021 15:11:32 +0000 (16:11 +0100)]
migration: split sync_disks into two functions
by making local_volumes class-accessible. One function is for scanning all
local volumes and one is for actually syncing offline volumes via
storage_migrate. The exception is replicated volumes; their sync still
happens during the scan for now.
Also introduce a filter_local_volumes helper to make life easier
(sketched below).
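A rough sketch of what such a helper could look like (signature and
fields are assumptions, not the actual implementation):

    sub filter_local_volumes {
        my ($self, $migration_mode) = @_;

        my $volumes = $self->{local_volumes};
        # return the volids whose planned mode matches, e.g. 'offline'
        return grep {
            !defined($migration_mode)
                || $volumes->{$_}->{migration_mode} eq $migration_mode
        } sort keys %$volumes;
    }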
Fabian Ebner [Mon, 22 Mar 2021 14:32:43 +0000 (15:32 +0100)]
filter by content type when using vdisk_list
except for migration, where it would be subtly backwards-incompatible. Since
there is a scan_volids call for migration, we can't default to filtering in
scan_volids just yet.
This also allows getting rid of the existing filtering hack in rescan().
When checking whether a volume is still referenced by a snapshot, the volid
itself is first checked. When the volid is different, we fall back to comparing
the path.
As the first value to be compared is a volume's path, the second value
had better be a volume's path too, and not a snapshot's path.
The error that led me here:
* had a VM with ZFS over iSCSI storage with an existing snapshot
* added a new unused drive
* tried to remove the unused drive
* it fails, because the ZFS (not Pool!) plugin does not support snapshot paths.
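In code, the fixed fallback comparison looks roughly like this (a sketch,
names assumed):

    my $path = PVE::Storage::path($storecfg, $volid);
    # resolve the snapshot's *volume* - do not pass a snapshot name,
    # since plugins like ZFSPlugin die on snapshot paths
    my $snap_vol_path = PVE::Storage::path($storecfg, $snap_volid);
    return 1 if $path eq $snap_vol_path;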
Stefan Reiter [Wed, 3 Mar 2021 09:56:11 +0000 (10:56 +0100)]
live-restore: register qmeventd handle
Similar to backups, prevent QEMU from being killed by qmeventd during
the live-restore, so a guest can shut itself down without aborting the
restore operation.
Note that the 'close' is only there to be explicit; the handle will also
be closed in case an operation errors (i.e. when the 'eval' is left).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Stefan Reiter [Wed, 3 Mar 2021 09:56:09 +0000 (10:56 +0100)]
enable live-restore for PBS
Enables live-restore functionality using the 'alloc-track' QEMU driver.
This allows starting a VM immediately when restoring from a PBS
snapshot. The snapshot is mounted into the VM, so it can boot from that,
while guest reads and a 'block-stream' job handle the restore in the
background.
If an error occurs, the VM is deleted and all data written during the
restore is lost.
The VM remains locked during the restore, which automatically prohibits
any modifications to the config while restoring. Some modifications
might potentially be safe, however, this is experimental enough that I
believe allowing them would cause more bad stuff(tm) than actually
satisfy any use cases.
Pool handling is slightly adjusted so the VM can be added to the pool
before the restore starts.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Stefan Reiter [Wed, 3 Mar 2021 09:56:08 +0000 (10:56 +0100)]
cfg2cmd: allow PBS snapshots as backing files for drives
Uses the custom 'alloc-track' filter node to redirect writes to the
original drives target, while unwritten blocks will be read from the
specified PBS snapshot.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Stefan Reiter [Wed, 3 Mar 2021 09:56:07 +0000 (10:56 +0100)]
make qemu_drive_mirror_monitor more generic
...so it works with other block jobs as well. The intended use case is
block-stream, which also requires a new "auto" (wait only) completion
mode, since it finishes automatically anyway.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Mira Limbeck [Mon, 29 Mar 2021 12:07:15 +0000 (14:07 +0200)]
fix #2670: cloudinit enable SLAAC
cloud-init's SLAAC option was disabled in 2018 because there was no
support for it. Now that cloud-init 19.4 and newer versions are more
widespread, we can finally re-enable it.
Also include the minimum required cloud-init version for SLAAC support in
the format description.
Stefan Reiter [Tue, 30 Mar 2021 15:59:52 +0000 (17:59 +0200)]
increase timeout for QMP block_resize
In testing this usually completes almost immediately, but in theory this
is a storage/IO operation and as such can take a bit to finish. It's
certainly not unthinkable that it might take longer than the default *3
seconds* we've given it so far. Make it a minute.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Fabian Ebner [Tue, 23 Mar 2021 09:15:23 +0000 (10:15 +0100)]
api: migrate: fix variable name
Commit abff03211f28018ce193911173afa39ba1a6ff24 switched to iterating over the
values instead of the keys, but didn't update the variable name. Use target_sid,
because target is already in use for the target node.
Stefan Reiter [Tue, 16 Mar 2021 16:30:23 +0000 (17:30 +0100)]
snapshot: set migration caps before savevm-start
A "savevm" call (both our async variant and the upstream sync one) use
migration code internally. As such, they both expect migration
capabilities to be set.
This is usually not a problem, as the default set of capabilities is ok.
However, it leads to differing snapshot settings if one takes a snapshot
after a machine has been live-migrated (as the capabilities persist
from that), which could potentially lead to discrepancies between
snapshots (currently it seems to be fine, but it still makes sense to set
them to safeguard against future changes).
Note that we do set the "dirty-bitmaps" capability now (if
query-proxmox-support reports true; see the sketch after the list below),
which has three effects:
1) PBS dirty-bitmaps are preserved in snapshots, enabling
fast-incremental backups to work after rollback (as long as no newer
backups exist), including for hibernate/resume
2) snapshots taken from now on, with a QEMU version supporting bitmap
migration, *might* lead to incompatibility of these snapshots with
QEMU versions that don't know about bitmaps at all (i.e. < 5.0 IIRC?)
- forward compatibility is still given, and all other capabilities we
set go back to very old versions
3) since we now explicitly disable bitmap saving if the version doesn't
report support, we avoid crashes even with not-updated QEMU versions
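A sketch of the capability handling via QMP (mon_cmd is from
PVE::QemuServer::Monitor; the exact query-proxmox-support field name is
an assumption):

    use JSON;
    use PVE::QemuServer::Monitor qw(mon_cmd);

    my $qemu_support = eval { mon_cmd($vmid, 'query-proxmox-support') };

    my $caps = [
        { capability => 'xbzrle', state => JSON::true },
        {
            capability => 'dirty-bitmaps',
            # only migrate bitmaps if QEMU reports support, otherwise
            # older binaries would crash during savevm
            state => $qemu_support->{'pbs-dirty-bitmap-migration'}
                ? JSON::true : JSON::false,
        },
    ];
    mon_cmd($vmid, 'migrate-set-capabilities', capabilities => $caps);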
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
At this stage, there are no keys in %storage_limits to iterate over. The
refactoring in commit 9f3d73bc353c79f84498122b779764184f504005 broke the logic
by accident.
Also, explicitly set zero if there is no limit, to avoid repeating the
get_bandwidth_limit call for the same storage. When accessing the value
later, zero is already correctly handled as 'no limit'.
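A sketch of the fixed flow (surrounding names assumed):

    # collect the storages first, so there are keys to iterate over ...
    my %storage_limits;
    $storage_limits{$_->{targetsid}} = undef
        for values %{$self->{local_volumes}};

    # ... then resolve each storage's limit exactly once
    for my $storeid (keys %storage_limits) {
        my $bwlimit = PVE::Storage::get_bandwidth_limit(
            'migration', [$storeid], $opt_bwlimit);
        # store 0 for 'no limit' so the lookup is never repeated
        $storage_limits{$storeid} = $bwlimit ? $bwlimit * 1024 : 0;
    }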
Fabian Ebner [Mon, 8 Mar 2021 12:26:57 +0000 (13:26 +0100)]
restore: write new config to variable first
and use file_set_contents to really commit it afterwards. Mostly done as a
preparation for the later patch for sanitizing the config on restore, but
shouldn't hurt by itself either.
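A sketch of the pattern (file_set_contents is from PVE::Tools and writes
via a temporary file plus rename, so the config file is replaced
atomically):

    use PVE::Tools;

    my $new_conf = '';
    $new_conf .= "$_: $conf->{$_}\n" for sort keys %$conf;

    # nothing touches the file until the whole config was built successfully
    PVE::Tools::file_set_contents($conffile, $new_conf);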
Stefan Reiter [Mon, 8 Mar 2021 15:32:30 +0000 (16:32 +0100)]
vzdump: increase PBS 'backup' QMP call timeout
Commit "a941bbd0 client: raise HTTP_TIMEOUT to 120s" in proxmox-backup
did the same, however, we would now still fail after 60 seconds since
the QMP call would time out.
Increase the timeout here to the same +5 seconds to give some time to
receive a response, so if the HTTP call in proxmox-backup times out, we
can still get a useful error message instead of timing out the QMP call
too.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
In the future, it might make sense to generalise
check_vm_modify_config_perm and have it take not only keys, but both new
and old values, and use that generalised function everywhere.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
always pin windows VMs to a machine version by default
A fix for violating an important standard for booting[0] in the recently
packaged QEMU 5.2 surfaced some issues with Windows-based VMs in our
forum[1], which seem to be quite sensitive to such changes (it seems
they derive lots of their device assignment from ACPI).
A user-visible effect is the loss of any network configuration, due to
Windows thinking the NIC was swapped with a new one and starting with a
fresh config - this is mostly problematic for setups with static
address assignment.
There may be lots of other, more subtle, effects and the PVE admin is
also not always the VM admin, so we really need to avoid such
negative effects. Do this by pinning the machine version of any
Windows-based VM to either the minimum of (5.1, kvm-version) for existing
VMs, or the kvm-version at the time of VM creation for new ones.
There are patches in pve-manager that allow users to change the pinned
version themselves in the web interface, so this can also be adapted more
easily if any other issues (with new or old versions) surface in the
future.
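A sketch of the pinning logic (helper and variable names are assumptions,
in particular $is_existing_vm):

    my $kvmver = PVE::QemuServer::kvm_user_version();  # e.g. '5.2.0'
    my ($major, $minor) = $kvmver =~ m/^(\d+)\.(\d+)/;

    if ($conf->{ostype} && $conf->{ostype} =~ m/^(win|w2k|wxp|wvista)/) {
        # existing VMs get min(5.1, kvm-version), new ones the current one
        if ($is_existing_vm && ($major > 5 || ($major == 5 && $minor > 1))) {
            ($major, $minor) = (5, 1);
        }
        $conf->{machine} //= "pc-i440fx-$major.$minor";
    }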