Stefan Reiter [Thu, 14 Oct 2021 09:28:49 +0000 (11:28 +0200)]
swtpm: wait for pidfile
swtpm may take a little while to daemonize, so the pidfile might not be
available right after run_command. This causes an ugly warning about
using an undefined value in a match, so wait up to 5s for it to appear.
Note that in testing this loop only ever got to the first or second
iteration, so I believe the timeout duration should be more than enough.
Also add the missing 'usleep' import; 'usleep' was used before but never
imported, so apparently the other code path never got triggered...
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Stefan Reiter [Thu, 14 Oct 2021 09:28:48 +0000 (11:28 +0200)]
snapshot: fix tpmstate with rbd
QEMU doesn't know about the tpmstate, so 'do_snapshots_with_qemu' should
never return true in that case. Note that inconsistencies related to
snapshot timing do not matter much, as the actual TPM data is exported
together with other device state by QEMU anyway.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Dominik Csapak [Thu, 7 Oct 2021 13:45:31 +0000 (15:45 +0200)]
fix #3258: block vm start when pci device is already in use
on vm start, we reserve all pciids that we use, and
remove the reservation again in vm_stop_cleanup
first with only a time-based reservation, but after the vm is started,
we reserve again, this time with the pid.
for this, we have to move the start_timeout calculation above the
hostpci handling.
also moved the pci initialization out of the conf parsing loop
so that we can reserve all ids before we actually touch any of them
while touching the lines, fix the indentation
this way, a vm that is started with a pci device which is already
configured for a different running vm will not be started, and the user
gets an error that the device is already in use
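A rough sketch of the resulting flow (the helper names come from the
patches below; the argument lists are simplified for illustration):

    # on start, before touching any device: time-based reservation
    PVE::QemuServer::PCI::reserve_pci_usage($pci_ids, $vmid, $start_timeout);

    # ... QEMU is started ...

    # once the PID is known, reserve again with it
    PVE::QemuServer::PCI::reserve_pci_usage($pci_ids, $vmid, undef, $pid);

    # in vm_stop_cleanup: drop the reservation again
    PVE::QemuServer::PCI::remove_pci_reservation($pci_ids);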
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 15 Oct 2021 16:08:22 +0000 (18:08 +0200)]
pci reservation: rework helpers style and readability wise
both style and readability are naturally subjective to a certain
degree...
Also, this patch mixes a bit much into one thing, but splitting that
up would mean lots of work I just wanted to avoid, sorry about that.
Among other things:
- avoid a level of indentation in the reserve loop
- rename pciids to reservation_list where it was a better fit
- make reserve set either pid or time to avoid suggesting that we
save both
- rename parameters to requested/dropped IDs for easier understanding
what's going on in the code
- avoid old_pid/pid, use running_pid and reserver_pid instead to
clarify what they actually mean
- drop useless returns to avoid suggesting the return value has any
use and to save some lines
- use a hash slice to delete all dropped IDs at once, shorter and
faster (see the sketch below)
- use a 5 second timeout for the reservation lock; this does nothing
intensive nor does it wait for anything, so the critical section
should be really short, and 5s is really long enough for a wait...
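A minimal illustration of the hash-slice delete mentioned above (the
shape of %reservation_list is an assumption for this sketch):

    my %reservation_list = (
        '0000:01:00.0' => { vmid => 100, pid => 1234 },
        '0000:02:00.0' => { vmid => 101, time => 1634290000 },
    );
    my @dropped_ids = ('0000:01:00.0');

    # a hash slice deletes all dropped IDs in a single statement
    delete @reservation_list{@dropped_ids};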
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Fri, 15 Oct 2021 15:02:21 +0000 (17:02 +0200)]
pci reservation: move lock/reservation file into /run/qemu-server
lck needs to die, the days of any 8.3 file naming schemes are long
gone (in the server space that is ;)
/var/run is /run, so use the shorter one, and while /var/lock is an OK
place for the locks, we try to keep lock and lock-object together
nowadays. The qemu-server sub-directory avoids overly cluttering the
already crowded top-level /run dir.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Dominik Csapak [Thu, 7 Oct 2021 13:45:30 +0000 (15:45 +0200)]
pci: add helpers to (un)reserve pciids for a vm
saves a list of pciid <-> vmid mappings in /var/run
that we can check when we start a vm
if we're not given a pid but a timeout, we save the time when the
reservation will run out (current time + timeout + 5s), since the
duration of each vm start (until we can save the pid) varies from
config to config
reserve_pci_usage and remove_pci_reservation always expect a list of ids
so that we can update the reservation for a vm all at once
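A sketch of the two kinds of entries this produces (the stored
structure is an assumption for illustration):

    # with a PID we can track the process; without one, only an expiry time
    my $entry;
    if (defined($pid)) {
        $entry = { vmid => $vmid, pid => $pid };
    } else {
        $entry = { vmid => $vmid, time => time() + $timeout + 5 };  # + 5s grace
    }
    $reservations->{$pciid} = $entry;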
Stefan Reiter [Tue, 5 Oct 2021 16:02:06 +0000 (18:02 +0200)]
ovmf: support secure boot with 4m and 4m-ms efidisk types
Provide support for secure boot by using the new "4m" and "4m-ms"
variants of the OVMF code/vars templates. This is specified on the
efidisk via the 'efitype' and 'ms-keys' parameters.
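An illustrative efidisk config line using those parameters (storage and
volume name are made up):

    efidisk0: local-lvm:vm-100-disk-1,efitype=4m,ms-keys=1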
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Stefan Reiter [Mon, 4 Oct 2021 15:29:20 +0000 (17:29 +0200)]
fix #3075: add TPM v1.2 and v2.0 support via swtpm
Starts an instance of swtpm per VM in its own systemd scope; it will
terminate by itself if the VM exits, or be terminated manually if
startup fails.
Before first use, a TPM state is created via swtpm_setup. State is
stored in a 'tpmstate0' volume, treated much the same way as an efidisk.
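An illustrative config entry for such a state volume (storage, volume
name and the exact 'version' syntax are assumptions based on the
subject line):

    tpmstate0: local-lvm:vm-100-disk-2,version=v2.0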
It is migrated 'offline'; the important part here is the creation of the
target volume, as the actual data transfer happens via the QEMU device
state migration process.
Move-disk can only work offline, as the disk is not registered with
QEMU, so 'drive-mirror' wouldn't work. swtpm itself has no method of
moving a backing storage at runtime.
For backups, a bit of a workaround is necessary (this may later be
replaced by NBD support in swtpm): During the backup, we attach the
backing file of the TPM as a read-only drive to QEMU, so our backup
code can detect it as a block device and back it up as such, while
ensuring consistency with the rest of disk state ("snapshot" semantic).
The name for the ephemeral drive is specifically chosen as
'drive-tpmstate0-backup', diverging from our usual naming scheme with
the '-backup' suffix, to avoid it ever being treated as a regular drive
from the rest of the stack in case it gets left over after a backup for
some reason (shouldn't happen).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Like for other API calls, repeat the cheap checks (done for early abort
before forking and without locks) after forking and obtaining the lock,
and only hold the flock in the forked worker instead of across the fork.
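A rough sketch of that pattern (the check/work routines and the task
type are placeholders):

    # cheap checks first, for early abort without a lock
    check_preconditions($vmid);

    my $worker = sub {
        PVE::QemuConfig->lock_config($vmid, sub {
            # repeat the cheap checks now that we hold the lock
            check_preconditions($vmid);
            do_the_actual_work($vmid);
        });
    };
    return $rpcenv->fork_worker('taskname', $vmid, $authuser, $worker);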
If a volume is only referenced in the pending section of a config, it
was previously not removed when removing the VM, unless the non-default
'remove unreferenced disks' option was enabled.
keeping track of volume IDs which we attempt to remove gets rid of false
warnings in case a volume is referenced both in the config and the
pending section, or multiple times in the config for other reasons.
Fabian Ebner [Fri, 25 Jun 2021 12:32:05 +0000 (14:32 +0200)]
migrate: use correct target storage id for checks
The '--targetstorage' parameter does not apply to shared storages.
Example of a problem solved by the enabled check: given a VM with
images only on a shared storage 'storeA' that is not available on the
target node (i.e. restricted by the nodes property), using
'--targetstorage storeB' would make offline migration suddenly
"work", but of course the disks would not be accessible, and trying
to migrate back would then fail...
Example of a problem solved by the content type check: if a
VM had a shared ISO image and there was a '--targetstorage storeA'
option, availability of the 'iso' content type was checked for
'storeA', which is wrong, as the ISO would not be moved to that
storage.
The assumption that the index of the controller matches that of the last
removed drive only holds for the virtio-scsi-single controller, which
makes the old code print a warning when removing the last drive of a
non-virtio-scsi-single controller, except when the indices line up by
chance.
We can instead call a simplified qemu_iothread_del only when removing a
scsi disk of a VM with the virtio-scsi-single controller, and skip the
call for the other controllers, which don't support io-threads anyway.
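A minimal sketch of that guard (the call signature is simplified for
illustration):

    # only virtio-scsi-single has one iothread per disk, so only then
    # is there anything to delete
    if (($conf->{scsihw} // '') eq 'virtio-scsi-single') {
        qemu_iothread_del($conf, $vmid, $deviceid);
    }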
Dominik Csapak [Thu, 5 Aug 2021 11:53:01 +0000 (13:53 +0200)]
api2: only add ide drives for non-legacy bootorders
@bootorder only contains non-legacy bootorder entries,
but the default one contains all cdroms anyway, and if the user
explicitly disabled cdroms, it is ok to not add them back
for the new cdrom drive.
We unconditionally added an entry into the bootorder whenever we
edited the drive, even if it was already in there. Instead we only want to do
that if the bootorder list does not contain it already.
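A minimal sketch of the intended check (names are illustrative):

    # only append the new drive if the boot order does not already list it
    push @bootorder, $drive_key if !grep { $_ eq $drive_key } @bootorder;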
Stefan Reiter [Mon, 5 Jul 2021 09:14:12 +0000 (11:14 +0200)]
api: always add new CD drives to bootorder
Attaching an ISO image to a VM is usually/often done for two reasons:
* booting an installer image
* supplying additional drivers to an installer (e.g. virtio)
Both of these cases (the latter at least with SeaBIOS and the Windows
installer) require the disk to be marked as bootable.
For this reason, enable the bootable flag for all new CDROM drives
attached to a VM by adding it to the bootorder list. It is appended to
the end, as otherwise it would cause new drives to boot before already
existing boot targets, which would be a more grave (and IMO bad)
behaviour change.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Thomas Lamprecht [Fri, 23 Jul 2021 08:55:16 +0000 (10:55 +0200)]
lvm: avoid the use of IO uring
There may be a kernel issue or a bug in how QEMU uses io_uring, but we
have users reporting crashes, which f.ebner could reproduce on some
workloads. It is not really deterministic though, and it seems that in
newer kernel versions (5.12+) the crash becomes a hang.
While we're closing in on the actual issue here (which could be the
same as for RBD), let's disable io_uring for LVM.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Stefan Reiter [Thu, 1 Jul 2021 09:37:29 +0000 (11:37 +0200)]
live-restore: preload efidisk before starting VM
The efidisk never got restored correctly before, since we don't use the
generic print_drive_commandline_full for it, and as such it didn't get a
backing image attached. This not only causes the efidisk data to be lost
on restore, but also causes an error at the end, since we try to remove
a non-existent PBS blockdev.
Since it is attached differently from a regular drive, adding PBS
backing would be more difficult. But not to worry: an efidisk is small
enough that it doesn't hurt performance to just restore it via the
regular mechanism before starting the VM, and to simply exclude it from
the live restore entirely.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Stefan Reiter [Wed, 30 Jun 2021 15:18:17 +0000 (17:18 +0200)]
cfg2cmd/drive: don't use io_uring for krbd with wb/wt cache
As reported here and locally reproduced:
https://forum.proxmox.com/threads/efi-vms-wont-start-under-7-beta-with-writeback-cache.91629/
This configuration is currently broken. Until we figure out how to fix
it properly, we can just have this (luckily very narrow) config pattern
fall back to aio=threads as it used to.
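A sketch of the narrow fallback described above (the storage/drive
field names are assumptions):

    # krbd-backed drives with writeback/writethrough cache cannot use
    # io_uring for now, so fall back to the previous default
    my $cache = $drive->{cache} // '';
    if ($scfg->{type} eq 'rbd' && $scfg->{krbd} && $cache =~ /^(?:writeback|writethrough)/) {
        $aio = 'threads';
    }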
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Otherwise it'll produce a whole lot of checksum errors.
And while this would be nice as a storage feature check,
it's hard to be 100% accurate there anyway, since a directory
storage can point anywhere, like for instance a btrfs
directory, causing the same issue...
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
This allows effectively setting ALL volumes as read-only, even if the
disk controller does not support it. Without it, IDE and SATA disks
with (base) volumes which are marked read-only/immutable on the storage
level prevent the template VM from starting for backup purposes.
otherwise backups of templates using UEFI fail with storages like LVM
thin, where the volumes are not writable. disk controllers like IDE and
SATA that don't support being read-only are still broken for UEFI.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
[ drop the readonly=off when not required, resolve merge conflict
from Dominik's EFI disk cache mode fix ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
for unprivileged users (and possibly some root setups). Reading from
pmxcfs now results in a hard error for unprivileged users, so there
might be some more of these lurking somewhere...
Stefan Reiter [Mon, 21 Jun 2021 16:35:41 +0000 (18:35 +0200)]
use KillMode 'process' for systemd scope
KillMode 'none' is deprecated, and systemd loudly complains about that
in the journal. To avoid the warning, but keep the behaviour the same,
use KillMode 'process'.
This mode does two things differently, which we have to stop it from
doing:
* it sends SIGTERM right when the scope is cancelled (e.g. on shutdown)
-> but only to the "root" process, which in our case is the worker
instance forking QEMU, so it is already dead by the time this happens
* it sends SIGKILL to *all* children after a timeout
-> this can be avoided by setting either SendSIGKILL to false, or
TimeoutStopUSec to infinity; for safety, we do both (see the snippet
below)
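The unit-file equivalents of those settings look roughly like this (a
sketch; qemu-server sets them as properties when creating the scope,
not via a unit file):

    [Scope]
    KillMode=process
    SendSIGKILL=no
    TimeoutStopSec=infinity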
In my testing, this replicated the previous behaviour exactly, but
without using the deprecated 'none' mode.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Stefan Reiter [Mon, 21 Jun 2021 15:33:18 +0000 (17:33 +0200)]
cfg2cmd: make io_uring default
The 'aio' setting is not visible to the guest, and so can be changed
during migrations or snapshots without issue. It is thus only
dependent on the actual QEMU version being >= 6.0, not the machine
version.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Fabian Ebner [Fri, 18 Jun 2021 10:59:34 +0000 (12:59 +0200)]
migrate: enforce that image content type is available
and use it for the vdisk_list call too. This avoids scanning (and picking up
volumes from!) storages that are not even configured to hold images.
Previously, the content type was only enforced when a storage map was present.
This also serves a bit as preparation for enforcing the content type on
guest startup, because now migration failure happens early and not only
when trying to start the guest on the remote node.
Fabian Ebner [Fri, 18 Jun 2021 10:59:33 +0000 (12:59 +0200)]
prefer storage_check_enabled over storage_check_node
storage_check_enabled simply checks for the 'disable' option and then calls
storage_check_node.
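Roughly, storage_check_enabled behaves like this (a simplified sketch,
not the verbatim implementation):

    sub storage_check_enabled {
        my ($cfg, $storeid, $node, $noerr) = @_;

        my $scfg = PVE::Storage::storage_config($cfg, $storeid);
        if ($scfg->{disable}) {
            return undef if $noerr;
            die "storage '$storeid' is disabled\n";
        }
        return PVE::Storage::storage_check_node($cfg, $storeid, $node, $noerr);
    }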
While not strictly necessary for a second call where only the storage differs,
e.g. in case of clone, it is more future-proof: if support for a target storage
is added at some point, it might be easy to miss adapting the call.
For the migration checks, the situation is improved by now always catching
disabled (target) storages.
Fabian Ebner [Mon, 31 May 2021 14:27:10 +0000 (16:27 +0200)]
test: fix restore config test as unprivileged user
after upgrading to bullseye, the cfs_read_file call within
restore_update_config_line() results in an error:
Is a directory!
when done as an unprivileged user.
Fabian Ebner [Tue, 1 Jun 2021 06:43:06 +0000 (08:43 +0200)]
vm status: force int where appropriate
to avoid potential problems with stringified numbers in JavaScript and
elsewhere.
The vmid was not always an integer as the return schema expects, namely
when there was an opt_vmid argument, because the 'ne' comparison coerced
the vmid to be a string then.
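A minimal Perl illustration of the coercion and the fix (variable names
are hypothetical):

    my $vmid = 100;
    my $opt_vmid = '100';

    # the 'ne' comparison flags $vmid's scalar as a string, so a JSON
    # encoder would later emit "100" instead of 100
    my $skip = defined($opt_vmid) && $vmid ne $opt_vmid;

    my $d = { vmid => int($vmid) };  # force int to get a number again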
avoid setting lun number for drives when pvscsi controller is used
Reported in the community forum[0].
In QEMU's hw/scsi/vmw_pvscsi.c in the SCSIBusInfo struct, the max_lun property
is set to 0. This means that in our stack, one cannot have multiple disks and
use 'scsihw: pvscsi' currently, as kvm would fail with
bad scsi device lun: 1
Instead of increasing the lun number, increase the scsi-id, as we already do for
lsi.* (in hw/scsi/lsi53c895a.c the max_lun property is also 0).
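Illustrative QEMU device arguments before and after (bus name and IDs
are made up):

    # before: second disk on a higher LUN, which pvscsi rejects
    -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1

    # after: keep lun=0 and increase the scsi-id instead
    -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi1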
Dominik Csapak [Wed, 16 Jun 2021 13:09:33 +0000 (15:09 +0200)]
fix #3329: turn on cache=writeback for efidisks on rbd
on slower ceph clusters, the write pattern of the ovmf booting process
slows down the boot of the vm, so we turn on caching by default
it seems no other storage (until now) behaves like this. If one does in
the future, we can still add it too, or add a 'cache' property for
the efidisk.
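A sketch of that default (field names are assumptions):

    # default to writeback caching for efidisks on RBD-backed storages
    if (!defined($drive->{cache}) && $scfg->{type} eq 'rbd') {
        $drive->{cache} = 'writeback';
    }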