Fiona Ebner [Wed, 18 Jan 2023 12:21:08 +0000 (13:21 +0100)]
swtpm: enable logging
AFAICT, previously, errors from swtpm would not show up in any logs,
because they were just printed to the stderr of the daemonized
invocation here.
The 'truncate' option is not used, so that the log is not immediately
lost when a new instance is started. This increases the chance that
the relevant errors are still present when requesting the log from a
user.
Log level 1 contains the most relevant errors and seems to be quiet
for working-as-expected invocations. Log level 2 already includes
logging full TPM commands, some of which are 1024 bytes long. Thus,
log level 1 was chosen.
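For illustration, this boils down to appending something like the following
to the swtpm command line (a sketch; the log path and array name here are
assumptions, not the actual qemu-server code):

    # assumption: @swtpm_cmd already holds the swtpm binary and its other arguments
    my $log_file = "/run/qemu-server/$vmid-swtpm.log";   # hypothetical path
    push @swtpm_cmd, '--log', "file=$log_file,level=1";  # no 'truncate', so old errors survive restarts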
Fiona Ebner [Thu, 23 Feb 2023 09:49:03 +0000 (10:49 +0100)]
start: make not being able to set polling interval for ballooning non-critical
The guest will be running, so it's misleading to fail the start task
here. Also ensures that we clean up the hibernation state upon resume
even if there is an error here, which did not happen previously[0].
Fiona Ebner [Fri, 10 Feb 2023 14:19:12 +0000 (15:19 +0100)]
fix #4525: clone disk: disallow mirror if it might cause problems with io_uring
The target of the drive-mirror operation is opened with (essentially)
the same flags as the source in QEMU, in particular whether io_uring
should be used is inherited.
But io_uring currently causes problems in combination with certain
storage types, sometimes even leading to crashes (LVM with Linux 6.1).
Just disallow live cloning of drives when the source uses io_uring and
the target storage is not ready for it. There is one exception, namely
when source and target storage are the same. In that case, just assume
it will keep working for the target.
Migration does not seem to be affected, because there, the target VM
opens the images with the checked aio setting and then NBD exports of
those are used as the targets for mirroring.
It can happen that the default determined for the source is not what's
actually in use, because after a drive-mirror to a storage with a
different default, the drive still uses the default from the old storage.
Unfortunately, aio doesn't seem to be part of the 'query-block' QMP
command's result, so just tolerate this edge case.
The check can be removed if either
1. drive-mirror learns to open the target with a different aio setting
or more ideally
2. there are no more bad storages for io_uring.
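A rough sketch of the check described above (variable names and message
wording are illustrative, not the exact code):

    # assumption: the aio setting of the source and the io_uring-readiness of the
    # target storage were determined by the caller beforehand
    if ($src_aio eq 'io_uring' && !$dst_supports_io_uring && $src_storeid ne $dst_storeid) {
        die "unable to use drive-mirror with io_uring for this target storage - "
            . "use a different target storage or shut the VM down first\n";
    }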
Fiona Ebner [Wed, 18 Jan 2023 13:52:40 +0000 (14:52 +0100)]
close #2792: allow online migration with replicated snapshots
Since commit 9b6efe43 ("migrate: add live-migration of replicated
disks") live-migration with replicated volumes is possible. When
handling the replication, it is checked that all local volumes
previously detected as replicatable are actually replicated. So the
check if migration with snapshots is possible can just allow volumes
that are detected as replicatable.
Note that VM state files are also replicated.
If there is an invalid configuration with a non-replicatable volume or
state file and replication is enabled, then replication will fail, and
thus migration will fail early.
Trying to live-migrate to a non-replication target (needs --force)
will still fail if there are snapshots, because they are (correctly)
detected as non-replicated.
Noel Ullreich [Mon, 16 Jan 2023 14:24:10 +0000 (15:24 +0100)]
fix #4378: standardized error for ovmf files
The error messages for missing OVMF_CODE and OVMF_VARS files were
inconsistent, and the error for the missing base var file did not tell
you the expected path.
Fiona Ebner [Fri, 2 Dec 2022 12:54:52 +0000 (13:54 +0100)]
migration: nbd export: switch away from deprecated QMP command
The 'nbd-server-add' QMP command has been deprecated since QEMU 5.2 in
favor of a more general 'block-export-add'.
When using 'nbd-server-add', QEMU internally converts the parameters
and calls blk_exp_add() which is also used by 'block-export-add'. It
does one more thing, namely calling nbd_export_set_on_eject_blk() to
auto-remove the export from the server when the backing drive goes
away. But that behavior is not needed in our case; stopping the NBD
server removes the exports anyway.
It was checked with a debugger that the parameters to blk_exp_add()
are still the same after this change. The only difference is that the
block node names are auto-generated and not consistent across invocations.
The alternative to using 'query-block' would be specifying a
predictable 'node-name' for our '-drive' commandline. It's not that
difficult for this use case, but in general one needs to be careful
(e.g. it can't be specified for an empty CD drive, but would need to
be set when inserting a CD later). Querying the actual 'node-name'
seemed a bit more future-proof.
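Roughly, the switch amounts to the following (a sketch only; mon_cmd is
the existing QMP helper, the export-ID handling is simplified):

    # deprecated since QEMU 5.2:
    #   mon_cmd($vmid, 'nbd-server-add', device => "drive-$drive", writable => JSON::true);

    # new: look up the auto-generated node name, then add a block export for it
    my $block_info = mon_cmd($vmid, 'query-block');
    my ($node_name) = map { $_->{inserted}->{'node-name'} }
        grep { ($_->{device} // '') eq "drive-$drive" } @$block_info;

    mon_cmd($vmid, 'block-export-add',
        id => "drive-$drive",
        'node-name' => $node_name,
        type => 'nbd',
        name => "drive-$drive",
        writable => JSON::true,
    );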
Stefan Sterz [Tue, 20 Dec 2022 10:30:36 +0000 (11:30 +0100)]
cd rom handling: return a clearer error when there is no cd rom drive
when a vm is configured to use a physical cd rom drive but there is no
such drive, a cryptic "uninitialized value" error is thrown. this is
due to `$path` being undefined in `sub print_drive_commandline_full`.
warn that no cd rom drive is available instead.
note that the error was cosmetic as the vm would start just fine.
forum thread: https://forum.proxmox.com/threads/119592/
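the fix is essentially a guard along these lines (a sketch; how the empty
path is handled afterwards is simplified):

    # in print_drive_commandline_full: $path can be undef when the configured
    # physical cd rom drive does not exist on this host
    if (!defined($path)) {
        warn "no physical cd rom drive available\n";
        $path = '';
    }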
Stefan Hanreich [Thu, 5 Jan 2023 14:51:56 +0000 (15:51 +0100)]
fix #4358: destroy_vm: Ignore 'suspended' lock when destroying VM
Since we can now differentiate between 'suspended' and 'suspending',
it is possible to ignore the 'suspended' lock when destroying a VM.
It shouldn't matter whether the VM is locked because of hibernation
when you want to remove it. Therefore we can safely ignore the lock.
Fiona Ebner [Tue, 10 Jan 2023 13:41:37 +0000 (14:41 +0100)]
fix #4435: devices list: avoid error for undefined value
When $d->{'pci_bridge'}->{devices} is undef, @-dereferencing it will
die with:
> Can't use an undefined value as an ARRAY reference
This can happen (at least) when the VM is in 'prelaunch' state. The
QAPI definition for '@PciBridgeInfo' also declares the 'devices'
member as optional.
Before commit 721624b ("collect device list for nested pci-bridges"),
there was no issue, because $d->{'pci_bridge'}->{devices} was used in
foreach, so auto-vivified if undef.
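The defensive fix is essentially (sketch):

    # treat a missing 'devices' member as an empty list instead of dying
    my $bridge_devices = $d->{'pci_bridge'}->{devices} // [];
    for my $dev (@$bridge_devices) {
        # ... collect nested devices as before ...
    }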
Fixes: f721624b ("collect device list for nested pci-bridges")
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Stefan Hanreich [Wed, 21 Dec 2022 16:51:10 +0000 (17:51 +0100)]
rollback: Only create start task with --start if VM is not running
When rolling back to the snapshot of a VM that includes RAM, the VM
gets started by the rollback task anyway, so no additional start task
is needed. Previously, when rolling back with the start parameter and
the VM snapshot included RAM, a start task was created. That task
failed because the VM had already been started by the rollback task.
Additionally, this behaviour is now documented in the description of the
start parameter.
Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
Thomas Lamprecht [Mon, 12 Dec 2022 10:35:19 +0000 (11:35 +0100)]
ovmf cmd assembly: rework now that it's in a separate method
We can now do a few things that were not really possible, or would at
least have hurt readability, while this was still mostly inline in
config2command. This shaves off quite a few lines of code.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Mon, 12 Dec 2022 10:40:22 +0000 (11:40 +0100)]
ovmf cmd assembly: reorder arguments
In preparation for reworking the new separate method for OVMF cmd
assembly, do this in a separate, very targeted commit to make it
clearer that the next reworking commit doesn't mess with our tests at
all.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the fix for the recently introduced requirement of loading the VM config while
migrating was incomplete, since the vmlist node value could already be out of
date by the time load_config is called.
extend the fallback behaviour even further, by doing the following sequence:
- try regular load_config (likely case, rename already fully processed)
- if it fails, get node from vmlist, and load_config using that
- if that fails, invalidate the PVE::Cluster cache, retry regular load_config
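a rough sketch of the sequence above (helper names as used elsewhere in
qemu-server, details simplified):

    my $conf;
    eval { $conf = PVE::QemuConfig->load_config($vmid); };              # 1. likely case
    if ($@) {
        my $vmlist = PVE::Cluster::get_vmlist();
        my $node = $vmlist->{ids}->{$vmid}->{node};
        eval { $conf = PVE::QemuConfig->load_config($vmid, $node); };   # 2. node from vmlist
        if ($@) {
            PVE::Cluster::cfs_update();                                 # 3. invalidate cache, retry
            $conf = PVE::QemuConfig->load_config($vmid);
        }
    }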
it's not deterministic whether the rename/move of the VM config
triggered on the source side of a migration is already visible on the
target side when vm_resume is executed. check the vmlist for the node
where the config is currently located if $nocheck is set - the config
is now needed to add the forwarding DB entries to the bridge.
this fixes an issue on busier or slower clusters, where pmxcfs hasn't
yet processed the rename, and resuming would fail with an error about
the config not existing.
Fiona Ebner [Mon, 21 Nov 2022 07:25:13 +0000 (08:25 +0100)]
api: create/update vm: fix clamping cpuunits function calls
When applying the series introducing those calls, the helper was moved
to pve-common's CGroup.pm (see 07c10d5 ("cgroup: move get_cpuunits
helper from qemu-server as clamp_cpu_shares") in pve-common) instead
of pve-guest-common's GuestHelpers.pm. But these calls were not
updated.
Reported in the community forum:
https://forum.proxmox.com/threads/118267
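The corrected call sites then look roughly like this (sketch):

    # clamp to the valid cgroup v1/v2 range before storing the value
    $conf->{cpuunits} = PVE::CGroup::clamp_cpu_shares($param->{cpuunits})
        if defined($param->{cpuunits});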
Thomas Lamprecht [Thu, 17 Nov 2022 16:40:35 +0000 (17:40 +0100)]
memory hot plug: round down to nearest even phys-bits for heuristics
Mira found out that with 41 phys-bits the limit is pretty much the same
as with 40. As such odd sizes are a bit unexpected anyway, let's mask
the LSB and use that as the base; that way we're good again.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Thu, 17 Nov 2022 14:55:40 +0000 (15:55 +0100)]
memory hotplug: rework max memory handling, make phys-bits dependent
QEMU 7.1 introduced some actual checks for the max memory value in 1caab5cf86bd ("i386/pc: bounds check phys-bits against max used GPA"),
and while correct, this breaks our by-luck-working hard-coded max mem
of 4 TB for cases with smaller phys-bits address sizes, like some older
CPUs have, or like most CPU types default to if not 'host' or 'max'.
QEMU uses 40 bits per default if the CPU isn't set to host or
phys-bits is not set explicitly.
For 40 bits it seems that, depending on machine type, one has a max
possible mem of i440 -> 752 GiB or q35 -> 722 GiB. But instead of
reducing it to 704 GiB (512+128+64) in a hard-coded way, we actually
check for the bit size that will probably be used and use that to
determine the usable max memory size.
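As a simplified sketch of the heuristic (variable names and the exact
reserved margin are illustrative only):

    # mask the least significant bit so odd values (e.g. 41) fall back to the even one below
    my $phys_bits = $cpu_phys_bits // 40;   # 40 is QEMU's default without 'host'/explicit phys-bits
    $phys_bits &= ~1;

    # the usable max memory is then derived from the 2**$phys_bits address space,
    # keeping room for the PCI hole and other reserved ranges
    my $address_space_gib = 2 ** ($phys_bits - 30);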
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
will migrate the local VM 1234 to the host 123.123.123.123 using the
given API token, mapping the VMID to 4321 on the target cluster, all its
virtual NICs to the target vm bridge 'vmbr0', any volumes on storage
zfs-a to storage rbd-b, any on storage nfs-c to storage dir-d, and any
other volumes to storage zfs-e. the source VM will be stopped but remain
on the source node/cluster after the migration has finished.
entry point for the remote migration on the source side, mainly
preparing the API client that gets passed to the actual migration code
and doing some parameter parsing.
querying of the remote side's resources (like available storages, free
VMIDs, lookup of endpoint details for specific nodes, ...) should be
done by the client - see next commit with CLI example.
remote migration uses a websocket connection to a task worker running on
the target node instead of commands via SSH to control the migration.
this websocket tunnel is started earlier than the SSH tunnel, and allows
adding UNIX-socket forwarding over additional websocket connections
on-demand.
the main differences to regular intra-cluster migration are:
- source VM config and disks are only removed upon request via --delete
- shared storages are treated like local storages, since we can't
assume they are shared across clusters (with potential to extend this
by marking storages as shared)
- NBD migrated disks are explicitly pre-allocated on the target node via
tunnel command before starting the target VM instance
- in addition to storages, network bridges and the VMID itself are
transformed via a user-defined mapping
- all commands and migration data streams are sent via a WS tunnel proxy
- pending changes and snapshots are discarded on the target side (for
the time being)
no semantic changes intended, except for:
- no longer passing the main migration UNIX socket to SSH twice for
forwarding
- dropping the 'unix:' prefix in start_remote_tunnel's timeout error message
the following two endpoints are used for migration on the remote side:
POST /nodes/NODE/qemu/VMID/mtunnel
which creates and locks an empty VM config, and spawns the main qmtunnel
worker which binds to a VM-specific UNIX socket.
this worker handles JSON-encoded migration commands coming in via this
UNIX socket:
- config (set target VM config)
-- checks permissions for updating config
-- strips pending changes and snapshots
-- sets (optional) firewall config
- disk (allocate disk for NBD migration)
-- checks permission for target storage
-- returns drive string for allocated volume
- disk-import, query-disk-import, bwlimit
-- handled by PVE::StorageTunnel
- start (returning migration info)
- fstrim (via agent)
- ticket (creates a ticket for a WS connection to a specific socket)
- resume
- stop
- nbdstop
- unlock
- quit (+ cleanup)
this worker serves as a replacement for both 'qm mtunnel' and various
manual calls via SSH. the API call will return a ticket valid for
connecting to the worker's UNIX socket via a websocket connection.
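for illustration, one command/response round-trip over that socket could
look roughly like this (socket path, command parameters and response
field names are assumptions for the sketch; real access goes through the
websocket forwarding and ticket described below):

    use IO::Socket::UNIX;
    use Socket qw(SOCK_STREAM);
    use JSON qw(encode_json decode_json);

    # hypothetical socket path and command parameters
    my $sock = IO::Socket::UNIX->new(Peer => '/run/qemu-server/4321.mtunnel', Type => SOCK_STREAM)
        or die "unable to connect to tunnel socket: $!\n";

    print {$sock} encode_json({ cmd => 'bwlimit', storage => 'local-lvm' }) . "\n";

    my $response = decode_json(scalar <$sock>);
    die "tunnel command failed - $response->{msg}\n" if !$response->{success};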
gets called for connecting to a UNIX socket via websocket forwarding,
i.e. once for the main command mtunnel, and once each for the memory
migration and each NBD drive-mirror/storage migration.
access is guarded by a short-lived ticket binding the authenticated user
to the socket path. such tickets can be requested over the main mtunnel,
which keeps track of socket paths currently used by that
mtunnel/migration instance.
each command handler should check privileges for the requested action if
necessary.
both mtunnel and mtunnelwebsocket endpoints are not proxied; the
client/caller is responsible for ensuring that the passed 'node'
parameter and the endpoint handling the call match.
in case of remote migration, we use the `update_vm_api` helper for
checking permissions on the incoming config. this would also cause an
incoming cloud-init image to be overwritten, since the VM is not running
yet at this point.
provide a parameter which can be set by an incoming *remote* migration
to avoid having inconsistent cloud init images on the source and target
side.
The permission checks for editing Cloud-init drives were inconsistent:
adding a drive required the VM.Config.Disk permission, while removing
one required the VM.Config.CDROM permission. The regex
in drive_is_cloudinit needed to be adapted since the drive names have
different formats before/after they are actually generated.
Due to the regex letting names fall through before, Cloud-init drives
were being checked as disks, even though they are actually treated as
CDROM drives. Due to this, it makes more sense to check for
VM.Config.CDROM instead, while also requiring VM.Config.Cloudinit, since
generating a Cloud-init drive already generates default values that are
passed to the VM.
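The resulting check is roughly (a sketch; $rpcenv/$authuser handling as
in the other API paths):

    # require both privileges when adding or editing a cloud-init drive
    if (drive_is_cloudinit($drive)) {
        $rpcenv->check_vm_perm($authuser, $vmid, undef, ['VM.Config.CDROM', 'VM.Config.Cloudinit']);
    }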
drop get_pending_changes and simplify cloudinit_pending api call
- The forced-remove flag wasn't really used AFAICT and makes
no sense IMO.
- Whether or not we care about non-MAC changes does not
belong here, but should instead be taken into account in the
actual hotplug path recording the cloud-init state (iow.
into $cloudinit_record_changed().)
(This is not done here atm.)
- It seems much simpler to just have:
* 'old' = the old value if it's not a new value
* 'new' = the new value unless it's being deleted
* If only one of them is set it's an addition or removal.
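So an entry in the simplified pending list ends up shaped like, for
example (key names and values here are illustrative):

    my @pending = (
        { key => 'user',       old => 'root', new => 'admin' },  # changed: both set
        { key => 'nameserver', new => '192.168.1.1' },           # addition: only 'new'
        { key => 'sshkeys',    old => '...' },                   # removal: only 'old'
    );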
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
It performs schema validation (and normalization).
We only ever write values into it which came from an
already validated config, and we also add an additional
"added" key which is not covered by the schema, so this
would fail.
Simply skip it.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Hotplugging generated a cloudinit image based on old values
in order to attach the device and later update it again, but
the update was only done if cloudinit hotplug was enabled.
This is weird, let's not.
Also introduce 'apply_cloudinit_config', which also writes the
config and which, as it turns out, is currently the only thing we
actually need anyway.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Partially-revert "cloudinit: add cloudinit section for current generated config"
This partially reverts commit 95a5135dad974c7eae249cf92b62b06fe911af33.
Particularly the unprotected write to the config when
generating the cloudinit file. We leave the rest as is for
now and update the callers to deal with the config later.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
max supported queues: tx + rx = 256, so 128 combined
https://lists.gnu.org/archive/html/qemu-devel/2015-03/msg03917.html
But from the above link it also seems that x86 only supports 80 pairs
in practice, so for now "only" quadruple the limit to 64 and see if we
get user feedback requesting more.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ T: reduce from 128 to 64 and add short rationale for that ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 16 Nov 2022 11:03:49 +0000 (12:03 +0100)]
cloudinit: avoid unsafe write of VM config
there's no guarantee that we're locked here, and it also produces
unnecessary extra IO in most cases.
While at it, also avoid that a special:cloudinit section is added on
start to *every* VM, which caused another bug to trigger (see prev.
commit) and is just odd for users that aren't using cloudinit.
Note in two call sites that we may indeed need to write the config
out there on the caller side.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 16 Nov 2022 10:32:14 +0000 (11:32 +0100)]
config: fix dropping description on parsing special cloud init section
we now always write out a new cloudinit special section on start (to
be fixed) independent of whether any cloudinit drive/config is
configured or not, and thus always run into that section after a VM
was started with the new qemu-server installed, which in turn always
set the description to undef.
Fixes: 95a5135 ("cloudinit: add cloudinit section for current generated config.")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 15 Nov 2022 06:27:07 +0000 (07:27 +0100)]
add fixme comment to replace duplicate nodename cache
that function also caches the value, and it recently was changed to
be importable, so we can just import and drop this once a new enough
pve-common is available.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Sun, 13 Nov 2022 12:38:55 +0000 (13:38 +0100)]
net devs: avoid registering MAC to fdb if not static
In theory we can have a config with netX records that do not specify
a `macaddr` property; we just auto-generate one in config2cmd for
startup transitively, but don't save that explicitly back to the
config. So while we could parse the /proc/$pid/cmdline or try to get
the info from QMP (not fully straightforward), it seems rather a
hassle; especially if one has in mind that this cannot happen via the
API FWICT, as there a "deletion" *saves* a newly auto-generated value
out to the config, same with clone of a VM and restore of a backup.
So, in basically all reasonable cases we have the `macaddr` available,
but if we don't, it makes no sense to add an FDB entry for a *newly*
generated one from the parse_net call, as the VM won't use that (well,
at least if one doesn't get "lucky" and it randomly re-generates the
same as on startup). So allow telling parse_net to skip
auto-generating MACs and use that in the add-fdb-entries helper.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
net devs: register vNIC mac to FDB on start/resume
On plain VM start (no live migration), we can simply add the MAC address
to the fdb. In case of a live migration, we add the mac address
just before the resume.
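Sketched together with the previous commit, the helper looks something
like this (the helper name and the extra parse_net flag are partly
assumed, the body is simplified):

    # register each vNIC's MAC in the bridge forwarding database, but only when
    # the config contains an explicit macaddr
    sub add_nets_bridge_fdb {
        my ($conf, $vmid) = @_;

        for my $opt (keys %$conf) {
            next if $opt !~ /^net(\d+)$/;
            my $netid = $1;

            # extra flag assumed: tell parse_net not to auto-generate a missing MAC
            my $net = PVE::QemuServer::parse_net($conf->{$opt}, 1);
            next if !$net || !$net->{macaddr};

            PVE::Network::add_bridge_fdb("tap${vmid}i${netid}", $net->{macaddr});
        }
    }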
Mira Limbeck [Fri, 11 Nov 2022 15:46:35 +0000 (16:46 +0100)]
fix #4201: delete cloud-init disk on rollback
If the config doesn't contain the cloud-init disk anymore after the
rollback, we have to clean it up since otherwise no further disk can be
attached unless the one still existing on the storage is deleted.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Stefan Hanreich <s.hanreich@proxmox.com>
Tested-by: Stefan Hanreich <s.hanreich@proxmox.com>
Dominik Csapak [Fri, 11 Nov 2022 06:59:55 +0000 (07:59 +0100)]
tests: add tests for various combinations of configs for usb
q35 + usb passthrough
q35 + usb3 passthrough
q35 + usb3 passthrough with new xhci controller
old machine type + new usb config error
old machine type + q35 + new usb config error
old ostype (w2k) + new usb config error