will migrate the local VM 1234 to the host 123.123.123.123 using the
given API token, mapping the VMID to 4321 on the target cluster, all its
virtual NICs to the target vm bridge 'vmbr0', any volumes on storage
zfs-a to storage rbd-b, any on storage nfs-c to storage dir-d, and any
other volumes to storage zfs-e. the source VM will be stopped but remain
on the source node/cluster after the migration has finished.
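for illustration, an invocation along the following lines matches that
description (a reconstruction, since the example command itself is not
part of this excerpt; token, fingerprint and exact option syntax are
placeholders/assumptions):

  qm remote-migrate 1234 4321 \
    'host=123.123.123.123,apitoken=PVEAPIToken=user@pve!migrate=<secret>,fingerprint=<fp>' \
    --target-bridge vmbr0 \
    --target-storage zfs-a:rbd-b,nfs-c:dir-d,zfs-e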
entry point for the remote migration on the source side, mainly
preparing the API client that gets passed to the actual migration code
and doing some parameter parsing.
querying of the remote side's resources (like available storages, free
VMIDs, lookup of endpoint details for specific nodes, ...) should be
done by the client - see next commit with CLI example.
remote migration uses a websocket connection to a task worker running on
the target node instead of commands via SSH to control the migration.
this websocket tunnel is started earlier than the SSH tunnel, and allows
adding UNIX-socket forwarding over additional websocket connections
on-demand.
the main differences to regular intra-cluster migration are:
- source VM config and disks are only removed upon request via --delete
- shared storages are treated like local storages, since we can't
assume they are shared across clusters (with potential to extend this
by marking storages as shared)
- NBD migrated disks are explicitly pre-allocated on the target node via
tunnel command before starting the target VM instance
- in addition to storages, network bridges and the VMID itself are
transformed via a user-defined mapping
- all commands and migration data streams are sent via a WS tunnel proxy
- pending changes and snapshots are discarded on the target side (for
the time being)
no semantic changes intended, except for:
- no longer passing the main migration UNIX socket to SSH twice for
forwarding
- dropping the 'unix:' prefix in start_remote_tunnel's timeout error message
the following two endpoints are used for migration on the remote side:
POST /nodes/NODE/qemu/VMID/mtunnel
which creates and locks an empty VM config, and spawns the main qmtunnel
worker which binds to a VM-specific UNIX socket.
this worker handles JSON-encoded migration commands coming in via this
UNIX socket:
- config (set target VM config)
-- checks permissions for updating config
-- strips pending changes and snapshots
-- sets (optional) firewall config
- disk (allocate disk for NBD migration)
-- checks permission for target storage
-- returns drive string for allocated volume
- disk-import, query-disk-import, bwlimit
-- handled by PVE::StorageTunnel
- start (returning migration info)
- fstrim (via agent)
- ticket (creates a ticket for a WS connection to a specific socket)
- resume
- stop
- nbdstop
- unlock
- quit (+ cleanup)
this worker serves as a replacement for both 'qm mtunnel' and various
manual calls via SSH. the API call will return a ticket valid for
connecting to the worker's UNIX socket via a websocket connection.
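a session on that socket could look roughly like this (a sketch; only
'cmd' and 'success' are named above, the other field names are
illustrative):

  -> { "cmd": "bwlimit", "storage": "rbd-b" }
  <- { "success": true, "bwlimit": 102400 }
  -> { "cmd": "nbdstop" }
  <- { "success": true }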
GET+WebSocket upgrade /nodes/NODE/qemu/VMID/mtunnelwebsocket
which gets called for connecting to a UNIX socket via websocket
forwarding, i.e. once for the main command mtunnel, and once each for
the memory migration and each NBD drive-mirror/storage migration.
access is guarded by a short-lived ticket binding the authenticated user
to the socket path. such tickets can be requested over the main mtunnel,
which keeps track of socket paths currently used by that
mtunnel/migration instance.
each command handler should check privileges for the requested action if
necessary.
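for example, requesting a ticket over the main tunnel and then opening
the forwarding connection might look like this (parameter and field
names are assumptions based on the description above):

  -> { "cmd": "ticket", "path": "/run/qemu-server/4321.migrate" }
  <- { "success": true, "ticket": "<short-lived ticket>" }

  GET /api2/json/nodes/NODE/qemu/4321/mtunnelwebsocket?socket=/run/qemu-server/4321.migrate&ticket=<short-lived ticket>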
both the mtunnel and mtunnelwebsocket endpoints are not proxied; the
client/caller is responsible for ensuring that the passed 'node'
parameter and the endpoint handling the call match.
in case of remote migration, we use the `update_vm_api` helper for
checking permissions on the incoming config. this would also cause an
incoming cloud-init image to be overwritten, since the VM is not running
yet at this point.
provide a parameter which can be set by an incoming *remote* migration
to avoid having inconsistent cloud-init images on the source and target
side.
The code for editing Cloud-init drives used inconsistent permission
checks: adding a drive required the VM.Config.Disk permission, while
removing one required VM.Config.CDROM. The regex in drive_is_cloudinit
needed to be adapted, since the drive names have different formats
before/after they are actually generated.
Because the regex previously let such names fall through, Cloud-init
drives were being checked as disks, even though they are actually
treated as CDROM drives. It therefore makes more sense to check for
VM.Config.CDROM instead, while also requiring VM.Config.Cloudinit,
since generating a Cloud-init drive already generates default values
that are passed to the VM.
drop get_pending_changes and simplify cloudinit_pending api call
- The forced-remove flag wasn't really used AFAICT and makes
no sense IMO.
- Whether or not we care about non-MAC changes does not
belong here, but should instead be taken into account in the
actual hotplug path recording the cloud-init state (iow.
into $cloudinit_record_changed().)
(This is not done here atm.)
- It seems much simpler to just have:
* 'old' = the old value if it's not a new value
* 'new' = the new value unless it's being deleted
* If only one of them is set it's an addition or removal.
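an illustrative sketch of the resulting entries (keys and values made
up for the example):

  # changed value: both keys present
  ide2: { old => "local:vm-100-cloudinit", new => "rbd-b:vm-100-cloudinit" }
  # addition: only 'new'
  sshkeys: { new => "ssh-ed25519 ..." }
  # removal: only 'old'
  ciuser: { old => "root" }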
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
It performs schema validation (and normalization).
We only ever write values into it which came from an
already validated config, and we also add an additional
"added" key which is not covered by the schema, so this
would fail.
Simply skip it.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Hotplugging generated a cloudinit image based on old values
in order to attach the device and later update it again, but
the update was only done if cloudinit hotplug was enabled.
This is weird, let's not.
Also introduce 'apply_cloudinit_config' which also writes the
config, which, as it turns out, is the only thing we
actually need anyway, currently.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Partially-revert "cloudinit: add cloudinit section for current generated config"
This partially reverts commit 95a5135dad974c7eae249cf92b62b06fe911af33.
Particularly the unprotected write to the config when
generating the cloudinit file. We leave the rest as is for
now and update the callers to deal with the config later.
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
max supported queues tx + rx = 256, so 128 for combined
https://lists.gnu.org/archive/html/qemu-devel/2015-03/msg03917.html
But from the above link it also seems that x86 only supports 80 pairs
in practice, so for now "only" quadruple the limit to 64 and see if we
get user feedback requesting more.
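for example, with the raised limit one could now configure (MAC and
VMID are placeholders):

  qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=32

virtio-net needs 2*N+2 MSI-X vectors for N queue pairs, so this should
end up as something like '-device virtio-net-pci,...,mq=on,vectors=66'
on the kvm command line (the exact device arguments here are
illustrative).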
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[ T: reduce from 128 to 64 and add short rationale for that ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 16 Nov 2022 11:03:49 +0000 (12:03 +0100)]
cloudinit: avoid unsafe write of VM config
there's no guarantee that we're locked here and it also produces
unnecessary extra IO in most cases.
While at it also avoid that a special:cloudinit section is added on
start to *every* VM, which caused another bug to trigger (see prev.
commit) and is just odd for users that aren't using cloudinit.
Note at two call sites that we may indeed need to write the config
out there on the caller side.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Wed, 16 Nov 2022 10:32:14 +0000 (11:32 +0100)]
config: fix dropping description on parsing special cloud init section
we now always write out a new cloudinit special section on start (to
be fixed) independent of whether any cloudinit drive/config is
configured or not, and thus always run into that section after a VM
was started with the new qemu-server installed, which in turn always
set the description to undef.
Fixes: 95a5135 ("cloudinit: add cloudinit section for current generated config.")
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Tue, 15 Nov 2022 06:27:07 +0000 (07:27 +0100)]
add fixme comment to replace duplicate nodename cache
that function also caches the value, and it recently was changed to
be importable, so we can just import and drop this once a new enough
pve-common is available.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Thomas Lamprecht [Sun, 13 Nov 2022 12:38:55 +0000 (13:38 +0100)]
net devs: avoid registering MAC to fdb if not static
In theory we can have a config with netX records that do not specify
a `macaddr` property; we just auto-generate one in config2cmd for
startup transitively, but don't save that explicitly back to the
config. So while we could parse the /proc/$pid/cmdline or try to get
the info from QMP (not fully straightforward), it seems rather a
hassle, especially if one has in mind that this cannot happen via the
API FWICT, as there a "deletion" *saves* a newly auto-generated value
out to the config, same with clone of a VM and restore of a backup.
So, in basically all reasonable cases we got the `macaddr` available,
but if we don't, it makes no sense to add an FDB entry for one *newly*
generated by the parse_net call, as the VM won't use that (well, at
least if one doesn't get "lucky" and it randomly re-generates the same
as on startup). So allow telling parse_net to skip auto-generating
MACs and use that in the add-fdb-entries helper.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
net devs: register vNIC mac to FDB on start/resume
On plain VM start (no live migration), we can simply add the MAC
address to the FDB. In case of a live migration, we add the MAC
address just before the resume.
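the effect corresponds roughly to the following iproute2 command (tap
device name follows the usual tapVMIDiN scheme, MAC is a placeholder):

  bridge fdb add AA:BB:CC:DD:EE:FF dev tap100i0 master static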
Mira Limbeck [Fri, 11 Nov 2022 15:46:35 +0000 (16:46 +0100)]
fix #4201: delete cloud-init disk on rollback
If the config doesn't contain the cloud-init disk anymore after the
rollback, we have to clean it up since otherwise no further disk can be
attached unless the one still existing on the storage is deleted.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
Reviewed-by: Stefan Hanreich <s.hanreich@proxmox.com>
Tested-by: Stefan Hanreich <s.hanreich@proxmox.com>
Dominik Csapak [Fri, 11 Nov 2022 06:59:55 +0000 (07:59 +0100)]
tests: add tests for various combinations of configs for usb
q35 + usb passthrough
q35 + usb3 passthrough
q35 + usb3 passthrough with new xhci controller
old machine type + new usb config error
old machine type + q35 + new usb config error
old ostype (w2k) + new usb config error
Dominik Csapak [Thu, 10 Nov 2022 14:35:58 +0000 (15:35 +0100)]
fix #3271: USB: allow usb hotplugging for modern guests
same as with the extended support for more usb devices, allow
hotplugging for guests that can use the qemu-xhci controller, which
requires a machine type >= 7.1 and an ostype of l26 or windows > 7
if no usb device was passed through on startup, dynamically add
the xhci controller (and remove it if the last usb device is unplugged)
so that live migration is still possible
much of the usb hotplug code was already there, but it still needed
a few adaptations; for example, we have to add a chardev when adding
a spice redir port (that gets automatically removed when the
usb-redir device gets removed)
since the spice devices use the id 'usbredirdevX' instead of 'usbX',
we have to map between the two manually
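for example, on a suitable guest (machine version >= 7.1, ostype l26)
a device can now be attached and detached while the VM is running
(vendor/product id is a placeholder):

  qm set 100 --usb0 host=0951:1666
  qm set 100 --delete usb0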
Dominik Csapak [Thu, 10 Nov 2022 14:35:57 +0000 (15:35 +0100)]
USB: increase max usb devices to 14 for newer machine version and ostype
for machine versions >= 7.1 and ostype linux or windows > 7, we use the
qemu-xhci controller where we have up to 14 usable ports, so make them
available to the user
Dominik Csapak [Thu, 10 Nov 2022 14:35:56 +0000 (15:35 +0100)]
fix #4324: USB: use qemu-xhci for machine versions >= 7.1
going by reports in the forum (e.g. [0]) and semi-official qemu
information[1], we should prefer qemu-xhci over nec-usb-xhci
for compatibility purposes, we guard that behind the machine version,
so that guests with a fixed version don't suddenly have a different usb
controller after a reboot (which could potentially break some hardcoded
guest configs)
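roughly (machine type strings for illustration):

  machine: pc-q35-7.0  -> nec-usb-xhci (as before)
  machine: pc-q35-7.1  -> qemu-xhci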
Dominik Csapak [Thu, 10 Nov 2022 14:35:54 +0000 (15:35 +0100)]
USB: print_usbdevice_full: error out on invalid configuration
should not happen normally, but an inattentive user of that function
may forget to check the validity of the parsed device, so err
on the safe side here
Daniel Bowder [Fri, 1 Jul 2022 00:09:45 +0000 (17:09 -0700)]
fix #3593: add affinity to qemu
Reuse PVE::CpuSet to validate the cpuset formatting.
Add a new qemu property called 'affinity' to store the cpuset.
Push a taskset command in front of kvm if 'affinity' is set.
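for example (a sketch; the exact taskset invocation is not spelled out
here):

  qm set 100 --affinity 0-3,8

would result in a kvm command line along the lines of

  taskset --cpu-list 0-3,8 /usr/bin/kvm -id 100 ...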
Signed-off-by: Daniel Bowder <daniel@bowdernet.com>
pci: make mediated device sysfs path independent of PCI id
mdevs have a host-unique UUID they are indexed with in the PCI-id
independent `/sys/bus/mdev/devices/<uuid>` path, so there is no need
to go through the PCI id for them.
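so cleanup can address the device directly by its UUID, e.g. (UUID is
a placeholder):

  echo 1 > /sys/bus/mdev/devices/10000000-0000-0000-0000-000000000100/remove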
vm start/stop: cleanup passed-through pci devices in more situations
if the preparing of PCI devices or the start of the VM fails, we need
to cleanup the PCI devices (reservations *and* mdevs), or else it
might happen that there are leftovers which must be manually removed.
to include also mdevs now, refactor the cleanup code from
'vm_stop_cleanup' into its own function, and call that instead of
only 'remove_pci_reservation'
also simplifies the code, such that it now removes all PCI ids
reserved for that VMID, since we cannot have multiple VMs with the
same VMID anyway
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fiona Ebner [Fri, 7 Oct 2022 12:41:50 +0000 (14:41 +0200)]
api: create/update vm: clamp cpuunits value
While the clamping already happens before setting the actual systemd
CPU{Shares, Weight}, it can be done here too, to avoid writing new
out-of-range values into the config.
Can't use a validator enforcing this because existing out-of-range
values should not become errors upon parsing the config.
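as a sketch of the effect, assuming the systemd ranges (CPUShares
2..262144 on cgroup v1, CPUWeight 1..10000 on cgroup v2):

  # on a cgroup v2 host
  qm set 100 --cpuunits 500000   # stored as 10000, not 500000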
set aliases for the previous ones for backward compat.
There's still cleanup potential, e.g., for snapshots, but to do that
nicely we may need (or want) to extend CLIHandler to accept commands
without fixed params also on the command group itself.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fiona Ebner [Thu, 27 Oct 2022 07:13:45 +0000 (09:13 +0200)]
fix #4099: disable io_uring for virtual disks on CIFS storages
Since kernel 5.15, there is an issue with io_uring when used in
combination with CIFS [0]. Unfortunately, the kernel developers did
not suggest any way to resolve the issue and didn't comment on my
proposed one. So for now, just disable io_uring when the storage is
CIFS, like is done for other storage types that had problematic
interactions.
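in practice that means such disks get their aio mode picked as if
io_uring wasn't available, e.g. aio=native with cache=none and
aio=threads otherwise (the exact fallback choice is an assumption, the
message above only states that io_uring gets disabled).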
It is rather easy to reproduce when writing large amounts of data
within the VM. I used
dd if=/dev/urandom of=file bs=1M count=1000
to reproduce it consistently, but your mileage may vary.
Some forum reports about users running into the issue [1][2][3].
qmeventd: rework 'forced_cleanup' handling and set timeout to 60s
currently, the 'forced_cleanup' (sending SIGKILL to the qemu process)
is intended to be triggered 5 seconds after sending the initial
shutdown signal (SIGTERM), which is sadly not enough for some setups.
Accidentally, it could even be triggered earlier than 5 seconds, if a
SIGALRM triggers in the timespan directly before setting it again.
Also, this approach means that depending on when machines are shut
down, their forced cleanup may happen after 5 seconds, or any time
after, if new vms are shut off in the meantime.
Improve this situation by reworking the way we deal with this cleanup.
We save the pidfd and time incl. timeout in the Client, and set a
timeout of 10 seconds for 'epoll_wait', which will then trigger a
forced_cleanup.
Remove entries from the forced_cleanup list when that entry is killed,
or when the normal cleanup took place.
To improve the shutdown behaviour, increase the default timeout to 60
seconds, which should be enough, but add a commandline toggle where
users can set it to a different value.
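usage could then look like this (the option name is an assumption, the
socket path is the usual one):

  qmeventd --timeout 120 /var/run/qmeventd.sock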
if the preparing of pci devices or the start of the vm fails, we need
to cleanup the pci devices (reservations *and* mdevs), or else
it might happen that there are leftovers which must be manually removed.
to include also mdevs now, refactor the cleanup code from 'vm_stop_cleanup'
into its own function, and call that instead of only 'remove_pci_reservation'
also print the errors of the cleanup steps with 'warn', otherwise we
might discard important errors
Thomas Lamprecht [Fri, 16 Sep 2022 10:56:29 +0000 (12:56 +0200)]
qmp client: increase default fallback timeout to 5s
allowing slower or overloaded systems a higher chance to finish
commands while not being too long to be problematic for sync api calls
with their 30s total budget
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>