+If you require a delay between the host boot and the booting of the first VM,
+see the section on xref:first_guest_boot_delay[Proxmox VE Node Management].
+
+
+[[qm_qemu_agent]]
+QEMU Guest Agent
+~~~~~~~~~~~~~~~~
+
+The QEMU Guest Agent is a service which runs inside the VM, providing a
+communication channel between the host and the guest. It is used to exchange
+information and allows the host to issue commands to the guest.
+
+For example, the IP addresses in the VM summary panel are fetched via the guest
+agent.
+
+Another example is starting a backup, where the guest is told via the guest
+agent to sync outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.
+
+For the guest agent to work properly, the following steps must be taken:
+
+* install the agent in the guest and make sure it is running
+* enable the communication via the agent in {pve}
+
+Install Guest Agent
+^^^^^^^^^^^^^^^^^^^
+
+For most Linux distributions, the guest agent is available. The package is
+usually named `qemu-guest-agent`.
+
+For Windows, it can be installed from the
+https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
+VirtIO driver ISO].
+
+[[qm_qga_enable]]
+Enable Guest Agent Communication
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Communication from {pve} with the guest agent can be enabled in the VM's
+*Options* panel. A fresh start of the VM is necessary for the changes to take
+effect.
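+
+For example, the agent can also be enabled on the CLI:
+
+----
+# qm set <vmid> --agent enabled=1
+----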
+
+[[qm_qga_auto_trim]]
+Automatic TRIM Using QGA
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+It is possible to enable the 'Run guest-trim' option. With this enabled,
+{pve} will issue a trim command to the guest after the following
+operations that have the potential to write out zeros to the storage:
+
+* moving a disk to another storage
+* live migrating a VM to another node with local storage
+
+On a thin provisioned storage, this can help to free up unused space.
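+
+The 'Run guest-trim' option corresponds to the `fstrim_cloned_disks` flag of
+the VM's `agent` property, so it can, for example, also be set on the CLI:
+
+----
+# qm set <vmid> --agent enabled=1,fstrim_cloned_disks=1
+----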
+
+NOTE: There is a caveat with ext4 on Linux, because it uses an in-memory
+optimization to avoid issuing duplicate TRIM requests. Since the guest doesn't
+know about the change in the underlying storage, only the first guest-trim will
+run as expected. Subsequent ones, until the next reboot, will only consider
+parts of the filesystem that changed since then.
+
+[[qm_qga_fsfreeze]]
+Filesystem Freeze & Thaw on Backup
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By default, guest filesystems are synced via the 'fs-freeze' QEMU Guest Agent
+command when a backup is performed, to provide consistency.
+
+On Windows guests, some applications might handle consistent backups themselves
+by hooking into the Windows VSS (Volume Shadow Copy Service) layer; an
+'fs-freeze' might then interfere with that. For example, it has been observed
+that calling 'fs-freeze' with some SQL Servers triggers VSS to call the SQL
+Writer VSS module in a mode that breaks the SQL Server backup chain for
+differential backups.
+
+For such setups you can configure {pve} to not issue a freeze-and-thaw cycle on
+backup by setting the `freeze-fs-on-backup` QGA option to `0`. This can also be
+done via the GUI with the 'Freeze/thaw guest filesystems on backup for
+consistency' option.
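+
+For example, on the CLI:
+
+----
+# qm set <vmid> --agent enabled=1,freeze-fs-on-backup=0
+----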
+
+IMPORTANT: Disabling this option can potentially lead to backups with
+inconsistent filesystems and should therefore only be done if you know what you
+are doing.
+
+Troubleshooting
+^^^^^^^^^^^^^^^
+
+.VM does not shut down
+
+Make sure the guest agent is installed and running.
+
+Once the guest agent is enabled, {pve} will send power commands like
+'shutdown' via the guest agent. If the guest agent is not running, commands
+cannot be executed properly and the shutdown command will run into a timeout.
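+
+You can verify that the agent is reachable from the host, for example with:
+
+----
+# qm agent <vmid> ping
+----
+
+If the agent responds, the command returns without error. Inside a Linux guest,
+you can check the service itself with `systemctl status qemu-guest-agent`.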
+
+[[qm_spice_enhancements]]
+SPICE Enhancements
+~~~~~~~~~~~~~~~~~~
+
+SPICE Enhancements are optional features that can improve the remote viewer
+experience.
+
+To enable them via the GUI go to the *Options* panel of the virtual machine. Run
+the following command to enable them via the CLI:
+
+----
+# qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
+----
+
+NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
+must be set to SPICE (qxl).
+
+Folder Sharing
+^^^^^^^^^^^^^^
+
+Share a local folder with the guest. The `spice-webdavd` daemon needs to be
+installed in the guest. It makes the shared folder available through a local
+WebDAV server located at http://localhost:9843.
+
+For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded
+from the
+https://www.spice-space.org/download.html#windows-binaries[official SPICE website].
+
+Most Linux distributions have a package called `spice-webdavd` that can be
+installed.
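+
+On Debian-based guests, for example, it can be installed with:
+
+----
+# apt install spice-webdavd
+----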
+
+To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
+Select the folder to share and then enable the checkbox.
+
+NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
+
+CAUTION: Experimental! Currently this feature does not work reliably.
+
+Video Streaming
+^^^^^^^^^^^^^^^
+
+Fast refreshing areas are encoded into a video stream. Two options exist:
+
+* *all*: Any fast refreshing area will be encoded into a video stream.
+* *filter*: Additional filters are used to decide if video streaming should be
+ used (currently only small window surfaces are skipped).
+
+No general recommendation can be given on whether to enable video streaming,
+or which option to choose. Your mileage may vary depending on the specific
+circumstances.
+
+Troubleshooting
+^^^^^^^^^^^^^^^
+
+.Shared folder does not show up
+
+Make sure the WebDAV service is enabled and running in the guest. On Windows it
+is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be
+different depending on the distribution.
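+
+On a Linux guest, you can check the service state, for example with (adjust the
+service name if your distribution uses a different one):
+
+----
+# systemctl status spice-webdavd
+----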
+
+If the service is running, check the WebDAV server by opening
+http://localhost:9843 in a browser in the guest.
+
+It can help to restart the SPICE session.
+
+[[qm_migration]]
+Migration
+---------
+
+[thumbnail="screenshot/gui-qemu-migrate.png"]
+
+If you have a cluster, you can migrate your VM to another host with
+
+----
+# qm migrate <vmid> <target>
+----
+
+There are generally two mechanisms for this:
+
+* Online Migration (aka Live Migration)
+* Offline Migration
+
+Online Migration
+~~~~~~~~~~~~~~~~
+
+If your VM is running and no locally bound resources are configured (such as
+devices that are passed through), you can initiate a live migration with the
+`--online` flag in the `qm migrate` command invocation. The web interface
+defaults to live migration when the VM is running.
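+
+For example, on the CLI:
+
+----
+# qm migrate <vmid> <target> --online
+----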
+
+How it works
+^^^^^^^^^^^^
+
+Online migration first starts a new QEMU process on the target host with the
+'incoming' flag, which performs only basic initialization with the guest vCPUs
+still paused and then waits for the guest memory and device state data streams
+of the source Virtual Machine.
+All other resources, such as disks, are either shared or have already been sent
+before the runtime state migration of the VM begins, so only the memory content
+and device state remain to be transferred.
+
+Once this connection is established, the source begins asynchronously sending
+the memory content to the target. If the guest memory on the source changes,
+those sections are marked dirty and another pass is made to send the guest
+memory data.
+This loop is repeated until the data difference between the running source VM
+and the incoming target VM is small enough to be sent in a few milliseconds. At
+that point, the source VM is paused completely, the remaining data is sent to
+the target, and the target VM's CPU is unpaused to make it the new running VM.
+This all happens in well under a second, without a user or program noticing the
+pause.
+
+Requirements
+^^^^^^^^^^^^
+
+For Live Migration to work, some requirements must be met:
+
+* The VM has no local resources that cannot be migrated. For example,
+ PCI or USB devices that are passed through currently block live-migration.
+  Local disks, on the other hand, can be migrated by sending them to the target
+ just fine.
+* The hosts are located in the same {pve} cluster.
+* The hosts have a working (and reliable) network connection between them.
+* The target host must have the same or higher versions of the
+  {pve} packages. Although it can sometimes work the other way around, this
+  cannot be guaranteed; see the version check example after this list.
+* The hosts have CPUs from the same vendor with similar capabilities. A
+  different vendor *might* work depending on the actual models and the VM's
+  configured CPU type, but it cannot be guaranteed - so please test before
+  deploying such a setup in production.
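+
+To compare the installed package versions of two nodes, you can, for example,
+run the following on each of them:
+
+----
+# pveversion -v
+----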
+
+Offline Migration
+~~~~~~~~~~~~~~~~~
+
+If you have local resources, you can still migrate your VMs offline as long as
+all disks are on storages that are defined on both hosts.
+Migration then copies the disks to the target host over the network, as with
+online migration. Note that any hardware passthrough configuration may need to
+be adapted to the device location on the target host.
+
+// TODO: mention hardware map IDs as better way to solve that, once available
+
+[[qm_copy_and_clone]]
+Copies and Clones
+-----------------
+
+[thumbnail="screenshot/gui-qemu-full-clone.png"]
+
+VM installation is usually done using an installation medium (CD-ROM)
+from the operating system vendor. Depending on the OS, this can be a
+time-consuming task one might want to avoid.
+
+An easy way to deploy many VMs of the same type is to copy an existing
+VM. We use the term 'clone' for such copies, and distinguish between
+'linked' and 'full' clones.
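+
+For example, to create a full clone of a VM on the CLI (the IDs and name are
+placeholders; omit `--full` to create a linked clone from a template):
+
+----
+# qm clone 100 999 --full --name cloned-vm
+----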
+
+Full Clone::
+
+The result of such a copy is an independent VM. The
+new VM does not share any storage resources with the original.
++
+
+It is possible to select a *Target Storage*, so one can use this to
+migrate a VM to a totally different storage. You can also change the
+disk image *Format* if the storage driver supports several formats.
++
+
+NOTE: A full clone needs to read and copy all VM image data. This is
+usually much slower than creating a linked clone.
++
+
+Some storage types allow copying a specific *Snapshot*, which
+defaults to the 'current' VM data. This also means that the final copy
+never includes any additional snapshots from the original VM.
+
+
+Linked Clone::
+
+Modern storage drivers support a way to generate fast linked
+clones. Such a clone is a writable copy whose initial contents are the
+same as the original data. Creating a linked clone is nearly
+instantaneous, and initially consumes no additional space.
++
+
+They are called 'linked' because the new image still refers to the
+original. Unmodified data blocks are read from the original image, but
+modifications are written (and afterwards read) from a new
+location. This technique is called 'Copy-on-write'.
++
+
+This requires that the original volume is read-only. With {pve} one
+can convert any VM into a read-only <<qm_templates, Template>>. Such
+templates can later be used to create linked clones efficiently.
++
+
+NOTE: You cannot delete an original template while linked clones
+exist.
++
+
+It is not possible to change the *Target storage* for linked clones,
+because this is a storage internal feature.
+
+
+The *Target node* option allows you to create the new VM on a
+different node. The only restriction is that the VM must be on shared
+storage, and that storage must also be available on the target node.
+
+To avoid resource conflicts, all network interface MAC addresses get
+randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
+setting.
+
+
+[[qm_templates]]
+Virtual Machine Templates
+-------------------------
+
+One can convert a VM into a Template. Such templates are read-only,
+and you can use them to create linked clones.
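+
+For example, on the CLI:
+
+----
+# qm template <vmid>
+----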
+
+NOTE: It is not possible to start templates, because this would modify
+the disk images. If you want to change the template, create a linked
+clone and modify that.
+
+VM Generation ID
+----------------
+
+{pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
+'vmgenid' Specification
+https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
+for virtual machines.
+This can be used by the guest operating system to detect any event resulting
+in a time shift, for example, restoring a backup or a snapshot rollback.
+
+When creating new VMs, a 'vmgenid' will be automatically generated and saved
+in its configuration file.
+
+To create and add a 'vmgenid' to an already existing VM one can pass the
+special value `1' to let {pve} autogenerate one, or manually set the 'UUID'
+footnote:[Online GUID generator http://guid.one/] by using it as the value, for
+example:
+
+----
+# qm set VMID -vmgenid 1
+# qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
+----
+
+NOTE: The initial addition of a 'vmgenid' device to an existing VM may have
+the same effects as a snapshot rollback, backup restore, etc., since the VM
+can interpret this as a generation change.
+
+In the rare case that the 'vmgenid' mechanism is not wanted, one can pass `0' for
+its value on VM creation, or retroactively delete the property in the
+configuration with:
+
+----
+# qm set VMID -delete vmgenid
+----
+
+The most prominent use case for 'vmgenid' are newer Microsoft Windows
+operating systems, which use it to avoid problems in time-sensitive or
+replicated services (such as databases or domain controllers
+footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
+on snapshot rollback, backup restore or a whole VM clone operation.
+
+[[qm_import_virtual_machines]]
+Importing Virtual Machines
+--------------------------
+
+Importing existing virtual machines from foreign hypervisors or other {pve}
+clusters can be achieved through various methods; the most common ones are:
+
+* Using the native import wizard, which utilizes the 'import' content type, such
+ as provided by the ESXi special storage.
+* Performing a backup on the source and then restoring on the target. This
+ method works best when migrating from another {pve} instance.
+* Using the OVF-specific import command of the `qm` command-line tool.
+
+If you import VMs to {pve} from other hypervisors, it's recommended to
+familiarize yourself with the
+https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Concepts[concepts of {pve}].
+
+Import Wizard
+~~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-import-wizard-general.png"]
+
+{pve} provides an integrated VM importer using the storage plugin system for
+native integration into the API and web-based user interface. You can use this
+to import the VM as a whole, with most of its config mapped to {pve}'s config
+model and reduced downtime.
+
+NOTE: The import wizard was added during the {pve} 8.2 development cycle and is
+in tech preview state. While it is already promising and works stably, it is
+still under active development, focusing on adding other import sources, like
+for example OVF/OVA files, in the future.
+
+To use the import wizard you have to first set up a new storage for an import
+source. You can do so on the web interface under _Datacenter -> Storage -> Add_.
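+
+This can also be done on the CLI. The following is a sketch for an ESXi import
+source, assuming the `esxi` storage type and placeholder values (check
+`pvesm help add` for the exact options available in your version):
+
+----
+# pvesm add esxi my-esxi-source --server 192.0.2.10 --username root
+----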
+
+Then you can select the new storage in the resource tree and use the 'Virtual
+Guests' content tab to see all available guests that can be imported.
+
+[thumbnail="screenshot/gui-import-wizard-advanced.png"]
+
+Select one and use the 'Import' button (or double-click) to open the import
+wizard. You can modify a subset of the available options here and then start the
+import. Please note that you can do more advanced modifications after the import
+has finished.
+
+TIP: The import wizard is currently (2024-03) available for ESXi and has been
+tested with ESXi versions 6.5 through 8.0. Note that guests using vSAN storage
+cannot be imported directly; their disks must first be moved to another
+storage. While it is possible to use a vCenter as the import source, performance
+is dramatically degraded (5 to 10 times slower).
+
+For a step-by-step guide and tips for how to adapt the virtual guest to the new
+hypervisor, see our
+https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Migration[migrate to {pve}
+wiki article].
+
+Import OVF/OVA Through CLI
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A VM export from a foreign hypervisor usually takes the form of one or more disk
+images, with a configuration file describing the settings of the VM (RAM,
+number of cores). +
+The disk images can be in the vmdk format, if the disks come from
+VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
+The most popular configuration format for VM exports is the OVF standard, but in
+practice interoperation is limited because many settings are not implemented in
+the standard itself, and hypervisors export the supplementary information
+in non-standard extensions.
+
+Besides the problem of format, importing disk images from other hypervisors
+may fail if the emulated hardware changes too much from one hypervisor to
+another. Windows VMs are particularly affected by this, as the OS is very
+picky about any changes to its hardware. This problem may be solved by
+installing the MergeIDE.zip utility available from the Internet before exporting
+and choosing a hard disk type of *IDE* before booting the imported Windows VM.
+
+Finally there is the question of paravirtualized drivers, which improve the
+speed of the emulated system and are specific to the hypervisor.
+GNU/Linux and other free Unix OSes have all the necessary drivers installed by
+default and you can switch to the paravirtualized drivers right after importing
+the VM. For Windows VMs, you need to install the Windows paravirtualized
+drivers by yourself.
+
+GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
+that we cannot guarantee a successful import/export of Windows VMs in all
+cases due to the problems above.
+
+Step-by-step example of a Windows OVF import
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Microsoft provides
+https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
+to get started with Windows development. We are going to use one of these
+to demonstrate the OVF import feature.
+
+Download the Virtual Machine zip
+++++++++++++++++++++++++++++++++
+
+After reviewing the user agreement, choose the _Windows 10
+Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.
+
+Extract the disk image from the zip
++++++++++++++++++++++++++++++++++++
+
+Using the `unzip` utility or any archiver of your choice, unpack the zip,
+and copy via ssh/scp the ovf and vmdk files to your {pve} host.
+
+Import the Virtual Machine
+++++++++++++++++++++++++++
+
+This will create a new virtual machine, using cores, memory and
+VM name as read from the OVF manifest, and import the disks to the +local-lvm+
+storage. You have to configure the network manually.
+
+----
+# qm importovf 999 WinDev1709Eval.ovf local-lvm
+----
+
+The VM is ready to be started.
+
+Adding an external disk image to a Virtual Machine
+++++++++++++++++++++++++++++++++++++++++++++++++++
+
+You can also add an existing disk image to a VM, either coming from a
+foreign hypervisor, or one that you created yourself.
+
+Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:
+
+ vmdebootstrap --verbose \
+ --size 10GiB --serial-console \
+ --grub --no-extlinux \
+ --package openssh-server \
+ --package avahi-daemon \
+ --package qemu-guest-agent \
+ --hostname vm600 --enable-dhcp \
+ --customize=./copy_pub_ssh.sh \
+ --sparse --image vm600.raw
+
+You can now create a new target VM, importing the image to the storage `pvedir`
+and attaching it to the VM's SCSI controller:
+
+----
+# qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
+ --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
+ --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
+----
+
+The VM is ready to be started.
+
+
+ifndef::wiki[]
+include::qm-cloud-init.adoc[]
+endif::wiki[]
+
+ifndef::wiki[]
+include::qm-pci-passthrough.adoc[]
+endif::wiki[]
+
+Hookscripts
+-----------
+
+You can add a hook script to VMs with the config property `hookscript`.
+
+----
+# qm set 100 --hookscript local:snippets/hookscript.pl
+----
+
+It will be called during various phases of the guest's lifetime.
+For an example and documentation see the example script under
+`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
+
+[[qm_hibernate]]
+Hibernation
+-----------
+
+You can suspend a VM to disk with the GUI option `Hibernate` or with
+
+----
+# qm suspend ID --todisk
+----
+
+That means that the current content of the memory will be saved to disk
+and the VM gets stopped. On the next start, the memory content will be
+loaded and the VM can resume where it left off.
+
+[[qm_vmstatestorage]]
+.State storage selection
+If no target storage for the memory is given, it will be chosen automatically,
+picking the first available of the following:
+
+1. The storage `vmstatestorage` from the VM config.
+2. The first shared storage from any VM disk.
+3. The first non-shared storage from any VM disk.
+4. The storage `local` as a fallback.
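+
+To pin the state storage explicitly, set the `vmstatestorage` option, for
+example (assuming a storage named `local-zfs`):
+
+----
+# qm set <vmid> --vmstatestorage local-zfs
+----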
+
+[[resource_mapping]]
+Resource Mapping
+----------------
+
+[thumbnail="screenshot/gui-datacenter-resource-mappings.png"]
+
+When using or referencing local resources (e.g. the address of a PCI device),
+using the raw address or ID is sometimes problematic, for example:
+
+* when using HA, a different device with the same id or path may exist on the
+ target node, and if one is not careful when assigning such guests to HA
+ groups, the wrong device could be used, breaking configurations.
+
+* changing hardware can change ids and paths, so one would have to check all
+ assigned devices and see if the path or id is still correct.
+
+To handle this better, one can define cluster wide resource mappings, such that
+a resource has a cluster unique, user selected identifier which can correspond
+to different devices on different hosts. With this, HA won't start a guest with
+a wrong device, and hardware changes can be detected.
+
+Creating such a mapping can be done with the {pve} web GUI under `Datacenter`
+in the relevant tab in the `Resource Mappings` category, or on the CLI with
+
+----
+# pvesh create /cluster/mapping/<type> <options>
+----
+
+[thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
+
+Where `<type>` is the hardware type (currently either `pci` or `usb`) and
+`<options>` are the device mappings and other configuration parameters.
+
+Note that the options must include a map property with all identifying
+properties of that hardware, so that it's possible to verify the hardware did
+not change and the correct device is passed through.
+
+For example, to add a PCI device as `device1` with the path `0000:01:00.0` that
+has the device id `0001` and the vendor id `0002` on the node `node1`, and
+`0000:02:00.0` on `node2` you can add it with:
+
+----
+# pvesh create /cluster/mapping/pci --id device1 \
+ --map node=node1,path=0000:01:00.0,id=0002:0001 \
+ --map node=node2,path=0000:02:00.0,id=0002:0001
+----
+
+You must repeat the `map` parameter for each node where that device should have
+a mapping (note that you can currently only map one USB device per node per
+mapping).
+
+Using the GUI makes this much easier, as the correct properties are
+automatically picked up and sent to the API.
+
+[thumbnail="screenshot/gui-datacenter-mapping-usb-edit.png"]
+
+It's also possible for PCI devices to provide multiple devices per node with
+multiple map properties for the nodes. If such a device is assigned to a guest,
+the first free one will be used when the guest is started. The order of the
+paths given is also the order in which they are tried, so arbitrary allocation
+policies can be implemented.
+
+This is useful for devices with SR-IOV, since sometimes it is not important
+which exact virtual function is passed through.
+
+You can assign such a device to a guest either with the GUI or with
+
+----
+# qm set <vmid> -hostpci0 <name>
+----
+
+for PCI devices, or
+
+----
+# qm set <vmid> -usb0 <name>
+----
+
+for USB devices.
+
+Where `<vmid>` is the guest's ID and `<name>` is the chosen name for the created
+mapping. All usual options for passing through the devices are allowed, such as
+`mdev`.
+
+To create mappings `Mapping.Modify` on `/mapping/<type>/<name>` is necessary
+(where `<type>` is the device type and `<name>` is the name of the mapping).
+
+To use these mappings, `Mapping.Use` on `/mapping/<type>/<name>` is necessary
+(in addition to the normal guest privileges to edit the configuration).
+