https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be
desired if you want to pass through PCIe hardware.
+Additionally, you can select a xref:qm_pci_viommu[vIOMMU] implementation.
Machine Version
+++++++++++++++
VM never uses more CPU time than vCPUs assigned, set the *cpulimit* to
the same value as the total core count.
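+
+For example, if a VM has four vCPUs, the following would ensure it can never
+use more CPU time than the equivalent of four full cores (the VM ID 101 is a
+placeholder):
+
+----
+# qm set 101 --cpulimit 4
+----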
-*cpuuntis*
+*cpuunits*
With the *cpuunits* option, nowadays often called CPU shares or CPU weight, you
can control how much CPU time a VM gets compared to other running VMs. It is a
of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
* 80/100 - 16 = 9GB RAM to be allocated to the VMs on top of their configured
-minimum memory amount. The database VM will benefit from 9 * 3000 / (3000 +
-1000 + 1000 + 1000) = 4.5 GB extra RAM and each HTTP server from 1.5 GB.
+minimum memory amount. The database VM will benefit from 9 * 3000 / (3000
++ 1000 + 1000 + 1000) = 4.5 GB extra RAM and each HTTP server from 1.5 GB.
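+
+Continuing the example above, the database VM's higher memory-share weight
+could be configured like this (the VM ID 201 is a placeholder):
+
+----
+# qm set 201 --shares 3000
+----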
All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
-where X is the number of the number of vCPUs of the VM.
+where X is the number of vCPUs of the VM.
To configure a Windows guest for Multiqueue install the
-https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[
-Redhat VirtIO Ethernet Adapter drivers], then adapt the NIC's configuration as
-follows. Open the device manager, right click the NIC under "Network adapters",
-and select "Properties". Then open the "Advanced" tab and select "Receive Side
-Scaling" from the list on the left. Make sure it is set to "Enabled". Next,
-navigate to "Maximum number of RSS Queues" in the list and set it to the number
-of vCPUs of your VM. Once you verified that the settings are correct, click "OK"
-to confirm them.
+https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers[Redhat VirtIO Ethernet
+Adapter drivers], then adapt the NIC's configuration as follows. Open the
+device manager, right click the NIC under "Network adapters", and select
+"Properties". Then open the "Advanced" tab and select "Receive Side Scaling"
+from the list on the left. Make sure it is set to "Enabled". Next, navigate to
+"Maximum number of RSS Queues" in the list and set it to the number of vCPUs of
+your VM. Once you have verified that the settings are correct, click "OK" to
+confirm them.
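+
+On the {pve} side, Multiqueue is enabled on the network device itself. As a
+sketch, where the VM ID, bridge name and queue count are placeholders:
+
+----
+# qm set 101 --net0 virtio,bridge=vmbr0,queues=4
+----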
You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
In order to use the clipboard feature, you must first install the
SPICE guest tools. On Debian-based distributions, this can be achieved
-by installing `spice-vdagent`. For other Operating Systems search for it
+by installing `spice-vdagent`. For other operating systems, search for it
-in the offical repositories or see: https://www.spice-space.org/download.html
+in the official repositories or see: https://www.spice-space.org/download.html
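+
+On a Debian-based guest, for example, the agent could be installed like this:
+
+----
+# apt update
+# apt install spice-vdagent
+----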
Once you have installed the spice guest tools, you can use the VNC clipboard
function (e.g. in the noVNC console panel). However, if you're using
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole VM clone operation.
-Importing Virtual Machines and disk images
-------------------------------------------
+[[qm_import_virtual_machines]]
+Importing Virtual Machines
+--------------------------
+
+Importing existing virtual machines from foreign hypervisors or other {pve}
+clusters can be achieved through various methods; the most common ones are:
+
+* Using the native import wizard, which utilizes the 'import' content type, such
+ as provided by the ESXi special storage.
+* Performing a backup on the source and then restoring on the target. This
+ method works best when migrating from another {pve} instance.
+* Using the OVF-specific import command of the `qm` command-line tool.
+
+If you import VMs to {pve} from other hypervisors, it's recommended to
+familiarize yourself with the
+https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Concepts[concepts of {pve}].
+
+Import Wizard
+~~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-import-wizard-general.png"]
+
+{pve} provides an integrated VM importer using the storage plugin system for
+native integration into the API and web-based user interface. You can use this
+to import the VM as a whole, with most of its config mapped to {pve}'s config
+model and reduced downtime.
+
+NOTE: The import wizard was added during the {pve} 8.2 development cycle and is
+in tech preview state. While it's already promising and working stably, it's
+still under active development, focusing on adding other import-sources, like
+for example OVF/OVA files, in the future.
+
+To use the import wizard, you first have to set up a new storage for an import
+source. You can do so in the web-interface under _Datacenter -> Storage -> Add_.
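+
+Alternatively, such an import source can be added on the command line with
+`pvesm`. The following is a sketch; the storage ID, host name and user are
+placeholders, and further options such as the password depend on your setup:
+
+----
+# pvesm add esxi esxi-source --server esxi.example.com --username root
+----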
+
+Then you can select the new storage in the resource tree and use the 'Virtual
+Guests' content tab to see all available guests that can be imported.
+
+[thumbnail="screenshot/gui-import-wizard-advanced.png"]
+
+Select one and use the 'Import' button (or double-click) to open the import
+wizard. You can modify a subset of the available options here and then start the
+import. Please note that you can do more advanced modifications after the import
+finished.
+
+TIP: The import wizard is currently (2024-03) available for ESXi and has been
+tested with ESXi versions 6.5 through 8.0. Note that guests using vSAN storage
+cannot be imported directly; their disks must first be moved to another
+storage. While it is possible to use a vCenter as the import source, performance
+is dramatically degraded (5 to 10 times slower).
+
+For a step-by-step guide and tips on how to adapt the virtual guest to the new
+hypervisor, see our
+https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Migration[migrate to {pve}
+wiki article].
+
+Import OVF/OVA Through CLI
+~~~~~~~~~~~~~~~~~~~~~~~~~~
-A VM export from a foreign hypervisor takes usually the form of one or more disk
+A VM export from a foreign hypervisor usually takes the form of one or more disk
images, with a configuration file describing the settings of the VM (RAM,
cases due to the problems above.
Step-by-step example of a Windows OVF import
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to demonstrate the OVF import feature.
Download the Virtual Machine zip
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++
-After getting informed about the user agreement, choose the _Windows 10
+After reviewing the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.
Extract the disk image from the zip
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++++++++
Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy via ssh/scp the ovf and vmdk files to your {pve} host.
Import the Virtual Machine
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++
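+
+The import itself is a single command; the VM ID, OVF file name and target
+storage below are placeholders, adjust them to your environment:
+
+----
+# qm importovf 999 Win10Eval.ovf local-lvm
+----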
This will create a new virtual machine, using cores, memory and
VM name as read from the OVF manifest, and import the disks to the +local-lvm+
The VM is ready to be started.
Adding an external disk image to a Virtual Machine
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++++++++++++++++++++++++
You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.
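+
+Assuming the VM with ID 600 already exists, a disk image could be imported to
+the +local-lvm+ storage like this (the image path is a placeholder):
+
+----
+# qm disk import 600 /tmp/exported-disk.vmdk local-lvm
+----
+
+The imported disk then shows up as an unused disk in the VM's hardware view,
+from where it can be attached.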
# qm shutdown 300 && qm wait 300 -timeout 40
----
+If the VM does not shut down, force-stop it and overrule any running shutdown
+tasks. As stopping a VM may incur data loss, use this command with caution.
+
+----
+# qm stop 300 -overrule-shutdown 1
+----
+
Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
'--purge', if you want to additionally remove the VM from replication jobs,