+{pve} guide), you can activate the *Discard* option on a drive. With *Discard*
+set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
+https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
+marks blocks as unused after deleting files, the controller will relay this
+information to the storage, which will then shrink the disk image accordingly.
+For the guest to be able to issue _TRIM_ commands, you must either use a
+*VirtIO SCSI* (or *VirtIO SCSI Single*) controller or set the *SSD emulation*
+option on the drive. Note that *Discard* is not supported on *VirtIO Block*
+drives.
+
+If you would like a drive to be presented to the guest as a solid-state drive
+rather than a rotational hard disk, you can set the *SSD emulation* option on
+that drive. There is no requirement that the underlying storage actually be
+backed by SSDs; this feature can be used with physical media of any type.
+Note that *SSD emulation* is not supported on *VirtIO Block* drives.
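+
+For example, both options could be enabled for an existing SCSI disk from the
+command line like this (a sketch; VM ID, storage and volume name are
+placeholders):
+
+ qm set <vmid> -scsi0 <storage>:vm-<vmid>-disk-0,discard=on,ssd=1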
+
+.IO Thread
+The *IO Thread* option can only be used for disks attached to the *VirtIO*
+controller, or to the *SCSI* controller when the emulated controller type is
+*VirtIO SCSI single*.
+With this enabled, Qemu creates one I/O thread per storage controller,
+instead of a single thread for all I/O, so it increases performance when
+multiple disks are used and each disk has its own storage controller.
+Note that backups do not currently work with *IO Thread* enabled.
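+
+For example, a sketch of enabling *IO Thread* for an existing SCSI disk from
+the command line (VM ID, storage and volume name are placeholders):
+
+----
+qm set <vmid> -scsihw virtio-scsi-single
+qm set <vmid> -scsi0 <storage>:vm-<vmid>-disk-0,iothread=1
+----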
+
+
+[[qm_cpu]]
+CPU
+~~~
+
+[thumbnail="screenshot/gui-create-vm-cpu.png"]
+
+A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
+This CPU can then contain one or many *cores*, which are independent
+processing units. Whether you have a single CPU socket with 4 cores, or two CPU
+sockets with two cores is mostly irrelevant from a performance point of view.
+However some software licenses depend on the number of sockets a machine has,
+in that case it makes sense to set the number of sockets to what the license
+allows you.
+
+Increasing the number of virtual CPUs (cores and sockets) will usually provide a
+performance improvement, though this depends heavily on the VM's workload.
+Multithreaded applications will of course benefit from a large number of
+virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
+execution on the host system. If you're not sure about the workload of your VM,
+it is usually a safe bet to set the number of *Total cores* to 2.
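+
+For example, to give a VM one socket with two cores from the command line
+(VM ID is a placeholder):
+
+ qm set <vmid> -sockets 1 -cores 2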
+
+NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
+is greater than the number of cores on the server (e.g., 4 VMs each with 4
+cores on a machine with only 8 cores). In that case the host system will
+balance the Qemu execution threads between your server cores, just as if you
+were running a standard multithreaded application. However, {pve} will prevent
+you from assigning more virtual CPU cores than physically available, as this
+will only bring the performance down due to the cost of context switches.
+
+[[qm_cpu_resource_limits]]
+Resource Limits
+^^^^^^^^^^^^^^^
+
+In addition to the number of virtual cores, you can configure how many of the
+host's CPU resources a VM can get, both in relation to the host CPU time and
+in relation to other VMs.
+With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
+the whole VM can use on the host. It is a floating point value representing CPU
+time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
+single process would fully use one single core it would have `100%` CPU Time
+usage. If a VM with four cores utilizes all its cores fully it would
+theoretically use `400%`. In reality the usage may be even a bit higher as Qemu
+can have additional threads for VM peripherals besides the vCPU core ones.
+This setting can be useful if a VM should have multiple vCPUs, as it runs a few
+processes in parallel, but the VM as a whole should not be able to run all
+vCPUs at 100% at the same time. Using a specific example: let's say we have a
+VM which would profit from having 8 vCPUs, but at no time should all of those
+8 cores run at full load - as this would make the server so overloaded that
+other VMs and CTs would get too little CPU. So, we set the *cpulimit* to
+`4.0` (=400%). If all cores did the same heavy work they would each get 50% of
+a real host core's CPU time. But, if only 4 of them were doing work they could
+still get almost 100% of a real core each.
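+
+Translated to the command line, the example above could look like this (a
+sketch; VM ID is a placeholder):
+
+ qm set <vmid> -cores 8 -cpulimit 4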
+
+NOTE: VMs can, depending on their configuration, use additional threads, e.g.,
+for networking or IO operations, but also for live migration. Thus a VM can
+show up as using more CPU time than just its virtual CPUs could use. To ensure
+that a VM never uses more CPU time than its assigned virtual CPUs, set the
+*cpulimit* setting to the same value as the total core count.
+
+The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
+shares or CPU weight), controls how much CPU time a VM gets relative to other
+running VMs. It is a relative weight which defaults to `1024`; if you increase
+this for a VM it will be prioritized by the scheduler in comparison to other
+VMs with lower weight. E.g., if VM 100 has the default `1024` and VM 200 was
+changed to `2048`, the latter VM 200 would receive twice the CPU bandwidth of
+the first VM 100.
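+
+Using the VM IDs from this example, the weights could be set like this:
+
+----
+qm set 100 -cpuunits 1024
+qm set 200 -cpuunits 2048
+----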
+
+For more information see `man systemd.resource-control`; there, `CPUQuota`
+corresponds to `cpulimit` and `CPUShares` corresponds to our `cpuunits`
+setting. See its Notes section for references and implementation details.
+
+CPU Type
+^^^^^^^^
+
+Qemu can emulate a number of different *CPU types* from 486 to the latest Xeon
+processors. Each new processor generation adds new features, like hardware
+assisted 3D rendering, random number generation, memory protection, etc.
+Usually you should select for your VM a processor type which closely matches the
+CPU of the host system, as it means that the host CPU features (also called _CPU
+flags_ ) will be available in your VMs. If you want an exact match, you can set
+the CPU type to *host* in which case the VM will have exactly the same CPU flags
+as your host system.
+
+This has a downside though. If you want to do a live migration of VMs between
+different hosts, your VM might end up on a new system with a different CPU type.
+If the new host's CPU is missing flags that were passed to the guest, the qemu
+process will stop. To remedy this, Qemu also has its own CPU type *kvm64*,
+which {pve} uses by default. kvm64 is a Pentium 4 look-alike CPU type, which
+has a reduced CPU flag set, but is guaranteed to work everywhere.
+
+In short, if you care about live migration and moving VMs between nodes, leave
+the kvm64 default. If you don’t care about live migration or have a homogeneous
+cluster where all nodes have the same CPU, set the CPU type to host, as in
+theory this will give your guests maximum performance.
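+
+For example, to set the CPU type from the command line (VM ID is a
+placeholder):
+
+ qm set <vmid> -cpu host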
+
+Meltdown / Spectre related CPU flags
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There are several CPU flags related to the Meltdown and Spectre vulnerabilities
+footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
+manually unless the selected CPU type of your VM already enables them by default.
+
+There are two requirements that need to be fulfilled in order to use these
+CPU flags:
+
+* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
+* The guest operating system must be updated to a version which mitigates the
+ attacks and is able to utilize the CPU feature
+
+Otherwise you need to set the desired CPU flag of the virtual CPU, either by
+editing the CPU options in the WebUI, or by setting the 'flags' property of the
+'cpu' option in the VM configuration file.
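+
+As a sketch, such a line in the VM configuration file could look like this
+(the chosen flags are only examples, pick the ones relevant for your CPU):
+
+ cpu: kvm64,flags=+pcid;+spec-ctrl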
+
+For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
+so-called ``microcode update'' footnote:[You can use `intel-microcode' /
+`amd64-microcode' from Debian non-free if your vendor does not provide such an
+update. Note that not all affected CPUs can be updated to support spec-ctrl.]
+for your CPU.
+
+
+To check if the {pve} host is vulnerable, execute the following command as root:
+
+----
+for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
+----
+
+A community script is also available to detect if the host is still vulnerable.
+footnote:[spectre-meltdown-checker https://meltdown.ovh/]
+
+Intel processors
+^^^^^^^^^^^^^^^^
+
+* 'pcid'
++
+This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
+called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
+the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
+mechanism footnote:[PCID is now a critical performance/security feature on x86
+https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
++
+To check if the {pve} host supports PCID, execute the following command as root:
++
+----
+# grep ' pcid ' /proc/cpuinfo
+----
++
+If this does not return empty, your host's CPU has support for 'pcid'.
+
+* 'spec-ctrl'
++
+Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
+in cases where retpolines are not sufficient.
+Included by default in Intel CPU models with -IBRS suffix.
+Must be explicitly turned on for Intel CPU models without -IBRS suffix.
+Requires an updated host CPU microcode (intel-microcode >= 20180425).
++
+* 'ssbd'
++
+Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
+Must be explicitly turned on for all Intel CPU models.
+Requires an updated host CPU microcode (intel-microcode >= 20180703).
+
+
+AMD processors
+^^^^^^^^^^^^^^
+
+* 'ibpb'
++
+Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
+in cases where retpolines are not sufficient.
+Included by default in AMD CPU models with -IBPB suffix.
+Must be explicitly turned on for AMD CPU models without -IBPB suffix.
+Requires the host CPU microcode to support this feature before it can be used for guest CPUs.
+
+
+
+* 'virt-ssbd'
++
+Required to enable the Spectre v4 (CVE-2018-3639) fix.
+Not included by default in any AMD CPU model.
+Must be explicitly turned on for all AMD CPU models.
+This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
+Note that this must be explicitly enabled when using the "host" cpu model,
+because this is a virtual feature which does not exist in the physical CPUs.
+
+
+* 'amd-ssbd'
++
+Required to enable the Spectre v4 (CVE-2018-3639) fix.
+Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
+This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
+virt-ssbd should nonetheless also be exposed for maximum guest compatibility, as some kernels only know about virt-ssbd.
+
+
+* 'amd-no-ssb'
++
+Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
+Not included by default in any AMD CPU model.
+Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
+and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
+This is mutually exclusive with virt-ssbd and amd-ssbd.
+
+
+NUMA
+^^^^
+You can also optionally emulate a *NUMA*
+footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
+in your VMs. The basics of the NUMA architecture mean that instead of having a
+global memory pool available to all your cores, the memory is spread into local
+banks close to each socket.
+This can bring speed improvements as the memory bus is not a bottleneck
+anymore. If your system has a NUMA architecture footnote:[if the command
+`numactl --hardware | grep available` returns more than one node, then your host
+system has a NUMA architecture] we recommend activating the option, as this
+will allow proper distribution of the VM resources on the host system.
+This option is also required to hot-plug cores or RAM in a VM.
+
+If the NUMA option is used, it is recommended to set the number of sockets to
+the number of nodes of the host system.
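+
+For example, a sketch for a host with two NUMA nodes (VM ID and core counts
+are placeholders):
+
+ qm set <vmid> -numa 1 -sockets 2 -cores 4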
+
+vCPU hot-plug
+^^^^^^^^^^^^^
+
+Modern operating systems introduced the capability to hot-plug and, to a
+certain extent, hot-unplug CPUs in a running system. Virtualization allows us
+to avoid a lot of the (physical) problems real hardware can cause in such
+scenarios.
+Still, this is a rather new and complicated feature, so its use should be
+restricted to cases where it's absolutely needed. Most of the functionality can
+be replicated with other, well tested and less complicated, features, see
+xref:qm_cpu_resource_limits[Resource Limits].
+
+In {pve} the maximal number of plugged-in CPUs is always `cores * sockets`.
+To start a VM with less than this total core count of CPUs you may use the
+*vcpus* setting; it denotes how many vCPUs should be plugged in at VM start.
+
+Currently this feature is only supported on Linux; a kernel newer than 3.10
+is needed, and a kernel newer than 4.7 is recommended.
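+
+For example, a sketch which defines 4 cores in total but plugs in only 2 of
+them at VM start (VM ID is a placeholder):
+
+ qm set <vmid> -cores 4 -vcpus 2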
+
+You can use a udev rule as follows to automatically set new CPUs as online in
+the guest:
+
+----
+SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
+----
+
+Save this under /etc/udev/rules.d/ as a file ending in `.rules`.
+
+Note: CPU hot-remove is machine dependent and requires guest cooperation. The
+deletion command does not guarantee that CPU removal actually happens;
+typically it's a request forwarded to the guest using a target-dependent
+mechanism, e.g., ACPI on x86/amd64.
+
+
+[[qm_memory]]
+Memory
+~~~~~~
+
+For each VM you have the option to set a fixed amount of memory or to ask
+{pve} to dynamically allocate memory based on the current RAM usage of the
+host.
+
+.Fixed Memory Allocation
+[thumbnail="screenshot/gui-create-vm-memory.png"]
+
+When setting memory and minimum memory to the same amount
+{pve} will simply allocate what you specify to your VM.
+
+Even when using a fixed memory size, the ballooning device gets added to the
+VM, because it delivers useful information such as how much memory the guest
+really uses.
+In general, you should leave *ballooning* enabled, but if you want to disable
+it (e.g. for debugging purposes), simply uncheck
+*Ballooning Device* or set
+
+ balloon: 0
+
+in the configuration.
+
+.Automatic Memory Allocation
+
+// see autoballoon() in pvestatd.pm
+When setting the minimum memory lower than memory, {pve} will make sure that the
+minimum amount you specified is always available to the VM, and if RAM usage on
+the host is below 80%, will dynamically add memory to the guest up to the
+maximum memory specified.
+
+When the host is running low on RAM, the VM will then release some memory
+back to the host, swapping running processes if needed and starting the oom
+killer as a last resort. The passing around of memory between host and guest is
+done via a special `balloon` kernel driver running inside the guest, which will
+grab or release memory pages from the host.
+footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]
+
+When multiple VMs use the autoallocate facility, it is possible to set a
+*Shares* coefficient which indicates the relative amount of the free host memory
+that each VM should take. Suppose for instance you have four VMs, three of them
+running an HTTP server and the last one is a database server. To cache more
+database blocks in the database server RAM, you would like to prioritize the
+database VM when spare RAM is available. For this you assign a Shares property
+of 3000 to the database VM, leaving the other VMs to the Shares default setting
+of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
+32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will
+get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP
+server will get 1.6 GB.
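+
+As a sketch, the database VM from this example could be configured like this
+(VM ID and memory sizes are placeholders):
+
+ qm set <vmid> -memory 8192 -balloon 4096 -shares 3000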
+
+All Linux distributions released after 2010 have the balloon kernel driver
+included. For Windows OSes, the balloon driver needs to be added manually and can
+incur a slowdown of the guest, so we don't recommend using it on critical
+systems.
+// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
+
+When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
+of RAM available to the host.
+
+
+[[qm_network_device]]
+Network Device
+~~~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-create-vm-network.png"]
+
+Each VM can have many _Network interface controllers_ (NIC), of four different
+types:
+
+ * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
+ * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
+performance. Like all VirtIO devices, the guest OS should have the proper driver
+installed.
+ * the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
+only be used when emulating older operating systems (released before 2002).
+ * the *vmxnet3* is another paravirtualized device, which should only be used
+when importing a VM from another hypervisor.
+
+{pve} will generate for each NIC a random *MAC address*, so that your VM is
+addressable on Ethernet networks.
+
+The NIC you added to the VM can follow one of two different models:
+
+ * in the default *Bridged mode* each virtual NIC is backed on the host by a
+_tap device_ (a software loopback device simulating an Ethernet NIC). This
+tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
+have direct access to the Ethernet LAN on which the host is located.
+ * in the alternative *NAT mode*, each virtual NIC will only communicate with
+the Qemu user networking stack, where a built-in router and DHCP server can
+provide network access. This built-in DHCP will serve addresses in the private
+10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
+should only be used for testing. This mode is only available via CLI or the API,
+but not via the WebUI.
+
+You can also skip adding a network device when creating a VM by selecting *No
+network device*.
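+
+For example, to add a VirtIO NIC in the default bridged mode from the command
+line (VM ID and bridge are placeholders):
+
+ qm set <vmid> -net0 virtio,bridge=vmbr0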
+
+.Multiqueue
+If you are using the VirtIO driver, you can optionally activate the
+*Multiqueue* option. This option allows the guest OS to process networking
+packets using multiple virtual CPUs, providing an increase in the total number
+of packets transferred.
+
+//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
+When using the VirtIO driver with {pve}, each NIC network queue is passed to the
+host kernel, where the queue will be processed by a kernel thread spawned by the
+vhost driver. With this option activated, it is possible to pass _multiple_
+network queues to the host kernel for each NIC.
+
+//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
+When using Multiqueue, it is recommended to set it to a value equal
+to the number of Total Cores of your guest. You also need to set the number of
+multi-purpose channels on each VirtIO NIC in the VM with the ethtool command:
+
+`ethtool -L ens1 combined X`
+
+where X is the number of vCPUs of the VM.
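+
+The Multiqueue value itself is set on the NIC, for example with 4 queues for a
+guest with 4 total cores (a sketch; VM ID and bridge are placeholders):
+
+ qm set <vmid> -net0 virtio,bridge=vmbr0,queues=4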
+
+You should note that setting the Multiqueue parameter to a value greater
+than one will increase the CPU load on the host and guest systems as the
+traffic increases. We recommend setting this option only when the VM has to
+process a great number of incoming connections, such as when the VM is running
+as a router, reverse proxy or a busy HTTP server doing long polling.
+
+[[qm_display]]
+Display
+~~~~~~~
+
+QEMU can virtualize a few types of VGA hardware. Some examples are:
+
+* *std*, the default, emulates a card with Bochs VBE extensions.
+* *cirrus*, this was once the default; it emulates a very old hardware module
+with all its problems. This display type should only be used if really
+necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
+qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier
+* *vmware*, is a VMWare SVGA-II compatible adapter.
+* *qxl*, is the QXL paravirtualized graphics card. Selecting this also
+enables SPICE for the VM.
+
+You can edit the amount of memory given to the virtual GPU by setting
+the 'memory' option. This can enable higher resolutions inside the VM,
+especially with SPICE/QXL.
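+
+For example, to select the qxl display with 32 MiB of video memory (a sketch;
+VM ID and the amount are placeholders):
+
+ qm set <vmid> -vga qxl,memory=32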
+
+As the memory is reserved by the display device, selecting Multi-Monitor mode
+for SPICE (e.g., `qxl2` for dual monitors) has some implications:
+
+* Windows needs a device for each monitor, so if your 'ostype' is some
+version of Windows, {pve} gives the VM an extra device per monitor.
+Each device gets the specified amount of memory.
+
+* Linux VMs can always enable more virtual monitors, but selecting
+a Multi-Monitor mode multiplies the memory given to the device by
+the number of monitors.
+
+Selecting `serialX` as display 'type' disables the VGA output, and redirects
+the Web Console to the selected serial port. A configured display 'memory'
+setting will be ignored in that case.
+
+[[qm_usb_passthrough]]
+USB Passthrough
+~~~~~~~~~~~~~~~
+
+There are two different types of USB passthrough devices:
+
+* Host USB passthrough
+* SPICE USB passthrough
+
+Host USB passthrough works by giving a VM a USB device of the host.
+This can either be done via the vendor- and product-id, or
+via the host bus and port.
+
+The vendor/product-id looks like this: *0123:abcd*,
+where *0123* is the id of the vendor, and *abcd* is the id
+of the product, meaning two units of the same USB device model
+have the same id.
+
+The bus/port looks like this: *1-2.3.4*, where *1* is the bus
+and *2.3.4* is the port path. This represents the physical
+ports of your host (depending on the internal order of the
+USB controllers).
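+
+Both variants can be added from the command line, for example (a sketch; VM ID
+and the ids/port path are placeholders):
+
+----
+# either by vendor/product id ...
+qm set <vmid> -usb0 host=0123:abcd
+# ... or by bus/port
+qm set <vmid> -usb0 host=1-2.3.4
+----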
+
+If a device is present in a VM configuration when the VM starts up,
+but the device is not present in the host, the VM can boot without problems.
+As soon as the device/port is available in the host, it gets passed through.
+
+WARNING: Using this kind of USB passthrough means that you cannot move
+a VM online to another host, since the hardware is only available
+on the host the VM is currently residing on.
+
+The second type of passthrough is SPICE USB passthrough. This is useful
+if you use a SPICE client which supports it. If you add a SPICE USB port
+to your VM, you can pass through a USB device from where your SPICE client is,
+directly to the VM (for example an input device or hardware dongle).
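+
+For example, a SPICE USB port can be added like this (VM ID is a placeholder):
+
+ qm set <vmid> -usb0 spice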
+
+
+[[qm_bios_and_uefi]]
+BIOS and UEFI
+~~~~~~~~~~~~~
+
+In order to properly emulate a computer, QEMU needs to use a firmware.
+By default QEMU uses *SeaBIOS* for this, which is an open-source, x86 BIOS
+implementation. SeaBIOS is a good choice for most standard setups.
+
+There are, however, some scenarios in which a BIOS is not a good firmware
+to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
+http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
+In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
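+
+For example, you can switch a VM's firmware to OVMF from the command line
+(VM ID is a placeholder):
+
+ qm set <vmid> -bios ovmf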
+
+If you want to use OVMF, there are several things to consider:
+
+In order to save things like the *boot order*, there needs to be an EFI Disk.
+This disk will be included in backups and snapshots, and there can only be one.
+
+You can create such a disk with the following command:
+
+ qm set <vmid> -efidisk0 <storage>:1,format=<format>
+
+Where *<storage>* is the storage where you want to have the disk, and
+*<format>* is a format which the storage supports. Alternatively, you can
+create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
+hardware section of a VM.
+
+When using OVMF with a virtual display (without VGA passthrough),
+you need to set the client resolution in the OVMF menu (which you can reach
+with a press of the ESC button during boot), or you have to choose
+SPICE as the display type.
+
+[[qm_ivshmem]]
+Inter-VM shared memory
+~~~~~~~~~~~~~~~~~~~~~~
+
+You can add an Inter-VM shared memory device (`ivshmem`) to be able to
+share memory between the host and a guest, or between multiple guests.
+
+To add such a device, you can use `qm`: