so it should increase performance when using multiple disks.
Note that backups do not currently work with *IO Thread* enabled.
+CPU
+~~~
+A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
+This CPU can then contain one or many *cores*, which are independent
+processing units. Whether you have a single CPU socket with 4 cores, or two CPU
+sockets with two cores each, is mostly irrelevant from a performance point of
+view. However some software is licensed depending on the number of sockets you
+have in your machine; in that case it makes sense to set the number of sockets
+to what the license allows you, and to increase the number of cores. +
+Increasing the number of virtual CPUs (cores and sockets) will usually provide a
+performance improvement, though this is heavily dependent on the use of the VM.
+Multithreaded applications will of course benefit from a large number of
+virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
+execution on the host system. If you're not sure about the workload of your VM,
+it is usually a safe bet to set the number of *Total cores* to 2.
+
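+For example, assuming a hypothetical VM with ID 100, the socket and core count
+could be adjusted from the command line along these lines:
+
+----
+# one socket with two cores (hypothetical VM ID and values)
+qm set 100 -sockets 1 -cores 2
+----
+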
+NOTE: It is perfectly safe to set the _overall_ number of total cores in all
+your VMs to be greater than the number of cores you have on your server (e.g.
+4 VMs with 4 total cores each, running on an 8 core machine, is fine). In that
+case the host system will balance the Qemu execution threads between your
+server cores just as if you were running a standard multithreaded application.
+However {pve} will prevent you from allocating more vCPUs to a _single_ VM than
+are physically available, as this would only bring the performance down due to
+the cost of context switches.
+
+Qemu can emulate a number of different *CPU types*, from 486 to the latest Xeon
+processors. Each new processor generation adds new features, like hardware
+assisted 3D rendering, random number generation, memory protection, etc.
+Usually you should select for your VM a processor type which closely matches the
+CPU of the host system, as it means that the host CPU features (also called _CPU
+flags_) will be available in your VMs. If you want an exact match, you can set
+the CPU type to *host*, in which case the VM will have exactly the same CPU flags
+as your host system. +
+This has a downside though. If you want to do a live migration of VMs between
+different hosts, your VM might end up on a new system with a different CPU type.
+If the new host's CPU is missing flags that were passed to the guest, the qemu
+process will stop. To remedy this Qemu also has its own CPU type *kvm64*, which
+{pve} uses by default. kvm64 is a Pentium 4 look-alike CPU type, which has a
+reduced set of CPU flags, but is guaranteed to work everywhere. +
+In short, if you care about live migration and moving VMs between nodes, leave
+the kvm64 default. If you don't care about live migration, set the CPU type to
+*host*, as in theory this will give your guests maximum performance.
+
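+As a sketch, the CPU type of a hypothetical VM 100 could be switched between
+the two settings on the command line:
+
+----
+# maximum compatibility for live migration (hypothetical VM ID)
+qm set 100 -cpu kvm64
+# or expose all host CPU flags to the guest
+qm set 100 -cpu host
+----
+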
+You can also optionally emulate a *NUMA* architecture in your VMs. The basics of
+the NUMA architecture mean that instead of having a global memory pool available
+to all your cores, the memory is spread into local banks close to each socket.
+This can bring speed improvements as the memory bus is not a bottleneck
+anymore. If your system has a NUMA architecture footnote:[if the command
+`numactl --hardware | grep available` returns more than one node, then your host
+system has a NUMA architecture] we recommend activating the option, as this
+will allow proper distribution of the VM resources on the host system. This
+option is also required in {pve} to allow hotplugging of cores and RAM to a VM.
+
+If the NUMA option is used, it is recommended to set the number of sockets to
+the number of sockets of the host system.
+
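+A minimal sketch of checking the host for NUMA and enabling the option for a
+hypothetical VM 100:
+
+----
+# more than one node means the host has a NUMA architecture
+numactl --hardware | grep available
+# enable NUMA emulation for the VM (hypothetical VM ID)
+qm set 100 -numa 1
+----
+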
+Memory
+~~~~~~
+For each VM you have the option to set a fixed size memory or to ask
+{pve} to dynamically allocate memory based on the current RAM usage of the
+host.
+
+When choosing a *fixed size memory* {pve} will simply allocate what you
+specify to your VM.
+
+// see autoballoon() in pvestatd.pm
+When choosing to *automatically allocate memory*, {pve} will make sure that the
+minimum amount you specified is always available to the VM, and if RAM usage on
+the host is below 80%, will dynamically add memory to the guest up to the
+maximum memory specified. +
+When the host becomes short on RAM, the VM will then release some memory
+back to the host, swapping out running processes if needed and starting the OOM
+killer as a last resort. The passing around of memory between host and guest is
+done via a special `balloon` kernel driver running inside the guest, which will
+grab or release memory pages from the host.
+footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]
+
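+As a rough sketch, both behaviours could be configured with `qm` (hypothetical
+VM ID 100, values in MB):
+
+----
+# fixed 4096 MB, ballooning device disabled
+qm set 100 -memory 4096 -balloon 0
+# guaranteed 2048 MB, growing up to 4096 MB when the host has spare RAM
+qm set 100 -memory 4096 -balloon 2048
+----
+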
+When multiple VMs use the autoallocate facility, it is possible to set a
+*Shares* coefficient which indicates the relative amount of the free host memory
+that each VM should take. Suppose for instance you have four VMs, three of them
+running a HTTP server and the last one is a database server. To cache more
+database blocks in the database server RAM, you would like to prioritize the
+database VM when spare RAM is available. For this you assign a Shares property
+of 3000 to the database VM, leaving the other VMs at the Shares default setting
+of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
+32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will
+get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8GB extra RAM and each HTTP
+server will get 1.6GB.
+
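+Under those assumptions, the database VM could be given three times the default
+weight roughly like this (hypothetical VM ID):
+
+----
+# raise the memory shares of the database VM (hypothetical VM ID)
+qm set 104 -shares 3000
+----
+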
+All Linux distributions released after 2010 have the balloon kernel driver
+included. For Windows OSes, the balloon driver needs to be added manually and can
+incur a slowdown of the guest, so we don't recommend using it on critical
+systems.
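+
+To check whether the driver is active inside a Linux guest, one possibility is
+to look for the `virtio_balloon` kernel module:
+
+----
+# run inside the guest; prints a line if the balloon driver is loaded
+lsmod | grep virtio_balloon
+----
+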
+// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
+
+When allocating RAM to your VMs, a good rule of thumb is to always leave 1GB
+of RAM available to the host.
+
+USB Passthrough
+~~~~~~~~~~~~~~~
+There are two different types of USB passthrough devices:
+
+* Host USB passthrough
+* SPICE USB passthrough
+
+Host USB passthrough works by giving a VM a USB device of the host.
+This can either be done via the vendor- and product-id, or
+via the host bus and port.
+
+The vendor/product-id looks like this: *0123:abcd*,
+where *0123* is the id of the vendor, and *abcd* is the id
+of the product, meaning two pieces of the same USB device
+have the same id.
+
+The bus/port looks like this: *1-2.3.4*, where *1* is the bus
+and *2.3.4* is the port path. This represents the physical
+ports of your host (depending on the internal order of the
+usb controllers).
+
+If a device is present in a VM configuration when the VM starts up,
+but the device is not present in the host, the VM can boot without problems.
+As soon as the device/port is available in the host, it gets passed through.
+
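+As an illustration, both addressing modes could look like this (hypothetical
+VM ID and device identifiers):
+
+----
+# pass through by vendor/product id
+qm set 100 -usb0 host=0123:abcd
+# or pin a specific physical port instead
+qm set 100 -usb0 host=1-2.3.4
+----
+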
+WARNING: Using this kind of USB passthrough means that you cannot move
+a VM online to another host, since the hardware is only available
+on the host the VM is currently residing on.
+
+The second type of passthrough is SPICE USB passthrough. This is useful
+if you use a SPICE client which supports it. If you add a SPICE USB port
+to your VM, you can pass through a USB device from where your SPICE client is,
+directly to the VM (for example an input device or hardware dongle).
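+
+A SPICE USB port could be added to a VM with something along these lines
+(hypothetical VM ID):
+
+----
+# add a SPICE USB port as usb1 (hypothetical VM ID)
+qm set 100 -usb1 spice
+----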
+
Managing Virtual Machines with 'qm'
------------------------------------