thin provisioning of the disk image.
* the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
you would get when executing the `dd` command on a block device in Linux. This
- format do not support thin provisioning or snapshots by itself, requiring
+ format does not support thin provisioning or snapshots by itself, requiring
cooperation from the storage layer for these tasks. It may, however, be up to
10% faster than the *QEMU image format*. footnote:[See this benchmark for details
http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.
-NOTE: It is perfectly safe to set the _overall_ number of total cores in all
-your VMs to be greater than the number of of cores you have on your server (i.e.
-4 VMs with each 4 Total cores running in a 8 core machine is OK) In that case
-the host system will balance the Qemu execution threads between your server
-cores just like if you were running a standard multithreaded application.
-However {pve} will prevent you to allocate on a _single_ machine more vcpus than
-physically available, as this will only bring the performance down due to the
-cost of context switches.
+NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
+is greater than the number of cores on the server (e.g., 4 VMs with 4 cores
+each on a machine with only 8 cores). In that case the host system will
+balance the Qemu execution threads between your server cores, just as if you
+were running a standard multithreaded application. However, {pve} will prevent
+you from assigning more virtual CPU cores than physically available, as this will
+only bring the performance down due to the cost of context switches.
[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^
-Additional, to the count of virtual cores, you can configure how much resources
+In addition to the number of virtual cores, you can configure how many resources
a VM can get in relation to the host CPU time and also in relation to other
VMs.
-With the *cpulimit* (`Host CPU Time') option you can limit how much CPU time the
-whole VM can use on the host. It is a floating point value representing CPU
+With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
+the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
-single process would fully use one single core he would have `100%` CPU Time
+single process would fully use one single core it would have `100%` CPU Time
usage. If a VM with four cores utilizes all its cores fully it would
theoretically use `400%`. In reality the usage may be even a bit higher as Qemu
can have additional threads for VM peripherals besides the vCPU core ones.
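As a sketch, such a limit can also be applied from the shell with the `qm`
tool; the VM ID `101` used here is a placeholder:

----
# Cap VM 101 at the equivalent of two fully used host cores (200% CPU time),
# regardless of how many virtual cores are configured. VM ID 101 is an example.
qm set 101 --cpulimit 2
----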
cluster where all nodes have the same CPU, set the CPU type to host, as in
theory this will give your guests maximum performance.
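A minimal command-line sketch of selecting this CPU type, again with `101` as a
placeholder VM ID:

----
# Pass the host CPU model through to the guest. Only advisable when all
# cluster nodes have identical CPUs, otherwise live migration between
# differing models may cause problems. VM ID 101 is an example.
qm set 101 --cpu host
----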
+PCID Flag
+^^^^^^^^^
+
+The *PCID* CPU flag helps to improve performance of the Meltdown vulnerability
+footnote:[Meltdown Attack https://meltdownattack.com/] mitigation approach. In
+Linux the mitigation is called 'Kernel Page-Table Isolation (KPTI)', which
+effectively hides kernel memory from user space. Without PCID, this is an
+expensive operation footnote:[PCID is now a critical performance/security
+feature on x86
+https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
+
+There are two requirements to reduce the cost of the mitigation:
+
+* The host CPU must support PCID and propagate it to the guest's virtual CPU(s)
+* The guest Operating System must be updated to a version which mitigates the
+ attack and is able to utilize the PCID feature.
+
+To check if the {pve} host supports PCID, execute the following command as root:
+
+----
+# grep ' pcid ' /proc/cpuinfo
+----
+
+If this returns any output, your host's CPU supports PCID. If you use
+`host' as CPU type and the guest OS is able to use it, you're done.
+Otherwise you need to set the PCID CPU flag for the virtual CPU. This can be
+done by editing the CPU options through the WebUI.
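Inside the guest, one way to verify that the KPTI mitigation is active is to
read the kernel's vulnerability report, available since Linux 4.15; this is a
diagnostic sketch, and the exact output depends on your CPU and kernel:

----
# Report the Meltdown mitigation status; on a patched kernel with KPTI active
# this typically prints "Mitigation: PTI". The file is absent on kernels
# older than 4.15, in which case the fallback message is printed instead.
status=$(cat /sys/devices/system/cpu/vulnerabilities/meltdown 2>/dev/null \
    || echo "not reported by this kernel")
echo "$status"
----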
+
NUMA
^^^^
You can also optionally emulate a *NUMA*
^^^^^^^^^^^^^
Modern operating systems introduced the capability to hot-plug and, to a
-certain extent, hot-unplug CPU in a running systems. With Virtualisation we
-have even the luck that we avoid a lot of (physical) problem from real
-hardware.
-But it is still a complicated and not always well tested feature, so its use
-should be restricted to cases where its absolutely needed. Its uses can be
-replicated with other, well tested and less complicated, features, see
+certain extent, hot-unplug CPUs in a running system. Virtualisation allows us
+to avoid a lot of the (physical) problems real hardware can cause in such
+scenarios.
+Still, this is a rather new and complicated feature, so its use should be
+restricted to cases where it's absolutely needed. Most of the functionality can
+be replicated with other, well tested and less complicated, features, see
xref:qm_cpu_resource_limits[Resource Limits].
In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with fewer CPUs than this total core count, you may use the
-*vpus* setting, it denotes how many vCPUs should be plugged at VM start.
+*vcpus* setting; it denotes how many vCPUs should be plugged in at VM start.
-Currently only Linux is working OK with this feature, a kernel newer than 3.10
+Currently this feature is only supported on Linux; a kernel newer than 3.10
is needed and a kernel newer than 4.7 is recommended.
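As an illustration, a VM can be configured with four hot-pluggable cores of
which only two are plugged in at start; VM ID `101` is a placeholder:

----
# Configure 4 hot-pluggable cores, but plug in only 2 of them at VM start;
# the remaining cores can be hot-plugged later. VM ID 101 is an example.
qm set 101 --sockets 1 --cores 4 --vcpus 2
----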
You can use a udev rule as follows to automatically set new CPUs as online in
Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:
vmdebootstrap --verbose \
- --size 10G --serial-console \
+ --size 10GiB --serial-console \
--grub --no-extlinux \
--package openssh-server \
--package avahi-daemon \