execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.
-NOTE: It is perfectly safe if the _overall_ number of cores from all your VMs
+NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
is greater than the number of cores on the server (e.g., 4 VMs with each 4
cores on a machine with only 8 cores). In that case the host system will
balance the Qemu execution threads between your server cores, just like if you
were running a standard multithreaded application. However, {pve} will prevent
-you to assign more virtual CPU cores than physically available, as this will
+you from assigning more virtual CPU cores than physically available, as this will
only bring the performance down due to the cost of context switches.
[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^
In addition to the number of virtual cores, you can configure how much of the
host's CPU time a VM can get, and also its share relative to other VMs.
-With the *cpulimit* (`Host CPU Time') option you can limit how much CPU time the
-whole VM can use on the host. It is a floating point value representing CPU
+With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
+the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
single process would fully use one single core it would have `100%` CPU Time
usage. If a VM with four cores utilizes all its cores fully it would
cluster where all nodes have the same CPU, set the CPU type to host, as in
theory this will give your guests maximum performance.
+Meltdown / Spectre related CPU flags
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There are two CPU flags related to the Meltdown and Spectre vulnerabilities
+footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
+manually unless the selected CPU type of your VM already enables them by default.
+
+The first, called 'pcid', helps to reduce the performance impact of the Meltdown
+mitigation called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
+the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
+mechanism footnote:[PCID is now a critical performance/security feature on x86
+https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
+
+The second CPU flag is called 'spec-ctrl', which allows an operating system to
+selectively disable or restrict speculative execution in order to limit the
+ability of attackers to exploit the Spectre vulnerability.
+
+There are two requirements that need to be fulfilled in order to use these two
+CPU flags:
+
+* The host CPU(s) must support the feature and propagate it to the guest's
+ virtual CPU(s)
+* The guest operating system must be updated to a version which mitigates the
+ attacks and is able to utilize the CPU feature
+
+In order to use 'spec-ctrl', your CPU or system vendor also needs to provide a
+so-called ``microcode update'' footnote:[You can use `intel-microcode' /
+`amd-microcode' from Debian non-free if your vendor does not provide such an
+update. Note that not all affected CPUs can be updated to support spec-ctrl.]
+for your CPU.
+
+To check if the {pve} host supports PCID, execute the following command as root:
+
+----
+# grep ' pcid ' /proc/cpuinfo
+----
+
+If this command returns output, your host's CPU has support for 'pcid'.
+
+To check if the {pve} host supports spec-ctrl, execute the following command as root:
+
+----
+# grep ' spec_ctrl ' /proc/cpuinfo
+----
+
+If this command returns output, your host's CPU has support for 'spec-ctrl'.
+
+If you use `host' or another CPU type which enables the desired flags by
+default, and you updated your guest OS to make use of the associated CPU
+features, you're already set.
+
+Otherwise you need to set the desired CPU flag of the virtual CPU, either by
+editing the CPU options in the WebUI, or by setting the 'flags' property of the
+'cpu' option in the VM configuration file.
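+As a sketch, assuming a VM with the hypothetical ID 100 and the default
+`kvm64` CPU type, the resulting 'cpu' line in the VM configuration file
+(`/etc/pve/qemu-server/100.conf`) could look like this:
+
+----
+cpu: kvm64,flags=+pcid;+spec-ctrl
+----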
+
NUMA
^^^^
You can also optionally emulate a *NUMA*
as a router, reverse proxy or a busy HTTP server doing long polling.
+[[qm_cloud_init]]
+Cloud-Init Support
+~~~~~~~~~~~~~~~~~~
+
+http://cloudinit.readthedocs.io[Cloud-Init] is the de facto
+multi-distribution package that handles early initialization of a
+virtual machine instance. With Cloud-Init, one can configure network
+devices and SSH keys on the hypervisor side. When the VM starts for the
+first time, the Cloud-Init software inside the VM applies those
+settings.
+
+Many Linux distributions provide ready-to-use Cloud-Init images, mostly
+designed for 'OpenStack'. Those images also work with {pve}. While it
+may be convenient to use such ready-to-use images, we usually recommend
+preparing the images yourself. That way you know exactly what is
+installed, and you can easily customize the image for your needs.
+
+Once you have created such an image, it is best practice to convert it
+into a VM template. Creating linked clones from a VM template is very
+fast, so this is an efficient way to roll out new VM instances. You
+then only need to configure the network (and maybe SSH keys) before
+starting the new VM.
+
+We recommend using SSH key-based authentication to log in to VMs
+provisioned by Cloud-Init. It is also possible to set a password, but
+{pve} needs to store an encrypted version of that password inside the
+Cloud-Init data, so this is not as safe as SSH key-based
+authentication.
+
+{pve} generates an ISO image to pass the Cloud-Init data to the VM, so
+every Cloud-Init VM needs an assigned CDROM drive for that purpose.
+Also, many Cloud-Init images assume a serial console is present, so it
+is best to add a serial console and use it as the display for those
+VMs.
+
+
+Prepare Cloud-Init Templates
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The first step is to prepare your VM. You can basically use any VM and
+simply install the Cloud-Init packages inside the VM you want to
+prepare. On Debian/Ubuntu-based systems this is as simple as:
+
+----
+apt-get install cloud-init
+----
+
+Many distributions provide ready-to-use Cloud-Init images (provided as
+`.qcow2` files), so as an alternative you can simply download and
+import such an image. For the following example, we will use the cloud
+images provided by Ubuntu at https://cloud-images.ubuntu.com.
+
+----
+# download the image
+wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
+
+# create a new VM
+qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0
+
+# import the downloaded disk to local-lvm storage
+qm importdisk 9000 bionic-server-cloudimg-amd64.img local-lvm
+
+# finally attach the new disk to the VM as scsi drive
+qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-1
+----
+
+NOTE: Ubuntu Cloud-Init images require the `virtio-scsi-pci`
+controller type for SCSI drives.
+
+
+The next step is to configure a CDROM drive, used to pass the
+Cloud-Init data to the VM.
+
+----
+qm set 9000 --ide2 local-lvm:cloudinit
+----
+
+We want to boot directly from the Cloud-Init image, so we set the
+`bootdisk` parameter to `scsi0` and restrict the BIOS to boot from disk
+only. This speeds up booting, because the VM BIOS then skips the test
+for a bootable CDROM.
+
+----
+qm set 9000 --boot c --bootdisk scsi0
+----
+
+We also want to configure a serial console and use it as the display.
+Many Cloud-Init images rely on this, because it is a requirement for
+OpenStack images.
+
+----
+qm set 9000 --serial0 socket --vga serial0
+----
+
+Finally, it is usually a good idea to transform such a VM into a
+template. Linked clones can be created from a template, so deployment
+from VM templates is much faster than creating a full clone (copy).
+
+----
+qm template 9000
+----
+
+
+Deploy Cloud-Init Templates
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can easily deploy such a template by cloning it:
+
+----
+qm clone 9000 123 --name ubuntu2
+----
+
+Then configure the SSH public key used for authentication and the IP
+setup:
+
+----
+qm set 123 --sshkey ~/.ssh/id_rsa.pub
+qm set 123 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1
+----
+
+You can configure all Cloud-Init options using a single command. We
+only split the above example into separate commands to reduce the line
+length. Also make sure to adapt the IP setup for your environment.
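+For illustration, the two `qm set` calls above can be combined into one
+command (a sketch, using the same hypothetical VMID and addresses):
+
+----
+qm set 123 --sshkey ~/.ssh/id_rsa.pub --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1
+----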
+
+
+Cloud-Init specific Options
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+
+
+include::qm-cloud-init-opts.adoc[]
+
+
+
[[qm_usb_passthrough]]
USB Passthrough
~~~~~~~~~~~~~~~
* *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
you want the VM to be the first to be started. (We use the reverse startup
order for shutdown, so a machine with a start order of 1 would be the last to
-be shut down)
+be shut down). If multiple VMs have the same order defined on a host, they will
+additionally be ordered by 'VMID' in ascending order.
* *Startup delay*: Defines the interval between this VM start and subsequent
VM starts. E.g., set it to 240 if you want to wait 240 seconds before starting
other VMs.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the VM to be offline after issuing a shutdown command.
-By default this value is set to 60, which means that {pve} will issue a
-shutdown request, wait 60s for the machine to be offline, and if after 60s
-the machine is still online will notify that the shutdown action failed.
+By default this value is set to 180, which means that {pve} will issue a
+shutdown request and wait 180 seconds for the machine to be offline. If
+the machine is still online after the timeout it will be stopped forcefully.
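These three options map to a single 'startup' property in the VM configuration
file. A hypothetical example combining a start order of 1, a 240 second startup
delay and a 180 second shutdown timeout could look like this:

----
startup: order=1,up=240,down=180
----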
NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
'boot order' options currently. Those VMs will be skipped by the startup and
shutdown algorithm, as the HA manager itself ensures that they are started
and stopped.
Please note that machines without a Start/Shutdown order parameter will always
-start after those where the parameter is set, and this parameter only
-makes sense between the machines running locally on a host, and not
+start after those where the parameter is set. Further, this parameter can only
+be enforced between virtual machines running on the same host, not
cluster-wide.
Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:
vmdebootstrap --verbose \
- --size 10G --serial-console \
+ --size 10GiB --serial-console \
--grub --no-extlinux \
--package openssh-server \
--package avahi-daemon \