X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=qm.adoc;h=4f9ae15f9710102bbae15da40154fc733d0f75a7;hp=947c1440c3eef647119fe5bc367bbde872843b1f;hb=7d6078845fa6a3bd308c7dc843273e56be33f315;hpb=cfd48f55a05104f391ccdf2dcd4627fa2593fb76 diff --git a/qm.adoc b/qm.adoc index 947c144..4f9ae15 100644 --- a/qm.adoc +++ b/qm.adoc @@ -193,11 +193,21 @@ As of {pve} 5.0, replication requires the disk images to be on a storage of type configured requires to skip replication for this disk image. If your storage supports _thin provisioning_ (see the storage chapter in the -{pve} guide), and your VM has a *SCSI* controller you can activate the *Discard* -option on the hard disks connected to that controller. With *Discard* enabled, -when the filesystem of a VM marks blocks as unused after removing files, the -emulated SCSI controller will relay this information to the storage, which will -then shrink the disk image accordingly. +{pve} guide), you can activate the *Discard* option on a drive. With *Discard* +set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard +https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem +marks blocks as unused after deleting files, the controller will relay this +information to the storage, which will then shrink the disk image accordingly. +For the guest to be able to issue _TRIM_ commands, you must either use a +*VirtIO SCSI* (or *VirtIO SCSI Single*) controller or set the *SSD emulation* +option on the drive. Note that *Discard* is not supported on *VirtIO Block* +drives. + +If you would like a drive to be presented to the guest as a solid-state drive +rather than a rotational hard disk, you can set the *SSD emulation* option on +that drive. There is no requirement that the underlying storage actually be +backed by SSDs; this feature can be used with physical media of any type. +Note that *SSD emulation* is not supported on *VirtIO Block* drives. 
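For illustration, here is what both options look like on a drive in the VM configuration file (the VM ID, storage name, and volume name below are hypothetical; the same settings can be made in the GUI or with `qm set`):

```
# /etc/pve/qemu-server/100.conf (excerpt, hypothetical volume)
scsi0: local-lvm:vm-100-disk-0,discard=on,ssd=1
```

With this in place, TRIM commands issued inside the guest (for example by a periodic `fstrim` run) are relayed to the underlying storage.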
.IO Thread The option *IO Thread* can only be used when using a disk with the @@ -307,56 +317,110 @@ theory this will give your guests maximum performance. Meltdown / Spectre related CPU flags ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -There are two CPU flags related to the Meltdown and Spectre vulnerabilities +There are several CPU flags related to the Meltdown and Spectre vulnerabilities footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set manually unless the selected CPU type of your VM already enables them by default. -The first, called 'pcid', helps to reduce the performance impact of the Meltdown -mitigation called 'Kernel Page-Table Isolation (KPTI)', which effectively hides -the Kernel memory from the user space. Without PCID, KPTI is quite an expensive -mechanism footnote:[PCID is now a critical performance/security feature on x86 -https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU]. - -The second CPU flag is called 'spec-ctrl', which allows an operating system to -selectively disable or restrict speculative execution in order to limit the -ability of attackers to exploit the Spectre vulnerability. - -There are two requirements that need to be fulfilled in order to use these two +There are two requirements that need to be fulfilled in order to use these CPU flags: * The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s) * The guest operating system must be updated to a version which mitigates the attacks and is able to utilize the CPU feature -In order to use 'spec-ctrl', your CPU or system vendor also needs to provide a +Otherwise you need to set the desired CPU flag of the virtual CPU, either by +editing the CPU options in the WebUI, or by setting the 'flags' property of the +'cpu' option in the VM configuration file. 
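As a sketch of the 'flags' syntax in the VM configuration file (the CPU model `kvm64` and the chosen flags are placeholders; pick the flags that match your hardware and guest, as described in the following sections):

```
# /etc/pve/qemu-server/<vmid>.conf (excerpt)
# flags are separated by ';' and prefixed with '+' (enable) or '-' (disable)
cpu: kvm64,flags=+pcid;+spec-ctrl
```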
+ +For the Spectre v1, v2, and v4 fixes, your CPU or system vendor also needs to provide a so-called ``microcode update'' footnote:[You can use `intel-microcode' / `amd-microcode' from Debian non-free if your vendor does not provide such an update. Note that not all affected CPUs can be updated to support spec-ctrl.] for your CPU. -To check if the {pve} host supports PCID, execute the following command as root: + +To check if the {pve} host is vulnerable, execute the following command as root: ---- -# grep ' pcid ' /proc/cpuinfo +for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done ---- -If this does not return empty your host's CPU has support for 'pcid'. +A community script is also available to detect if the host is still vulnerable. footnote:[spectre-meltdown-checker https://meltdown.ovh/] -To check if the {pve} host supports spec-ctrl, execute the following command as root: +Intel processors +^^^^^^^^^^^^^^^^ +* 'pcid' ++ +This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation +called 'Kernel Page-Table Isolation (KPTI)', which effectively hides +the Kernel memory from the user space. Without PCID, KPTI is quite an expensive +mechanism footnote:[PCID is now a critical performance/security feature on x86 +https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU]. ++ +To check if the {pve} host supports PCID, execute the following command as root: ++ ---- -# grep ' spec_ctrl ' /proc/cpuinfo +# grep ' pcid ' /proc/cpuinfo ---- ++ +If this does not return empty, your host's CPU has support for 'pcid'. + +* 'spec-ctrl' ++ +Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix, +in cases where retpolines are not sufficient. +Included by default in Intel CPU models with -IBRS suffix. +Must be explicitly turned on for Intel CPU models without -IBRS suffix. +Requires an updated host CPU microcode (intel-microcode >= 20180425).
++ +* 'ssbd' ++ +Required to enable the Spectre v4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model. +Must be explicitly turned on for all Intel CPU models. +Requires an updated host CPU microcode (intel-microcode >= 20180703). -If this does not return empty your host's CPU has support for 'spec-ctrl'. -If you use `host' or another CPU type which enables the desired flags by -default, and you updated your guest OS to make use of the associated CPU -features, you're already set. +AMD processors +^^^^^^^^^^^^^^ + +* 'ibpb' ++ +Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix, +in cases where retpolines are not sufficient. +Included by default in AMD CPU models with -IBPB suffix. +Must be explicitly turned on for AMD CPU models without -IBPB suffix. +Requires the host CPU microcode to support this feature before it can be used for guest CPUs. + + + +* 'virt-ssbd' ++ +Required to enable the Spectre v4 (CVE-2018-3639) fix. +Not included by default in any AMD CPU model. +Must be explicitly turned on for all AMD CPU models. +This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility. +Note that this must be explicitly enabled when using the "host" CPU model, +because this is a virtual feature which does not exist in the physical CPUs. + + +* 'amd-ssbd' ++ +Required to enable the Spectre v4 (CVE-2018-3639) fix. +Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models. +This provides higher performance than virt-ssbd; therefore, a host supporting it should always expose it to guests if possible. +virt-ssbd should nonetheless also be exposed for maximum guest compatibility, as some kernels only know about virt-ssbd. + + +* 'amd-no-ssb' ++ +Recommended to indicate the host is not vulnerable to Spectre v4 (CVE-2018-3639). +Not included by default in any AMD CPU model.
+Future CPU hardware generations will not be vulnerable to CVE-2018-3639, +and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb. +This is mutually exclusive with virt-ssbd and amd-ssbd. -Otherwise you need to set the desired CPU flag of the virtual CPU, either by -editing the CPU options in the WebUI, or by setting the 'flags' property of the -'cpu' option in the VM configuration file. NUMA ^^^^ @@ -373,7 +437,7 @@ will allow proper distribution of the VM resources on the host system. This option is also required to hot-plug cores or RAM in a VM. If the NUMA option is used, it is recommended to set the number of sockets to -the number of sockets of the host system. +the number of nodes of the host system. vCPU hot-plug ^^^^^^^^^^^^^ @@ -420,7 +484,7 @@ host. .Fixed Memory Allocation [thumbnail="screenshot/gui-create-vm-memory.png"] -When setting memory and minimum memory to the same amount +When setting memory and minimum memory to the same amount {pve} will simply allocate what you specify to your VM. Even when using a fixed memory size, the ballooning device gets added to the @@ -536,6 +600,39 @@ traffic increases. We recommend to set this option only when the VM has to process a great number of incoming connections, such as when the VM is running as a router, reverse proxy or a busy HTTP server doing long polling. +[[qm_display]] +Display +~~~~~~~ + +QEMU can virtualize a few types of VGA hardware. Some examples are: + +* *std*, the default, emulates a card with Bochs VBE extensions. +* *cirrus*, this was once the default; it emulates a very old hardware module +with all its problems. This display type should only be used if really +necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ +qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier +* *vmware*, is a VMWare SVGA-II compatible adapter. +* *qxl*, is the QXL paravirtualized graphics card.
Selecting this also +enables SPICE for the VM. + +You can edit the amount of memory given to the virtual GPU by setting +the 'memory' option. This can enable higher resolutions inside the VM, +especially with SPICE/QXL. + +As the memory is reserved by the display device, selecting Multi-Monitor mode +for SPICE (e.g., `qxl2` for dual monitors) has some implications: + +* Windows needs a device for each monitor, so if your 'ostype' is some +version of Windows, {pve} gives the VM an extra device per monitor. +Each device gets the specified amount of memory. + +* Linux VMs can always enable more virtual monitors, but selecting +a Multi-Monitor mode multiplies the memory given to the device by +the number of monitors. + +Selecting `serialX` as display 'type' disables the VGA output, and redirects +the Web Console to the selected serial port. A configured display 'memory' +setting will be ignored in that case. [[qm_usb_passthrough]] USB Passthrough @@ -606,6 +703,29 @@ you need to set the client resolution in the OVMF menu (which you can reach with a press of the ESC button during boot), or you have to choose SPICE as the display type. +[[qm_ivshmem]] +Inter-VM shared memory +~~~~~~~~~~~~~~~~~~~~~~ + +You can add an Inter-VM shared memory device (`ivshmem`), which allows one to +share memory between the host and a guest, or between multiple guests. + +To add such a device, you can use `qm`: + + qm set <vmid> -ivshmem size=32,name=foo + +Where the size is in MiB. The file will be located under +`/dev/shm/pve-shm-$name` (the default name is the vmid). + +NOTE: Currently the device will get deleted as soon as any VM using it is +shut down or stopped. Open connections will still persist, but new connections +to the exact same device cannot be made anymore. + +A use case for such a device is the Looking Glass +footnote:[Looking Glass: https://looking-glass.hostfission.com/] project, +which enables high-performance, low-latency display mirroring between +host and guest.
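As a hedged host-side sketch (VM ID 100 and the device name `foo` are placeholders), adding the device and inspecting its backing file could look like this:

```
qm set 100 -ivshmem size=32,name=foo
# while a VM using the device is running, the backing file is visible:
ls -lh /dev/shm/pve-shm-foo
```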
+ [[qm_startup_and_shutdown]] Automatic Start and Shutdown of Virtual Machines ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -732,7 +852,7 @@ migrate a VM to a totally different storage. You can also change the disk image *Format* if the storage driver supports several formats. + -NOTE: A full clone need to read and copy all VM image data. This is +NOTE: A full clone needs to read and copy all VM image data. This is usually much slower than creating a linked clone. + @@ -743,7 +863,7 @@ never includes any additional snapshots from the original VM. Linked Clone:: -Modern storage drivers supports a way to generate fast linked +Modern storage drivers support a way to generate fast linked clones. Such a clone is a writable copy whose initial contents are the same as the original data. Creating a linked clone is nearly instantaneous, and initially consumes no additional space. @@ -760,8 +880,8 @@ can convert any VM into a read-only <>). Such templates can later be used to create linked clones efficiently. + -NOTE: You cannot delete the original template while linked clones -exists. +NOTE: You cannot delete an original template while linked clones +exist. + It is not possible to change the *Target storage* for linked clones, @@ -772,7 +892,7 @@ The *Target node* option allows you to create the new VM on a different node. The only restriction is that the VM is on shared storage, and that storage is also available on the target node. -To avoid resource conflicts, all network interface MAC addresses gets +To avoid resource conflicts, all network interface MAC addresses get randomized, and we generate a new 'UUID' for the VM BIOS (smbios1) setting. @@ -791,7 +911,7 @@ clone and modify that. 
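For reference, a cloning sketch from the CLI (source VM 100, the new VM ID 999, and the name are hypothetical):

```
# create a full (independent) clone of VM 100 as VM 999
qm clone 100 999 --full --name web-clone
```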
VM Generation ID ---------------- -{pve} supports Virtual Machine Generation ID ('vmgedid') footnote:[Official +{pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official 'vmgenid' Specification https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier] for virtual machines. @@ -865,13 +985,13 @@ Step-by-step example of a Windows OVF import Microsoft provides https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads] - to get started with Windows development.We are going to use one of these + to get started with Windows development. We are going to use one of these to demonstrate the OVF import feature. Download the Virtual Machine zip ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -After getting informed about the user agreement, choose the _Windows 10 +After getting informed about the user agreement, choose the _Windows 10 Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip. Extract the disk image from the zip @@ -894,7 +1014,7 @@ The VM is ready to be started. Adding an external disk image to a Virtual Machine ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -You can also add an existing disk image to a VM, either coming from a +You can also add an existing disk image to a VM, either coming from a foreign hypervisor, or one that you created yourself. Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool: @@ -929,7 +1049,20 @@ ifndef::wiki[] include::qm-cloud-init.adoc[] endif::wiki[] +ifndef::wiki[] +include::qm-pci-passthrough.adoc[] +endif::wiki[] + +Hookscripts +~~~~~~~~~~~ + +You can add a hook script to VMs with the config property `hookscript`. + + qm set 100 -hookscript local:snippets/hookscript.pl +It will be called during various phases of the guest's lifetime. +For an example and documentation, see the example script under +`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
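To illustrate the mechanism, here is a hedged shell sketch of such a script (the phase names follow the shipped example script; the log messages are made up). {pve} invokes the script with the VMID as first and the phase as second argument:

```shell
#!/bin/sh
# Minimal hookscript sketch -- a shell stand-in for the shipped Perl example.
hook() {
    vmid="$1"
    phase="$2"
    case "$phase" in
        pre-start)  echo "VM $vmid is about to start" ;;
        post-start) echo "VM $vmid started" ;;
        pre-stop)   echo "VM $vmid will be shut down" ;;
        post-stop)  echo "VM $vmid stopped" ;;
        *)          echo "unknown phase '$phase'" >&2; return 1 ;;
    esac
}

# Run only when invoked with arguments, so the function can also be sourced.
if [ "$#" -ge 2 ]; then
    hook "$@"
fi
```

Remember to make the script executable and store it on a storage that allows the 'snippets' content type.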
Managing Virtual Machines with `qm` ------------------------------------