X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=qm.adoc;h=0b699e24b0b440eb6f31b5922883f2b79092994b;hb=c6e098a291471715218db3edb6b90f09b3dd8f33;hp=d6a0228f5197a90180496c6d0cd5ba297f74af57;hpb=ca8c30096d94e360c94cdb0496bd57373b92a144;p=pve-docs.git

diff --git a/qm.adoc b/qm.adoc
index d6a0228..0b699e2 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -144,12 +144,15 @@ hardware layout of the VM's virtual motherboard. You can choose between the
 default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
 https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
 chipset, which also provides a virtual PCIe bus, and thus may be desired if
-one want's to pass through PCIe hardware.
+one wants to pass through PCIe hardware.
 
 [[qm_hard_disk]]
 Hard Disk
 ~~~~~~~~~
 
+[[qm_hard_disk_bus]]
+Bus/Controller
+^^^^^^^^^^^^^^
 Qemu can emulate a number of storage controllers:
 
 * the *IDE* controller has a design which goes back to the 1984 PC/AT disk
@@ -182,6 +185,10 @@ is an older type of paravirtualized controller. It has been superseded by the
 VirtIO SCSI Controller in terms of features.
 
 [thumbnail="screenshot/gui-create-vm-hard-disk.png"]
+
+[[qm_hard_disk_formats]]
+Image Format
+^^^^^^^^^^^^
 On each controller you attach a number of emulated hard disks, which are backed
 by a file or a block device residing in the configured storage. The choice of
 a storage type will determine the format of the hard disk image. Storages which
@@ -200,6 +207,9 @@ either the *raw disk image format* or the *QEMU image format*.
 
 * the *VMware image format* only makes sense if you intend to import/export the
 disk image to other hypervisors.
 
+[[qm_hard_disk_cache]]
+Cache Mode
+^^^^^^^^^^
 Setting the *Cache* mode of the hard drive will impact how the host system will
 notify the guest systems of block write completions. The *No cache* default
 means that the guest system will be notified that a write is complete when each
@@ -215,16 +225,19 @@ As of {pve} 5.0, replication requires the disk images to be on a storage of
 type `zfspool`, so adding a disk image to other storages when the VM has
 replication configured requires skipping replication for this disk image.
 
+[[qm_hard_disk_discard]]
+Trim/Discard
+^^^^^^^^^^^^
 If your storage supports _thin provisioning_ (see the storage chapter in the
 {pve} guide), you can activate the *Discard* option on a drive. With *Discard*
 set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
 https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
 marks blocks as unused after deleting files, the controller will relay this
 information to the storage, which will then shrink the disk image accordingly.
-For the guest to be able to issue _TRIM_ commands, you must either use a
-*VirtIO SCSI* (or *VirtIO SCSI Single*) controller or set the *SSD emulation*
-option on the drive. Note that *Discard* is not supported on *VirtIO Block*
-drives.
+For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
+option on the drive. Some guest operating systems may also require the
+*SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
+only supported on guests using Linux Kernel 5.0 or higher.
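+
+As a minimal sketch (the VM ID `100`, the drive slot `scsi0`, and the volume
+`local-lvm:vm-100-disk-0` are placeholder assumptions, not fixed names),
+*Discard* can be enabled on an existing drive from the CLI, and a manual trim
+can then be run inside a Linux guest:
+
+----
+# host: re-add the existing scsi0 drive with the discard option enabled
+qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
+# guest (Linux): release unused blocks on all mounted filesystems
+fstrim -av
+----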
 
 If you would like a drive to be presented to the guest as a solid-state drive
 rather than a rotational hard disk, you can set the *SSD emulation* option on
@@ -232,14 +245,16 @@ that drive.
 There is no requirement that the underlying storage actually be backed by
 SSDs; this feature can be used with physical media of any type. Note that
 *SSD emulation* is not supported on *VirtIO Block* drives.
-.IO Thread
+
+[[qm_hard_disk_iothread]]
+IO Thread
+^^^^^^^^^
 The option *IO Thread* can only be used when using a disk with the *VirtIO*
 controller, or with the *SCSI* controller, when the emulated controller type is
 *VirtIO SCSI single*. With this enabled, Qemu creates one I/O thread per
 storage controller,
-instead of a single thread for all I/O, so it increases performance when
-multiple disks are used and each disk has its own storage controller.
-Note that backups do not currently work with *IO Thread* enabled.
+instead of a single thread for all I/O, so it can increase performance when
+multiple disks are used and each disk has its own storage controller.
 
 
 [[qm_cpu]]
@@ -268,8 +283,8 @@ is greater than the number of cores on the server (e.g., 4 VMs with each 4
 cores on a machine with only 8 cores). In that case the host system will
 balance the Qemu execution threads between your server cores, just as if you
 were running a standard multithreaded application. However, {pve} will prevent
-you from assigning more virtual CPU cores than physically available, as this will
-only bring the performance down due to the cost of context switches.
+you from starting VMs with more virtual CPU cores than physically available, as
+this will only bring the performance down due to the cost of context switches.
 
 [[qm_cpu_resource_limits]]
 Resource Limits
@@ -507,7 +522,7 @@ host.
 
 .Fixed Memory Allocation
 [thumbnail="screenshot/gui-create-vm-memory.png"]
 
-ghen setting memory and minimum memory to the same amount
+When setting memory and minimum memory to the same amount
 {pve} will simply allocate what you specify to your VM.
 
 Even when using a fixed memory size, the ballooning device gets added to the
@@ -636,7 +651,8 @@ necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
 qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier
 * *vmware* is a VMware SVGA-II compatible adapter.
 * *qxl* is the QXL paravirtualized graphics card. Selecting this also
-enables SPICE for the VM.
+enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
+VM.
 
 You can edit the amount of memory given to the virtual GPU by setting the
 'memory' option. This can enable higher resolutions inside the VM,
@@ -819,6 +835,70 @@ start after those where the parameter is set. Further, this
 parameter can only be enforced between virtual machines running on the same
 host, not cluster-wide.
 
+[[qm_spice_enhancements]]
+SPICE Enhancements
+~~~~~~~~~~~~~~~~~~
+
+SPICE Enhancements are optional features that can improve the remote viewer
+experience.
+
+To enable them via the GUI, go to the *Options* panel of the virtual machine.
+Run the following command to enable them via the CLI:
+
+----
+qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
+----
+
+NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
+must be set to SPICE (qxl).
+
+Folder Sharing
+^^^^^^^^^^^^^^
+
+Share a local folder with the guest. The `spice-webdavd` daemon needs to be
+installed in the guest. It makes the shared folder available through a local
+WebDAV server located at http://localhost:9843.
+
+For Windows guests the installer for the 'Spice WebDAV daemon' can be
+downloaded from the
+https://www.spice-space.org/download.html#windows-binaries[official SPICE website].
+
+Most Linux distributions have a package called `spice-webdavd` that can be
+installed.
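+
+A quick way to verify the share from inside a Linux guest (a sketch assuming
+the `curl` utility is available in the guest) is to query the local WebDAV
+server directly:
+
+----
+# inside the guest: spice-webdavd serves the shared folder on port 9843
+curl http://localhost:9843/
+----
+
+Any HTTP response here means the daemon is reachable; a connection error means
+it is not running.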
+
+To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
+Select the folder to share and then enable the checkbox.
+
+NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
+
+CAUTION: Experimental! Currently this feature does not work reliably.
+
+Video Streaming
+^^^^^^^^^^^^^^^
+
+Fast refreshing areas are encoded into a video stream. Two options exist:
+
+* *all*: Any fast refreshing area will be encoded into a video stream.
+* *filter*: Additional filters are used to decide if video streaming should be
+  used (currently only small window surfaces are skipped).
+
+No general recommendation can be given on whether video streaming should be
+enabled or which option to choose; your mileage may vary depending on the
+specific circumstances.
+
+Troubleshooting
+^^^^^^^^^^^^^^^
+
+.Shared folder does not show up
+
+Make sure the WebDAV service is enabled and running in the guest. On Windows it
+is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd', but it
+can differ depending on the distribution.
+
+If the service is running, check the WebDAV server by opening
+http://localhost:9843 in a browser in the guest.
+
+It can help to restart the SPICE session.
 
 [[qm_migration]]
 Migration
@@ -1113,6 +1193,28 @@ It will be called during various phases of the guests lifetime.
 For an example and documentation see the example script under
 `/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
 
+[[qm_hibernate]]
+Hibernation
+-----------
+
+You can suspend a VM to disk with the GUI option `Hibernate` or with
+
+ qm suspend ID --todisk
+
+This means that the current content of the memory will be saved to disk
+and the VM gets stopped. On the next start, the memory content will be
+loaded and the VM can continue where it left off.
+
+[[qm_vmstatestorage]]
+.State storage selection
+If no target storage for the memory is given, the first one from the following
+list will be chosen automatically:
+
+1. The storage `vmstatestorage` from the VM config.
+2. The first shared storage from any VM disk.
+3. The first non-shared storage from any VM disk.
+4. The storage `local` as a fallback.
+
 Managing Virtual Machines with `qm`
 ------------------------------------
@@ -1222,6 +1324,10 @@ There are a few snapshot related properties like `parent` and
 relationship between snapshots. `snaptime` is the snapshot creation time stamp
 (Unix epoch).
 
+You can optionally save the memory of a running VM with the option `vmstate`.
+For details about how the target storage gets chosen for the VM state, see
+xref:qm_vmstatestorage[State storage selection] in the chapter
+xref:qm_hibernate[Hibernation].
 
 [[qm_options]]
 Options
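+
+Options can be inspected and changed on the CLI with `qm config` and `qm set`.
+As a minimal sketch (the VM ID `100` and the `onboot` option are placeholder
+choices for illustration):
+
+----
+qm config 100          # print the VM's currently configured options
+qm set 100 --onboot 1  # for example, start the VM automatically at boot
+----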