diff --git a/qm.adoc b/qm.adoc
index a98803a..b13f0f4 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -117,16 +117,42 @@ OS Settings
 [thumbnail="screenshot/gui-create-vm-os.png"]
 
-When creating a VM, setting the proper Operating System(OS) allows {pve} to
-optimize some low level parameters. For instance Windows OS expect the BIOS
-clock to use the local time, while Unix based OS expect the BIOS clock to have
-the UTC time.
+When creating a virtual machine (VM), setting the proper Operating System (OS)
+allows {pve} to optimize some low-level parameters. For instance, a Windows OS
+expects the BIOS clock to use the local time, while a Unix-based OS expects
+the BIOS clock to have the UTC time.
+
+[[qm_system_settings]]
+System Settings
+~~~~~~~~~~~~~~~
+
+On VM creation you can change some basic system components of the new VM. You
+can specify which xref:qm_display[display type] you want to use.
+[thumbnail="screenshot/gui-create-vm-system.png"]
+Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
+If you plan to install the QEMU Guest Agent, or if your selected ISO image
+already ships and installs it automatically, you may want to tick the 'Qemu
+Agent' box, which lets {pve} know that it can use the agent's features to show
+more information, and to complete some actions (for example, shutdown or
+snapshots) more intelligently.
+
+{pve} allows you to boot VMs with different firmware and machine types, namely
+xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
+the default SeaBIOS to OVMF only if you plan to use
+xref:qm_pci_passthrough[PCIe passthrough]. A VM's 'Machine Type' defines the
+hardware layout of the VM's virtual motherboard. You can choose between the
+default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
+https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
+chipset, which also provides a virtual PCIe bus, and thus may be desired if
+one wants to pass through PCIe hardware.
 
 [[qm_hard_disk]]
 Hard Disk
 ~~~~~~~~~
 
+[[qm_hard_disk_bus]]
+Bus/Controller
+^^^^^^^^^^^^^^
 Qemu can emulate a number of storage controllers:
 
 * the *IDE* controller has a design which goes back to the 1984 PC/AT disk
@@ -159,6 +185,10 @@ is an older type of paravirtualized controller. It has been superseded by the
 VirtIO SCSI Controller, in terms of features.
 
 [thumbnail="screenshot/gui-create-vm-hard-disk.png"]
+
+[[qm_hard_disk_formats]]
+Image Format
+^^^^^^^^^^^^
 On each controller you attach a number of emulated hard disks, which are backed
 by a file or a block device residing in the configured storage. The choice of
 a storage type will determine the format of the hard disk image (see the
 example below). Storages which
@@ -177,6 +207,9 @@ either the *raw disk image format* or the *QEMU image format*.
 
 * the *VMware image format* only makes sense if you intend to import/export the
 disk image to other hypervisors.
 
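+For example, to attach a new 32 GiB disk in the *QEMU image format* to the
+first SCSI slot, a minimal sketch could look like this (the VMID `100` and the
+storage name `local` are placeholders, not part of this patch):
+
+----
+qm set 100 -scsi0 local:32,format=qcow2
+----
+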
+[[qm_hard_disk_cache]]
+Cache Mode
+^^^^^^^^^^
 Setting the *Cache* mode of the hard drive will impact how the host system will
 notify the guest systems of block write completions.
 The *No cache* default means that the guest system will be notified that a
 write is complete when each
@@ -192,26 +225,36 @@ As of {pve} 5.0, replication requires the disk images to be on a storage of
 type `zfspool`, so adding a disk image to other storages when the VM has
 replication configured requires skipping replication for this disk image.
 
+[[qm_hard_disk_discard]]
+Trim/Discard
+^^^^^^^^^^^^
 If your storage supports _thin provisioning_ (see the storage chapter in the
-{pve} guide), and your VM has a *SCSI* controller you can activate the *Discard*
-option on the hard disks connected to that controller. With *Discard* enabled,
-when the filesystem of a VM marks blocks as unused after removing files, the
-emulated SCSI controller will relay this information to the storage, which will
-then shrink the disk image accordingly.
+{pve} guide), you can activate the *Discard* option on a drive. With *Discard*
+set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
+https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
+marks blocks as unused after deleting files, the controller will relay this
+information to the storage, which will then shrink the disk image accordingly.
+For the guest to be able to issue _TRIM_ commands, you must enable the
+*Discard* option on the drive. Some guest operating systems may also require
+the *SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block*
+drives is only supported on guests using Linux Kernel 5.0 or higher.
 
 If you would like a drive to be presented to the guest as a solid-state drive
 rather than a rotational hard disk, you can set the *SSD emulation* option on
 that drive. There is no requirement that the underlying storage actually be
 backed by SSDs; this feature can be used with physical media of any type.
+Note that *SSD emulation* is not supported on *VirtIO Block* drives.
+
-.IO Thread
+[[qm_hard_disk_iothread]]
+IO Thread
+^^^^^^^^^
 The option *IO Thread* can only be used when using a disk with the *VirtIO*
 controller, or with the *SCSI* controller, when the emulated controller type is
 *VirtIO SCSI single*.
 With this enabled, Qemu creates one I/O thread per storage controller,
-instead of a single thread for all I/O, so it increases performance when
-multiple disks are used and each disk has its own storage controller.
-Note that backups do not currently work with *IO Thread* enabled.
+instead of a single thread for all I/O, so it can increase performance when
+multiple disks are used and each disk has its own storage controller.
 
 [[qm_cpu]]
@@ -240,8 +283,8 @@ is greater than the number of cores on the server (e.g., 4 VMs with each 4
 cores on a machine with only 8 cores). In that case the host system will
 balance the Qemu execution threads between your server cores, just like if you
 were running a standard multithreaded application. However, {pve} will prevent
-you from assigning more virtual CPU cores than physically available, as this will
-only bring the performance down due to the cost of context switches.
+you from starting VMs with more virtual CPU cores than physically available, as
+this will only bring the performance down due to the cost of context switches.
 
 [[qm_cpu_resource_limits]]
 Resource Limits
 ^^^^^^^^^^^^^^^
@@ -432,7 +475,7 @@ will allow proper distribution of the VM resources on the host system. This
 option is also required to hot-plug cores or RAM in a VM.
 
 If the NUMA option is used, it is recommended to set the number of sockets to
-the number of sockets of the host system.
+the number of nodes of the host system.
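+
+As a sketch, enabling NUMA on a host with two NUMA nodes could then look like
+this on the CLI (the VMID `100` and the socket/core counts are placeholders,
+not part of this patch):
+
+----
+qm set 100 -numa 1 -sockets 2 -cores 4
+----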
 
 vCPU hot-plug
 ^^^^^^^^^^^^^
@@ -608,7 +651,8 @@ necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-consid
 qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier
 * *vmware*, is a VMWare SVGA-II compatible adapter.
 * *qxl*, is the QXL paravirtualized graphics card. Selecting this also
-enables SPICE for the VM.
+enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
+VM.
 
 You can edit the amount of memory given to the virtual GPU, by setting
 the 'memory' option. This can enable higher resolutions inside the VM,
@@ -671,8 +715,12 @@ BIOS and UEFI
 ~~~~~~~~~~~~~
 
 In order to properly emulate a computer, QEMU needs to use a firmware.
-By default QEMU uses *SeaBIOS* for this, which is an open-source, x86 BIOS
-implementation. SeaBIOS is a good choice for most standard setups.
+This firmware, on common PCs often known as the BIOS or (U)EFI, is executed
+as one of the first steps when booting a VM. It is responsible for doing
+basic hardware initialization and for providing the operating system with an
+interface to the firmware and hardware. By default QEMU uses *SeaBIOS* for
+this, which is an open-source, x86 BIOS implementation. SeaBIOS is a good
+choice for most standard setups.
 
 There are, however, some scenarios in which a BIOS is not a good firmware to
 boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson
 has a very good blog entry about this.
@@ -698,6 +746,51 @@ you need to set the client resolution in the OVMF menu (which you can reach
 with a press of the ESC button during boot), or you have to choose SPICE as
 the display type.
 
+[[qm_ivshmem]]
+Inter-VM shared memory
+~~~~~~~~~~~~~~~~~~~~~~
+
+You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
+share memory between the host and a guest, or between multiple guests.
+
+To add such a device, you can use `qm`:
+
+ qm set <vmid> -ivshmem size=32,name=foo
+
+Where the size is in MiB. The file will be located under
+`/dev/shm/pve-shm-$name` (the default name is the vmid).
+
+NOTE: Currently the device gets deleted as soon as any VM using it is shut
+down or stopped. Open connections will still persist, but new connections to
+the exact same device cannot be made anymore.
+
+A use case for such a device is the Looking Glass
+footnote:[Looking Glass: https://looking-glass.hostfission.com/] project,
+which enables high-performance, low-latency display mirroring between
+host and guest.
+
+[[qm_audio_device]]
+Audio Device
+~~~~~~~~~~~~
+
+To add an audio device run the following command:
+
+----
+qm set <vmid> -audio0 device=<device>
+----
+
+Supported audio devices are:
+
+* `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
+* `intel-hda`: Intel HD Audio Controller, emulates ICH6
+* `AC97`: Audio Codec '97, useful for older operating systems like Windows XP
+
+NOTE: The audio device works only in combination with SPICE. Remote protocols
+like Microsoft's RDP have their own options to play sound. To use the
+physical audio device of the host use device passthrough (see
+xref:qm_pci_passthrough[PCI Passthrough] and
+xref:qm_usb_passthrough[USB Passthrough]).
+
 [[qm_startup_and_shutdown]]
 Automatic Start and Shutdown of Virtual Machines
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -742,6 +835,70 @@ start after those where the parameter is set. Further, this parameter can
 only be enforced between virtual machines running on the same host, not
 cluster-wide.
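+
+As an illustrative sketch, enabling start-at-boot with start order 1 and a
+30 second delay before the next VM starts could look like this (the VMID
+`100` and the values are placeholders, not part of this patch):
+
+----
+qm set 100 -onboot 1 -startup order=1,up=30
+----
+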
+[[qm_spice_enhancements]]
+SPICE Enhancements
+~~~~~~~~~~~~~~~~~~
+
+SPICE Enhancements are optional features that can improve the remote viewer
+experience.
+
+To enable them via the GUI go to the *Options* panel of the virtual machine.
+Run the following command to enable them via the CLI:
+
+----
+qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
+----
+
+NOTE: To use these features the <<qm_display,*Display*>> of the virtual
+machine must be set to SPICE (qxl).
+
+Folder Sharing
+^^^^^^^^^^^^^^
+
+Share a local folder with the guest. The `spice-webdavd` daemon needs to be
+installed in the guest. It makes the shared folder available through a local
+WebDAV server located at http://localhost:9843.
+
+For Windows guests the installer for the 'Spice WebDAV daemon' can be
+downloaded from the
+https://www.spice-space.org/download.html#windows-binaries[official SPICE website].
+
+Most Linux distributions have a package called `spice-webdavd` that can be
+installed.
+
+To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
+Select the folder to share and then enable the checkbox.
+
+NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
+
+CAUTION: Experimental! Currently this feature does not work reliably.
+
+Video Streaming
+^^^^^^^^^^^^^^^
+
+Fast-refreshing areas are encoded into a video stream. Two options exist:
+
+* *all*: Any fast-refreshing area will be encoded into a video stream.
+* *filter*: Additional filters are used to decide if video streaming should be
+  used (currently only small window surfaces are skipped).
+
+A general recommendation on whether video streaming should be enabled, and
+which option to choose, cannot be given. Your mileage may vary depending on
+the specific circumstances.
+
+Troubleshooting
+^^^^^^^^^^^^^^^
+
+.Shared folder does not show up
+
+Make sure the WebDAV service is enabled and running in the guest. On Windows
+it is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but
+may differ depending on the distribution.
+
+If the service is running, check the WebDAV server by opening
+http://localhost:9843 in a browser in the guest.
+
+It can help to restart the SPICE session.
 
 [[qm_migration]]
 Migration
 ~~~~~~~~~
@@ -824,7 +981,7 @@ migrate a VM to a totally different storage. You can also change the disk image
 *Format* if the storage driver supports several formats.
 +
-NOTE: A full clone need to read and copy all VM image data. This is
+NOTE: A full clone needs to read and copy all VM image data. This is
 usually much slower than creating a linked clone.
 +
 
@@ -835,7 +992,7 @@ never includes any additional snapshots from the original VM.
 
 Linked Clone::
 
-Modern storage drivers supports a way to generate fast linked
+Modern storage drivers support a way to generate fast linked
 clones. Such a clone is a writable copy whose initial contents are the
 same as the original data. Creating a linked clone is nearly
 instantaneous, and initially consumes no additional space.
@@ -852,8 +1009,8 @@ can convert any VM into a read-only <<qm_templates, VM template>>). Such
 templates can later be used to create linked clones efficiently.
 +
-NOTE: You cannot delete the original template while linked clones
-exists.
+NOTE: You cannot delete an original template while linked clones
+exist.
 +
 
 It is not possible to change the *Target storage* for linked clones,
@@ -864,7 +1021,7 @@ The *Target node* option allows you to create the new VM on a
 different node. The only restriction is that the VM is on shared storage,
 and that storage is also available on the target node.
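+
+As an illustration, a full clone to a new VMID via the CLI could look like
+this (the VMIDs and the name are placeholders, not part of this patch):
+
+----
+qm clone 100 999 -full -name cloned-vm
+----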
 
-To avoid resource conflicts, all network interface MAC addresses gets
+To avoid resource conflicts, all network interface MAC addresses get
 randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
 setting.
 
@@ -883,7 +1040,7 @@ clone and modify that.
 
 VM Generation ID
 ----------------
 
-{pve} supports Virtual Machine Generation ID ('vmgedid') footnote:[Official
+{pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
 'vmgenid' Specification
 https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
 for virtual machines.
 
@@ -1021,7 +1178,20 @@ ifndef::wiki[]
 include::qm-cloud-init.adoc[]
 endif::wiki[]
 
+ifndef::wiki[]
+include::qm-pci-passthrough.adoc[]
+endif::wiki[]
+
+Hookscripts
+-----------
+
+You can add a hook script to VMs with the config property `hookscript`.
+
+ qm set 100 -hookscript local:snippets/hookscript.pl
+
+It will be called during various phases of the guest's lifetime.
+For an example and documentation see the example script under
+`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
 
 Managing Virtual Machines with `qm`
 ------------------------------------
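+
+For instance, assuming a VM with ID `100` (a placeholder), typical day-to-day
+operations with `qm` look like this:
+
+----
+qm start 100
+qm status 100
+qm shutdown 100
+----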