diff --git a/qm.adoc b/qm.adoc
index 4956304..b84de9e 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -144,12 +144,15 @@ hardware layout of the VM's virtual motherboard. You can choose between the
 default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
 https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
 chipset, which also provides a virtual PCIe bus, and thus may be desired if
-one want's to pass through PCIe hardware.
+one wants to pass through PCIe hardware.
 
 [[qm_hard_disk]]
 Hard Disk
 ~~~~~~~~~
 
+[[qm_hard_disk_bus]]
+Bus/Controller
+^^^^^^^^^^^^^^
 Qemu can emulate a number of storage controllers:
 
 * the *IDE* controller, has a design which goes back to the 1984 PC/AT disk
@@ -182,6 +185,10 @@ is an older type of paravirtualized controller. It has been superseded by the
 VirtIO SCSI Controller, in terms of features.
 
 [thumbnail="screenshot/gui-create-vm-hard-disk.png"]
+
+[[qm_hard_disk_formats]]
+Image Format
+^^^^^^^^^^^^
 On each controller you attach a number of emulated hard disks, which are backed
 by a file or a block device residing in the configured storage. The choice of a
 storage type will determine the format of the hard disk image. Storages which
@@ -196,10 +203,13 @@ either the *raw disk image format* or the *QEMU image format*.
 format does not support thin provisioning or snapshots by itself, requiring
 cooperation from the storage layer for these tasks. It may, however, be up to
 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
-  http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
+  https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
 
 * the *VMware image format* only makes sense if you intend to import/export the
 disk image to other hypervisors.
 
+[[qm_hard_disk_cache]]
+Cache Mode
+^^^^^^^^^^
 Setting the *Cache* mode of the hard drive will impact how the host system will
 notify the guest systems of block write completions. The *No cache* default
 means that the guest system will be notified that a write is complete when each
@@ -215,6 +225,9 @@ As of {pve} 5.0, replication requires the disk images to be on a storage of type
 `zfspool`, so adding a disk image to other storages when the VM has replication
 configured requires to skip replication for this disk image.
 
+[[qm_hard_disk_discard]]
+Trim/Discard
+^^^^^^^^^^^^
 If your storage supports _thin provisioning_ (see the storage chapter in the
 {pve} guide), you can activate the *Discard* option on a drive. With *Discard*
 set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
@@ -232,14 +245,16 @@ that drive. There is no requirement that the underlying storage actually be
 backed by SSDs; this feature can be used with physical media of any type.
 Note that *SSD emulation* is not supported on *VirtIO Block* drives.
 
-.IO Thread
+
+[[qm_hard_disk_iothread]]
+IO Thread
+^^^^^^^^^
 The option *IO Thread* can only be used when using a disk with the *VirtIO*
 controller, or with the *SCSI* controller, when the emulated controller type is
 *VirtIO SCSI single*.
 With this enabled, Qemu creates one I/O thread per storage controller,
-instead of a single thread for all I/O, so it increases performance when
+rather than a single thread for all I/O. This can increase performance when
 multiple disks are used and each disk has its own storage controller.
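+
+For example, the following commands (a minimal sketch; the VM ID `100`, the
+storage `local-lvm` and the volume name are placeholders for existing ones)
+switch a VM to the *VirtIO SCSI single* controller type and attach a disk with
+a dedicated I/O thread:
+
+----
+qm set 100 --scsihw virtio-scsi-single
+qm set 100 --scsi0 local-lvm:vm-100-disk-1,iothread=1
+----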
-Note that backups do not currently work with *IO Thread* enabled.
 
 
 [[qm_cpu]]
@@ -268,8 +283,8 @@ is greater than the number of cores on the server (e.g., 4 VMs with each 4
 cores on a machine with only 8 cores). In that case the host system will
 balance the Qemu execution threads between your server cores, just like if you
 were running a standard multithreaded application. However, {pve} will prevent
-you from assigning more virtual CPU cores than physically available, as this will
-only bring the performance down due to the cost of context switches.
+you from starting VMs with more virtual CPU cores than physically available, as
+this will only bring the performance down due to the cost of context switches.
 
 [[qm_cpu_resource_limits]]
 Resource Limits
@@ -337,6 +352,17 @@ the kvm64 default. If you don't care about live migration or have a homogeneou
 cluster where all nodes have the same CPU, set the CPU type to host, as in
 theory this will give your guests maximum performance.
 
+Custom CPU Types
+^^^^^^^^^^^^^^^^
+
+You can specify custom CPU types with a configurable set of features. These are
+maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
+an administrator. See `man cpu-models.conf` for format details.
+
+Specified custom types can be selected by any user with the `Sys.Audit`
+privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
+or API, the name needs to be prefixed with 'custom-'.
+
 Meltdown / Spectre related CPU flags
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
@@ -507,7 +533,7 @@ host.
 
 .Fixed Memory Allocation
 [thumbnail="screenshot/gui-create-vm-memory.png"]
-ghen setting memory and minimum memory to the same amount
+When setting memory and minimum memory to the same amount,
 {pve} will simply allocate what you specify to your VM.
 
 Even when using a fixed memory size, the ballooning device gets added to the
@@ -636,7 +662,8 @@ necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-consid
 qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier
 * *vmware*, is a VMWare SVGA-II compatible adapter.
 * *qxl*, is the QXL paravirtualized graphics card. Selecting this also
-enables SPICE for the VM.
+enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
+VM.
 
 You can edit the amount of memory given to the virtual GPU, by setting the
 'memory' option. This can enable higher resolutions inside the VM,
@@ -749,9 +776,8 @@ shutdown or stopped.
 Open connections will still persist, but new connections to the exact same
 device cannot be made anymore.
 A use case for such a device is the Looking Glass
-footnote:[Looking Glass: https://looking-glass.hostfission.com/] project,
-which enables high performance, low-latency display mirroring between
-host and guest.
+footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
+performance, low-latency display mirroring between host and guest.
 
 [[qm_audio_device]]
 Audio Device
@@ -769,11 +795,89 @@ Supported audio devices are:
 * `intel-hda`: Intel HD Audio Controller, emulates ICH6
 * `AC97`: Audio Codec '97, useful for older operating systems like Windows XP
 
-NOTE: The audio device works only in combination with SPICE. Remote protocols
-like Microsoft's RDP have options to play sound. To use the physical audio
-device of the host use device passthrough (see
-xref:qm_pci_passthrough[PCI Passthrough] and
-xref:qm_usb_passthrough[USB Passthrough]).
+There are two backends available:
+
+* 'spice'
+* 'none'
+
+The 'spice' backend can be used in combination with xref:qm_display[SPICE],
+while the 'none' backend can be useful if an audio device is needed in the VM
+for some software to work. To use the physical audio device of the host, use
+device passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
+xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft's
+RDP have options to play sound.
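+
+For example, to add an 'intel-hda' device using the 'spice' backend (a minimal
+sketch; the VM ID `100` is a placeholder):
+
+----
+qm set 100 --audio0 device=intel-hda,driver=spice
+----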
+
+
+[[qm_virtio_rng]]
+VirtIO RNG
+~~~~~~~~~~
+
+An RNG (Random Number Generator) is a device providing entropy ('randomness') to
+a system. A virtual hardware-RNG can be used to provide such entropy from the
+host system to a guest VM. This helps to avoid entropy starvation problems in
+the guest (a situation where not enough entropy is available and the system may
+slow down or run into problems), especially during the guest's boot process.
+
+To add a VirtIO-based emulated RNG, run the following command:
+
+----
+qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
+----
+
+`source` specifies where entropy is read from on the host and has to be one of
+the following:
+
+* `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
+* `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
+  starvation on the host system)
+* `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
+  are available, the one selected in
+  `/sys/devices/virtual/misc/hw_random/rng_current` will be used)
+
+A limit can be specified via the `max_bytes` and `period` parameters; they are
+read as `max_bytes` per `period` in milliseconds. However, this does not
+represent a linear relationship: 1024B/1000ms would mean that up to 1 KiB of
+data becomes available on a 1-second timer, not that 1 KiB is streamed to the
+guest over the course of one second. Reducing the `period` can thus be used to
+inject entropy into the guest at a faster rate.
+
+By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
+recommended to always use a limiter to avoid guests using too many host
+resources. If desired, a value of '0' for `max_bytes` can be used to disable
+all limits.
+
+[[qm_bootorder]]
+Device Boot Order
+~~~~~~~~~~~~~~~~~
+
+QEMU can tell the guest which devices it should boot from, and in which order.
+This can be specified in the config via the `boot` property, e.g.:
+
+----
+boot: order=scsi0;net0;hostpci0
+----
+
+[thumbnail="screenshot/gui-qemu-edit-bootorder.png"]
+
+This way, the guest would first attempt to boot from the disk `scsi0`; if that
+fails, it would go on to attempt network boot from `net0`, and should that fail
+too, it would finally attempt to boot from a passed-through PCIe device (seen
+as a disk in case of NVMe, otherwise it tries to launch into an option ROM).
+
+On the GUI you can use a drag-and-drop editor to specify the boot order, and
+use the checkbox to enable or disable certain devices for booting altogether.
+
+NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
+all of them must be marked as 'bootable' (that is, they must have the checkbox
+enabled or appear in the list in the config) for the guest to be able to boot.
+This is because recent SeaBIOS and OVMF versions only initialize disks if they
+are marked 'bootable'.
+
+In any case, even devices not appearing in the list or having the checkmark
+disabled will still be available to the guest, once its operating system has
+booted and initialized them. The 'bootable' flag only affects the guest BIOS
+and bootloader.
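+
+The same order can also be set from the command line (a minimal sketch; the VM
+ID `100` and the device names are placeholders, and the value is quoted so the
+shell does not interpret the semicolons):
+
+----
+qm set 100 --boot 'order=scsi0;net0;hostpci0'
+----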
+
 
 [[qm_startup_and_shutdown]]
 Automatic Start and Shutdown of Virtual Machines
@@ -819,6 +923,63 @@ start after those where the parameter is set. Further, this parameter can only
 be enforced between virtual machines running on the same host, not
 cluster-wide.
 
+
+[[qm_qemu_agent]]
+Qemu Guest Agent
+~~~~~~~~~~~~~~~~
+
+The Qemu Guest Agent is a service which runs inside the VM, providing a
+communication channel between the host and the guest. It is used to exchange
+information and allows the host to issue commands to the guest.
+
+For example, the IP addresses in the VM summary panel are fetched via the guest
+agent.
+
+Or, when starting a backup, the guest is told via the guest agent to sync
+outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.
+
+For the guest agent to work properly, the following steps must be taken:
+
+* install the agent in the guest and make sure it is running
+* enable the communication via the agent in {pve}
+
+Install Guest Agent
+^^^^^^^^^^^^^^^^^^^
+
+For most Linux distributions, the guest agent is available. The package is
+usually named `qemu-guest-agent`.
+
+For Windows, it can be installed from the
+https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
+VirtIO driver ISO].
+
+Enable Guest Agent Communication
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Communication between {pve} and the guest agent can be enabled in the VM's
+*Options* panel. A fresh start of the VM is necessary for the changes to take
+effect.
+
+It is possible to enable the 'Run guest-trim' option. With this enabled,
+{pve} will issue a trim command to the guest after the following
+operations that have the potential to write out zeros to the storage:
+
+* moving a disk to another storage
+* live migrating a VM to another node with local storage
+
+On a thin-provisioned storage, this can help to free up unused space.
+
+Troubleshooting
+^^^^^^^^^^^^^^^
+
+.VM does not shut down
+
+Make sure the guest agent is installed and running.
+
+Once the guest agent is enabled, {pve} will send power commands like
+'shutdown' via the guest agent. If the guest agent is not running, commands
+cannot get executed properly and the shutdown command will run into a timeout.
+
 [[qm_spice_enhancements]]
 SPICE Enhancements
 ~~~~~~~~~~~~~~~~~~
@@ -855,6 +1016,8 @@ Select the folder to share and then enable the checkbox.
 
 NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
 
+CAUTION: Experimental! Currently this feature does not work reliably.
+
 Video Streaming
 ^^^^^^^^^^^^^^^
 
@@ -1175,6 +1338,28 @@ It will be called during various phases of the guest's lifetime.
 For an example and documentation see the example script under
 `/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
 
+[[qm_hibernate]]
+Hibernation
+-----------
+
+You can suspend a VM to disk with the GUI option `Hibernate` or with
+
+ qm suspend ID --todisk
+
+That means that the current content of the memory will be saved onto disk
+and the VM gets stopped. On the next start, the memory content will be
+loaded and the VM can resume where it left off.
+
+[[qm_vmstatestorage]]
+.State storage selection
+If no target storage for the memory is given, it will be chosen automatically,
+the first of:
+
+1. The storage `vmstatestorage` from the VM config (see the example below).
+2. The first shared storage from any VM disk.
+3. The first non-shared storage from any VM disk.
+4. The storage `local` as a fallback.
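+
+For example, to pin the state storage explicitly before hibernating (a minimal
+sketch; the VM ID `100` and the storage `local-zfs` are placeholders for
+existing ones):
+
+----
+qm set 100 --vmstatestorage local-zfs
+qm suspend 100 --todisk
+----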
 
 Managing Virtual Machines with `qm`
 ------------------------------------
 
@@ -1204,6 +1389,14 @@ Same as above, but only wait for 40 seconds.
 
  qm shutdown 300 && qm wait 300 -timeout 40
 
+Destroying a VM always removes it from Access Control Lists and it always
+removes the firewall configuration of the VM. You have to activate
+'--purge' if you want to additionally remove the VM from replication jobs,
+backup jobs and HA resource configurations.
+
+ qm destroy 300 --purge
+
+
 
 [[qm_configuration]]
 Configuration
@@ -1219,12 +1412,12 @@ unique cluster wide.
 
 .Example VM Configuration
 ----
+boot: order=virtio0;net0
 cores: 1
 sockets: 1
 memory: 512
 name: webmail
 ostype: l26
-bootdisk: virtio0
 net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
 virtio0: local:vm-100-disk-1,size=32G
 ----
@@ -1284,6 +1477,10 @@ There are a few snapshot related properties like `parent` and
 relationship between snapshots. `snaptime` is the snapshot creation time
 stamp (Unix epoch).
 
+You can optionally save the memory of a running VM with the option `vmstate`.
+For details about how the target storage gets chosen for the VM state, see
+xref:qm_vmstatestorage[State storage selection] in the chapter
+xref:qm_hibernate[Hibernation].
 
 [[qm_options]]
 Options