X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=qm.adoc;h=b84de9ea9f8751ed69afd57aa7e9958dde167da9;hp=45832e9ad118dc10463776949e3937ebb4bad670;hb=43530f6fe44c20926717a95e02aa19400ad2409c;hpb=9e797d8c2c259a0633f7fa3bc09dcac7ad9d5d57 diff --git a/qm.adoc b/qm.adoc index 45832e9..b84de9e 100644 --- a/qm.adoc +++ b/qm.adoc @@ -203,7 +203,7 @@ either the *raw disk image format* or the *QEMU image format*. format does not support thin provisioning or snapshots by itself, requiring cooperation from the storage layer for these tasks. It may, however, be up to 10% faster than the *QEMU image format*. footnote:[See this benchmark for details - http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf] + https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf] * the *VMware image format* only makes sense if you intend to import/export the disk image to other hypervisors. @@ -253,8 +253,8 @@ The option *IO Thread* can only be used when using a disk with the *VirtIO* controller, or with the *SCSI* controller, when the emulated controller type is *VirtIO SCSI single*. With this enabled, Qemu creates one I/O thread per storage controller, -instead of a single thread for all I/O, so it can increase performance when -multiple isks are used and each disk has its own storage controller. +rather than a single thread for all I/O. This can increase performance when +multiple disks are used and each disk has its own storage controller. [[qm_cpu]] @@ -776,9 +776,8 @@ shutdown or stopped. Open connections will still persist, but new connections to the exact same device cannot be made anymore. A use case for such a device is the Looking Glass -footnote:[Looking Glass: https://looking-glass.hostfission.com/] project, -which enables high performance, low-latency display mirroring between -host and guest. 
+footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
+performance, low-latency display mirroring between host and guest.
 
 [[qm_audio_device]]
 Audio Device
@@ -796,11 +795,18 @@ Supported audio devices are:
 
 * `intel-hda`: Intel HD Audio Controller, emulates ICH6
 
 * `AC97`: Audio Codec '97, useful for older operating systems like Windows XP
 
-NOTE: The audio device works only in combination with SPICE. Remote protocols
-like Microsoft's RDP have options to play sound. To use the physical audio
-device of the host use device passthrough (see
-xref:qm_pci_passthrough[PCI Passthrough] and
-xref:qm_usb_passthrough[USB Passthrough]).
+There are two backends available:
+
+* 'spice'
+* 'none'
+
+The 'spice' backend can be used in combination with xref:qm_display[SPICE], while
+the 'none' backend can be useful if an audio device is needed in the VM for some
+software to work. To use the physical audio device of the host, use device
+passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
+xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft’s RDP
+have options to play sound.
+
 
 [[qm_virtio_rng]]
 VirtIO RNG
@@ -840,6 +846,39 @@ recommended to always use a limiter to avoid guests using too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_bootorder]]
+Device Boot Order
+~~~~~~~~~~~~~~~~~
+
+QEMU can tell the guest which devices it should boot from, and in which order.
+This can be specified in the config via the `boot` property, e.g.:
+
+----
+boot: order=scsi0;net0;hostpci0
+----
+
+[thumbnail="screenshot/gui-qemu-edit-bootorder.png"]
+
+This way, the guest would first attempt to boot from the disk `scsi0`; if that
+fails, it would go on to attempt network boot from `net0`, and if that fails
+too, it would finally attempt to boot from a passed-through PCIe device (seen as
+a disk in the case of NVMe; otherwise, QEMU tries to launch into an option ROM).
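+
+As a sketch, the same boot order can also be set from the command line with
+`qm set` (the VMID '300' is only an example, and the devices must actually
+exist in that VM's config); quote the value so the shell does not interpret
+the semicolons:
+
+----
+# qm set 300 --boot 'order=scsi0;net0;hostpci0'
+----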
+
+On the GUI you can use a drag-and-drop editor to specify the boot order, and use
+the checkbox to enable or disable certain devices for booting altogether.
+
+NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
+all of them must be marked as 'bootable' (that is, they must have the checkbox
+enabled or appear in the list in the config) for the guest to be able to boot.
+This is because recent SeaBIOS and OVMF versions only initialize disks if they
+are marked 'bootable'.
+
+In any case, even devices that do not appear in the list or have the checkbox
+disabled will still be available to the guest once its operating system has
+booted and initialized them. The 'bootable' flag only affects the guest BIOS and
+bootloader.
+
+
 [[qm_startup_and_shutdown]]
 Automatic Start and Shutdown of Virtual Machines
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -884,6 +923,63 @@ start after those where the parameter is set. Further, this parameter
 can only be enforced between virtual machines running on the same host, not
 cluster-wide.
 
+
+[[qm_qemu_agent]]
+Qemu Guest Agent
+~~~~~~~~~~~~~~~~
+
+The Qemu Guest Agent is a service which runs inside the VM, providing a
+communication channel between the host and the guest. It is used to exchange
+information and allows the host to issue commands to the guest.
+
+For example, the IP addresses in the VM summary panel are fetched via the guest
+agent.
+
+Likewise, when starting a backup, the guest is told via the guest agent to sync
+outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.
+
+For the guest agent to work properly, the following steps must be taken:
+
+* install the agent in the guest and make sure it is running
+* enable the communication via the agent in {pve}
+
+Install Guest Agent
+^^^^^^^^^^^^^^^^^^^
+
+For most Linux distributions, the guest agent is available. The package is
+usually named `qemu-guest-agent`.
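+
+As an illustration (assuming a Debian-based guest; package and service names
+may differ on other distributions), installing and enabling it could look
+like this:
+
+----
+# apt install qemu-guest-agent
+# systemctl enable --now qemu-guest-agent
+----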
+
+For Windows, it can be installed from the
+https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
+VirtIO driver ISO].
+
+Enable Guest Agent Communication
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Communication between {pve} and the guest agent can be enabled in the VM's
+*Options* panel. A fresh start of the VM is necessary for the changes to take
+effect.
+
+It is possible to enable the 'Run guest-trim' option. With this enabled,
+{pve} will issue a trim command to the guest after the following
+operations that have the potential to write out zeros to the storage:
+
+* moving a disk to another storage
+* live migrating a VM to another node with local storage
+
+On a thin-provisioned storage, this can help to free up unused space.
+
+Troubleshooting
+^^^^^^^^^^^^^^^
+
+.VM does not shut down
+
+Make sure the guest agent is installed and running.
+
+Once the guest agent is enabled, {pve} will send power commands like
+'shutdown' via the guest agent. If the guest agent is not running, commands
+cannot be executed properly and the shutdown command will run into a timeout.
+
 [[qm_spice_enhancements]]
 SPICE Enhancements
 ~~~~~~~~~~~~~~~~~~
@@ -1293,6 +1389,14 @@ Same as above, but only wait for 40 seconds.
 
  qm shutdown 300 && qm wait 300 -timeout 40
 
+Destroying a VM always removes it from the Access Control Lists and always
+removes the firewall configuration of the VM. You have to activate
+'--purge' if you want to additionally remove the VM from replication jobs,
+backup jobs and HA resource configurations.
+
+ qm destroy 300 --purge
+
+
 [[qm_configuration]]
 Configuration
@@ -1308,12 +1412,12 @@ unique cluster wide.
 
 .Example VM Configuration
 ----
+boot: order=virtio0;net0
 cores: 1
 sockets: 1
 memory: 512
 name: webmail
 ostype: l26
-bootdisk: virtio0
 net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
 virtio0: local:vm-100-disk-1,size=32G
 ----