Hard Disk
~~~~~~~~~
+[[qm_hard_disk_bus]]
+Bus/Controller
+^^^^^^^^^^^^^^
Qemu can emulate a number of storage controllers:
* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
VirtIO SCSI Controller, in terms of features.
[thumbnail="screenshot/gui-create-vm-hard-disk.png"]
+
+[[qm_hard_disk_formats]]
+Image Format
+^^^^^^^^^^^^
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
* the *VMware image format* only makes sense if you intend to import/export the
disk image to other hypervisors.
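As an illustrative sketch, a disk image can be converted to the VMware format with `qemu-img`; the file names below are examples only:

```shell
# Convert a qcow2 disk image to VMware's vmdk format for export.
# Source and destination paths are placeholders.
qemu-img convert -f qcow2 -O vmdk vm-100-disk-0.qcow2 exported-disk.vmdk
```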
+[[qm_hard_disk_cache]]
+Cache Mode
+^^^^^^^^^^
Setting the *Cache* mode of the hard drive will impact how the host system
notifies the guest system of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.
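For example, the cache mode of an existing disk can be changed with `qm set`; the VM ID and volume name here are placeholders:

```shell
# Set the cache mode of an existing SCSI disk (VM ID and volume are examples).
# Accepted cache values include: none, writethrough, writeback, unsafe, directsync.
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback
```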
+[[qm_hard_disk_discard]]
+Trim/Discard
+^^^^^^^^^^^^
If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), you can activate the *Discard* option on a drive. With *Discard*
set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
backed by SSDs; this feature can be used with physical media of any type.
Note that *SSD emulation* is not supported on *VirtIO Block* drives.
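As a sketch, *Discard* and *SSD emulation* can be enabled on a disk from the command line; the VM ID and volume name are examples:

```shell
# Enable Discard and SSD emulation on a SCSI disk (values are examples).
# Note: ssd=1 is not available on VirtIO Block (virtio) drives.
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
```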
-.IO Thread
+
+[[qm_hard_disk_iothread]]
+IO Thread
+^^^^^^^^^
The option *IO Thread* can only be used when using a disk with the *VirtIO*
controller, or with the *SCSI* controller when the emulated controller type is
*VirtIO SCSI single*.
With this enabled, Qemu creates one I/O thread per storage controller,
-instead of a single thread for all I/O, so it increases performance when
-multiple disks are used and each disk has its own storage controller.
-Note that backups do not currently work with *IO Thread* enabled.
+instead of a single thread for all I/O, so it can increase performance when
+multiple disks are used and each disk has its own storage controller.
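For instance, a setup benefiting from this could be configured as follows; the VM ID and volume name are placeholders:

```shell
# Select the VirtIO SCSI single controller, then enable an I/O thread
# on a disk attached to it (VM ID and volume are examples).
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1
```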
[[qm_cpu]]
cores on a machine with only 8 cores). In that case the host system will
balance the Qemu execution threads between your server cores, just like if you
were running a standard multithreaded application. However, {pve} will prevent
-you from assigning more virtual CPU cores than physically available, as this will
-only bring the performance down due to the cost of context switches.
+you from starting VMs with more virtual CPU cores than physically available, as
+this will only bring the performance down due to the cost of context switches.
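As an illustrative example, the CPU topology of a VM can be set with `qm set`; the values below are placeholders and should not exceed the host's physical core count:

```shell
# Assign 2 sockets with 4 cores each (8 vCPUs total); values are examples.
qm set 100 --sockets 2 --cores 4
```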
[[qm_cpu_resource_limits]]
Resource Limits
.Fixed Memory Allocation
[thumbnail="screenshot/gui-create-vm-memory.png"]
-ghen setting memory and minimum memory to the same amount
+When setting memory and minimum memory to the same amount
{pve} will simply allocate what you specify to your VM.
Even when using a fixed memory size, the ballooning device gets added to the
qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier
* *vmware*, is a VMware SVGA-II compatible adapter.
* *qxl*, is the QXL paravirtualized graphics card. Selecting this also
-enables SPICE for the VM.
+enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
+VM.
You can edit the amount of memory given to the virtual GPU by setting
the 'memory' option. This can enable higher resolutions inside the VM,
NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
+CAUTION: Experimental! Currently this feature does not work reliably.
+
Video Streaming
^^^^^^^^^^^^^^^
Troubleshooting
^^^^^^^^^^^^^^^
-Shared folder does not show up
-++++++++++++++++++++++++++++++
+.Shared folder does not show up
Make sure the WebDAV service is enabled and running in the guest. On Windows it
is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be