default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be desired if
-one want's to pass through PCIe hardware.
+one wants to pass through PCIe hardware.
[[qm_hard_disk]]
Hard Disk
~~~~~~~~~
+[[qm_hard_disk_bus]]
+Bus/Controller
+^^^^^^^^^^^^^^
Qemu can emulate a number of storage controllers:
* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
VirtIO SCSI Controller, in terms of features.
[thumbnail="screenshot/gui-create-vm-hard-disk.png"]
+
+[[qm_hard_disk_formats]]
+Image Format
+^^^^^^^^^^^^
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
* the *VMware image format* only makes sense if you intend to import/export the
disk image to other hypervisors.
+[[qm_hard_disk_cache]]
+Cache Mode
+^^^^^^^^^^
Setting the *Cache* mode of the hard drive will impact how the host system
notifies the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.
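+
+For example, the cache mode of an existing drive can be changed with `qm set`
+(the storage and volume names below are placeholders):
+
+----
+qm set <vmid> -scsi0 <storage>:<volume>,cache=writeback
+----
+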
+[[qm_hard_disk_discard]]
+Trim/Discard
+^^^^^^^^^^^^
If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), you can activate the *Discard* option on a drive. With *Discard*
set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
backed by SSDs; this feature can be used with physical media of any type.
Note that *SSD emulation* is not supported on *VirtIO Block* drives.
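+
+For example, *Discard* and *SSD emulation* can be enabled together on a SCSI
+drive from the command line (the storage and volume names are placeholders):
+
+----
+qm set <vmid> -scsi0 <storage>:<volume>,discard=on,ssd=1
+----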
-.IO Thread
+
+[[qm_hard_disk_iothread]]
+IO Thread
+^^^^^^^^^
The option *IO Thread* can only be used with a disk attached to the
*VirtIO* controller, or to the *SCSI* controller when the emulated controller
type is *VirtIO SCSI single*.
With this enabled, Qemu creates one I/O thread per storage controller,
-instead of a single thread for all I/O, so it increases performance when
-multiple disks are used and each disk has its own storage controller.
-Note that backups do not currently work with *IO Thread* enabled.
+instead of a single thread for all I/O, so it can increase performance when
+multiple disks are used and each disk has its own storage controller.
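+
+For example, *IO Thread* can be enabled on a disk together with the *VirtIO
+SCSI single* controller (the storage and volume names are placeholders):
+
+----
+qm set <vmid> -scsihw virtio-scsi-single -scsi0 <storage>:<volume>,iothread=1
+----
+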
[[qm_cpu]]
cores on a machine with only 8 cores). In that case the host system will
balance the Qemu execution threads between your server cores, just as if you
were running a standard multithreaded application. However, {pve} will prevent
-you from assigning more virtual CPU cores than physically available, as this will
-only bring the performance down due to the cost of context switches.
+you from starting VMs with more virtual CPU cores than physically available, as
+this will only bring the performance down due to the cost of context switches.
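+
+For example, to give a VM a total of 8 virtual CPU cores spread over two
+virtual sockets:
+
+----
+qm set <vmid> -sockets 2 -cores 4
+----
+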
[[qm_cpu_resource_limits]]
Resource Limits
.Fixed Memory Allocation
[thumbnail="screenshot/gui-create-vm-memory.png"]
-ghen setting memory and minimum memory to the same amount
+When setting memory and minimum memory to the same amount
{pve} will simply allocate what you specify to your VM.
Even when using a fixed memory size, the ballooning device gets added to the
qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier
* *vmware*, is a VMWare SVGA-II compatible adapter.
* *qxl*, is the QXL paravirtualized graphics card. Selecting this also
-enables SPICE for the VM.
+enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
+VM.
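+
+For example, the QXL adapter can be selected from the command line:
+
+----
+qm set <vmid> -vga qxl
+----
+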
You can edit the amount of memory given to the virtual GPU by setting
the 'memory' option. This can enable higher resolutions inside the VM,
xref:qm_pci_passthrough[PCI Passthrough] and
xref:qm_usb_passthrough[USB Passthrough]).
+[[qm_virtio_rng]]
+VirtIO RNG
+~~~~~~~~~~
+
+An RNG (Random Number Generator) is a device providing entropy ('randomness') to
+a system. A virtual hardware-RNG can be used to provide such entropy from the
+host system to a guest VM. This helps to avoid entropy starvation problems in
+the guest (a situation where not enough entropy is available and the system may
+slow down or run into problems), especially during the guest's boot process.
+
+To add a VirtIO-based emulated RNG, run the following command:
+
+----
+qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
+----
+
+`source` specifies where entropy is read from on the host and has to be one of
+the following:
+
+* `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
+* `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
+ starvation on the host system)
+* `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
+ are available, the one selected in
+ `/sys/devices/virtual/misc/hw_random/rng_current` will be used)
+
+A limit can be specified via the `max_bytes` and `period` parameters; they are
+read as `max_bytes` per `period` in milliseconds. However, it does not represent
+a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
+available on a 1 second timer, not that 1 KiB is streamed to the guest over the
+course of one second. Reducing the `period` can thus be used to inject entropy
+into the guest at a faster rate.
+
+By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
+recommended to always use a limiter to avoid guests using too many host
+resources. If desired, a value of '0' for `max_bytes` can be used to disable
+all limits.
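+
+For example, to provide entropy from the host's non-blocking pool with the
+default limit made explicit:
+
+----
+qm set <vmid> -rng0 source=/dev/urandom,max_bytes=1024,period=1000
+----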
+
[[qm_startup_and_shutdown]]
Automatic Start and Shutdown of Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
+CAUTION: Experimental! Currently this feature does not work reliably.
+
Video Streaming
^^^^^^^^^^^^^^^
For an example and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
+[[qm_hibernate]]
+Hibernation
+-----------
+
+You can suspend a VM to disk with the GUI option `Hibernate` or with
+
+ qm suspend ID --todisk
+
+This means that the current memory content is saved to disk and the VM is
+stopped. On the next start, the memory content is loaded and the VM can
+continue where it left off.
+
+[[qm_vmstatestorage]]
+.State storage selection
+If no target storage for the memory is given, it will be chosen automatically,
+in the following order:
+
+1. The storage `vmstatestorage` from the VM config.
+2. The first shared storage from any VM disk.
+3. The first non-shared storage from any VM disk.
+4. The storage `local` as a fallback.
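+
+For example, the target storage can be set explicitly with the `vmstatestorage`
+option (the storage name is a placeholder):
+
+----
+qm set <vmid> -vmstatestorage <storage>
+----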
+
Managing Virtual Machines with `qm`
------------------------------------
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).
+You can optionally save the memory of a running VM with the option `vmstate`.
+For details about how the target storage gets chosen for the VM state, see
+xref:qm_vmstatestorage[State storage selection] in the chapter
+xref:qm_hibernate[Hibernation].
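+
+For example, to take a snapshot including the memory state of a running VM:
+
+----
+qm snapshot <vmid> <snapname> --vmstate 1
+----
+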
[[qm_options]]
Options