[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
:pve-toplevel:

NAME
----

qm - Qemu/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
:pve-toplevel:
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files, and network cards, which are then passed to an
emulated computer which sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can pass
an ISO image as a parameter to Qemu, and the OS running in the emulated computer
will see a real CD-ROM inserted into a CD drive.

Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up Qemu when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux KVM module. In the context of {pve}, _Qemu_ and
_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the KVM
module.

Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.


Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
Qemu can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.

Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, and so on.

It is highly recommended to use the virtio devices whenever you can, as they
provide a big performance improvement. Using the virtio generic disk controller
versus an emulated IDE controller will double the sequential write throughput,
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
https://www.linux-kvm.org/page/Using_VirtIO_NIC]

[[qm_virtual_machines_settings]]
Virtual Machines Settings
-------------------------

Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as
changing them could incur a performance slowdown or put your data at risk.


[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-general.png"]

General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs

[[qm_os_settings]]
OS Settings
~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-os.png"]

When creating a virtual machine (VM), setting the proper Operating System (OS)
allows {pve} to optimize some low level parameters. For instance, a Windows OS
expects the BIOS clock to use the local time, while a Unix based OS expects the
BIOS clock to have the UTC time.

[[qm_system_settings]]
System Settings
~~~~~~~~~~~~~~~

On VM creation you can change some basic system components of the new VM. You
can specify which xref:qm_display[display type] you want to use.
[thumbnail="screenshot/gui-create-vm-system.png"]
Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
If you plan to install the QEMU Guest Agent, or if your selected ISO image
already ships and installs it automatically, you may want to tick the 'Qemu
Agent' box, which lets {pve} know that it can use its features to show some
more information, and complete some actions (for example, shutdown or
snapshots) more intelligently.

{pve} allows you to boot VMs with different firmware and machine types, namely
xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
the default SeaBIOS to OVMF only if you plan to use
xref:qm_pci_passthrough[PCIe pass through]. A VM's 'Machine Type' defines the
hardware layout of the VM's virtual motherboard. You can choose between the
default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be desired if
you want to pass through PCIe hardware.

[[qm_hard_disk]]
Hard Disk
~~~~~~~~~

[[qm_hard_disk_bus]]
Bus/Controller
^^^^^^^^^^^^^^
Qemu can emulate a number of storage controllers:

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices on this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default a
LSI 53C895A controller.
+
A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim for
performance and is automatically selected for newly created Linux VMs since
{pve} 4.3. Linux distributions have support for this controller since 2012, and
FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO
containing the drivers during the installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
If you aim at maximum performance, you can select a SCSI controller of type
_VirtIO SCSI single_, which will allow you to select the *IO Thread* option.
When selecting _VirtIO SCSI single_, Qemu will create a new controller for
each disk, instead of adding all disks to the same controller.

* The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
is an older type of paravirtualized controller. It has been superseded by the
VirtIO SCSI Controller, in terms of features.

[thumbnail="screenshot/gui-create-vm-hard-disk.png"]

[[qm_hard_disk_formats]]
Image Format
^^^^^^^^^^^^
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

 * the *QEMU image format* is a copy on write format which allows snapshots, and
 thin provisioning of the disk image.
 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
 you would get when executing the `dd` command on a block device in Linux. This
 format does not support thin provisioning or snapshots by itself, requiring
 cooperation from the storage layer for these tasks. It may, however, be up to
 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
 https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
 * the *VMware image format* only makes sense if you intend to import/export the
 disk image to other hypervisors.

[[qm_hard_disk_cache]]
Cache Mode
^^^^^^^^^^
Setting the *Cache* mode of the hard drive will impact how the host system will
notify the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.

[[qm_hard_disk_discard]]
Trim/Discard
^^^^^^^^^^^^
If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), you can activate the *Discard* option on a drive. With *Discard*
set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
marks blocks as unused after deleting files, the controller will relay this
information to the storage, which will then shrink the disk image accordingly.
For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
option on the drive. Some guest operating systems may also require the
*SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
only supported on guests using Linux Kernel 5.0 or higher.

If you would like a drive to be presented to the guest as a solid-state drive
rather than a rotational hard disk, you can set the *SSD emulation* option on
that drive. There is no requirement that the underlying storage actually be
backed by SSDs; this feature can be used with physical media of any type.
Note that *SSD emulation* is not supported on *VirtIO Block* drives.

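For example, a minimal sketch that allocates a new SCSI disk with both options
enabled; the storage name `local-lvm` and the 32 GiB size are placeholders:

----
# qm set <vmid> -scsi0 local-lvm:32,discard=on,ssd=1
----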

[[qm_hard_disk_iothread]]
IO Thread
^^^^^^^^^
The option *IO Thread* can only be used when using a disk with the
*VirtIO* controller, or with the *SCSI* controller, when the emulated controller
type is *VirtIO SCSI single*.
With this enabled, Qemu creates one I/O thread per storage controller,
rather than a single thread for all I/O. This can increase performance when
multiple disks are used and each disk has its own storage controller.
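
For example, a sketch that enables the option on a newly allocated disk,
assuming the _VirtIO SCSI single_ controller; storage name and size are
placeholders:

----
# qm set <vmid> -scsihw virtio-scsi-single -scsi0 local-lvm:32,iothread=1
----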

[[qm_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-vm-cpu.png"]

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores is mostly irrelevant from a performance point of view.
However some software licenses depend on the number of sockets a machine has;
in that case it makes sense to set the number of sockets to what the license
allows you.

Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multi-threaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.

NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
is greater than the number of cores on the server (for example, 4 VMs each with
4 cores (= total 16) on a machine with only 8 cores). In that case the host
system will balance the QEMU execution threads between your server cores, just
like if you were running a standard multi-threaded application. However, {pve}
will prevent you from starting VMs with more virtual CPU cores than physically
available, as this will only bring the performance down due to the cost of
context switches.

[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^

In addition to the number of virtual cores, you can configure how many resources
a VM can get in relation to the host CPU time and also in relation to other
VMs.
With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
single process would fully use one single core it would have `100%` CPU Time
usage. If a VM with four cores utilizes all its cores fully it would
theoretically use `400%`. In reality the usage may be even a bit higher as Qemu
can have additional threads for VM peripherals besides the vCPU core ones.
This setting can be useful if a VM should have multiple vCPUs, as it runs a few
processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
which would profit from having 8 vCPUs, but at no time all of those 8 cores
should run at full load - as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set the *cpulimit* to
`4.0` (=400%). If all cores do the same heavy work they would all get 50% of a
real host core's CPU time. But, if only 4 would do work they could still get
almost 100% of a real core each.
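
A minimal CLI sketch of this example (the VMID is a placeholder):

----
# qm set <vmid> -cores 8 -cpulimit 4
----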

NOTE: VMs can, depending on their configuration, use additional threads, such
as for networking or IO operations but also live migration. Thus a VM can show
up to use more CPU time than just its virtual CPUs could use. To ensure that a
VM never uses more CPU time than its virtual CPUs are assigned, set the
*cpulimit* setting to the same value as the total core count.

The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
shares or CPU weight), controls how much CPU time a VM gets compared to other
running VMs. It is a relative weight which defaults to `100` (or `1024` if the
host uses legacy cgroup v1). If you increase this for a VM it will be
prioritized by the scheduler in comparison to other VMs with lower weight. For
example, if VM 100 has set the default `100` and VM 200 was changed to `200`,
the latter VM 200 would receive twice the CPU bandwidth of the first VM 100.
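
For example, to double the weight of VM 200 relative to the default:

----
# qm set 200 -cpuunits 200
----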

For more information see `man systemd.resource-control`; here `CPUQuota`
corresponds to `cpulimit` and `CPUWeight` corresponds to our `cpuunits`
setting. Visit its Notes section for references and implementation details.

The third CPU resource limiting setting, *affinity*, controls what host cores
the virtual machine will be permitted to execute on. E.g., if an affinity value
of `0-3,8-11` is provided, the virtual machine will be restricted to using the
host cores `0,1,2,3,8,9,10`, and `11`. Valid *affinity* values are written in
cpuset `List Format`. List Format is a comma-separated list of CPU numbers and
ranges of numbers, in ASCII decimal.

NOTE: CPU *affinity* uses the `taskset` command to restrict virtual machines to
a given set of cores. This restriction will not take effect for some types of
processes that may be created for IO. *CPU affinity is not a security feature.*

For more information regarding *affinity* see `man cpuset`. Here the
`List Format` corresponds to valid *affinity* values. Visit its `Formats`
section for more examples.
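
For example, to pin a VM to the cores from the example above:

----
# qm set <vmid> -affinity 0-3,8-11
----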

CPU Type
^^^^^^^^

Qemu can emulate a number of different *CPU types* from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, and so on.
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host* in which case the VM will have exactly the same CPU flags
as your host system.

This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the qemu process will stop. To
remedy this, Qemu also has its own CPU type *kvm64*, which {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flags set,
but is guaranteed to work everywhere.

In short, if you care about live migration and moving VMs between nodes, leave
the kvm64 default. If you don’t care about live migration or have a homogeneous
cluster where all nodes have the same CPU, set the CPU type to host, as in
theory this will give your guests maximum performance.
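
For example, on such a homogeneous cluster you could select the host type with:

----
# qm set <vmid> -cpu host
----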

Custom CPU Types
^^^^^^^^^^^^^^^^

You can specify custom CPU types with a configurable set of features. These are
maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
an administrator. See `man cpu-models.conf` for format details.

Specified custom types can be selected by any user with the `Sys.Audit`
privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
or API, the name needs to be prefixed with 'custom-'.
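
For example, assuming a model named 'mymodel' has been defined in
`cpu-models.conf` (the model name here is a hypothetical placeholder):

----
# qm set <vmid> -cpu custom-mymodel
----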

Meltdown / Spectre related CPU flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are several CPU flags related to the Meltdown and Spectre vulnerabilities
footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
manually unless the selected CPU type of your VM already enables them by default.

There are two requirements that need to be fulfilled in order to use these
CPU flags:

* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
* The guest operating system must be updated to a version which mitigates the
  attacks and is able to utilize the CPU feature

Otherwise you need to set the desired CPU flag of the virtual CPU, either by
editing the CPU options in the WebUI, or by setting the 'flags' property of the
'cpu' option in the VM configuration file.
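
For example, enabling the 'pcid' and 'spec-ctrl' flags (described below) on top
of the default kvm64 type could look like this in the VM configuration file:

----
cpu: kvm64,flags=+pcid;+spec-ctrl
----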

For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
so-called ``microcode update'' footnote:[You can use `intel-microcode' /
`amd-microcode' from Debian non-free if your vendor does not provide such an
update. Note that not all affected CPUs can be updated to support spec-ctrl.]
for your CPU.

To check if the {pve} host is vulnerable, execute the following command as root:

----
for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
----

A community script is also available to detect if the host is still vulnerable.
footnote:[spectre-meltdown-checker https://meltdown.ovh/]

Intel processors
^^^^^^^^^^^^^^^^

* 'pcid'
+
This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
mechanism footnote:[PCID is now a critical performance/security feature on x86
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
+
To check if the {pve} host supports PCID, execute the following command as root:
+
----
# grep ' pcid ' /proc/cpuinfo
----
+
If this does not return empty, your host's CPU has support for 'pcid'.

* 'spec-ctrl'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in Intel CPU models with -IBRS suffix.
Must be explicitly turned on for Intel CPU models without -IBRS suffix.
Requires an updated host CPU microcode (intel-microcode >= 20180425).
+
* 'ssbd'
+
Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).

AMD processors
^^^^^^^^^^^^^^

* 'ibpb'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in AMD CPU models with -IBPB suffix.
Must be explicitly turned on for AMD CPU models without -IBPB suffix.
Requires the host CPU microcode to support this feature before it can be used for guest CPUs.

* 'virt-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model.
Must be explicitly turned on for all AMD CPU models.
This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" cpu model,
because this is a virtual feature which does not exist in the physical CPUs.

* 'amd-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.

* 'amd-no-ssb'
+
Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
Not included by default in any AMD CPU model.
Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
This is mutually exclusive with virt-ssbd and amd-ssbd.

NUMA
^^^^
You can also optionally emulate a *NUMA*
footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
in your VMs. The basics of the NUMA architecture mean that instead of having a
global memory pool available to all your cores, the memory is spread into local
banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
will allow proper distribution of the VM resources on the host system.
This option is also required to hot-plug cores or RAM in a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of nodes of the host system.
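
For example, a sketch for a host with two NUMA nodes (the core count is a
placeholder):

----
# qm set <vmid> -numa 1 -sockets 2 -cores 4
----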

vCPU hot-plug
^^^^^^^^^^^^^

Modern operating systems introduced the capability to hot-plug and, to a
certain extent, hot-unplug CPUs in a running system. Virtualization allows us
to avoid a lot of the (physical) problems real hardware can cause in such
scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
be replicated with other, well tested and less complicated, features, see
xref:qm_cpu_resource_limits[Resource Limits].

In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with less than this total core count of CPUs you may use the
*vcpus* setting; it denotes how many vCPUs should be plugged in at VM start.

Currently, this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.

You can use a udev rule as follows to automatically set new CPUs as online in
the guest:

----
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
----

Save this under /etc/udev/rules.d/ as a file ending in `.rules`.
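
For example, to allow up to 8 vCPUs in total but plug in only 4 at VM start:

----
# qm set <vmid> -sockets 2 -cores 4 -vcpus 4
----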

Note: CPU hot-remove is machine dependent and requires guest cooperation. The
deletion command does not guarantee CPU removal to actually happen; typically
it's a request forwarded to the guest OS using a target dependent mechanism,
such as ACPI on x86/amd64.


[[qm_memory]]
Memory
~~~~~~

For each VM you have the option to set a fixed amount of memory, or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

.Fixed Memory Allocation
[thumbnail="screenshot/gui-create-vm-memory.png"]

When setting memory and minimum memory to the same amount,
{pve} will simply allocate what you specify to your VM.

Even when using a fixed memory size, the ballooning device gets added to the
VM, because it delivers useful information such as how much memory the guest
really uses.
In general, you should leave *ballooning* enabled, but if you want to disable
it (for example, for debugging purposes), simply uncheck *Ballooning Device* or set

 balloon: 0

in the configuration.

.Automatic Memory Allocation

// see autoballoon() in pvestatd.pm
When setting the minimum memory lower than memory, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified.

When the host is running low on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer in last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one is a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
* 80/100 - 16 = 9GB RAM to be allocated to the VMs. The database VM will get 9 *
3000 / (3000 + 1000 + 1000 + 1000) = 4.5 GB extra RAM and each HTTP server will
get 1.5 GB.
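
A minimal CLI sketch of such a database VM (sizes in MiB, all values are
placeholders):

----
# qm set <vmid> -memory 8192 -balloon 4096 -shares 3000
----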

All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.


[[qm_network_device]]
Network Device
~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-network.png"]

Each VM can have many _Network interface controllers_ (NIC), of four different
types:

 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
 * the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
only be used when emulating older operating systems (released before 2002)
 * the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

The NIC you added to the VM can follow one of two different models:

 * in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
 * in the alternative *NAT mode*, each virtual NIC will only communicate with
the Qemu user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP will serve addresses in the private
10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
should only be used for testing. This mode is only available via CLI or the API,
but not via the WebUI.

You can also skip adding a network device when creating a VM by selecting *No
network device*.

.Multiqueue
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set in
the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
command:

`ethtool -L ens1 combined X`

where X is the number of vCPUs of the VM.
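
For example, a sketch that sets four queues on the first NIC (the bridge name
is a placeholder):

----
# qm set <vmid> -net0 virtio,bridge=vmbr0,queues=4
----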

You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend to set this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.

[[qm_display]]
Display
~~~~~~~

QEMU can virtualize a few types of VGA hardware. Some examples are:

* *std*, the default, emulates a card with Bochs VBE extensions.
* *cirrus*, this was once the default, it emulates a very old hardware module
with all its problems. This display type should only be used if really
necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
qemu: using cirrus considered harmful], for example, if using Windows XP or
earlier
* *vmware*, is a VMWare SVGA-II compatible adapter.
* *qxl*, is the QXL paravirtualized graphics card. Selecting this also
enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
VM.
* *virtio-gl*, often named VirGL, is a virtual 3D GPU for use inside VMs that
  can offload workloads to the host GPU without requiring special (expensive)
  models and drivers, and without binding the host GPU completely, allowing
  reuse between multiple guests and/or the host.
+
NOTE: VirGL support needs some extra libraries that aren't installed by
default due to being relatively big and also not available as open source for
all GPU models/vendors. For most setups you'll just need to do:
`apt install libgl1 libegl1`

You can edit the amount of memory given to the virtual GPU by setting
the 'memory' option. This can enable higher resolutions inside the VM,
especially with SPICE/QXL.

As the memory is reserved by the display device, selecting Multi-Monitor mode
for SPICE (such as `qxl2` for dual monitors) has some implications:

* Windows needs a device for each monitor, so if your 'ostype' is some
version of Windows, {pve} gives the VM an extra device per monitor.
Each device gets the specified amount of memory.

* Linux VMs can always enable more virtual monitors, but selecting
a Multi-Monitor mode multiplies the memory given to the device with
the number of monitors.

Selecting `serialX` as display 'type' disables the VGA output, and redirects
the Web Console to the selected serial port. A configured display 'memory'
setting will be ignored in that case.
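
For example, a sketch for dual-monitor SPICE with 32 MiB of display memory:

----
# qm set <vmid> -vga qxl2,memory=32
----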

[[qm_usb_passthrough]]
USB Passthrough
~~~~~~~~~~~~~~~

There are two different types of USB passthrough devices:

* Host USB passthrough
* SPICE USB passthrough

Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.

The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same USB device
have the same id.

The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).

If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. This is useful
if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can passthrough a USB device from where your SPICE client is,
directly to the VM (for example an input device or hardware dongle).
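
For example, a sketch that passes a host device through by vendor/product-id
and adds a SPICE USB port (the id is a placeholder):

----
# qm set <vmid> -usb0 host=0123:abcd
# qm set <vmid> -usb1 spice
----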

[[qm_bios_and_uefi]]
BIOS and UEFI
~~~~~~~~~~~~~

In order to properly emulate a computer, QEMU needs to use firmware, which,
on common PCs, is often known as BIOS or (U)EFI. It is executed as one of the
first steps when booting a VM, and is responsible for doing basic hardware
initialization and for providing an interface to the firmware and hardware for
the operating system. By default QEMU uses *SeaBIOS* for this, which is an
open-source, x86 BIOS implementation. SeaBIOS is a good choice for most
standard setups.

Some operating systems (such as Windows 11) may require use of an UEFI
compatible implementation instead. In such cases, you must use *OVMF* instead,
which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]

There are other scenarios in which the SeaBIOS may not be the ideal firmware to
boot from, for example if you want to do VGA passthrough. footnote:[Alex
Williamson has a good blog entry about this
https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]

If you want to use OVMF, there are several things to consider:

In order to save things like the *boot order*, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

----
# qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
----

Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
hardware section of a VM.

The *efitype* option specifies which version of the OVMF firmware should be
used. For new VMs, this should always be '4m', as it supports Secure Boot and
has more space allocated to support future development (this is the default in
the GUI).

*pre-enrolled-keys* specifies if the efidisk should come pre-loaded with
distribution-specific and Microsoft Standard Secure Boot keys. It also enables
Secure Boot by default (though it can still be disabled in the OVMF menu within
the VM).

NOTE: If you want to start using Secure Boot in an existing VM (that still uses
a '2m' efidisk), you need to recreate the efidisk. To do so, delete the old one
(`qm set <vmid> -delete efidisk0`) and add a new one as described above. This
will reset any custom configurations you have made in the OVMF menu!

When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
with a press of the ESC button during boot), or you have to choose
SPICE as the display type.

[[qm_tpm]]
Trusted Platform Module (TPM)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A *Trusted Platform Module* is a device which stores secret data - such as
encryption keys - securely and provides tamper-resistance functions for
validating system boot.

Certain operating systems (such as Windows 11) require such a device to be
attached to a machine (be it physical or virtual).

A TPM is added by specifying a *tpmstate* volume. This works similarly to an
efidisk, in that it cannot be changed (only removed) once created. You can add
one via the following command:

----
# qm set <vmid> -tpmstate0 <storage>:1,version=<version>
----

Where *<storage>* is the storage you want to put the state on, and *<version>*
is either 'v1.2' or 'v2.0'. You can also add one via the web interface, by
choosing 'Add' -> 'TPM State' in the hardware section of a VM.

The 'v2.0' TPM spec is newer and better supported, so unless you have a specific
implementation that requires a 'v1.2' TPM, it should be preferred.

NOTE: Compared to a physical TPM, an emulated one does *not* provide any real
security benefits. The point of a TPM is that the data on it cannot be modified
easily, except via commands specified as part of the TPM spec. Since with an
emulated device the data storage happens on a regular volume, it can potentially
be edited by anyone with access to it.

[[qm_ivshmem]]
Inter-VM shared memory
~~~~~~~~~~~~~~~~~~~~~~

You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
share memory between the host and a guest, or also between multiple guests.

To add such a device, you can use `qm`:

----
# qm set <vmid> -ivshmem size=32,name=foo
----

Where the size is in MiB. The file will be located under
`/dev/shm/pve-shm-$name` (the default name is the vmid).

NOTE: Currently the device will get deleted as soon as any VM using it gets
shut down or stopped. Open connections will still persist, but new connections
to the exact same device cannot be made anymore.

A use case for such a device is the Looking Glass
footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
performance, low-latency display mirroring between host and guest.

[[qm_audio_device]]
Audio Device
~~~~~~~~~~~~

To add an audio device run the following command:

----
qm set <vmid> -audio0 device=<device>
----

Supported audio devices are:

* `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
* `intel-hda`: Intel HD Audio Controller, emulates ICH6
* `AC97`: Audio Codec '97, useful for older operating systems like Windows XP

There are two backends available:

* 'spice'
* 'none'

The 'spice' backend can be used in combination with xref:qm_display[SPICE] while
the 'none' backend can be useful if an audio device is needed in the VM for some
software to work. To use the physical audio device of the host use device
passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft’s RDP
have options to play sound.

[[qm_virtio_rng]]
VirtIO RNG
~~~~~~~~~~

An RNG (Random Number Generator) is a device providing entropy ('randomness') to
a system. A virtual hardware-RNG can be used to provide such entropy from the
host system to a guest VM. This helps to avoid entropy starvation problems in
the guest (a situation where not enough entropy is available and the system may
slow down or run into problems), especially during the guest's boot process.

To add a VirtIO-based emulated RNG, run the following command:

----
qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
----

`source` specifies where entropy is read from on the host and has to be one of
the following:

* `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
* `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
  starvation on the host system)
* `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
  are available, the one selected in
  `/sys/devices/virtual/misc/hw_random/rng_current` will be used)

A limit can be specified via the `max_bytes` and `period` parameters, they are
read as `max_bytes` per `period` in milliseconds. However, it does not represent
a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
available on a 1 second timer, not that 1 KiB is streamed to the guest over the
course of one second. Reducing the `period` can thus be used to inject entropy
into the guest at a faster rate.

By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
recommended to always use a limiter to avoid guests using too many host
resources. If desired, a value of '0' for `max_bytes` can be used to disable
all limits.
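
For example, a sketch that uses the non-blocking kernel pool and makes the
default limit explicit:

----
qm set <vmid> -rng0 source=/dev/urandom,max_bytes=1024,period=1000
----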

[[qm_bootorder]]
Device Boot Order
~~~~~~~~~~~~~~~~~

QEMU can tell the guest which devices it should boot from, and in which order.
This can be specified in the config via the `boot` property, for example:

----
boot: order=scsi0;net0;hostpci0
----
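
The same order could also be set via the CLI (quoting guards the semicolons
from the shell):

----
# qm set <vmid> -boot 'order=scsi0;net0;hostpci0'
----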

[thumbnail="screenshot/gui-qemu-edit-bootorder.png"]

This way, the guest would first attempt to boot from the disk `scsi0`, if that
fails, it would go on to attempt network boot from `net0`, and in case that
fails too, finally attempt to boot from a passed through PCIe device (seen as
disk in case of NVMe, otherwise tries to launch into an option ROM).

On the GUI you can use a drag-and-drop editor to specify the boot order, and use
the checkbox to enable or disable certain devices for booting altogether.

NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
all of them must be marked as 'bootable' (that is, they must have the checkbox
enabled or appear in the list in the config) for the guest to be able to boot.
This is because recent SeaBIOS and OVMF versions only initialize disks if they
are marked 'bootable'.

In any case, even devices not appearing in the list or having the checkmark
disabled will still be available to the guest, once its operating system has
booted and initialized them. The 'bootable' flag only affects the guest BIOS and
bootloader.

[[qm_startup_and_shutdown]]
Automatic Start and Shutdown of Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After creating your VMs, you probably want them to start automatically
when the host system boots. For this you need to select the option 'Start at
boot' from the 'Options' Tab of your VM in the web interface, or set it with
the following command:

----
# qm set <vmid> -onboot 1
----

.Start and Shutdown Order

[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the VM to be the first to be started. (We use the reverse
startup order for shutdown, so a machine with a start order of 1 would be the
last to be shut down). If multiple VMs have the same order defined on a host,
they will additionally be ordered by 'VMID' in ascending order.
* *Startup delay*: Defines the interval between this VM start and subsequent
VMs starts. For example, set it to 240 if you want to wait 240 seconds before
starting other VMs.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the VM to be offline after issuing a shutdown command. By default this
value is set to 180, which means that {pve} will issue a shutdown request and
wait 180 seconds for the machine to be offline. If the machine is still online
after the timeout it will be stopped forcefully.
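
For example, all three parameters can be combined in the `startup` property
(values are placeholders):

----
# qm set <vmid> -startup order=1,up=240,down=180
----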

NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
'boot order' options currently. Those VMs will be skipped by the startup and
shutdown algorithm as the HA manager itself ensures that VMs get started and
stopped.

Please note that machines without a Start/Shutdown order parameter will always
start after those where the parameter is set. Further, this parameter can only
be enforced between virtual machines running on the same host, not
cluster-wide.

If you require a delay between the host boot and the booting of the first VM,
see the section on xref:first_guest_boot_delay[Proxmox VE Node Management].


[[qm_qemu_agent]]
Qemu Guest Agent
~~~~~~~~~~~~~~~~

The Qemu Guest Agent is a service which runs inside the VM, providing a
communication channel between the host and the guest. It is used to exchange
information and allows the host to issue commands to the guest.

For example, the IP addresses in the VM summary panel are fetched via the guest
agent.

Or when starting a backup, the guest is told via the guest agent to sync
outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.

For the guest agent to work properly the following steps must be taken:

* install the agent in the guest and make sure it is running
* enable the communication via the agent in {pve}

Install Guest Agent
^^^^^^^^^^^^^^^^^^^

For most Linux distributions, the guest agent is available. The package is
usually named `qemu-guest-agent`.

For Windows, it can be installed from the
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
VirtIO driver ISO].

Enable Guest Agent Communication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Communication from {pve} with the guest agent can be enabled in the VM's
*Options* panel. A fresh start of the VM is necessary for the changes to take
effect.

It is possible to enable the 'Run guest-trim' option. With this enabled,
{pve} will issue a trim command to the guest after the following
operations that have the potential to write out zeros to the storage:

* moving a disk to another storage
* live migrating a VM to another node with local storage

On a thin provisioned storage, this can help to free up unused space.
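
For example, a sketch that enables agent communication together with the
'Run guest-trim' option via the CLI (`fstrim_cloned_disks` corresponds to
'Run guest-trim'):

----
# qm set <vmid> -agent enabled=1,fstrim_cloned_disks=1
----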

Troubleshooting
^^^^^^^^^^^^^^^

.VM does not shut down

Make sure the guest agent is installed and running.

Once the guest agent is enabled, {pve} will send power commands like
'shutdown' via the guest agent. If the guest agent is not running, commands
cannot get executed properly and the shutdown command will run into a timeout.

[[qm_spice_enhancements]]
SPICE Enhancements
~~~~~~~~~~~~~~~~~~

SPICE Enhancements are optional features that can improve the remote viewer
experience.

To enable them via the GUI go to the *Options* panel of the virtual machine. Run
the following command to enable them via the CLI:

----
qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
----

NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
must be set to SPICE (qxl).

Folder Sharing
^^^^^^^^^^^^^^

Share a local folder with the guest. The `spice-webdavd` daemon needs to be
installed in the guest. It makes the shared folder available through a local
WebDAV server located at http://localhost:9843.

For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded
from the
https://www.spice-space.org/download.html#windows-binaries[official SPICE website].

Most Linux distributions have a package called `spice-webdavd` that can be
installed.

To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
Select the folder to share and then enable the checkbox.

NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.

CAUTION: Experimental! Currently this feature does not work reliably.

Video Streaming
^^^^^^^^^^^^^^^

Fast refreshing areas are encoded into a video stream. Two options exist:

* *all*: Any fast refreshing area will be encoded into a video stream.
* *filter*: Additional filters are used to decide if video streaming should be
  used (currently only small window surfaces are skipped).

A general recommendation on whether video streaming should be enabled, and
which option to choose, cannot be given. Your mileage may vary depending on the
specific circumstances.

Troubleshooting
^^^^^^^^^^^^^^^

.Shared folder does not show up

Make sure the WebDAV service is enabled and running in the guest. On Windows it
is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be
different depending on the distribution.

If the service is running, check the WebDAV server by opening
http://localhost:9843 in a browser in the guest.

It can help to restart the SPICE session.
[[qm_migration]]
Migration
---------

[thumbnail="screenshot/gui-qemu-migrate.png"]

If you have a cluster, you can migrate your VM to another host with

----
# qm migrate <vmid> <target>
----

There are generally two mechanisms for this:

* Online Migration (aka Live Migration)
* Offline Migration

Online Migration
~~~~~~~~~~~~~~~~

If your VM is running and no locally bound resources are configured (such as
passed-through devices), you can initiate a live migration with the `--online`
flag in the `qm migrate` command invocation. The web-interface defaults to
live migration when the VM is running.
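
For example, to live migrate a hypothetical VM 100 to a node named `node2`:

----
# qm migrate 100 node2 --online
----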

How it works
^^^^^^^^^^^^

Online migration first starts a new QEMU process on the target host with the
'incoming' flag, which performs only basic initialization with the guest vCPUs
still paused, and then waits for the guest memory and device state data streams
of the source Virtual Machine.
All other resources, such as disks, are either shared or were already sent
before the runtime state migration of the VM begins; so only the memory content
and device state remain to be transferred.

Once this connection is established, the source begins asynchronously sending
the memory content to the target. If the guest memory on the source changes,
those sections are marked dirty and another pass is made to send the guest
memory data.
This loop is repeated until the data difference between the running source VM
and the incoming target VM is small enough to be sent in a few milliseconds.
At that point the source VM is paused completely, without a user or program
noticing the pause, the remaining data is sent to the target, and the target
VM's CPU is unpaused to make it the new running VM, all in well under a second.

Requirements
^^^^^^^^^^^^

For Live Migration to work, there are some things required:

* The VM has no local resources that cannot be migrated. For example,
  PCI or USB devices that are passed through currently block live-migration.
  Local disks, on the other hand, can be migrated by sending them to the target
  just fine.
* The hosts are located in the same {pve} cluster.
* The hosts have a working (and reliable) network connection between them.
* The target host must have the same, or higher versions of the
  {pve} packages. Although it can sometimes work the other way around, this
  cannot be guaranteed.
* The hosts have CPUs from the same vendor with similar capabilities. A
  different vendor *might* work depending on the actual models and the VM's
  configured CPU type, but it cannot be guaranteed - so please test before
  deploying such a setup in production.

Offline Migration
~~~~~~~~~~~~~~~~~

If you have local resources, you can still migrate your VMs offline as long as
all disks are on storages defined on both hosts.
Migration then copies the disks to the target host over the network, as with
online migration. Note that any hardware pass-through configuration may need to
be adapted to the device location on the target host.

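The command is the same as for online migration; with the VM powered off and
without the `--online` flag, {pve} performs an offline migration (a sketch,
assuming a hypothetical VM 100 and target node `node2`):

----
# qm migrate 100 node2
----
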
// TODO: mention hardware map IDs as better way to solve that, once available

[[qm_copy_and_clone]]
Copies and Clones
-----------------

[thumbnail="screenshot/gui-qemu-full-clone.png"]

VM installation is usually done using an installation medium (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time-consuming task one might want to avoid.

An easy way to deploy many VMs of the same type is to copy an existing
VM. We use the term 'clone' for such copies, and distinguish between
'linked' and 'full' clones.

Full Clone::

The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
+

It is possible to select a *Target Storage*, so one can use this to
migrate a VM to a totally different storage. You can also change the
disk image *Format* if the storage driver supports several formats.
+

NOTE: A full clone needs to read and copy all VM image data. This is
usually much slower than creating a linked clone.
+

Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.

Linked Clone::

Modern storage drivers support a way to generate fast linked
clones. Such a clone is a writable copy whose initial contents are the
same as the original data. Creating a linked clone is nearly
instantaneous, and initially consumes no additional space.
+

They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
location. This technique is called 'Copy-on-write'.
+

This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
+

NOTE: You cannot delete an original template while linked clones
exist.
+

It is not possible to change the *Target storage* for linked clones,
because this is a storage-internal feature.


The *Target node* option allows you to create the new VM on a
different node. The only restriction is that the VM is on shared
storage, and that storage is also available on the target node.

To avoid resource conflicts, all network interface MAC addresses get
randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
setting.

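On the CLI, clones are created with `qm clone`; for example, a full clone of a
hypothetical VM 100 as the new VM 123 (omitting `--full` creates a linked
clone, where the storage supports it):

----
# qm clone 100 123 --name cloned-vm --full
----
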

[[qm_templates]]
Virtual Machine Templates
-------------------------

One can convert a VM into a Template. Such templates are read-only,
and you can use them to create linked clones.

NOTE: It is not possible to start templates, because this would modify
the disk images. If you want to change the template, create a linked
clone and modify that.

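The conversion can be done in the GUI or on the CLI; a minimal sketch, assuming
a hypothetical VM 100 that is shut down:

----
# qm template 100
----
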
VM Generation ID
----------------

{pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
'vmgenid' Specification
https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
for virtual machines.
This can be used by the guest operating system to detect any event resulting
in a time shift, for example, restoring a backup or a snapshot rollback.

When creating new VMs, a 'vmgenid' will be automatically generated and saved
in its configuration file.

To create and add a 'vmgenid' to an already existing VM one can pass the
special value `1' to let {pve} autogenerate one or manually set the 'UUID'
footnote:[Online GUID generator http://guid.one/] by using it as value, for
example:

----
# qm set VMID -vmgenid 1
# qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
----

NOTE: The initial addition of a 'vmgenid' device to an existing VM may result
in the same effects as a snapshot rollback or backup restore, as the VM can
interpret this as a generation change.

In the rare case the 'vmgenid' mechanism is not wanted, one can pass `0' for
its value on VM creation, or retroactively delete the property in the
configuration with:

----
# qm set VMID -delete vmgenid
----

The most prominent use case for 'vmgenid' is newer Microsoft Windows
operating systems, which use it to avoid problems in time-sensitive or
replicated services (such as databases or domain controllers
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole VM clone operation.

Importing Virtual Machines and disk images
------------------------------------------

A VM export from a foreign hypervisor usually takes the form of one or more disk
images, with a configuration file describing the settings of the VM (RAM,
number of cores). +
The disk images can be in the vmdk format, if the disks come from
VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
The most popular configuration format for VM exports is the OVF standard, but in
practice interoperation is limited because many settings are not implemented in
the standard itself, and hypervisors export the supplementary information
in non-standard extensions.

Besides the problem of format, importing disk images from other hypervisors
may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
installing the MergeIDE.zip utility available from the Internet before exporting
and choosing a hard disk type of *IDE* before booting the imported Windows VM.

Finally there is the question of paravirtualized drivers, which improve the
speed of the emulated system and are specific to the hypervisor.
GNU/Linux and other free Unix OSes have all the necessary drivers installed by
default and you can switch to the paravirtualized drivers right after importing
the VM. For Windows VMs, you need to install the Windows paravirtualized
drivers by yourself.

GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
that we cannot guarantee a successful import/export of Windows VMs in all
cases due to the problems above.

Step-by-step example of a Windows OVF import
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.

Download the Virtual Machine zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After reviewing the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.

Extract the disk image from the zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy the ovf and vmdk files via ssh/scp to your {pve} host.

Import the Virtual Machine
^^^^^^^^^^^^^^^^^^^^^^^^^^

This will create a new virtual machine, using cores, memory and
VM name as read from the OVF manifest, and import the disks to the +local-lvm+
storage. You have to configure the network manually.

----
# qm importovf 999 WinDev1709Eval.ovf local-lvm
----

The VM is ready to be started.

Adding an external disk image to a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.

Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:

 vmdebootstrap --verbose \
  --size 10GiB --serial-console \
  --grub --no-extlinux \
  --package openssh-server \
  --package avahi-daemon \
  --package qemu-guest-agent \
  --hostname vm600 --enable-dhcp \
  --customize=./copy_pub_ssh.sh \
  --sparse --image vm600.raw

You can now create a new target VM, importing the image to the storage `pvedir`
and attaching it to the VM's SCSI controller:

----
# qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
   --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
   --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
----

The VM is ready to be started.


ifndef::wiki[]
include::qm-cloud-init.adoc[]
endif::wiki[]

ifndef::wiki[]
include::qm-pci-passthrough.adoc[]
endif::wiki[]

Hookscripts
-----------

You can add a hook script to VMs with the config property `hookscript`.

----
# qm set 100 --hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime.
For an example and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

[[qm_hibernate]]
Hibernation
-----------

You can suspend a VM to disk with the GUI option `Hibernate` or with

----
# qm suspend ID --todisk
----

That means that the current content of the memory will be saved onto disk
and the VM gets stopped. On the next start, the memory content will be
loaded and the VM can continue where it left off.

[[qm_vmstatestorage]]
.State storage selection
If no target storage for the memory is given, it will be automatically
chosen, the first of:

1. The storage `vmstatestorage` from the VM config.
2. The first shared storage from any VM disk.
3. The first non-shared storage from any VM disk.
4. The storage `local` as a fallback.

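The preferred state storage can also be set explicitly in the VM configuration;
a sketch, assuming a hypothetical VM 100 and a storage named `local-lvm`:

----
# qm set 100 --vmstatestorage local-lvm
----
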
Managing Virtual Machines with `qm`
-----------------------------------

qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Using an iso file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage:

----
# qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
----

Start the new VM:

----
# qm start 300
----

Send a shutdown request, then wait until the VM is stopped.

----
# qm shutdown 300 && qm wait 300
----

Same as above, but only wait for 40 seconds.

----
# qm shutdown 300 && qm wait 300 -timeout 40
----

Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
'--purge' if you want to additionally remove the VM from replication jobs,
backup jobs and HA resource configurations.

----
# qm destroy 300 --purge
----

Move a disk image to a different storage.

----
# qm move-disk 300 scsi0 other-storage
----

Reassign a disk image to a different VM. This will remove the disk `scsi1` from
the source VM and attach it as `scsi3` to the target VM. In the background
the disk image is being renamed so that the name matches the new owner.

----
# qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
----


[[qm_configuration]]
Configuration
-------------

VM configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
Like other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster wide.

.Example VM Configuration
----
boot: order=virtio0;net0
cores: 1
sockets: 1
memory: 512
name: webmail
ostype: l26
net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
virtio0: local:vm-100-disk-1,size=32G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful to do small corrections, but keep in mind that you need to
restart the VM to apply such changes.

For that reason, it is usually better to use the `qm` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to a
running VM. This feature is called "hot plug", and there is no
need to restart the VM in that case.


File Format
~~~~~~~~~~~

VM configuration files use a simple colon-separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.

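To inspect the currently parsed configuration of a VM without opening the file,
`qm config` can be used (assuming a hypothetical VM 100):

----
# qm config 100
----
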

[[qm_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `qm` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.VM configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).

You can optionally save the memory of a running VM with the option `vmstate`.
For details about how the target storage gets chosen for the VM state, see
xref:qm_vmstatestorage[State storage selection] in the chapter
xref:qm_hibernate[Hibernation].

[[qm_options]]
Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected VMs. Sometimes you need to
remove such a lock manually (for example after a power failure).

----
# qm unlock <vmid>
----

CAUTION: Only do that if you are sure the action which set the lock is
no longer running.


ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Cloud-Init_Support[Cloud-Init Support]

endif::wiki[]


ifdef::manvolnum[]

Files
-----

`/etc/pve/qemu-server/<VMID>.conf`::

Configuration file for the VM '<VMID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]