1 [[chapter_virtual_machines]]
2 ifdef::manvolnum[]
3 qm(1)
4 =====
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 qm - QEMU/KVM Virtual Machine Manager
11
12
13 SYNOPSIS
14 --------
15
16 include::qm.1-synopsis.adoc[]
17
18 DESCRIPTION
19 -----------
20 endif::manvolnum[]
21 ifndef::manvolnum[]
22 QEMU/KVM Virtual Machines
23 =========================
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 // deprecates
28 // http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
29 // http://pve.proxmox.com/wiki/KVM
30 // http://pve.proxmox.com/wiki/Qemu_Server
31
32 QEMU (short form for Quick Emulator) is an open source hypervisor that emulates a
33 physical computer. From the perspective of the host system where QEMU is
34 running, QEMU is a user program which has access to a number of local resources
35 like partitions, files and network cards, which are then passed to an
36 emulated computer which sees them as if they were real devices.
37
38 A guest operating system running in the emulated computer accesses these
39 devices, and runs as if it were running on real hardware. For instance, you can pass
40 an ISO image as a parameter to QEMU, and the OS running in the emulated computer
41 will see a real CD-ROM inserted into a CD drive.
42
43 QEMU can emulate a great variety of hardware from ARM to Sparc, but {pve} is
44 only concerned with 32 and 64 bit PC clone emulation, since it represents the
45 overwhelming majority of server hardware. The emulation of PC clones is also one
46 of the fastest due to the availability of processor extensions which greatly
47 speed up QEMU when the emulated architecture is the same as the host
48 architecture.
49
50 NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
51 It means that QEMU is running with the support of the virtualization processor
52 extensions, via the Linux KVM module. In the context of {pve} _QEMU_ and
53 _KVM_ can be used interchangeably, as QEMU in {pve} will always try to load the KVM
54 module.
55
56 QEMU inside {pve} runs as a root process, since this is required to access block
57 and PCI devices.
58
59
60 Emulated devices and paravirtualized devices
61 --------------------------------------------
62
63 The PC hardware emulated by QEMU includes a motherboard, network controllers,
64 SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
65 the `kvm(1)` man page) all of them emulated in software. All these devices
66 are the exact software equivalent of existing hardware devices, and if the OS
67 running in the guest has the proper drivers it will use the devices as if it
68 were running on real hardware. This allows QEMU to run _unmodified_ operating
69 systems.
70
71 This however has a performance cost, as running in software what was meant to
72 run in hardware involves a lot of extra work for the host CPU. To mitigate this,
73 QEMU can present to the guest operating system _paravirtualized devices_, where
74 the guest OS recognizes it is running inside QEMU and cooperates with the
75 hypervisor.
76
77 QEMU relies on the virtio virtualization standard, and is thus able to present
78 paravirtualized virtio devices, which includes a paravirtualized generic disk
79 controller, a paravirtualized network card, a paravirtualized serial port,
80 a paravirtualized SCSI controller, etc ...
81
82 TIP: It is *highly recommended* to use the virtio devices whenever you can, as
83 they provide a big performance improvement and are generally better maintained.
84 Using the virtio generic disk controller versus an emulated IDE controller will
85 double the sequential write throughput, as measured with `bonnie++(8)`. Using
86 the virtio network interface can deliver up to three times the throughput of an
87 emulated Intel E1000 network card, as measured with `iperf(1)`. footnote:[See
88 this benchmark on the KVM wiki https://www.linux-kvm.org/page/Using_VirtIO_NIC]
89
90
91 [[qm_virtual_machines_settings]]
92 Virtual Machines Settings
93 -------------------------
94
95 Generally speaking {pve} tries to choose sane defaults for virtual machines
96 (VM). Make sure you understand the meaning of the settings you change, as a
97 misconfiguration could incur a performance slowdown or put your data at risk.
98
99
100 [[qm_general_settings]]
101 General Settings
102 ~~~~~~~~~~~~~~~~
103
104 [thumbnail="screenshot/gui-create-vm-general.png"]
105
106 General settings of a VM include
107
108 * the *Node* : the physical server on which the VM will run
109 * the *VM ID*: a unique number in this {pve} installation used to identify your VM
110 * *Name*: a free form text string you can use to describe the VM
111 * *Resource Pool*: a logical group of VMs
112
113
114 [[qm_os_settings]]
115 OS Settings
116 ~~~~~~~~~~~
117
118 [thumbnail="screenshot/gui-create-vm-os.png"]
119
120 When creating a virtual machine (VM), setting the proper Operating System (OS)
121 allows {pve} to optimize some low level parameters. For instance, Windows OSes
122 expect the BIOS clock to use the local time, while Unix based OSes expect the
123 BIOS clock to use UTC time.
124
125 [[qm_system_settings]]
126 System Settings
127 ~~~~~~~~~~~~~~~
128
129 On VM creation you can change some basic system components of the new VM. You
130 can specify which xref:qm_display[display type] you want to use.
131 [thumbnail="screenshot/gui-create-vm-system.png"]
132 Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
133 If you plan to install the QEMU Guest Agent, or if your selected ISO image
134 already ships and installs it automatically, you may want to tick the 'QEMU
135 Agent' box, which lets {pve} know that it can use its features to show some
136 more information, and complete some actions (for example, shutdown or
137 snapshots) more intelligently.
138
139 {pve} allows you to boot VMs with different firmware and machine types, namely
140 xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
141 the default SeaBIOS to OVMF only if you plan to use
142 xref:qm_pci_passthrough[PCIe passthrough].
143
144 [[qm_machine_type]]
145
146 Machine Type
147 ^^^^^^^^^^^^
148
149 A VM's 'Machine Type' defines the hardware layout of the VM's virtual
150 motherboard. You can choose between the default
151 https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
152 https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
153 chipset, which also provides a virtual PCIe bus, and thus may be
154 desired if you want to pass through PCIe hardware.
155
156 Machine Version
157 +++++++++++++++
158
159 Each machine type is versioned in QEMU and a given QEMU binary supports many
160 machine versions. New versions might bring support for new features, fixes or
161 general improvements. However, they also change properties of the virtual
162 hardware. To avoid sudden changes from the guest's perspective and ensure
163 compatibility of the VM state, live-migration and snapshots with RAM will keep
164 using the same machine version in the new QEMU instance.
165
166 For Windows guests, the machine version is pinned during creation, because
167 Windows is sensitive to changes in the virtual hardware - even between cold
168 boots. For example, the enumeration of network devices might be different with
169 different machine versions. Other OSes like Linux can usually deal with such
170 changes just fine. For those, the 'Latest' machine version is used by default.
171 This means that after a fresh start, the newest machine version supported by the
172 QEMU binary is used (e.g. the newest machine version QEMU 8.1 supports is
173 version 8.1 for each machine type).
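For example, the machine type of an existing VM can be changed via the CLI as
sketched below (here `<vmid>` is a placeholder, and the available machine types
and versions depend on the installed QEMU version):

----
# qm set <vmid> --machine q35
----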
174
175 [[qm_machine_update]]
176
177 Update to a Newer Machine Version
178 +++++++++++++++++++++++++++++++++
179
180 Very old machine versions might become deprecated in QEMU. For example, this is
181 the case for versions 1.4 to 1.7 for the i440fx machine type. It is expected
182 that support for these machine versions will be dropped at some point. If you
183 see a deprecation warning, you should change the machine version to a newer one.
184 Be sure to have a working backup first and be prepared for changes to how the
185 guest sees hardware. In some scenarios, re-installing certain drivers might be
186 required. You should also check for snapshots with RAM that were taken with
187 these machine versions (i.e. the `runningmachine` configuration entry).
188 Unfortunately, there is no way to change the machine version of a snapshot, so
189 you'd need to load the snapshot to salvage any data from it.
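Changing the machine version uses the same `machine` option. The following is
only a sketch; the version string shown is an example and must be one that the
installed QEMU binary actually supports:

----
# qm set <vmid> --machine pc-i440fx-8.1
----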
190
191 [[qm_hard_disk]]
192 Hard Disk
193 ~~~~~~~~~
194
195 [[qm_hard_disk_bus]]
196 Bus/Controller
197 ^^^^^^^^^^^^^^
198 QEMU can emulate a number of storage controllers:
199
200 TIP: It is highly recommended to use the *VirtIO SCSI* or *VirtIO Block*
201 controller for performance reasons and because they are better maintained.
202
203 * the *IDE* controller has a design which goes back to the 1984 PC/AT disk
204 controller. Even if this controller has been superseded by recent designs,
205 each and every OS you can think of has support for it, making it a great choice
206 if you want to run an OS released before 2003. You can connect up to 4 devices
207 on this controller.
208
209 * the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
210 design, allowing higher throughput and a greater number of devices to be
211 connected. You can connect up to 6 devices on this controller.
212
213 * the *SCSI* controller, designed in 1985, is commonly found on server grade
214 hardware, and can connect up to 14 storage devices. {pve} emulates by default an
215 LSI 53C895A controller.
216 +
217 A SCSI controller of type _VirtIO SCSI single_ and enabling the
218 xref:qm_hard_disk_iothread[IO Thread] setting for the attached disks is
219 recommended if you aim for performance. This is the default for newly created
220 Linux VMs since {pve} 7.3. Each disk will have its own _VirtIO SCSI_ controller,
221 and QEMU will handle the disks' IO in a dedicated thread. Linux distributions
222 have support for this controller since 2012, and FreeBSD since 2014. For Windows
223 OSes, you need to provide an extra ISO containing the drivers during the
224 installation.
225 // https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
226
227 * The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
228 is an older type of paravirtualized controller. It has been superseded by the
229 VirtIO SCSI Controller, in terms of features.
230
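For example, a new disk using the recommended _VirtIO SCSI single_ controller
could be added via the CLI as sketched below (the storage name 'local-lvm' and
the 32 GiB size are only illustrative values):

----
# qm set <vmid> --scsihw virtio-scsi-single --scsi0 local-lvm:32,iothread=1
----
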
231 [thumbnail="screenshot/gui-create-vm-hard-disk.png"]
232
233 [[qm_hard_disk_formats]]
234 Image Format
235 ^^^^^^^^^^^^
236 On each controller you attach a number of emulated hard disks, which are backed
237 by a file or a block device residing in the configured storage. The choice of
238 a storage type will determine the format of the hard disk image. Storages which
239 present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
240 whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
241 either the *raw disk image format* or the *QEMU image format*.
242
243 * the *QEMU image format* is a copy on write format which allows snapshots, and
244 thin provisioning of the disk image.
245 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
246 you would get when executing the `dd` command on a block device in Linux. This
247 format does not support thin provisioning or snapshots by itself, requiring
248 cooperation from the storage layer for these tasks. It may, however, be up to
249 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
250 https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
251 * the *VMware image format* only makes sense if you intend to import/export the
252 disk image to other hypervisors.
253
254 [[qm_hard_disk_cache]]
255 Cache Mode
256 ^^^^^^^^^^
257 Setting the *Cache* mode of the hard drive will impact how the host system will
258 notify the guest systems of block write completions. The *No cache* default
259 means that the guest system will be notified that a write is complete when each
260 block reaches the physical storage write queue, ignoring the host page cache.
261 This provides a good balance between safety and speed.
262
263 If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
264 you can set the *No backup* option on that disk.
265
266 If you want the {pve} storage replication mechanism to skip a disk when starting
267 a replication job, you can set the *Skip replication* option on that disk.
268 As of {pve} 5.0, replication requires the disk images to be on a storage of type
269 `zfspool`, so adding a disk image to other storages when the VM has replication
270 configured requires skipping replication for this disk image.
271
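As a sketch, both options can also be set on an existing disk via the CLI by
re-specifying its volume together with the desired flags (the volume name below
is only an example):

----
# qm set <vmid> --scsi1 local-lvm:vm-<vmid>-disk-1,backup=0,replicate=0
----
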
272 [[qm_hard_disk_discard]]
273 Trim/Discard
274 ^^^^^^^^^^^^
275 If your storage supports _thin provisioning_ (see the storage chapter in the
276 {pve} guide), you can activate the *Discard* option on a drive. With *Discard*
277 set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
278 https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
279 marks blocks as unused after deleting files, the controller will relay this
280 information to the storage, which will then shrink the disk image accordingly.
281 For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
282 option on the drive. Some guest operating systems may also require the
283 *SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
284 only supported on guests using Linux Kernel 5.0 or higher.
285
286 If you would like a drive to be presented to the guest as a solid-state drive
287 rather than a rotational hard disk, you can set the *SSD emulation* option on
288 that drive. There is no requirement that the underlying storage actually be
289 backed by SSDs; this feature can be used with physical media of any type.
290 Note that *SSD emulation* is not supported on *VirtIO Block* drives.
291
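For example, *Discard* and *SSD emulation* correspond to the `discard` and `ssd`
disk options. A sketch of enabling both on an existing disk (the volume name is
only an example):

----
# qm set <vmid> --scsi0 local-lvm:vm-<vmid>-disk-0,discard=on,ssd=1
----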
292
293 [[qm_hard_disk_iothread]]
294 IO Thread
295 ^^^^^^^^^
296 The option *IO Thread* can only be used when using a disk with the *VirtIO*
297 controller, or with the *SCSI* controller, when the emulated controller type is
298 *VirtIO SCSI single*. With *IO Thread* enabled, QEMU creates one I/O thread per
299 storage controller rather than handling all I/O in the main event loop or vCPU
300 threads. One benefit is better work distribution and utilization of the
301 underlying storage. Another benefit is reduced latency (hangs) in the guest for
302 very I/O-intensive host workloads, since neither the main thread nor a vCPU
303 thread can be blocked by disk I/O.
304
305 [[qm_cpu]]
306 CPU
307 ~~~
308
309 [thumbnail="screenshot/gui-create-vm-cpu.png"]
310
311 A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
312 This CPU can then contain one or many *cores*, which are independent
313 processing units. Whether you have a single CPU socket with 4 cores, or two CPU
314 sockets with two cores is mostly irrelevant from a performance point of view.
315 However some software licenses depend on the number of sockets a machine has,
316 in that case it makes sense to set the number of sockets to what the license
317 allows you.
318
319 Increasing the number of virtual CPUs (cores and sockets) will usually provide a
320 performance improvement, though that is heavily dependent on the use of the VM.
321 Multi-threaded applications will of course benefit from a large number of
322 virtual CPUs, as for each virtual CPU you add, QEMU will create a new thread of
323 execution on the host system. If you're not sure about the workload of your VM,
324 it is usually a safe bet to set the number of *Total cores* to 2.
325
326 NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
327 is greater than the number of cores on the server (for example, 4 VMs each with
328 4 cores (= total 16) on a machine with only 8 cores). In that case the host
329 system will balance the QEMU execution threads between your server cores, just
330 like if you were running a standard multi-threaded application. However, {pve}
331 will prevent you from starting VMs with more virtual CPU cores than physically
332 available, as this will only bring the performance down due to the cost of
333 context switches.
334
335 [[qm_cpu_resource_limits]]
336 Resource Limits
337 ^^^^^^^^^^^^^^^
338
339 In addition to the number of virtual cores, you can configure how many resources
340 a VM can get in relation to the host CPU time and also in relation to other
341 VMs.
342 With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
343 the whole VM can use on the host. It is a floating point value representing CPU
344 time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
345 single process would fully use one single core it would have `100%` CPU Time
346 usage. If a VM with four cores utilizes all its cores fully it would
347 theoretically use `400%`. In reality the usage may be even a bit higher as QEMU
348 can have additional threads for VM peripherals besides the vCPU core ones.
349 This setting can be useful if a VM should have multiple vCPUs, as it runs a few
350 processes in parallel, but the VM as a whole should not be able to run all
351 vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
352 which would profit from having 8 vCPUs, but at no time should all of those 8 cores
353 run at full load - as this would make the server so overloaded that
354 other VMs and CTs would get too little CPU. So, we set the *cpulimit* limit to
355 `4.0` (=400%). If all cores do the same heavy work they would all get 50% of a
356 real host core's CPU time. But, if only 4 would do work they could still get
357 almost 100% of a real core each.
358
359 NOTE: VMs can, depending on their configuration, use additional threads, such
360 as for networking or IO operations but also live migration. Thus a VM can show
361 up as using more CPU time than just its virtual CPUs could use. To ensure that a
362 VM never uses more CPU time than the virtual CPUs assigned, set the *cpulimit*
363 setting to the same value as the total core count.
364
365 The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
366 shares or CPU weight), controls how much CPU time a VM gets compared to other
367 running VMs. It is a relative weight which defaults to `100` (or `1024` if the
368 host uses legacy cgroup v1). If you increase this for a VM it will be
369 prioritized by the scheduler in comparison to other VMs with lower weight. For
370 example, if VM 100 has set the default `100` and VM 200 was changed to `200`,
371 the latter VM 200 would receive twice the CPU bandwidth of the first VM 100.
372
373 For more information see `man systemd.resource-control`. There, `CPUQuota`
374 corresponds to `cpulimit` and `CPUWeight` corresponds to our `cpuunits`
375 setting; visit its Notes section for references and implementation details.
376
377 The third CPU resource limiting setting, *affinity*, controls what host cores
378 the virtual machine will be permitted to execute on. E.g., if an affinity value
379 of `0-3,8-11` is provided, the virtual machine will be restricted to using the
380 host cores `0,1,2,3,8,9,10`, and `11`. Valid *affinity* values are written in
381 cpuset `List Format`. List Format is a comma-separated list of CPU numbers and
382 ranges of numbers, in ASCII decimal.
383
384 NOTE: CPU *affinity* uses the `taskset` command to restrict virtual machines to
385 a given set of cores. This restriction will not take effect for some types of
386 processes that may be created for IO. *CPU affinity is not a security feature.*
387
388 For more information regarding *affinity* see `man cpuset`. Here the
389 `List Format` corresponds to valid *affinity* values. Visit its `Formats`
390 section for more examples.
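For example, the three settings can be applied via the CLI as sketched below
(the values are only illustrative):

----
# qm set <vmid> --cpulimit 4 --cpuunits 200 --affinity 0-3,8-11
----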
391
392 CPU Type
393 ^^^^^^^^
394
395 QEMU can emulate a number of different *CPU types* from 486 to the latest Xeon
396 processors. Each new processor generation adds new features, like hardware
397 assisted 3D rendering, random number generation, memory protection, etc. Also,
398 a current generation can receive bug or security fixes through a
399 xref:chapter_firmware_updates[microcode update].
400
401 Usually you should select for your VM a processor type which closely matches the
402 CPU of the host system, as it means that the host CPU features (also called _CPU
403 flags_ ) will be available in your VMs. If you want an exact match, you can set
404 the CPU type to *host* in which case the VM will have exactly the same CPU flags
405 as your host system.
406
407 This has a downside though. If you want to do a live migration of VMs between
408 different hosts, your VM might end up on a new system with a different CPU type
409 or a different microcode version.
410 If a CPU flag passed to the guest is missing on the new host, the QEMU process will stop. To
411 remedy this, QEMU also has its own virtual CPU types, which {pve} uses by default.
412
413 The backend default is 'kvm64', which works on essentially all x86_64 host CPUs,
414 and the UI default when creating a new VM is 'x86-64-v2-AES', which requires a
415 host CPU starting from Westmere for Intel or at least a fourth generation
416 Opteron for AMD.
417
418 In short:
419
420 If you don’t care about live migration or have a homogeneous cluster where all
421 nodes have the same CPU and same microcode version, set the CPU type to host, as
422 in theory this will give your guests maximum performance.
423
424 If you care about live migration and security, and you have only Intel CPUs or
425 only AMD CPUs, choose the lowest generation CPU model of your cluster.
426
427 If you care about live migration without security, or have a mixed Intel/AMD
428 cluster, choose the lowest compatible virtual QEMU CPU type.
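For example, the CPU type can be set via the CLI like this, picking the model
according to the considerations above (both commands are alternatives, shown
for illustration):

----
# qm set <vmid> --cpu host
# qm set <vmid> --cpu x86-64-v2-AES
----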
429
430 NOTE: Live migrations between Intel and AMD host CPUs have no guarantee to work.
431
432 See also
433 xref:chapter_qm_vcpu_list[List of AMD and Intel CPU Types as Defined in QEMU].
434
435 QEMU CPU Types
436 ^^^^^^^^^^^^^^
437
438 QEMU also provides virtual CPU types, compatible with both Intel and AMD host
439 CPUs.
440
441 NOTE: To mitigate the Spectre vulnerability for virtual CPU types, you need to
442 add the relevant CPU flags, see
443 xref:qm_meltdown_spectre[Meltdown / Spectre related CPU flags].
444
445 Historically, {pve} had the 'kvm64' CPU model, with CPU flags at the level of
446 Pentium 4 enabled, so performance was not great for certain workloads.
447
448 In the summer of 2020, AMD, Intel, Red Hat, and SUSE collaborated to define
449 three x86-64 microarchitecture levels on top of the x86-64 baseline, with modern
450 flags enabled. For details, see the
451 https://gitlab.com/x86-psABIs/x86-64-ABI[x86-64-ABI specification].
452
453 NOTE: Some newer distributions like CentOS 9 are now built with 'x86-64-v2'
454 flags as a minimum requirement.
455
456 * 'kvm64 (x86-64-v1)': Compatible with Intel CPU >= Pentium 4, AMD CPU >=
457 Phenom.
458 +
459 * 'x86-64-v2': Compatible with Intel CPU >= Nehalem, AMD CPU >= Opteron_G3.
460 Added CPU flags compared to 'x86-64-v1': '+cx16', '+lahf-lm', '+popcnt', '+pni',
461 '+sse4.1', '+sse4.2', '+ssse3'.
462 +
463 * 'x86-64-v2-AES': Compatible with Intel CPU >= Westmere, AMD CPU >= Opteron_G4.
464 Added CPU flags compared to 'x86-64-v2': '+aes'.
465 +
466 * 'x86-64-v3': Compatible with Intel CPU >= Broadwell, AMD CPU >= EPYC. Added
467 CPU flags compared to 'x86-64-v2-AES': '+avx', '+avx2', '+bmi1', '+bmi2',
468 '+f16c', '+fma', '+movbe', '+xsave'.
469 +
470 * 'x86-64-v4': Compatible with Intel CPU >= Skylake, AMD CPU >= EPYC v4 Genoa.
471 Added CPU flags compared to 'x86-64-v3': '+avx512f', '+avx512bw', '+avx512cd',
472 '+avx512dq', '+avx512vl'.
473
474 Custom CPU Types
475 ^^^^^^^^^^^^^^^^
476
477 You can specify custom CPU types with a configurable set of features. These are
478 maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
479 an administrator. See `man cpu-models.conf` for format details.
480
481 Specified custom types can be selected by any user with the `Sys.Audit`
482 privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
483 or API, the name needs to be prefixed with 'custom-'.
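As a rough sketch, an entry in `cpu-models.conf` might look like the following -
the model name and flag selection here are purely illustrative, and `man
cpu-models.conf` documents the exact set of supported properties:

----
cpu-model: myvcpu
    flags +aes;+avx
    reported-model kvm64
----

The model would then be assigned with `qm set <vmid> --cpu custom-myvcpu`.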
484
485 [[qm_meltdown_spectre]]
486 Meltdown / Spectre related CPU flags
487 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
488
489 There are several CPU flags related to the Meltdown and Spectre vulnerabilities
490 footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
491 manually unless the selected CPU type of your VM already enables them by default.
492
493 There are two requirements that need to be fulfilled in order to use these
494 CPU flags:
495
496 * The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
497 * The guest operating system must be updated to a version which mitigates the
498 attacks and is able to utilize the CPU feature
499
500 Otherwise you need to set the desired CPU flag of the virtual CPU, either by
501 editing the CPU options in the web UI, or by setting the 'flags' property of the
502 'cpu' option in the VM configuration file.
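For example, via the CLI the flags are appended to the `cpu` property, separated
by semicolons (the flags shown are only an illustration; pick the ones relevant
for your CPU vendor, and note the shell quoting):

----
# qm set <vmid> --cpu 'x86-64-v2-AES,flags=+spec-ctrl;+ssbd'
----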
503
504 For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
505 so-called ``microcode update'' for your CPU, see
506 xref:chapter_firmware_updates[chapter Firmware Updates]. Note that not all
507 affected CPUs can be updated to support spec-ctrl.
508
509
510 To check if the {pve} host is vulnerable, execute the following command as root:
511
512 ----
513 for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
514 ----
515
516 A community script is also available to detect if the host is still vulnerable.
517 footnote:[spectre-meltdown-checker https://meltdown.ovh/]
518
519 Intel processors
520 ^^^^^^^^^^^^^^^^
521
522 * 'pcid'
523 +
524 This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
525 called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
526 the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
527 mechanism footnote:[PCID is now a critical performance/security feature on x86
528 https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
529 +
530 To check if the {pve} host supports PCID, execute the following command as root:
531 +
532 ----
533 # grep ' pcid ' /proc/cpuinfo
534 ----
535 +
536 If this does not return empty, your host's CPU has support for 'pcid'.
537
538 * 'spec-ctrl'
539 +
540 Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
541 in cases where retpolines are not sufficient.
542 Included by default in Intel CPU models with -IBRS suffix.
543 Must be explicitly turned on for Intel CPU models without -IBRS suffix.
544 Requires an updated host CPU microcode (intel-microcode >= 20180425).
545 +
546 * 'ssbd'
547 +
548 Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
549 Must be explicitly turned on for all Intel CPU models.
550 Requires an updated host CPU microcode (intel-microcode >= 20180703).
551
552
553 AMD processors
554 ^^^^^^^^^^^^^^
555
556 * 'ibpb'
557 +
558 Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
559 in cases where retpolines are not sufficient.
560 Included by default in AMD CPU models with -IBPB suffix.
561 Must be explicitly turned on for AMD CPU models without -IBPB suffix.
562 Requires the host CPU microcode to support this feature before it can be used for guest CPUs.
563
564
565
566 * 'virt-ssbd'
567 +
568 Required to enable the Spectre v4 (CVE-2018-3639) fix.
569 Not included by default in any AMD CPU model.
570 Must be explicitly turned on for all AMD CPU models.
571 This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
572 Note that this must be explicitly enabled when using the "host" CPU model,
573 because this is a virtual feature which does not exist in the physical CPUs.
574
575
576 * 'amd-ssbd'
577 +
578 Required to enable the Spectre v4 (CVE-2018-3639) fix.
579 Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
580 This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
581 virt-ssbd should nonetheless also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.
582
583
584 * 'amd-no-ssb'
585 +
586 Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
587 Not included by default in any AMD CPU model.
588 Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
589 and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
590 This is mutually exclusive with virt-ssbd and amd-ssbd.
591
592
593 NUMA
594 ^^^^
595 You can also optionally emulate a *NUMA*
596 footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
597 in your VMs. The basics of the NUMA architecture mean that instead of having a
598 global memory pool available to all your cores, the memory is spread into local
599 banks close to each socket.
600 This can bring speed improvements as the memory bus is not a bottleneck
601 anymore. If your system has a NUMA architecture footnote:[if the command
602 `numactl --hardware | grep available` returns more than one node, then your host
603 system has a NUMA architecture] we recommend activating the option, as this
604 will allow proper distribution of the VM resources on the host system.
605 This option is also required to hot-plug cores or RAM in a VM.
606
607 If the NUMA option is used, it is recommended to set the number of sockets to
608 the number of nodes of the host system.
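For example, on a host with two NUMA nodes, a sketch of enabling the option with
a matching socket count would be:

----
# qm set <vmid> --numa 1 --sockets 2 --cores 2
----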
609
610 vCPU hot-plug
611 ^^^^^^^^^^^^^
612
613 Modern operating systems introduced the capability to hot-plug and, to a
614 certain extent, hot-unplug CPUs in a running system. Virtualization allows us
615 to avoid a lot of the (physical) problems real hardware can cause in such
616 scenarios.
617 Still, this is a rather new and complicated feature, so its use should be
618 restricted to cases where it's absolutely needed. Most of the functionality can
619 be replicated with other, well tested and less complicated, features, see
620 xref:qm_cpu_resource_limits[Resource Limits].
621
622 In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
623 To start a VM with less than this total core count of CPUs you may use the
624 *vcpus* setting, which denotes how many vCPUs should be plugged in at VM start.
625
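For example, the following sketch configures 4 cores but plugs in only 2 vCPUs
at start; more can be hot-plugged later by raising `vcpus`, provided CPU hotplug
is enabled in the VM's hotplug options:

----
# qm set <vmid> --cores 4 --vcpus 2
----
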
626 Currently, this feature is only supported on Linux; a kernel newer than 3.10
627 is needed and a kernel newer than 4.7 is recommended.
628
629 You can use a udev rule as follows to automatically set new CPUs as online in
630 the guest:
631
632 ----
633 SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
634 ----
635
636 Save this under /etc/udev/rules.d/ as a file ending in `.rules`.
637
638 Note: CPU hot-remove is machine dependent and requires guest cooperation. The
639 deletion command does not guarantee that CPU removal actually happens; typically
640 it's a request forwarded to the guest OS using a target dependent mechanism, such as
641 ACPI on x86/amd64.
642
643
644 [[qm_memory]]
645 Memory
646 ~~~~~~
647
648 For each VM you have the option to set a fixed amount of memory or to ask
649 {pve} to dynamically allocate memory based on the current RAM usage of the
650 host.
651
652 .Fixed Memory Allocation
653 [thumbnail="screenshot/gui-create-vm-memory.png"]
654
655 When setting memory and minimum memory to the same amount
656 {pve} will simply allocate what you specify to your VM.
657
658 Even when using a fixed memory size, the ballooning device gets added to the
659 VM, because it delivers useful information such as how much memory the guest
660 really uses.
661 In general, you should leave *ballooning* enabled, but if you want to disable
662 it (like for debugging purposes), simply uncheck *Ballooning Device* or set
663
664 balloon: 0
665
666 in the configuration.
667
668 .Automatic Memory Allocation
669
670 // see autoballoon() in pvestatd.pm
671 When setting the minimum memory lower than memory, {pve} will make sure that the
672 minimum amount you specified is always available to the VM, and if RAM usage on
673 the host is below 80%, will dynamically add memory to the guest up to the
674 maximum memory specified.
675
676 When the host is running low on RAM, the VM will then release some memory
677 back to the host, swapping running processes if needed and starting the oom
678 killer as a last resort. The passing around of memory between host and guest is
679 done via a special `balloon` kernel driver running inside the guest, which will
680 grab or release memory pages from the host.
681 footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]
682
683 When multiple VMs use the autoallocate facility, it is possible to set a
684 *Shares* coefficient which indicates the relative amount of the free host memory
685 that each VM should take. Suppose for instance you have four VMs, three of them
686 running an HTTP server and the last one is a database server. To cache more
687 database blocks in the database server RAM, you would like to prioritize the
688 database VM when spare RAM is available. For this you assign a Shares property
689 of 3000 to the database VM, leaving the other VMs to the Shares default setting
690 of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
691 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will get 9.6 *
692 3000 / (3000 + 1000 + 1000 + 1000) = 4.8GB extra RAM and each HTTP server will
693 get 1.6GB.
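For example, a configuration along the lines of the database VM above could be
set via the CLI roughly like this (values are illustrative and given in MiB):

----
# qm set <vmid> --memory 8192 --balloon 4096 --shares 3000
----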
694
695 All Linux distributions released after 2010 have the balloon kernel driver
696 included. For Windows OSes, the balloon driver needs to be added manually and can
697 incur a slowdown of the guest, so we don't recommend using it on critical
698 systems.
699 // see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
700
701 When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
702 of RAM available to the host.
703
704
705 [[qm_network_device]]
706 Network Device
707 ~~~~~~~~~~~~~~
708
709 [thumbnail="screenshot/gui-create-vm-network.png"]
710
711 Each VM can have many _Network interface controllers_ (NIC), of four different
712 types:
713
714 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
715 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
716 performance. Like all VirtIO devices, the guest OS should have the proper driver
717 installed.
718 * the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
719 only be used when emulating older operating systems (released before 2002)
720 * the *vmxnet3* is another paravirtualized device, which should only be used
721 when importing a VM from another hypervisor.
722
723 {pve} will generate for each NIC a random *MAC address*, so that your VM is
724 addressable on Ethernet networks.
725
726 The NIC you added to the VM can follow one of two different models:
727
728 * in the default *Bridged mode* each virtual NIC is backed on the host by a
729 _tap device_ (a software loopback device simulating an Ethernet NIC). This
730 tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
731 have direct access to the Ethernet LAN on which the host is located.
732 * in the alternative *NAT mode*, each virtual NIC will only communicate with
733 the QEMU user networking stack, where a built-in router and DHCP server can
734 provide network access. This built-in DHCP will serve addresses in the private
735 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
736 should only be used for testing. This mode is only available via CLI or the API,
737 but not via the web UI.
738
739 You can also skip adding a network device when creating a VM by selecting *No
740 network device*.
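For example, a VirtIO NIC attached to the default bridge can be added to an
existing VM via the CLI as sketched below ('vmbr0' being the usual default
bridge in {pve}):

----
# qm set <vmid> --net0 virtio,bridge=vmbr0
----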
741
742 You can overwrite the *MTU* setting for each VM network device. The option
743 `mtu=1` represents a special case, in which the MTU value will be inherited
744 from the underlying bridge.
745 This option is only available for *VirtIO* network devices.
746
747 .Multiqueue
748 If you are using the VirtIO driver, you can optionally activate the
749 *Multiqueue* option. This option allows the guest OS to process networking
750 packets using multiple virtual CPUs, providing an increase in the total number
751 of packets transferred.
752
753 //http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
754 When using the VirtIO driver with {pve}, each NIC network queue is passed to the
755 host kernel, where the queue will be processed by a kernel thread spawned by the
756 vhost driver. With this option activated, it is possible to pass _multiple_
757 network queues to the host kernel for each NIC.
758
759 //https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
760 When using Multiqueue, it is recommended to set it to a value equal
761 to the number of Total Cores of your guest. You also need to set in
762 the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
763 command:
764
765 `ethtool -L ens1 combined X`
766
767 where X is the number of vCPUs of the VM.
768
769 You should note that setting the Multiqueue parameter to a value greater
770 than one will increase the CPU load on the host and guest systems as the
771 traffic increases. We recommend setting this option only when the VM has to
772 process a great number of incoming connections, such as when the VM is running
773 as a router, reverse proxy or a busy HTTP server doing long polling.
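Multiqueue corresponds to the `queues` option of the network device. A sketch
for a guest with 4 total cores (bridge name and queue count are illustrative):

----
# qm set <vmid> --net0 virtio,bridge=vmbr0,queues=4
----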
774
775 [[qm_display]]
776 Display
777 ~~~~~~~
778
779 QEMU can virtualize a few types of VGA hardware. Some examples are:
780
781 * *std*, the default, emulates a card with Bochs VBE extensions.
782 * *cirrus*, this was once the default. It emulates a very old hardware module
783 with all its problems. This display type should only be used if really
784 necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
785 qemu: using cirrus considered harmful], for example, if using Windows XP or
786 earlier
787 * *vmware* is a VMware SVGA-II compatible adapter.
788 * *qxl* is the QXL paravirtualized graphics card. Selecting this also
789 enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
790 VM.
791 * *virtio-gl*, often named VirGL, is a virtual 3D GPU for use inside VMs that
792 can offload workloads to the host GPU without requiring special (expensive)
793 models and drivers, and without binding the host GPU completely, allowing
794 reuse between multiple guests and/or the host.
795 +
796 NOTE: VirGL support needs some extra libraries that aren't installed by
797 default due to being relatively big and also not available as open source for
798 all GPU models/vendors. For most setups you'll just need to do:
799 `apt install libgl1 libegl1`
800
801 You can edit the amount of memory given to the virtual GPU by setting
802 the 'memory' option. This can enable higher resolutions inside the VM,
803 especially with SPICE/QXL.
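For example, a sketch of selecting the QXL display with more display memory (the
value is in MiB and only illustrative):

----
# qm set <vmid> --vga qxl,memory=32
----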
804
805 As the memory is reserved by the display device, selecting Multi-Monitor mode
806 for SPICE (such as `qxl2` for dual monitors) has some implications:
807
808 * Windows needs a device for each monitor, so if your 'ostype' is some
809 version of Windows, {pve} gives the VM an extra device per monitor.
810 Each device gets the specified amount of memory.
811
812 * Linux VMs can always enable more virtual monitors, but selecting
813 a Multi-Monitor mode multiplies the memory given to the device by
814 the number of monitors.
815
816 Selecting `serialX` as display 'type' disables the VGA output, and redirects
817 the Web Console to the selected serial port. A configured display 'memory'
818 setting will be ignored in that case.
819
820 .VNC clipboard
821 You can enable the VNC clipboard by setting `clipboard` to `vnc`.
822
823 ----
824 # qm set <vmid> -vga <displaytype>,clipboard=vnc
825 ----
826
827 In order to use the clipboard feature, you must first install the
828 SPICE guest tools. On Debian-based distributions, this can be achieved
829 by installing `spice-vdagent`. For other operating systems, search for it
830 in the official repositories or see: https://www.spice-space.org/download.html
831
832 Once you have installed the SPICE guest tools, you can use the VNC clipboard
833 function (e.g. in the noVNC console panel). However, if you're using
834 SPICE, virtio or virgl, you'll need to choose which clipboard to use.
835 This is because the default *SPICE* clipboard will be replaced by the
836 *VNC* clipboard, if `clipboard` is set to `vnc`.
837
838 [[qm_usb_passthrough]]
839 USB Passthrough
840 ~~~~~~~~~~~~~~~
841
842 There are two different types of USB passthrough devices:
843
844 * Host USB passthrough
845 * SPICE USB passthrough
846
847 Host USB passthrough works by giving a VM a USB device of the host.
848 This can either be done via the vendor- and product-id, or
849 via the host bus and port.
850
851 The vendor/product-id looks like this: *0123:abcd*,
852 where *0123* is the id of the vendor, and *abcd* is the id
853 of the product, meaning two pieces of the same USB device
854 have the same id.
855
856 The bus/port looks like this: *1-2.3.4*, where *1* is the bus
857 and *2.3.4* is the port path. This represents the physical
858 ports of your host (depending on the internal order of the
859 USB controllers).
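For example, either addressing scheme can be used when adding a USB device via
the CLI (the IDs and port below are placeholders):

----
# qm set <vmid> --usb0 host=0123:abcd
# qm set <vmid> --usb1 host=1-2.3.4
----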
860
861 If a device is present in a VM configuration when the VM starts up,
862 but the device is not present in the host, the VM can boot without problems.
863 As soon as the device/port is available in the host, it gets passed through.
864
865 WARNING: Using this kind of USB passthrough means that you cannot move
866 a VM online to another host, since the hardware is only available
867 on the host the VM is currently residing on.
868
869 The second type of passthrough is SPICE USB passthrough. If you add one or more
870 SPICE USB ports to your VM, you can dynamically pass a local USB device from
871 your SPICE client through to the VM. This can be useful to redirect an input
872 device or hardware dongle temporarily.
873
874 It is also possible to map devices on a cluster level, so that they can be
875 properly used with HA and hardware changes are detected and non-root users
876 can configure them. See xref:resource_mapping[Resource Mapping]
877 for details on that.
878
879 [[qm_bios_and_uefi]]
880 BIOS and UEFI
881 ~~~~~~~~~~~~~
882
883 In order to properly emulate a computer, QEMU needs to use a firmware, which,
884 on common PCs, is often known as BIOS or (U)EFI and is executed as one of the
885 first steps when booting a VM. It is responsible for doing basic hardware
886 initialization and for providing an interface to the firmware and hardware for
887 the operating system. By default QEMU uses *SeaBIOS* for this, which is an
888 open-source, x86 BIOS implementation. SeaBIOS is a good choice for most
889 standard setups.
890
891 Some operating systems (such as Windows 11) may require the use of a UEFI
892 compatible implementation. In such cases, you must use *OVMF* instead,
893 which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]
894
895 There are other scenarios in which SeaBIOS may not be the ideal firmware to
896 boot from, for example if you want to do VGA passthrough. footnote:[Alex
897 Williamson has a good blog entry about this
898 https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
899
900 If you want to use OVMF, there are several things to consider:
901
902 In order to save things like the *boot order*, there needs to be an EFI Disk.
903 This disk will be included in backups and snapshots, and there can only be one.
904
905 You can create such a disk with the following command:
906
907 ----
908 # qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
909 ----
910
911 Where *<storage>* is the storage where you want to have the disk, and
912 *<format>* is a format which the storage supports. Alternatively, you can
913 create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
914 hardware section of a VM.
915
916 The *efitype* option specifies which version of the OVMF firmware should be
917 used. For new VMs, this should always be '4m', as it supports Secure Boot and
918 has more space allocated to support future development (this is the default in
919 the GUI).
920
921 *pre-enrolled-keys* specifies if the efidisk should come pre-loaded with
922 distribution-specific and Microsoft Standard Secure Boot keys. It also enables
923 Secure Boot by default (though it can still be disabled in the OVMF menu within
924 the VM).
925
926 NOTE: If you want to start using Secure Boot in an existing VM (that still uses
927 a '2m' efidisk), you need to recreate the efidisk. To do so, delete the old one
928 (`qm set <vmid> -delete efidisk0`) and add a new one as described above. This
929 will reset any custom configurations you have made in the OVMF menu!
930
931 When using OVMF with a virtual display (without VGA passthrough),
932 you need to set the client resolution in the OVMF menu (which you can reach
933 with a press of the ESC button during boot), or you have to choose
934 SPICE as the display type.
935
936 [[qm_tpm]]
937 Trusted Platform Module (TPM)
938 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
939
940 A *Trusted Platform Module* is a device which stores secret data - such as
941 encryption keys - securely and provides tamper-resistance functions for
942 validating system boot.
943
944 Certain operating systems (such as Windows 11) require such a device to be
945 attached to a machine (be it physical or virtual).
946
947 A TPM is added by specifying a *tpmstate* volume. This works similarly to an
948 efidisk, in that it cannot be changed (only removed) once created. You can add
949 one via the following command:
950
951 ----
952 # qm set <vmid> -tpmstate0 <storage>:1,version=<version>
953 ----
954
955 Where *<storage>* is the storage you want to put the state on, and *<version>*
956 is either 'v1.2' or 'v2.0'. You can also add one via the web interface, by
957 choosing 'Add' -> 'TPM State' in the hardware section of a VM.
958
959 The 'v2.0' TPM spec is newer and better supported, so unless you have a specific
960 implementation that requires a 'v1.2' TPM, it should be preferred.
961
962 NOTE: Compared to a physical TPM, an emulated one does *not* provide any real
963 security benefits. The point of a TPM is that the data on it cannot be modified
964 easily, except via commands specified as part of the TPM spec. Since with an
965 emulated device the data storage happens on a regular volume, it can potentially
966 be edited by anyone with access to it.
967
968 [[qm_ivshmem]]
969 Inter-VM shared memory
970 ~~~~~~~~~~~~~~~~~~~~~~
971
972 You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
973 share memory between the host and a guest, or also between multiple guests.
974
975 To add such a device, you can use `qm`:
976
977 ----
978 # qm set <vmid> -ivshmem size=32,name=foo
979 ----
980
981 Where the size is in MiB. The file will be located under
982 `/dev/shm/pve-shm-$name` (the default name is the vmid).
983
984 NOTE: Currently the device will get deleted as soon as any VM using it gets
985 shut down or stopped. Open connections will still persist, but new connections
986 to the exact same device cannot be made anymore.
987
988 A use case for such a device is the Looking Glass
989 footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
990 performance, low-latency display mirroring between host and guest.
991
992 [[qm_audio_device]]
993 Audio Device
994 ~~~~~~~~~~~~
995
996 To add an audio device run the following command:
997
998 ----
999 qm set <vmid> -audio0 device=<device>
1000 ----
1001
1002 Supported audio devices are:
1003
1004 * `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
1005 * `intel-hda`: Intel HD Audio Controller, emulates ICH6
1006 * `AC97`: Audio Codec '97, useful for older operating systems like Windows XP
1007
1008 There are two backends available:
1009
1010 * 'spice'
1011 * 'none'
1012
1013 The 'spice' backend can be used in combination with xref:qm_display[SPICE] while
1014 the 'none' backend can be useful if an audio device is needed in the VM for some
1015 software to work. To use the physical audio device of the host use device
1016 passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
1017 xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft’s RDP
1018 have options to play sound.
1019
1020
1021 [[qm_virtio_rng]]
1022 VirtIO RNG
1023 ~~~~~~~~~~
1024
1025 A RNG (Random Number Generator) is a device providing entropy ('randomness') to
1026 a system. A virtual hardware-RNG can be used to provide such entropy from the
1027 host system to a guest VM. This helps to avoid entropy starvation problems in
1028 the guest (a situation where not enough entropy is available and the system may
1029 slow down or run into problems), especially during the guest's boot process.
1030
1031 To add a VirtIO-based emulated RNG, run the following command:
1032
1033 ----
1034 qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
1035 ----
1036
1037 `source` specifies where entropy is read from on the host and has to be one of
1038 the following:
1039
1040 * `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
1041 * `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
1042 starvation on the host system)
1043 * `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
1044 are available, the one selected in
1045 `/sys/devices/virtual/misc/hw_random/rng_current` will be used)
1046
1047 A limit can be specified via the `max_bytes` and `period` parameters; they are
1048 read as `max_bytes` per `period` in milliseconds. However, it does not represent
1049 a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
1050 available on a 1 second timer, not that 1 KiB is streamed to the guest over the
1051 course of one second. Reducing the `period` can thus be used to inject entropy
1052 into the guest at a faster rate.
1053
1054 By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
1055 recommended to always use a limiter to avoid guests using too many host
1056 resources. If desired, a value of '0' for `max_bytes` can be used to disable
1057 all limits.
1058
1059 [[qm_bootorder]]
1060 Device Boot Order
1061 ~~~~~~~~~~~~~~~~~
1062
1063 QEMU can tell the guest which devices it should boot from, and in which order.
1064 This can be specified in the config via the `boot` property, for example:
1065
1066 ----
1067 boot: order=scsi0;net0;hostpci0
1068 ----
1069
1070 [thumbnail="screenshot/gui-qemu-edit-bootorder.png"]
1071
1072 This way, the guest would first attempt to boot from the disk `scsi0`; if that
1073 fails, it would go on to attempt network boot from `net0`, and in case that
1074 fails too, finally attempt to boot from a passed through PCIe device (seen as
1075 disk in case of NVMe, otherwise tries to launch into an option ROM).
1076
1077 On the GUI you can use a drag-and-drop editor to specify the boot order, and use
1078 the checkbox to enable or disable certain devices for booting altogether.
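The same order can also be set via the CLI; note that the list needs to be
quoted in the shell because of the semicolons:

----
# qm set <vmid> --boot 'order=scsi0;net0;hostpci0'
----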
1079
1080 NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
1081 all of them must be marked as 'bootable' (that is, they must have the checkbox
1082 enabled or appear in the list in the config) for the guest to be able to boot.
1083 This is because recent SeaBIOS and OVMF versions only initialize disks if they
1084 are marked 'bootable'.
1085
1086 In any case, even devices not appearing in the list or having the checkmark
1087 disabled will still be available to the guest, once its operating system has
1088 booted and initialized them. The 'bootable' flag only affects the guest BIOS and
1089 bootloader.
1090
1091
1092 [[qm_startup_and_shutdown]]
1093 Automatic Start and Shutdown of Virtual Machines
1094 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1095
1096 After creating your VMs, you probably want them to start automatically
1097 when the host system boots. For this you need to select the option 'Start at
1098 boot' from the 'Options' Tab of your VM in the web interface, or set it with
1099 the following command:
1100
1101 ----
1102 # qm set <vmid> -onboot 1
1103 ----
1104
1105 .Start and Shutdown Order
1106
1107 [thumbnail="screenshot/gui-qemu-edit-start-order.png"]
1108
1109 In some cases you want to be able to fine tune the boot order of your
1110 VMs, for instance if one of your VMs is providing firewalling or DHCP
1111 to other guest systems. For this you can use the following
1112 parameters:
1113
1114 * *Start/Shutdown order*: Defines the start order priority. For example, set it
1115 to 1 if you want the VM to be the first to be started. (We use the reverse
1116 startup order for shutdown, so a machine with a start order of 1 would be the
1117 last to be shut down). If multiple VMs have the same order defined on a host,
1118 they will additionally be ordered by 'VMID' in ascending order.
1119 * *Startup delay*: Defines the interval between this VM start and subsequent
1120 VMs starts. For example, set it to 240 if you want to wait 240 seconds before
1121 starting other VMs.
1122 * *Shutdown timeout*: Defines the duration in seconds {pve} should wait
1123 for the VM to be offline after issuing a shutdown command. By default this
1124 value is set to 180, which means that {pve} will issue a shutdown request and
1125 wait 180 seconds for the machine to be offline. If the machine is still online
1126 after the timeout it will be stopped forcefully.
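For example, the options above can be combined in the `startup` property via the
CLI (the values are only illustrative):

----
# qm set <vmid> --startup order=1,up=240,down=180
----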
1127
1128 NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
1129 'boot order' options currently. Those VMs will be skipped by the startup and
1130 shutdown algorithm as the HA manager itself ensures that VMs get started and
1131 stopped.
1132
1133 Please note that machines without a Start/Shutdown order parameter will always
1134 start after those where the parameter is set. Further, this parameter can only
1135 be enforced between virtual machines running on the same host, not
1136 cluster-wide.
1137
1138 If you require a delay between the host boot and the booting of the first VM,
1139 see the section on xref:first_guest_boot_delay[Proxmox VE Node Management].
1140
1141
1142 [[qm_qemu_agent]]
1143 QEMU Guest Agent
1144 ~~~~~~~~~~~~~~~~
1145
1146 The QEMU Guest Agent is a service which runs inside the VM, providing a
1147 communication channel between the host and the guest. It is used to exchange
1148 information and allows the host to issue commands to the guest.
1149
1150 For example, the IP addresses in the VM summary panel are fetched via the guest
1151 agent.
1152
1153 Or when starting a backup, the guest is told via the guest agent to sync
1154 outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.
1155
1156 For the guest agent to work properly the following steps must be taken:
1157
1158 * install the agent in the guest and make sure it is running
1159 * enable the communication via the agent in {pve}
1160
1161 Install Guest Agent
1162 ^^^^^^^^^^^^^^^^^^^
1163
1164 For most Linux distributions, the guest agent is available. The package is
1165 usually named `qemu-guest-agent`.
1166
1167 For Windows, it can be installed from the
1168 https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
1169 VirtIO driver ISO].
1170
1171 [[qm_qga_enable]]
1172 Enable Guest Agent Communication
1173 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1174
1175 Communication from {pve} with the guest agent can be enabled in the VM's
1176 *Options* panel. A fresh start of the VM is necessary for the changes to take
1177 effect.
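
The same can be done on the CLI by setting the `agent` property, for example:

----
# qm set <vmid> --agent enabled=1
----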
1178
1179 [[qm_qga_auto_trim]]
1180 Automatic TRIM Using QGA
1181 ^^^^^^^^^^^^^^^^^^^^^^^^
1182
1183 It is possible to enable the 'Run guest-trim' option. With this enabled,
1184 {pve} will issue a trim command to the guest after the following
1185 operations that have the potential to write out zeros to the storage:
1186
1187 * moving a disk to another storage
1188 * live migrating a VM to another node with local storage
1189
1190 On a thin provisioned storage, this can help to free up unused space.
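
On the CLI, the 'Run guest-trim' option corresponds to the `fstrim_cloned_disks`
flag of the `agent` property; a minimal example:

----
# qm set <vmid> --agent enabled=1,fstrim_cloned_disks=1
----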
1191
1192 NOTE: There is a caveat with ext4 on Linux, because it uses an in-memory
1193 optimization to avoid issuing duplicate TRIM requests. Since the guest doesn't
1194 know about the change in the underlying storage, only the first guest-trim will
1195 run as expected. Subsequent ones, until the next reboot, will only consider
1196 parts of the filesystem that changed since then.
1197
1198 [[qm_qga_fsfreeze]]
1199 Filesystem Freeze & Thaw on Backup
1200 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1201
1202 By default, guest filesystems are synced via the 'fs-freeze' QEMU Guest Agent
1203 Command when a backup is performed, to provide consistency.
1204
1205 On Windows guests, some applications might handle consistent backups themselves
by hooking into the Windows VSS (Volume Shadow Copy Service) layer; an
'fs-freeze' might then interfere with that. For example, it has been observed
1208 that calling 'fs-freeze' with some SQL Servers triggers VSS to call the SQL
1209 Writer VSS module in a mode that breaks the SQL Server backup chain for
1210 differential backups.
1211
1212 For such setups you can configure {pve} to not issue a freeze-and-thaw cycle on
1213 backup by setting the `freeze-fs-on-backup` QGA option to `0`. This can also be
1214 done via the GUI with the 'Freeze/thaw guest filesystems on backup for
1215 consistency' option.
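
For example, to keep the agent enabled but skip the freeze-and-thaw cycle on
backup (note that setting `agent` replaces the whole property, so include any
other flags you already use):

----
# qm set <vmid> --agent enabled=1,freeze-fs-on-backup=0
----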
1216
1217 IMPORTANT: Disabling this option can potentially lead to backups with inconsistent
filesystems; you should therefore only disable it if you know what you are
doing.
1220
1221 Troubleshooting
1222 ^^^^^^^^^^^^^^^
1223
1224 .VM does not shut down
1225
1226 Make sure the guest agent is installed and running.
1227
1228 Once the guest agent is enabled, {pve} will send power commands like
1229 'shutdown' via the guest agent. If the guest agent is not running, commands
cannot be executed properly and the shutdown command will run into a timeout.
1231
1232 [[qm_spice_enhancements]]
1233 SPICE Enhancements
1234 ~~~~~~~~~~~~~~~~~~
1235
1236 SPICE Enhancements are optional features that can improve the remote viewer
1237 experience.
1238
1239 To enable them via the GUI go to the *Options* panel of the virtual machine. Run
1240 the following command to enable them via the CLI:
1241
1242 ----
1243 qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
1244 ----
1245
1246 NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
1247 must be set to SPICE (qxl).
1248
1249 Folder Sharing
1250 ^^^^^^^^^^^^^^
1251
1252 Share a local folder with the guest. The `spice-webdavd` daemon needs to be
1253 installed in the guest. It makes the shared folder available through a local
1254 WebDAV server located at http://localhost:9843.
1255
1256 For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded
1257 from the
1258 https://www.spice-space.org/download.html#windows-binaries[official SPICE website].
1259
1260 Most Linux distributions have a package called `spice-webdavd` that can be
1261 installed.
1262
1263 To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
1264 Select the folder to share and then enable the checkbox.
1265
1266 NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
1267
1268 CAUTION: Experimental! Currently this feature does not work reliably.
1269
1270 Video Streaming
1271 ^^^^^^^^^^^^^^^
1272
1273 Fast refreshing areas are encoded into a video stream. Two options exist:
1274
1275 * *all*: Any fast refreshing area will be encoded into a video stream.
1276 * *filter*: Additional filters are used to decide if video streaming should be
1277 used (currently only small window surfaces are skipped).
1278
No general recommendation can be given on whether video streaming should be
enabled and which option to choose. Your mileage may vary depending on the
specific circumstances.
1282
1283 Troubleshooting
1284 ^^^^^^^^^^^^^^^
1285
1286 .Shared folder does not show up
1287
1288 Make sure the WebDAV service is enabled and running in the guest. On Windows it
is called 'Spice webdav proxy'. On Linux the name is 'spice-webdavd', but it can
differ depending on the distribution.
1291
1292 If the service is running, check the WebDAV server by opening
1293 http://localhost:9843 in a browser in the guest.
1294
1295 It can help to restart the SPICE session.
1296
1297 [[qm_migration]]
1298 Migration
1299 ---------
1300
1301 [thumbnail="screenshot/gui-qemu-migrate.png"]
1302
1303 If you have a cluster, you can migrate your VM to another host with
1304
1305 ----
1306 # qm migrate <vmid> <target>
1307 ----
1308
There are generally two mechanisms for this:
1310
1311 * Online Migration (aka Live Migration)
1312 * Offline Migration
1313
1314 Online Migration
1315 ~~~~~~~~~~~~~~~~
1316
If your VM is running and no locally bound resources are configured (such as
devices that are passed through), you can initiate a live migration with the
`--online` flag of the `qm migrate` command invocation. The web interface
defaults to live migration when the VM is running.
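
For example, to live migrate a running VM to a node named `node2` (the node
name is a placeholder):

----
# qm migrate <vmid> node2 --online
----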
1321
1322 How it works
1323 ^^^^^^^^^^^^
1324
Online migration first starts a new QEMU process on the target host with the
'incoming' flag, which performs only basic initialization with the guest vCPUs
still paused, and then waits for the guest memory and device state data streams
of the source virtual machine.
All other resources, such as disks, are either shared or have already been sent
before the runtime state migration of the VM begins; so only the memory content
and device state remain to be transferred.
1332
Once this connection is established, the source begins asynchronously sending
the memory content to the target. If the guest memory on the source changes,
those sections are marked dirty and another pass is made to send the guest
memory data.
This loop is repeated until the remaining difference between the running source
VM and the incoming target VM is small enough to be sent in a few milliseconds.
At that point the source VM is paused completely, the remaining data is sent to
the target, and the target VM's CPU is unpaused to make it the new running VM,
all in well under a second and without a user or program noticing the pause.
1343
1344 Requirements
1345 ^^^^^^^^^^^^
1346
For Live Migration to work, the following requirements have to be met:
1348
1349 * The VM has no local resources that cannot be migrated. For example,
1350 PCI or USB devices that are passed through currently block live-migration.
1351 Local Disks, on the other hand, can be migrated by sending them to the target
1352 just fine.
1353 * The hosts are located in the same {pve} cluster.
1354 * The hosts have a working (and reliable) network connection between them.
1355 * The target host must have the same, or higher versions of the
1356 {pve} packages. Although it can sometimes work the other way around, this
1357 cannot be guaranteed.
* The hosts have CPUs from the same vendor with similar capabilities. A different
vendor *might* work depending on the actual models and the VM's configured CPU
type, but it cannot be guaranteed - so please test before deploying
1361 such a setup in production.
1362
1363 Offline Migration
1364 ~~~~~~~~~~~~~~~~~
1365
1366 If you have local resources, you can still migrate your VMs offline as long as
all disks are on storages that are defined on both hosts.
1368 Migration then copies the disks to the target host over the network, as with
1369 online migration. Note that any hardware passthrough configuration may need to
1370 be adapted to the device location on the target host.
1371
1372 // TODO: mention hardware map IDs as better way to solve that, once available
1373
1374 [[qm_copy_and_clone]]
1375 Copies and Clones
1376 -----------------
1377
1378 [thumbnail="screenshot/gui-qemu-full-clone.png"]
1379
VM installation is usually done using an installation medium (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time-consuming task one might want to avoid.
1383
1384 An easy way to deploy many VMs of the same type is to copy an existing
1385 VM. We use the term 'clone' for such copies, and distinguish between
1386 'linked' and 'full' clones.
1387
1388 Full Clone::
1389
The result of such a copy is an independent VM. The
1391 new VM does not share any storage resources with the original.
1392 +
1393
1394 It is possible to select a *Target Storage*, so one can use this to
1395 migrate a VM to a totally different storage. You can also change the
1396 disk image *Format* if the storage driver supports several formats.
1397 +
1398
1399 NOTE: A full clone needs to read and copy all VM image data. This is
1400 usually much slower than creating a linked clone.
1401 +
1402
Some storage types allow copying a specific *Snapshot*, which
1404 defaults to the 'current' VM data. This also means that the final copy
1405 never includes any additional snapshots from the original VM.
1406
1407
1408 Linked Clone::
1409
1410 Modern storage drivers support a way to generate fast linked
1411 clones. Such a clone is a writable copy whose initial contents are the
1412 same as the original data. Creating a linked clone is nearly
1413 instantaneous, and initially consumes no additional space.
1414 +
1415
1416 They are called 'linked' because the new image still refers to the
1417 original. Unmodified data blocks are read from the original image, but
modifications are written to (and afterwards read from) a new
1419 location. This technique is called 'Copy-on-write'.
1420 +
1421
1422 This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
1424 templates can later be used to create linked clones efficiently.
1425 +
1426
1427 NOTE: You cannot delete an original template while linked clones
1428 exist.
1429 +
1430
1431 It is not possible to change the *Target storage* for linked clones,
1432 because this is a storage internal feature.
1433
1434
1435 The *Target node* option allows you to create the new VM on a
1436 different node. The only restriction is that the VM is on shared
1437 storage, and that storage is also available on the target node.
1438
1439 To avoid resource conflicts, all network interface MAC addresses get
1440 randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
1441 setting.
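
Clones can also be created on the CLI with `qm clone`. The following sketch
uses placeholder VMIDs, names and storage: the first command creates a full
clone of VM 100 as VM 201 on the storage `local-lvm`, the second creates a
linked clone of the template with VMID 900 (linked is the default when cloning
a template):

----
# qm clone 100 201 --name webmail-copy --full --storage local-lvm
# qm clone 900 202 --name webmail-linked
----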
1442
1443
1444 [[qm_templates]]
1445 Virtual Machine Templates
1446 -------------------------
1447
1448 One can convert a VM into a Template. Such templates are read-only,
1449 and you can use them to create linked clones.
1450
1451 NOTE: It is not possible to start templates, because this would modify
1452 the disk images. If you want to change the template, create a linked
1453 clone and modify that.
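
A VM can be converted into a template via the GUI or on the CLI with:

----
# qm template <vmid>
----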
1454
1455 VM Generation ID
1456 ----------------
1457
1458 {pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
1459 'vmgenid' Specification
1460 https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
1461 for virtual machines.
1462 This can be used by the guest operating system to detect any event resulting
in a time shift, for example, restoring a backup or rolling back a snapshot.
1464
1465 When creating new VMs, a 'vmgenid' will be automatically generated and saved
1466 in its configuration file.
1467
To create and add a 'vmgenid' to an already existing VM, one can either pass the
special value `1' to let {pve} autogenerate one, or manually set a 'UUID'
footnote:[Online GUID generator http://guid.one/] as the value, for
example:
1472
1473 ----
1474 # qm set VMID -vmgenid 1
1475 # qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
1476 ----
1477
NOTE: The initial addition of a 'vmgenid' device to an existing VM may have the
same effects as a snapshot rollback, backup restore, etc. would have,
as the VM can interpret this as a generation change.
1481
In the rare case that the 'vmgenid' mechanism is not wanted, one can pass `0' for
1483 its value on VM creation, or retroactively delete the property in the
1484 configuration with:
1485
1486 ----
1487 # qm set VMID -delete vmgenid
1488 ----
1489
The most prominent use case for 'vmgenid' is newer Microsoft Windows
operating systems, which use it to avoid problems in time-sensitive or
replicated services (such as databases or domain controllers
1493 footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
1494 on snapshot rollback, backup restore or a whole VM clone operation.
1495
1496 Importing Virtual Machines and disk images
1497 ------------------------------------------
1498
A VM export from a foreign hypervisor usually takes the form of one or more disk
1500 images, with a configuration file describing the settings of the VM (RAM,
1501 number of cores). +
1502 The disk images can be in the vmdk format, if the disks come from
1503 VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
1504 The most popular configuration format for VM exports is the OVF standard, but in
1505 practice interoperation is limited because many settings are not implemented in
1506 the standard itself, and hypervisors export the supplementary information
1507 in non-standard extensions.
1508
1509 Besides the problem of format, importing disk images from other hypervisors
1510 may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any hardware changes. This problem may be solved by
installing the MergeIDE.zip utility, available from the Internet, before exporting,
and by choosing a hard disk type of *IDE* before booting the imported Windows VM.
1515
1516 Finally there is the question of paravirtualized drivers, which improve the
1517 speed of the emulated system and are specific to the hypervisor.
1518 GNU/Linux and other free Unix OSes have all the necessary drivers installed by
1519 default and you can switch to the paravirtualized drivers right after importing
1520 the VM. For Windows VMs, you need to install the Windows paravirtualized
drivers yourself.
1522
GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
1524 that we cannot guarantee a successful import/export of Windows VMs in all
1525 cases due to the problems above.
1526
1527 Step-by-step example of a Windows OVF import
1528 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1529
1530 Microsoft provides
1531 https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
1533 to demonstrate the OVF import feature.
1534
1535 Download the Virtual Machine zip
1536 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1537
After reviewing the user agreement, choose the _Windows 10
1539 Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.
1540
1541 Extract the disk image from the zip
1542 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1543
1544 Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy the ovf and vmdk files to your {pve} host via ssh/scp.
1546
1547 Import the Virtual Machine
1548 ^^^^^^^^^^^^^^^^^^^^^^^^^^
1549
1550 This will create a new virtual machine, using cores, memory and
1551 VM name as read from the OVF manifest, and import the disks to the +local-lvm+
1552 storage. You have to configure the network manually.
1553
1554 ----
1555 # qm importovf 999 WinDev1709Eval.ovf local-lvm
1556 ----
1557
1558 The VM is ready to be started.
1559
1560 Adding an external disk image to a Virtual Machine
1561 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1562
1563 You can also add an existing disk image to a VM, either coming from a
1564 foreign hypervisor, or one that you created yourself.
1565
1566 Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:
1567
----
vmdebootstrap --verbose \
  --size 10GiB --serial-console \
  --grub --no-extlinux \
  --package openssh-server \
  --package avahi-daemon \
  --package qemu-guest-agent \
  --hostname vm600 --enable-dhcp \
  --customize=./copy_pub_ssh.sh \
  --sparse --image vm600.raw
----
1577
1578 You can now create a new target VM, importing the image to the storage `pvedir`
1579 and attaching it to the VM's SCSI controller:
1580
1581 ----
1582 # qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
1583 --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
1584 --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
1585 ----
1586
1587 The VM is ready to be started.
1588
1589
1590 ifndef::wiki[]
1591 include::qm-cloud-init.adoc[]
1592 endif::wiki[]
1593
1594 ifndef::wiki[]
1595 include::qm-pci-passthrough.adoc[]
1596 endif::wiki[]
1597
1598 Hookscripts
1599 -----------
1600
1601 You can add a hook script to VMs with the config property `hookscript`.
1602
1603 ----
1604 # qm set 100 --hookscript local:snippets/hookscript.pl
1605 ----
1606
It will be called during various phases of the guest's lifetime.
1608 For an example and documentation see the example script under
1609 `/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
1610
1611 [[qm_hibernate]]
1612 Hibernation
1613 -----------
1614
1615 You can suspend a VM to disk with the GUI option `Hibernate` or with
1616
1617 ----
1618 # qm suspend ID --todisk
1619 ----
1620
That means that the current content of the memory will be saved to disk
and the VM gets stopped. On the next start, the memory content will be
loaded and the VM can resume where it left off.
1624
1625 [[qm_vmstatestorage]]
1626 .State storage selection
If no target storage for the memory is given, it will be chosen automatically,
the first of the following (a CLI example for setting it explicitly follows the list):
1629
1630 1. The storage `vmstatestorage` from the VM config.
1631 2. The first shared storage from any VM disk.
1632 3. The first non-shared storage from any VM disk.
1633 4. The storage `local` as a fallback.
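
To set the state storage explicitly, use the `vmstatestorage` option; for
example, assuming a storage named `fast-ssd` is configured:

----
# qm set <vmid> --vmstatestorage fast-ssd
----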
1634
1635 [[resource_mapping]]
1636 Resource Mapping
1637 ----------------
1638
1639 [thumbnail="screenshot/gui-datacenter-resource-mappings.png"]
1640
When using or referencing local resources (e.g. the address of a PCI device),
using the raw address or ID is sometimes problematic, for example:
1643
1644 * when using HA, a different device with the same id or path may exist on the
1645 target node, and if one is not careful when assigning such guests to HA
1646 groups, the wrong device could be used, breaking configurations.
1647
1648 * changing hardware can change ids and paths, so one would have to check all
1649 assigned devices and see if the path or id is still correct.
1650
To handle this better, one can define cluster-wide resource mappings, such that
a resource has a cluster-unique, user-selected identifier which can correspond
1653 to different devices on different hosts. With this, HA won't start a guest with
1654 a wrong device, and hardware changes can be detected.
1655
1656 Creating such a mapping can be done with the {pve} web GUI under `Datacenter`
in the relevant tab in the `Resource Mappings` category, or on the CLI with
1658
1659 ----
1660 # pvesh create /cluster/mapping/<type> <options>
1661 ----
1662
1663 [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
1664
1665 Where `<type>` is the hardware type (currently either `pci` or `usb`) and
1666 `<options>` are the device mappings and other configuration parameters.
1667
1668 Note that the options must include a map property with all identifying
1669 properties of that hardware, so that it's possible to verify the hardware did
1670 not change and the correct device is passed through.
1671
For example, to add a PCI device as `device1` with the path `0000:01:00.0`, which
has the device ID `0001` and the vendor ID `0002`, on the node `node1`, and as
`0000:02:00.0` on `node2`, you can add it with:
1675
1676 ----
1677 # pvesh create /cluster/mapping/pci --id device1 \
1678 --map node=node1,path=0000:01:00.0,id=0002:0001 \
1679 --map node=node2,path=0000:02:00.0,id=0002:0001
1680 ----
1681
1682 You must repeat the `map` parameter for each node where that device should have
1683 a mapping (note that you can currently only map one USB device per node per
1684 mapping).
1685
1686 Using the GUI makes this much easier, as the correct properties are
1687 automatically picked up and sent to the API.
1688
1689 [thumbnail="screenshot/gui-datacenter-mapping-usb-edit.png"]
1690
1691 It's also possible for PCI devices to provide multiple devices per node with
1692 multiple map properties for the nodes. If such a device is assigned to a guest,
1693 the first free one will be used when the guest is started. The order of the
1694 paths given is also the order in which they are tried, so arbitrary allocation
1695 policies can be implemented.
1696
This is useful for devices with SR-IOV, since sometimes it is not important
1698 which exact virtual function is passed through.
1699
1700 You can assign such a device to a guest either with the GUI or with
1701
1702 ----
1703 # qm set ID -hostpci0 <name>
1704 ----
1705
1706 for PCI devices, or
1707
1708 ----
1709 # qm set <vmid> -usb0 <name>
1710 ----
1711
1712 for USB devices.
1713
Where `<vmid>` is the guest's ID and `<name>` is the chosen name for the created
1715 mapping. All usual options for passing through the devices are allowed, such as
1716 `mdev`.
1717
1718 To create mappings `Mapping.Modify` on `/mapping/<type>/<name>` is necessary
1719 (where `<type>` is the device type and `<name>` is the name of the mapping).
1720
1721 To use these mappings, `Mapping.Use` on `/mapping/<type>/<name>` is necessary
1722 (in addition to the normal guest privileges to edit the configuration).
1723
1724 Managing Virtual Machines with `qm`
1725 ------------------------------------
1726
1727 qm is the tool to manage QEMU/KVM virtual machines on {pve}. You can
1728 create and destroy virtual machines, and control execution
1729 (start/stop/suspend/resume). Besides that, you can use qm to set
1730 parameters in the associated config file. It is also possible to
1731 create and delete virtual disks.
1732
1733 CLI Usage Examples
1734 ~~~~~~~~~~~~~~~~~~
1735
Using an ISO file uploaded on the 'local' storage, create a VM
1737 with a 4 GB IDE disk on the 'local-lvm' storage
1738
1739 ----
1740 # qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
1741 ----
1742
1743 Start the new VM
1744
1745 ----
1746 # qm start 300
1747 ----
1748
1749 Send a shutdown request, then wait until the VM is stopped.
1750
1751 ----
1752 # qm shutdown 300 && qm wait 300
1753 ----
1754
1755 Same as above, but only wait for 40 seconds.
1756
1757 ----
1758 # qm shutdown 300 && qm wait 300 -timeout 40
1759 ----
1760
1761 Destroying a VM always removes it from Access Control Lists and it always
1762 removes the firewall configuration of the VM. You have to activate
'--purge' if you want to additionally remove the VM from replication jobs,
1764 backup jobs and HA resource configurations.
1765
1766 ----
1767 # qm destroy 300 --purge
1768 ----
1769
1770 Move a disk image to a different storage.
1771
1772 ----
1773 # qm move-disk 300 scsi0 other-storage
1774 ----
1775
1776 Reassign a disk image to a different VM. This will remove the disk `scsi1` from
the source VM and attach it as `scsi3` to the target VM. In the background
the disk image is renamed so that the name matches the new owner.
1779
1780 ----
1781 # qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
1782 ----
1783
1784
1785 [[qm_configuration]]
1786 Configuration
1787 -------------
1788
1789 VM configuration files are stored inside the Proxmox cluster file
1790 system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
1791 Like other files stored inside `/etc/pve/`, they get automatically
1792 replicated to all other cluster nodes.
1793
1794 NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
1795 unique cluster wide.
1796
1797 .Example VM Configuration
1798 ----
1799 boot: order=virtio0;net0
1800 cores: 1
1801 sockets: 1
1802 memory: 512
1803 name: webmail
1804 ostype: l26
1805 net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
1806 virtio0: local:vm-100-disk-1,size=32G
1807 ----
1808
1809 Those configuration files are simple text files, and you can edit them
1810 using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful for making small corrections, but keep in mind that you need to
1812 restart the VM to apply such changes.
1813
1814 For that reason, it is usually better to use the `qm` command to
1815 generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to a
running VM. This feature is called "hot plug", and there is no
1818 need to restart the VM in that case.
1819
1820
1821 File Format
1822 ~~~~~~~~~~~
1823
1824 VM configuration files use a simple colon separated key/value
1825 format. Each line has the following format:
1826
1827 -----
1828 # this is a comment
1829 OPTION: value
1830 -----
1831
1832 Blank lines in those files are ignored, and lines starting with a `#`
1833 character are treated as comments and are also ignored.
1834
1835
1836 [[qm_snapshots]]
1837 Snapshots
1838 ~~~~~~~~~
1839
1840 When you create a snapshot, `qm` stores the configuration at snapshot
1841 time into a separate snapshot section within the same configuration
1842 file. For example, after creating a snapshot called ``testsnapshot'',
1843 your configuration file will look like this:
1844
1845 .VM configuration with snapshot
1846 ----
1847 memory: 512
1848 swap: 512
parent: testsnapshot
...

[testsnapshot]
1853 memory: 512
1854 swap: 512
1855 snaptime: 1457170803
1856 ...
1857 ----
1858
1859 There are a few snapshot related properties like `parent` and
1860 `snaptime`. The `parent` property is used to store the parent/child
1861 relationship between snapshots. `snaptime` is the snapshot creation
1862 time stamp (Unix epoch).
1863
1864 You can optionally save the memory of a running VM with the option `vmstate`.
1865 For details about how the target storage gets chosen for the VM state, see
1866 xref:qm_vmstatestorage[State storage selection] in the chapter
1867 xref:qm_hibernate[Hibernation].
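
For example, the following creates a snapshot of VM 100 (a placeholder VMID)
and includes the memory state:

----
# qm snapshot 100 testsnapshot --vmstate 1
----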
1868
1869 [[qm_options]]
1870 Options
1871 ~~~~~~~
1872
1873 include::qm.conf.5-opts.adoc[]
1874
1875
1876 Locks
1877 -----
1878
1879 Online migrations, snapshots and backups (`vzdump`) set a lock to prevent
1880 incompatible concurrent actions on the affected VMs. Sometimes you need to
1881 remove such a lock manually (for example after a power failure).
1882
1883 ----
1884 # qm unlock <vmid>
1885 ----
1886
1887 CAUTION: Only do that if you are sure the action which set the lock is
1888 no longer running.
1889
1890 ifdef::wiki[]
1891
1892 See Also
1893 ~~~~~~~~
1894
1895 * link:/wiki/Cloud-Init_Support[Cloud-Init Support]
1896
1897 endif::wiki[]
1898
1899
1900 ifdef::manvolnum[]
1901
1902 Files
1903 ------
1904
1905 `/etc/pve/qemu-server/<VMID>.conf`::
1906
1907 Configuration file for the VM '<VMID>'.
1908
1909
1910 include::pve-copyright.adoc[]
1911 endif::manvolnum[]