1 [[chapter_virtual_machines]]
2 ifdef::manvolnum[]
3 qm(1)
4 =====
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 qm - QEMU/KVM Virtual Machine Manager
11
12
13 SYNOPSIS
14 --------
15
16 include::qm.1-synopsis.adoc[]
17
18 DESCRIPTION
19 -----------
20 endif::manvolnum[]
21 ifndef::manvolnum[]
22 QEMU/KVM Virtual Machines
23 =========================
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 // deprecates
28 // http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
29 // http://pve.proxmox.com/wiki/KVM
30 // http://pve.proxmox.com/wiki/Qemu_Server
31
QEMU (short for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where QEMU is
running, QEMU is a user program which has access to a number of local resources
like partitions, files, and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.
37
38 A guest operating system running in the emulated computer accesses these
39 devices, and runs as if it were running on real hardware. For instance, you can pass
40 an ISO image as a parameter to QEMU, and the OS running in the emulated computer
41 will see a real CD-ROM inserted into a CD drive.
42
QEMU can emulate a great variety of hardware from ARM to SPARC, but {pve} is
only concerned with 32- and 64-bit PC clone emulation, since it represents the
45 overwhelming majority of server hardware. The emulation of PC clones is also one
46 of the fastest due to the availability of processor extensions which greatly
47 speed up QEMU when the emulated architecture is the same as the host
48 architecture.
49
50 NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
51 It means that QEMU is running with the support of the virtualization processor
52 extensions, via the Linux KVM module. In the context of {pve} _QEMU_ and
53 _KVM_ can be used interchangeably, as QEMU in {pve} will always try to load the KVM
54 module.
55
56 QEMU inside {pve} runs as a root process, since this is required to access block
57 and PCI devices.
58
59
60 Emulated devices and paravirtualized devices
61 --------------------------------------------
62
63 The PC hardware emulated by QEMU includes a motherboard, network controllers,
SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
the `kvm(1)` man page), all of them emulated in software. All these devices
66 are the exact software equivalent of existing hardware devices, and if the OS
67 running in the guest has the proper drivers it will use the devices as if it
68 were running on real hardware. This allows QEMU to run _unmodified_ operating
69 systems.
70
71 This however has a performance cost, as running in software what was meant to
72 run in hardware involves a lot of extra work for the host CPU. To mitigate this,
73 QEMU can present to the guest operating system _paravirtualized devices_, where
74 the guest OS recognizes it is running inside QEMU and cooperates with the
75 hypervisor.
76
QEMU relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, and so on.
81
82 TIP: It is *highly recommended* to use the virtio devices whenever you can, as
83 they provide a big performance improvement and are generally better maintained.
84 Using the virtio generic disk controller versus an emulated IDE controller will
85 double the sequential write throughput, as measured with `bonnie++(8)`. Using
86 the virtio network interface can deliver up to three times the throughput of an
87 emulated Intel E1000 network card, as measured with `iperf(1)`. footnote:[See
88 this benchmark on the KVM wiki https://www.linux-kvm.org/page/Using_VirtIO_NIC]
89
90
91 [[qm_virtual_machines_settings]]
92 Virtual Machines Settings
93 -------------------------
94
Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as
changing them could cause a performance slowdown or put your data at risk.
98
99
100 [[qm_general_settings]]
101 General Settings
102 ~~~~~~~~~~~~~~~~
103
104 [thumbnail="screenshot/gui-create-vm-general.png"]
105
106 General settings of a VM include
107
* the *Node*: the physical server on which the VM will run
109 * the *VM ID*: a unique number in this {pve} installation used to identify your VM
110 * *Name*: a free form text string you can use to describe the VM
111 * *Resource Pool*: a logical group of VMs
112
113
114 [[qm_os_settings]]
115 OS Settings
116 ~~~~~~~~~~~
117
118 [thumbnail="screenshot/gui-create-vm-os.png"]
119
When creating a virtual machine (VM), setting the proper Operating System (OS)
allows {pve} to optimize some low level parameters. For instance, Windows OSes
expect the BIOS clock to use the local time, while Unix-based OSes expect the
BIOS clock to have the UTC time.
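
For example, the OS type can also be set on an existing VM via the CLI (a
minimal sketch, using a Windows 11 guest):

----
# qm set <vmid> -ostype win11
----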
124
125 [[qm_system_settings]]
126 System Settings
127 ~~~~~~~~~~~~~~~
128
129 On VM creation you can change some basic system components of the new VM. You
130 can specify which xref:qm_display[display type] you want to use.
131 [thumbnail="screenshot/gui-create-vm-system.png"]
132 Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
133 If you plan to install the QEMU Guest Agent, or if your selected ISO image
134 already ships and installs it automatically, you may want to tick the 'QEMU
135 Agent' box, which lets {pve} know that it can use its features to show some
136 more information, and complete some actions (for example, shutdown or
137 snapshots) more intelligently.
138
{pve} allows booting VMs with different firmware and machine types, namely
140 xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
141 the default SeaBIOS to OVMF only if you plan to use
142 xref:qm_pci_passthrough[PCIe passthrough].
143
144 [[qm_machine_type]]
145
146 Machine Type
147 ^^^^^^^^^^^^
148
149 A VM's 'Machine Type' defines the hardware layout of the VM's virtual
150 motherboard. You can choose between the default
151 https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
152 https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
153 chipset, which also provides a virtual PCIe bus, and thus may be
154 desired if you want to pass through PCIe hardware.
155 Additionally, you can select a xref:qm_pci_viommu[vIOMMU] implementation.
156
157 Machine Version
158 +++++++++++++++
159
160 Each machine type is versioned in QEMU and a given QEMU binary supports many
161 machine versions. New versions might bring support for new features, fixes or
162 general improvements. However, they also change properties of the virtual
163 hardware. To avoid sudden changes from the guest's perspective and ensure
164 compatibility of the VM state, live-migration and snapshots with RAM will keep
165 using the same machine version in the new QEMU instance.
166
167 For Windows guests, the machine version is pinned during creation, because
168 Windows is sensitive to changes in the virtual hardware - even between cold
169 boots. For example, the enumeration of network devices might be different with
170 different machine versions. Other OSes like Linux can usually deal with such
171 changes just fine. For those, the 'Latest' machine version is used by default.
172 This means that after a fresh start, the newest machine version supported by the
173 QEMU binary is used (e.g. the newest machine version QEMU 8.1 supports is
174 version 8.1 for each machine type).
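
For example, a sketch of selecting the Q35 machine type, or of pinning a
specific machine version (assuming the QEMU binary in use provides version
8.1):

----
# qm set <vmid> -machine q35
# qm set <vmid> -machine pc-q35-8.1
----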
175
176 [[qm_machine_update]]
177
178 Update to a Newer Machine Version
179 +++++++++++++++++++++++++++++++++
180
181 Very old machine versions might become deprecated in QEMU. For example, this is
182 the case for versions 1.4 to 1.7 for the i440fx machine type. It is expected
183 that support for these machine versions will be dropped at some point. If you
184 see a deprecation warning, you should change the machine version to a newer one.
185 Be sure to have a working backup first and be prepared for changes to how the
186 guest sees hardware. In some scenarios, re-installing certain drivers might be
187 required. You should also check for snapshots with RAM that were taken with
188 these machine versions (i.e. the `runningmachine` configuration entry).
189 Unfortunately, there is no way to change the machine version of a snapshot, so
190 you'd need to load the snapshot to salvage any data from it.
191
192 [[qm_hard_disk]]
193 Hard Disk
194 ~~~~~~~~~
195
196 [[qm_hard_disk_bus]]
197 Bus/Controller
198 ^^^^^^^^^^^^^^
199 QEMU can emulate a number of storage controllers:
200
201 TIP: It is highly recommended to use the *VirtIO SCSI* or *VirtIO Block*
202 controller for performance reasons and because they are better maintained.
203
* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
205 controller. Even if this controller has been superseded by recent designs,
206 each and every OS you can think of has support for it, making it a great choice
207 if you want to run an OS released before 2003. You can connect up to 4 devices
208 on this controller.
209
210 * the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
211 design, allowing higher throughput and a greater number of devices to be
212 connected. You can connect up to 6 devices on this controller.
213
214 * the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates an
LSI 53C895A controller by default.
217 +
218 A SCSI controller of type _VirtIO SCSI single_ and enabling the
219 xref:qm_hard_disk_iothread[IO Thread] setting for the attached disks is
220 recommended if you aim for performance. This is the default for newly created
221 Linux VMs since {pve} 7.3. Each disk will have its own _VirtIO SCSI_ controller,
and QEMU will handle the disk IO in a dedicated thread. Linux distributions
223 have support for this controller since 2012, and FreeBSD since 2014. For Windows
224 OSes, you need to provide an extra ISO containing the drivers during the
225 installation.
226 // https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
227
228 * The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
229 is an older type of paravirtualized controller. It has been superseded by the
230 VirtIO SCSI Controller, in terms of features.
231
232 [thumbnail="screenshot/gui-create-vm-hard-disk.png"]
233
234 [[qm_hard_disk_formats]]
235 Image Format
236 ^^^^^^^^^^^^
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file-based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.
243
244 * the *QEMU image format* is a copy on write format which allows snapshots, and
245 thin provisioning of the disk image.
246 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
247 you would get when executing the `dd` command on a block device in Linux. This
248 format does not support thin provisioning or snapshots by itself, requiring
249 cooperation from the storage layer for these tasks. It may, however, be up to
250 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
251 https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
252 * the *VMware image format* only makes sense if you intend to import/export the
253 disk image to other hypervisors.
254
255 [[qm_hard_disk_cache]]
256 Cache Mode
257 ^^^^^^^^^^
258 Setting the *Cache* mode of the hard drive will impact how the host system will
259 notify the guest systems of block write completions. The *No cache* default
260 means that the guest system will be notified that a write is complete when each
261 block reaches the physical storage write queue, ignoring the host page cache.
262 This provides a good balance between safety and speed.
263
264 If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
265 you can set the *No backup* option on that disk.
266
267 If you want the {pve} storage replication mechanism to skip a disk when starting
268 a replication job, you can set the *Skip replication* option on that disk.
269 As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.
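
Both options can also be set on an existing disk via the CLI, for example (a
sketch; the volume name is a placeholder for the disk's actual volume):

----
# qm set <vmid> -scsi0 local-lvm:vm-100-disk-0,backup=0,replicate=0
----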
272
273 [[qm_hard_disk_discard]]
274 Trim/Discard
275 ^^^^^^^^^^^^
276 If your storage supports _thin provisioning_ (see the storage chapter in the
277 {pve} guide), you can activate the *Discard* option on a drive. With *Discard*
278 set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
279 https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
280 marks blocks as unused after deleting files, the controller will relay this
281 information to the storage, which will then shrink the disk image accordingly.
282 For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
283 option on the drive. Some guest operating systems may also require the
284 *SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
285 only supported on guests using Linux Kernel 5.0 or higher.
286
287 If you would like a drive to be presented to the guest as a solid-state drive
288 rather than a rotational hard disk, you can set the *SSD emulation* option on
289 that drive. There is no requirement that the underlying storage actually be
290 backed by SSDs; this feature can be used with physical media of any type.
291 Note that *SSD emulation* is not supported on *VirtIO Block* drives.
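
As a sketch, both options could be enabled when adding a new 32 GiB disk via
the CLI (the storage name is a placeholder):

----
# qm set <vmid> -scsi1 local-lvm:32,discard=on,ssd=1
----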
292
293
294 [[qm_hard_disk_iothread]]
295 IO Thread
296 ^^^^^^^^^
297 The option *IO Thread* can only be used when using a disk with the *VirtIO*
298 controller, or with the *SCSI* controller, when the emulated controller type is
299 *VirtIO SCSI single*. With *IO Thread* enabled, QEMU creates one I/O thread per
300 storage controller rather than handling all I/O in the main event loop or vCPU
301 threads. One benefit is better work distribution and utilization of the
302 underlying storage. Another benefit is reduced latency (hangs) in the guest for
303 very I/O-intensive host workloads, since neither the main thread nor a vCPU
304 thread can be blocked by disk I/O.
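
A minimal sketch of the recommended combination, setting the controller type
and adding a disk with *IO Thread* enabled (placeholder storage name):

----
# qm set <vmid> -scsihw virtio-scsi-single -scsi0 local-lvm:32,iothread=1
----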
305
306 [[qm_cpu]]
307 CPU
308 ~~~
309
310 [thumbnail="screenshot/gui-create-vm-cpu.png"]
311
312 A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
313 This CPU can then contain one or many *cores*, which are independent
314 processing units. Whether you have a single CPU socket with 4 cores, or two CPU
315 sockets with two cores is mostly irrelevant from a performance point of view.
However, some software licenses depend on the number of sockets a machine has;
in that case it makes sense to set the number of sockets to what the license
allows you to use.
319
320 Increasing the number of virtual CPUs (cores and sockets) will usually provide a
321 performance improvement though that is heavily dependent on the use of the VM.
322 Multi-threaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, QEMU will create a new thread of
324 execution on the host system. If you're not sure about the workload of your VM,
325 it is usually a safe bet to set the number of *Total cores* to 2.
326
327 NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
328 is greater than the number of cores on the server (for example, 4 VMs each with
329 4 cores (= total 16) on a machine with only 8 cores). In that case the host
330 system will balance the QEMU execution threads between your server cores, just
331 like if you were running a standard multi-threaded application. However, {pve}
332 will prevent you from starting VMs with more virtual CPU cores than physically
333 available, as this will only bring the performance down due to the cost of
334 context switches.
335
336 [[qm_cpu_resource_limits]]
337 Resource Limits
338 ^^^^^^^^^^^^^^^
339
340 *cpulimit*
341
342 In addition to the number of virtual cores, the total available ``Host CPU
343 Time'' for the VM can be set with the *cpulimit* option. It is a floating point
344 value representing CPU time in percent, so `1.0` is equal to `100%`, `2.5` to
345 `250%` and so on. If a single process would fully use one single core it would
346 have `100%` CPU Time usage. If a VM with four cores utilizes all its cores
347 fully it would theoretically use `400%`. In reality the usage may be even a bit
348 higher as QEMU can have additional threads for VM peripherals besides the vCPU
349 core ones.
350
351 This setting can be useful when a VM should have multiple vCPUs because it is
352 running some processes in parallel, but the VM as a whole should not be able to
353 run all vCPUs at 100% at the same time.
354
355 For example, suppose you have a virtual machine that would benefit from having 8
356 virtual CPUs, but you don't want the VM to be able to max out all 8 cores
357 running at full load - because that would overload the server and leave other
358 virtual machines and containers with too little CPU time. To solve this, you
359 could set *cpulimit* to `4.0` (=400%). This means that if the VM fully utilizes
360 all 8 virtual CPUs by running 8 processes simultaneously, each vCPU will receive
361 a maximum of 50% CPU time from the physical cores. However, if the VM workload
362 only fully utilizes 4 virtual CPUs, it could still receive up to 100% CPU time
363 from a physical core, for a total of 400%.
364
365 NOTE: VMs can, depending on their configuration, use additional threads, such
366 as for networking or IO operations but also live migration. Thus a VM can show
367 up to use more CPU time than just its virtual CPUs could use. To ensure that a
368 VM never uses more CPU time than vCPUs assigned, set the *cpulimit* to
369 the same value as the total core count.
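
For example, the scenario above (8 vCPUs limited to 400% host CPU time) could
be configured like this (a sketch):

----
# qm set <vmid> -cores 8 -cpulimit 4
----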
370
*cpuunits*
372
373 With the *cpuunits* option, nowadays often called CPU shares or CPU weight, you
374 can control how much CPU time a VM gets compared to other running VMs. It is a
375 relative weight which defaults to `100` (or `1024` if the host uses legacy
376 cgroup v1). If you increase this for a VM it will be prioritized by the
377 scheduler in comparison to other VMs with lower weight.
378
For example, if VM 100 is set to the default `100` and VM 200 was changed to
`200`, the latter VM 200 would receive twice the CPU bandwidth of
VM 100.
382
For more information see `man systemd.resource-control`; there, `CPUQuota`
corresponds to `cpulimit` and `CPUWeight` to our `cpuunits` setting. Visit its
Notes section for references and implementation details.
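
The weights from this example could be applied as follows (a sketch):

----
# qm set 100 -cpuunits 100
# qm set 200 -cpuunits 200
----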
386
387 *affinity*
388
389 With the *affinity* option, you can specify the physical CPU cores that are used
390 to run the VM's vCPUs. Peripheral VM processes, such as those for I/O, are not
391 affected by this setting. Note that the *CPU affinity is not a security
392 feature*.
393
Forcing a CPU *affinity* can make sense in certain cases but is accompanied by
an increase in complexity and maintenance effort, for example, if you want to
add more VMs later or migrate VMs to nodes with fewer CPU cores. It can also
397 easily lead to asynchronous and therefore limited system performance if some
398 CPUs are fully utilized while others are almost idle.
399
400 The *affinity* is set through the `taskset` CLI tool. It accepts the host CPU
401 numbers (see `lscpu`) in the `List Format` from `man cpuset`. This ASCII decimal
402 list can contain numbers but also number ranges. For example, the *affinity*
403 `0-1,8-11` (expanded `0, 1, 8, 9, 10, 11`) would allow the VM to run on only
404 these six specific host cores.
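
For example, the affinity from above could be applied like this (a sketch):

----
# qm set <vmid> -affinity 0-1,8-11
----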
405
406 CPU Type
407 ^^^^^^^^
408
QEMU can emulate a number of different *CPU types* from 486 to the latest Xeon
410 processors. Each new processor generation adds new features, like hardware
411 assisted 3d rendering, random number generation, memory protection, etc. Also,
412 a current generation can be upgraded through
413 xref:chapter_firmware_updates[microcode update] with bug or security fixes.
414
415 Usually you should select for your VM a processor type which closely matches the
416 CPU of the host system, as it means that the host CPU features (also called _CPU
417 flags_ ) will be available in your VMs. If you want an exact match, you can set
418 the CPU type to *host* in which case the VM will have exactly the same CPU flags
419 as your host system.
420
421 This has a downside though. If you want to do a live migration of VMs between
422 different hosts, your VM might end up on a new system with a different CPU type
423 or a different microcode version.
If the CPU flags passed to the guest are missing, the QEMU process will stop. To
remedy this, QEMU also has its own virtual CPU types, which {pve} uses by default.
426
427 The backend default is 'kvm64' which works on essentially all x86_64 host CPUs
428 and the UI default when creating a new VM is 'x86-64-v2-AES', which requires a
429 host CPU starting from Westmere for Intel or at least a fourth generation
430 Opteron for AMD.
431
432 In short:
433
434 If you don’t care about live migration or have a homogeneous cluster where all
435 nodes have the same CPU and same microcode version, set the CPU type to host, as
436 in theory this will give your guests maximum performance.
437
438 If you care about live migration and security, and you have only Intel CPUs or
439 only AMD CPUs, choose the lowest generation CPU model of your cluster.
440
If you care about live migration without security, or have a mixed Intel/AMD
cluster, choose the lowest compatible virtual QEMU CPU type.
443
444 NOTE: Live migrations between Intel and AMD host CPUs have no guarantee to work.
445
446 See also
447 xref:chapter_qm_vcpu_list[List of AMD and Intel CPU Types as Defined in QEMU].
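
For example, a sketch of both approaches via the CLI:

----
# qm set <vmid> -cpu host
# qm set <vmid> -cpu x86-64-v2-AES
----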
448
449 QEMU CPU Types
450 ^^^^^^^^^^^^^^
451
QEMU also provides virtual CPU types, compatible with both Intel and AMD host
453 CPUs.
454
455 NOTE: To mitigate the Spectre vulnerability for virtual CPU types, you need to
456 add the relevant CPU flags, see
457 xref:qm_meltdown_spectre[Meltdown / Spectre related CPU flags].
458
459 Historically, {pve} had the 'kvm64' CPU model, with CPU flags at the level of
460 Pentium 4 enabled, so performance was not great for certain workloads.
461
462 In the summer of 2020, AMD, Intel, Red Hat, and SUSE collaborated to define
463 three x86-64 microarchitecture levels on top of the x86-64 baseline, with modern
464 flags enabled. For details, see the
465 https://gitlab.com/x86-psABIs/x86-64-ABI[x86-64-ABI specification].
466
467 NOTE: Some newer distributions like CentOS 9 are now built with 'x86-64-v2'
468 flags as a minimum requirement.
469
470 * 'kvm64 (x86-64-v1)': Compatible with Intel CPU >= Pentium 4, AMD CPU >=
471 Phenom.
472 +
473 * 'x86-64-v2': Compatible with Intel CPU >= Nehalem, AMD CPU >= Opteron_G3.
474 Added CPU flags compared to 'x86-64-v1': '+cx16', '+lahf-lm', '+popcnt', '+pni',
475 '+sse4.1', '+sse4.2', '+ssse3'.
476 +
477 * 'x86-64-v2-AES': Compatible with Intel CPU >= Westmere, AMD CPU >= Opteron_G4.
478 Added CPU flags compared to 'x86-64-v2': '+aes'.
479 +
480 * 'x86-64-v3': Compatible with Intel CPU >= Broadwell, AMD CPU >= EPYC. Added
481 CPU flags compared to 'x86-64-v2-AES': '+avx', '+avx2', '+bmi1', '+bmi2',
482 '+f16c', '+fma', '+movbe', '+xsave'.
483 +
484 * 'x86-64-v4': Compatible with Intel CPU >= Skylake, AMD CPU >= EPYC v4 Genoa.
485 Added CPU flags compared to 'x86-64-v3': '+avx512f', '+avx512bw', '+avx512cd',
486 '+avx512dq', '+avx512vl'.
487
488 Custom CPU Types
489 ^^^^^^^^^^^^^^^^
490
491 You can specify custom CPU types with a configurable set of features. These are
492 maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
493 an administrator. See `man cpu-models.conf` for format details.
494
495 Specified custom types can be selected by any user with the `Sys.Audit`
496 privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
497 or API, the name needs to be prefixed with 'custom-'.
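
As an illustrative sketch (the model name, flags, and reported model below are
placeholders; see `man cpu-models.conf` for the authoritative format), an entry
and its use could look like this:

----
cpu-model: my-model
    flags +aes;+avx
    reported-model kvm64
----

----
# qm set <vmid> -cpu custom-my-model
----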
498
499 [[qm_meltdown_spectre]]
500 Meltdown / Spectre related CPU flags
501 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
502
503 There are several CPU flags related to the Meltdown and Spectre vulnerabilities
504 footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
505 manually unless the selected CPU type of your VM already enables them by default.
506
507 There are two requirements that need to be fulfilled in order to use these
508 CPU flags:
509
510 * The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
511 * The guest operating system must be updated to a version which mitigates the
512 attacks and is able to utilize the CPU feature
513
514 Otherwise you need to set the desired CPU flag of the virtual CPU, either by
515 editing the CPU options in the web UI, or by setting the 'flags' property of the
516 'cpu' option in the VM configuration file.
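
For example, flags could be added via the CLI like this (a sketch; the chosen
flags must be supported by the host CPU and suit the selected CPU type):

----
# qm set <vmid> -cpu 'cputype=x86-64-v2-AES,flags=+spec-ctrl;+pcid'
----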
517
518 For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
519 so-called ``microcode update'' for your CPU, see
520 xref:chapter_firmware_updates[chapter Firmware Updates]. Note that not all
521 affected CPUs can be updated to support spec-ctrl.
522
523
524 To check if the {pve} host is vulnerable, execute the following command as root:
525
526 ----
527 for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
528 ----
529
530 A community script is also available to detect if the host is still vulnerable.
531 footnote:[spectre-meltdown-checker https://meltdown.ovh/]
532
533 Intel processors
534 ^^^^^^^^^^^^^^^^
535
536 * 'pcid'
537 +
538 This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
539 called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
540 the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
541 mechanism footnote:[PCID is now a critical performance/security feature on x86
542 https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
543 +
544 To check if the {pve} host supports PCID, execute the following command as root:
545 +
546 ----
547 # grep ' pcid ' /proc/cpuinfo
548 ----
549 +
If this does not return empty, your host's CPU has support for 'pcid'.
551
552 * 'spec-ctrl'
553 +
554 Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
555 in cases where retpolines are not sufficient.
556 Included by default in Intel CPU models with -IBRS suffix.
557 Must be explicitly turned on for Intel CPU models without -IBRS suffix.
558 Requires an updated host CPU microcode (intel-microcode >= 20180425).
559 +
560 * 'ssbd'
561 +
562 Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
563 Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).
565
566
567 AMD processors
568 ^^^^^^^^^^^^^^
569
570 * 'ibpb'
571 +
572 Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
573 in cases where retpolines are not sufficient.
574 Included by default in AMD CPU models with -IBPB suffix.
575 Must be explicitly turned on for AMD CPU models without -IBPB suffix.
576 Requires the host CPU microcode to support this feature before it can be used for guest CPUs.
577
578
579
580 * 'virt-ssbd'
581 +
582 Required to enable the Spectre v4 (CVE-2018-3639) fix.
583 Not included by default in any AMD CPU model.
584 Must be explicitly turned on for all AMD CPU models.
585 This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" cpu model,
587 because this is a virtual feature which does not exist in the physical CPUs.
588
589
590 * 'amd-ssbd'
591 +
592 Required to enable the Spectre v4 (CVE-2018-3639) fix.
593 Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
594 This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.
596
597
598 * 'amd-no-ssb'
599 +
600 Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
601 Not included by default in any AMD CPU model.
602 Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
603 and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
604 This is mutually exclusive with virt-ssbd and amd-ssbd.
605
606
607 NUMA
608 ^^^^
609 You can also optionally emulate a *NUMA*
610 footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
611 in your VMs. The basics of the NUMA architecture mean that instead of having a
612 global memory pool available to all your cores, the memory is spread into local
613 banks close to each socket.
614 This can bring speed improvements as the memory bus is not a bottleneck
615 anymore. If your system has a NUMA architecture footnote:[if the command
616 `numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
618 will allow proper distribution of the VM resources on the host system.
619 This option is also required to hot-plug cores or RAM in a VM.
620
621 If the NUMA option is used, it is recommended to set the number of sockets to
622 the number of nodes of the host system.
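
For example, on a host with two NUMA nodes, a sketch of such a configuration:

----
# qm set <vmid> -numa 1 -sockets 2 -cores 4
----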
623
624 vCPU hot-plug
625 ^^^^^^^^^^^^^
626
627 Modern operating systems introduced the capability to hot-plug and, to a
628 certain extent, hot-unplug CPUs in a running system. Virtualization allows us
629 to avoid a lot of the (physical) problems real hardware can cause in such
630 scenarios.
631 Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
633 be replicated with other, well tested and less complicated, features, see
634 xref:qm_cpu_resource_limits[Resource Limits].
635
In {pve} the maximal number of plugged-in CPUs is always `cores * sockets`.
To start a VM with fewer than this total core count of CPUs you may use the
*vcpus* setting; it denotes how many vCPUs should be plugged in at VM start.
639
Currently this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.
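
As a sketch, the following defines 4 cores but plugs in only 2 vCPUs at start;
raising *vcpus* later hot-plugs the remaining ones (this assumes CPU hotplug is
enabled in the VM's 'Hotplug' options):

----
# qm set <vmid> -cores 4 -vcpus 2
# qm set <vmid> -vcpus 4
----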
642
You can use a udev rule as follows to automatically set new CPUs as online in
644 the guest:
645
646 ----
647 SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
648 ----
649
Save this under `/etc/udev/rules.d/` as a file ending in `.rules`.
651
Note: CPU hot-remove is machine dependent and requires guest cooperation. The
deletion command does not guarantee CPU removal to actually happen; typically
it's a request forwarded to the guest OS using a target dependent mechanism,
such as ACPI on x86/amd64.
656
657
658 [[qm_memory]]
659 Memory
660 ~~~~~~
661
For each VM you have the option to set a fixed amount of memory or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.
665
666 .Fixed Memory Allocation
667 [thumbnail="screenshot/gui-create-vm-memory.png"]
668
669 When setting memory and minimum memory to the same amount
670 {pve} will simply allocate what you specify to your VM.
671
672 Even when using a fixed memory size, the ballooning device gets added to the
673 VM, because it delivers useful information such as how much memory the guest
674 really uses.
675 In general, you should leave *ballooning* enabled, but if you want to disable
676 it (like for debugging purposes), simply uncheck *Ballooning Device* or set
677
678 balloon: 0
679
680 in the configuration.
681
682 .Automatic Memory Allocation
683
684 // see autoballoon() in pvestatd.pm
685 When setting the minimum memory lower than memory, {pve} will make sure that the
686 minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, it will dynamically add memory to the guest up to the
688 maximum memory specified.
689
690 When the host is running low on RAM, the VM will then release some memory
691 back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
693 done via a special `balloon` kernel driver running inside the guest, which will
694 grab or release memory pages from the host.
695 footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]
696
697 When multiple VMs use the autoallocate facility, it is possible to set a
698 *Shares* coefficient which indicates the relative amount of the free host memory
699 that each VM should take. Suppose for instance you have four VMs, three of them
700 running an HTTP server and the last one is a database server. To cache more
701 database blocks in the database server RAM, you would like to prioritize the
702 database VM when spare RAM is available. For this you assign a Shares property
703 of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB, roughly 9GB, of RAM to be allocated to the VMs on top
of their configured minimum memory amount. The database VM will benefit from
9 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.5 GB extra RAM and each HTTP server
from 1.5 GB.
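
The database VM from this example would then carry configuration entries along
these lines (a sketch; the memory sizes are placeholders):

----
memory: 16384
balloon: 4096
shares: 3000
----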
708
709 All Linux distributions released after 2010 have the balloon kernel driver
710 included. For Windows OSes, the balloon driver needs to be added manually and can
711 incur a slowdown of the guest, so we don't recommend using it on critical
712 systems.
713 // see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
714
715 When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
716 of RAM available to the host.
717
718
719 [[qm_network_device]]
720 Network Device
721 ~~~~~~~~~~~~~~
722
723 [thumbnail="screenshot/gui-create-vm-network.png"]
724
725 Each VM can have many _Network interface controllers_ (NIC), of four different
726 types:
727
728 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
729 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
730 performance. Like all VirtIO devices, the guest OS should have the proper driver
731 installed.
* the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
only be used when emulating older operating systems (released before 2002)
734 * the *vmxnet3* is another paravirtualized device, which should only be used
735 when importing a VM from another hypervisor.
736
737 {pve} will generate for each NIC a random *MAC address*, so that your VM is
738 addressable on Ethernet networks.
739
740 The NIC you added to the VM can follow one of two different models:
741
* in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
744 tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
745 have direct access to the Ethernet LAN on which the host is located.
746 * in the alternative *NAT mode*, each virtual NIC will only communicate with
747 the QEMU user networking stack, where a built-in router and DHCP server can
748 provide network access. This built-in DHCP will serve addresses in the private
749 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
750 should only be used for testing. This mode is only available via CLI or the API,
751 but not via the web UI.
752
753 You can also skip adding a network device when creating a VM by selecting *No
754 network device*.
755
756 You can overwrite the *MTU* setting for each VM network device. The option
757 `mtu=1` represents a special case, in which the MTU value will be inherited
758 from the underlying bridge.
759 This option is only available for *VirtIO* network devices.
760
761 .Multiqueue
762 If you are using the VirtIO driver, you can optionally activate the
763 *Multiqueue* option. This option allows the guest OS to process networking
764 packets using multiple virtual CPUs, providing an increase in the total number
765 of packets transferred.
766
767 //http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
768 When using the VirtIO driver with {pve}, each NIC network queue is passed to the
769 host kernel, where the queue will be processed by a kernel thread spawned by the
770 vhost driver. With this option activated, it is possible to pass _multiple_
771 network queues to the host kernel for each NIC.
772
773 //https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
774 When using Multiqueue, it is recommended to set it to a value equal to the
775 number of vCPUs of your guest. Remember that the number of vCPUs is the number
776 of sockets times the number of cores configured for the VM. You also need to set
777 the number of multi-purpose channels on each VirtIO NIC in the VM with this
778 ethtool command:
779
780 `ethtool -L ens1 combined X`
781
where X is the number of vCPUs of the VM.
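
For example, a sketch of enabling four queues on a VM with 4 vCPUs (note that
re-specifying `net0` without a MAC address generates a new one, so include the
existing parameters when editing a live configuration):

----
# qm set <vmid> -net0 virtio,bridge=vmbr0,queues=4
----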
783
784 To configure a Windows guest for Multiqueue install the
785 https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers[Redhat VirtIO Ethernet
786 Adapter drivers], then adapt the NIC's configuration as follows. Open the
787 device manager, right click the NIC under "Network adapters", and select
788 "Properties". Then open the "Advanced" tab and select "Receive Side Scaling"
789 from the list on the left. Make sure it is set to "Enabled". Next, navigate to
790 "Maximum number of RSS Queues" in the list and set it to the number of vCPUs of
791 your VM. Once you verified that the settings are correct, click "OK" to confirm
792 them.
793
794 You should note that setting the Multiqueue parameter to a value greater
795 than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
797 process a great number of incoming connections, such as when the VM is running
798 as a router, reverse proxy or a busy HTTP server doing long polling.
799
800 [[qm_display]]
801 Display
802 ~~~~~~~
803
804 QEMU can virtualize a few types of VGA hardware. Some examples are:
805
806 * *std*, the default, emulates a card with Bochs VBE extensions.
807 * *cirrus*, this was once the default, it emulates a very old hardware module
808 with all its problems. This display type should only be used if really
809 necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
810 qemu: using cirrus considered harmful], for example, if using Windows XP or
811 earlier
* *vmware* is a VMware SVGA-II compatible adapter.
813 * *qxl*, is the QXL paravirtualized graphics card. Selecting this also
814 enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
815 VM.
* *virtio-gl*, often named VirGL, is a virtual 3D GPU for use inside VMs that
can offload workloads to the host GPU without requiring special (expensive)
models and drivers, and without binding the host GPU completely, allowing
reuse between multiple guests and/or the host.
820 +
821 NOTE: VirGL support needs some extra libraries that aren't installed by
822 default due to being relatively big and also not available as open source for
823 all GPU models/vendors. For most setups you'll just need to do:
824 `apt install libgl1 libegl1`
825
You can edit the amount of memory given to the virtual GPU by setting
the 'memory' option. This can enable higher resolutions inside the VM,
especially with SPICE/QXL.
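
For example, a dual-monitor SPICE setup with more display memory could be
configured like this (a sketch):

----
# qm set <vmid> -vga qxl2,memory=64
----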
829
As the memory is reserved by the display device, selecting Multi-Monitor mode
for SPICE (such as `qxl2` for dual monitors) has some implications:
832
833 * Windows needs a device for each monitor, so if your 'ostype' is some
834 version of Windows, {pve} gives the VM an extra device per monitor.
835 Each device gets the specified amount of memory.
836
* Linux VMs can always enable more virtual monitors, but selecting
838 a Multi-Monitor mode multiplies the memory given to the device with
839 the number of monitors.
840
841 Selecting `serialX` as display 'type' disables the VGA output, and redirects
842 the Web Console to the selected serial port. A configured display 'memory'
843 setting will be ignored in that case.
844
845 .VNC clipboard
846 You can enable the VNC clipboard by setting `clipboard` to `vnc`.
847
848 ----
849 # qm set <vmid> -vga <displaytype>,clipboard=vnc
850 ----
851
852 In order to use the clipboard feature, you must first install the
853 SPICE guest tools. On Debian-based distributions, this can be achieved
by installing `spice-vdagent`. For other Operating Systems, search for it
in the official repositories or see: https://www.spice-space.org/download.html
856
857 Once you have installed the spice guest tools, you can use the VNC clipboard
858 function (e.g. in the noVNC console panel). However, if you're using
859 SPICE, virtio or virgl, you'll need to choose which clipboard to use.
860 This is because the default *SPICE* clipboard will be replaced by the
861 *VNC* clipboard, if `clipboard` is set to `vnc`.
862
863 [[qm_usb_passthrough]]
864 USB Passthrough
865 ~~~~~~~~~~~~~~~
866
867 There are two different types of USB passthrough devices:
868
869 * Host USB passthrough
870 * SPICE USB passthrough
871
872 Host USB passthrough works by giving a VM a USB device of the host.
873 This can either be done via the vendor- and product-id, or
874 via the host bus and port.
875
876 The vendor/product-id looks like this: *0123:abcd*,
877 where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same USB device
have the same id.
880
881 The bus/port looks like this: *1-2.3.4*, where *1* is the bus
882 and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).
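
Both variants can be configured via the CLI, for example (a sketch reusing the
placeholder IDs from above):

----
# qm set <vmid> -usb0 host=0123:abcd
# qm set <vmid> -usb1 host=1-2.3.4
----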
885
886 If a device is present in a VM configuration when the VM starts up,
887 but the device is not present in the host, the VM can boot without problems.
888 As soon as the device/port is available in the host, it gets passed through.
889
890 WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.
893
894 The second type of passthrough is SPICE USB passthrough. If you add one or more
895 SPICE USB ports to your VM, you can dynamically pass a local USB device from
896 your SPICE client through to the VM. This can be useful to redirect an input
897 device or hardware dongle temporarily.
898
It is also possible to map devices on a cluster level, so that they can be
properly used with HA, hardware changes are detected, and non-root users
can configure them. See xref:resource_mapping[Resource Mapping]
902 for details on that.
903
904 [[qm_bios_and_uefi]]
905 BIOS and UEFI
906 ~~~~~~~~~~~~~
907
In order to properly emulate a computer, QEMU needs to use firmware, which,
on common PCs, is often known as BIOS or (U)EFI. It is executed as one of the
first steps when booting a VM, and is responsible for doing basic hardware
911 initialization and for providing an interface to the firmware and hardware for
912 the operating system. By default QEMU uses *SeaBIOS* for this, which is an
913 open-source, x86 BIOS implementation. SeaBIOS is a good choice for most
914 standard setups.
915
Some operating systems (such as Windows 11) may require the use of a UEFI
917 compatible implementation. In such cases, you must use *OVMF* instead,
918 which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]
919
920 There are other scenarios in which the SeaBIOS may not be the ideal firmware to
921 boot from, for example if you want to do VGA passthrough. footnote:[Alex
922 Williamson has a good blog entry about this
923 https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
924
925 If you want to use OVMF, there are several things to consider:
926
927 In order to save things like the *boot order*, there needs to be an EFI Disk.
928 This disk will be included in backups and snapshots, and there can only be one.
929
930 You can create such a disk with the following command:
931
932 ----
933 # qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
934 ----
935
936 Where *<storage>* is the storage where you want to have the disk, and
937 *<format>* is a format which the storage supports. Alternatively, you can
938 create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
939 hardware section of a VM.
940
941 The *efitype* option specifies which version of the OVMF firmware should be
942 used. For new VMs, this should always be '4m', as it supports Secure Boot and
943 has more space allocated to support future development (this is the default in
944 the GUI).
945
*pre-enrolled-keys* specifies if the efidisk should come pre-loaded with
947 distribution-specific and Microsoft Standard Secure Boot keys. It also enables
948 Secure Boot by default (though it can still be disabled in the OVMF menu within
949 the VM).
950
951 NOTE: If you want to start using Secure Boot in an existing VM (that still uses
952 a '2m' efidisk), you need to recreate the efidisk. To do so, delete the old one
953 (`qm set <vmid> -delete efidisk0`) and add a new one as described above. This
954 will reset any custom configurations you have made in the OVMF menu!
955
956 When using OVMF with a virtual display (without VGA passthrough),
957 you need to set the client resolution in the OVMF menu (which you can reach
958 with a press of the ESC button during boot), or you have to choose
959 SPICE as the display type.
960
961 [[qm_tpm]]
962 Trusted Platform Module (TPM)
963 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
964
965 A *Trusted Platform Module* is a device which stores secret data - such as
966 encryption keys - securely and provides tamper-resistance functions for
967 validating system boot.
968
969 Certain operating systems (such as Windows 11) require such a device to be
970 attached to a machine (be it physical or virtual).
971
972 A TPM is added by specifying a *tpmstate* volume. This works similar to an
973 efidisk, in that it cannot be changed (only removed) once created. You can add
974 one via the following command:
975
976 ----
977 # qm set <vmid> -tpmstate0 <storage>:1,version=<version>
978 ----
979
980 Where *<storage>* is the storage you want to put the state on, and *<version>*
981 is either 'v1.2' or 'v2.0'. You can also add one via the web interface, by
982 choosing 'Add' -> 'TPM State' in the hardware section of a VM.
983
984 The 'v2.0' TPM spec is newer and better supported, so unless you have a specific
985 implementation that requires a 'v1.2' TPM, it should be preferred.
986
987 NOTE: Compared to a physical TPM, an emulated one does *not* provide any real
988 security benefits. The point of a TPM is that the data on it cannot be modified
989 easily, except via commands specified as part of the TPM spec. Since with an
990 emulated device the data storage happens on a regular volume, it can potentially
991 be edited by anyone with access to it.
992
993 [[qm_ivshmem]]
994 Inter-VM shared memory
995 ~~~~~~~~~~~~~~~~~~~~~~
996
997 You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
998 share memory between the host and a guest, or also between multiple guests.
999
1000 To add such a device, you can use `qm`:
1001
1002 ----
1003 # qm set <vmid> -ivshmem size=32,name=foo
1004 ----
1005
1006 Where the size is in MiB. The file will be located under
1007 `/dev/shm/pve-shm-$name` (the default name is the vmid).
1008
NOTE: Currently the device will get deleted as soon as any VM using it gets
shut down or stopped. Open connections will still persist, but new connections
1011 to the exact same device cannot be made anymore.
1012
1013 A use case for such a device is the Looking Glass
1014 footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
1015 performance, low-latency display mirroring between host and guest.
1016
1017 [[qm_audio_device]]
1018 Audio Device
1019 ~~~~~~~~~~~~
1020
1021 To add an audio device run the following command:
1022
1023 ----
1024 qm set <vmid> -audio0 device=<device>
1025 ----
1026
1027 Supported audio devices are:
1028
1029 * `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
1030 * `intel-hda`: Intel HD Audio Controller, emulates ICH6
1031 * `AC97`: Audio Codec '97, useful for older operating systems like Windows XP
1032
1033 There are two backends available:
1034
1035 * 'spice'
1036 * 'none'
1037
1038 The 'spice' backend can be used in combination with xref:qm_display[SPICE] while
1039 the 'none' backend can be useful if an audio device is needed in the VM for some
1040 software to work. To use the physical audio device of the host use device
1041 passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
1042 xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft’s RDP
1043 have options to play sound.
1044
1045
1046 [[qm_virtio_rng]]
1047 VirtIO RNG
1048 ~~~~~~~~~~
1049
1050 A RNG (Random Number Generator) is a device providing entropy ('randomness') to
1051 a system. A virtual hardware-RNG can be used to provide such entropy from the
1052 host system to a guest VM. This helps to avoid entropy starvation problems in
1053 the guest (a situation where not enough entropy is available and the system may
slow down or run into problems), especially during the guest's boot process.
1055
1056 To add a VirtIO-based emulated RNG, run the following command:
1057
1058 ----
1059 qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
1060 ----
1061
1062 `source` specifies where entropy is read from on the host and has to be one of
1063 the following:
1064
1065 * `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
1066 * `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
1067 starvation on the host system)
1068 * `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
1069 are available, the one selected in
1070 `/sys/devices/virtual/misc/hw_random/rng_current` will be used)
1071
A limit can be specified via the `max_bytes` and `period` parameters; they are
read as `max_bytes` per `period` in milliseconds. However, it does not represent
1074 a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
1075 available on a 1 second timer, not that 1 KiB is streamed to the guest over the
1076 course of one second. Reducing the `period` can thus be used to inject entropy
1077 into the guest at a faster rate.
1078
1079 By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
1080 recommended to always use a limiter to avoid guests using too many host
1081 resources. If desired, a value of '0' for `max_bytes` can be used to disable
1082 all limits.
1083
1084 [[qm_bootorder]]
1085 Device Boot Order
1086 ~~~~~~~~~~~~~~~~~
1087
1088 QEMU can tell the guest which devices it should boot from, and in which order.
1089 This can be specified in the config via the `boot` property, for example:
1090
1091 ----
1092 boot: order=scsi0;net0;hostpci0
1093 ----
1094
1095 [thumbnail="screenshot/gui-qemu-edit-bootorder.png"]
1096
This way, the guest would first attempt to boot from the disk `scsi0`; if that
fails, it would go on to attempt network boot from `net0`, and in case that
fails too, finally attempt to boot from a passed through PCIe device (seen as a
disk in case of NVMe, otherwise it tries to launch into an option ROM).
1101
1102 On the GUI you can use a drag-and-drop editor to specify the boot order, and use
1103 the checkbox to enable or disable certain devices for booting altogether.
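
The same order can also be set via the CLI (quote the value so the shell does
not interpret the semicolons):

----
# qm set <vmid> -boot 'order=scsi0;net0;hostpci0'
----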
1104
1105 NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
1106 all of them must be marked as 'bootable' (that is, they must have the checkbox
1107 enabled or appear in the list in the config) for the guest to be able to boot.
1108 This is because recent SeaBIOS and OVMF versions only initialize disks if they
1109 are marked 'bootable'.
1110
In any case, even devices not appearing in the list or having the checkmark
disabled will still be available to the guest once its operating system has
1113 booted and initialized them. The 'bootable' flag only affects the guest BIOS and
1114 bootloader.
1115
1116
1117 [[qm_startup_and_shutdown]]
1118 Automatic Start and Shutdown of Virtual Machines
1119 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1120
1121 After creating your VMs, you probably want them to start automatically
1122 when the host system boots. For this you need to select the option 'Start at
1123 boot' from the 'Options' Tab of your VM in the web interface, or set it with
1124 the following command:
1125
1126 ----
1127 # qm set <vmid> -onboot 1
1128 ----
1129
1130 .Start and Shutdown Order
1131
1132 [thumbnail="screenshot/gui-qemu-edit-start-order.png"]
1133
In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters (a combined example follows the list below):
1138
1139 * *Start/Shutdown order*: Defines the start order priority. For example, set it
1140 to 1 if you want the VM to be the first to be started. (We use the reverse
1141 startup order for shutdown, so a machine with a start order of 1 would be the
1142 last to be shut down). If multiple VMs have the same order defined on a host,
1143 they will additionally be ordered by 'VMID' in ascending order.
1144 * *Startup delay*: Defines the interval between this VM start and subsequent
1145 VMs starts. For example, set it to 240 if you want to wait 240 seconds before
1146 starting other VMs.
1147 * *Shutdown timeout*: Defines the duration in seconds {pve} should wait
1148 for the VM to be offline after issuing a shutdown command. By default this
1149 value is set to 180, which means that {pve} will issue a shutdown request and
1150 wait 180 seconds for the machine to be offline. If the machine is still online
1151 after the timeout it will be stopped forcefully.
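
A sketch combining all three parameters from the list above:

----
# qm set <vmid> -startup order=1,up=240,down=180
----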
1152
1153 NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
1154 'boot order' options currently. Those VMs will be skipped by the startup and
1155 shutdown algorithm as the HA manager itself ensures that VMs get started and
1156 stopped.
1157
1158 Please note that machines without a Start/Shutdown order parameter will always
1159 start after those where the parameter is set. Further, this parameter can only
1160 be enforced between virtual machines running on the same host, not
1161 cluster-wide.
1162
1163 If you require a delay between the host boot and the booting of the first VM,
1164 see the section on xref:first_guest_boot_delay[Proxmox VE Node Management].
1165
1166
1167 [[qm_qemu_agent]]
1168 QEMU Guest Agent
1169 ~~~~~~~~~~~~~~~~
1170
1171 The QEMU Guest Agent is a service which runs inside the VM, providing a
1172 communication channel between the host and the guest. It is used to exchange
1173 information and allows the host to issue commands to the guest.
1174
1175 For example, the IP addresses in the VM summary panel are fetched via the guest
1176 agent.
1177
1178 Or when starting a backup, the guest is told via the guest agent to sync
1179 outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.
1180
1181 For the guest agent to work properly the following steps must be taken:
1182
1183 * install the agent in the guest and make sure it is running
1184 * enable the communication via the agent in {pve}
1185
1186 Install Guest Agent
1187 ^^^^^^^^^^^^^^^^^^^
1188
1189 For most Linux distributions, the guest agent is available. The package is
1190 usually named `qemu-guest-agent`.
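
On Debian-based guests, for example, the installation could look like this
(assuming `systemd` is the init system):

----
# apt install qemu-guest-agent
# systemctl enable --now qemu-guest-agent
----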
1191
1192 For Windows, it can be installed from the
1193 https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
1194 VirtIO driver ISO].
1195
1196 [[qm_qga_enable]]
1197 Enable Guest Agent Communication
1198 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1199
1200 Communication from {pve} with the guest agent can be enabled in the VM's
1201 *Options* panel. A fresh start of the VM is necessary for the changes to take
1202 effect.
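
Alternatively, you can enable the agent on the CLI with:

----
# qm set <vmid> --agent enabled=1
----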
1203
1204 [[qm_qga_auto_trim]]
1205 Automatic TRIM Using QGA
1206 ^^^^^^^^^^^^^^^^^^^^^^^^
1207
1208 It is possible to enable the 'Run guest-trim' option. With this enabled,
1209 {pve} will issue a trim command to the guest after the following
1210 operations that have the potential to write out zeros to the storage:
1211
1212 * moving a disk to another storage
1213 * live migrating a VM to another node with local storage
1214
On thin-provisioned storage, this can help to free up unused space.
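
On the CLI, the 'Run guest-trim' option corresponds to the
`fstrim_cloned_disks` flag of the `agent` option, for example:

----
# qm set <vmid> --agent enabled=1,fstrim_cloned_disks=1
----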
1216
1217 NOTE: There is a caveat with ext4 on Linux, because it uses an in-memory
1218 optimization to avoid issuing duplicate TRIM requests. Since the guest doesn't
1219 know about the change in the underlying storage, only the first guest-trim will
1220 run as expected. Subsequent ones, until the next reboot, will only consider
1221 parts of the filesystem that changed since then.
1222
1223 [[qm_qga_fsfreeze]]
1224 Filesystem Freeze & Thaw on Backup
1225 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1226
1227 By default, guest filesystems are synced via the 'fs-freeze' QEMU Guest Agent
1228 Command when a backup is performed, to provide consistency.
1229
On Windows guests, some applications might handle consistent backups themselves
by hooking into the Windows VSS (Volume Shadow Copy Service) layer; an
'fs-freeze' might then interfere with that. For example, it has been observed
that calling 'fs-freeze' with some SQL Servers triggers VSS to call the SQL
Writer VSS module in a mode that breaks the SQL Server backup chain for
differential backups.
1236
1237 For such setups you can configure {pve} to not issue a freeze-and-thaw cycle on
1238 backup by setting the `freeze-fs-on-backup` QGA option to `0`. This can also be
1239 done via the GUI with the 'Freeze/thaw guest filesystems on backup for
1240 consistency' option.
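
For example:

----
# qm set <vmid> --agent enabled=1,freeze-fs-on-backup=0
----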
1241
IMPORTANT: Disabling this option can potentially lead to backups with
inconsistent filesystems; you should therefore only do so if you know what you
are doing.
1245
1246 Troubleshooting
1247 ^^^^^^^^^^^^^^^
1248
1249 .VM does not shut down
1250
1251 Make sure the guest agent is installed and running.
1252
Once the guest agent is enabled, {pve} will send power commands like
'shutdown' via the guest agent. If the guest agent is not running, commands
cannot be executed properly and the shutdown command will run into a timeout.
1256
1257 [[qm_spice_enhancements]]
1258 SPICE Enhancements
1259 ~~~~~~~~~~~~~~~~~~
1260
1261 SPICE Enhancements are optional features that can improve the remote viewer
1262 experience.
1263
1264 To enable them via the GUI go to the *Options* panel of the virtual machine. Run
1265 the following command to enable them via the CLI:
1266
1267 ----
# qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
1269 ----
1270
1271 NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
1272 must be set to SPICE (qxl).
1273
1274 Folder Sharing
1275 ^^^^^^^^^^^^^^
1276
1277 Share a local folder with the guest. The `spice-webdavd` daemon needs to be
1278 installed in the guest. It makes the shared folder available through a local
1279 WebDAV server located at http://localhost:9843.
1280
1281 For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded
1282 from the
1283 https://www.spice-space.org/download.html#windows-binaries[official SPICE website].
1284
1285 Most Linux distributions have a package called `spice-webdavd` that can be
1286 installed.
1287
1288 To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
1289 Select the folder to share and then enable the checkbox.
1290
1291 NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
1292
1293 CAUTION: Experimental! Currently this feature does not work reliably.
1294
1295 Video Streaming
1296 ^^^^^^^^^^^^^^^
1297
Fast-refreshing areas are encoded into a video stream. Two options exist:

* *all*: Any fast-refreshing area will be encoded into a video stream.
* *filter*: Additional filters are used to decide if video streaming should be
used (currently only small window surfaces are skipped).
1303
No general recommendation can be given on whether video streaming should be
enabled, nor which option to choose. Your mileage may vary depending on the
specific circumstances.
1307
1308 Troubleshooting
1309 ^^^^^^^^^^^^^^^
1310
1311 .Shared folder does not show up
1312
Make sure the WebDAV service is enabled and running in the guest. On Windows it
is called 'Spice webdav proxy'. On Linux the name is 'spice-webdavd', but it can
differ depending on the distribution.
1316
1317 If the service is running, check the WebDAV server by opening
1318 http://localhost:9843 in a browser in the guest.
1319
1320 It can help to restart the SPICE session.
1321
1322 [[qm_migration]]
1323 Migration
1324 ---------
1325
1326 [thumbnail="screenshot/gui-qemu-migrate.png"]
1327
1328 If you have a cluster, you can migrate your VM to another host with
1329
1330 ----
1331 # qm migrate <vmid> <target>
1332 ----
1333
There are generally two mechanisms for this:
1335
1336 * Online Migration (aka Live Migration)
1337 * Offline Migration
1338
1339 Online Migration
1340 ~~~~~~~~~~~~~~~~
1341
If your VM is running and no locally bound resources are configured (such as
devices that are passed through), you can initiate a live migration with the
`--online` flag of the `qm migrate` command invocation. The web interface
defaults to live migration when the VM is running.
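
For example, a live migration on the CLI could look like this:

----
# qm migrate <vmid> <target> --online
----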
1346
1347 How it works
1348 ^^^^^^^^^^^^
1349
Online migration first starts a new QEMU process on the target host with the
'incoming' flag, which performs only basic initialization with the guest vCPUs
still paused, and then waits for the guest memory and device state data streams
of the source virtual machine.
All other resources, such as disks, are either shared or have already been sent
before the runtime state migration of the VM begins; so only the memory content
and device state remain to be transferred.
1357
Once this connection is established, the source begins asynchronously sending
the memory content to the target. If the guest memory on the source changes,
those sections are marked dirty and another pass is made to send the changed
guest memory data.
This loop is repeated until the data difference between the running source VM
and the incoming target VM is small enough to be sent in a few milliseconds. At
that point, the source VM is paused completely, the remaining data is sent to
the target, and the target VM's vCPUs are unpaused, making it the new running
VM, all in well under a second and without a user or program noticing the
pause.
1368
1369 Requirements
1370 ^^^^^^^^^^^^
1371
For live migration to work, some requirements must be met:
1373
1374 * The VM has no local resources that cannot be migrated. For example,
1375 PCI or USB devices that are passed through currently block live-migration.
1376 Local Disks, on the other hand, can be migrated by sending them to the target
1377 just fine.
1378 * The hosts are located in the same {pve} cluster.
1379 * The hosts have a working (and reliable) network connection between them.
1380 * The target host must have the same, or higher versions of the
1381 {pve} packages. Although it can sometimes work the other way around, this
1382 cannot be guaranteed.
* The hosts have CPUs from the same vendor with similar capabilities. Different
vendors *might* work depending on the actual models and the configured VM CPU
type, but this cannot be guaranteed - so please test before deploying
such a setup in production.
1387
1388 Offline Migration
1389 ~~~~~~~~~~~~~~~~~
1390
If you have local resources, you can still migrate your VMs offline, as long as
all disks are on storages which are defined on both hosts.
1393 Migration then copies the disks to the target host over the network, as with
1394 online migration. Note that any hardware passthrough configuration may need to
1395 be adapted to the device location on the target host.
1396
1397 // TODO: mention hardware map IDs as better way to solve that, once available
1398
1399 [[qm_copy_and_clone]]
1400 Copies and Clones
1401 -----------------
1402
1403 [thumbnail="screenshot/gui-qemu-full-clone.png"]
1404
VM installation is usually done using an installation medium (CD-ROM) from the
operating system vendor. Depending on the OS, this can be a time-consuming task
one might want to avoid.
1408
1409 An easy way to deploy many VMs of the same type is to copy an existing
1410 VM. We use the term 'clone' for such copies, and distinguish between
1411 'linked' and 'full' clones.
1412
1413 Full Clone::
1414
The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
1417 +
1418
1419 It is possible to select a *Target Storage*, so one can use this to
1420 migrate a VM to a totally different storage. You can also change the
1421 disk image *Format* if the storage driver supports several formats.
1422 +
1423
1424 NOTE: A full clone needs to read and copy all VM image data. This is
1425 usually much slower than creating a linked clone.
1426 +
1427
Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.
1431
1432
1433 Linked Clone::
1434
1435 Modern storage drivers support a way to generate fast linked
1436 clones. Such a clone is a writable copy whose initial contents are the
1437 same as the original data. Creating a linked clone is nearly
1438 instantaneous, and initially consumes no additional space.
1439 +
1440
They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
location. This technique is called 'Copy-on-write'.
1445 +
1446
This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
1450 +
1451
1452 NOTE: You cannot delete an original template while linked clones
1453 exist.
1454 +
1455
It is not possible to change the *Target storage* for linked clones,
because this is a storage-internal feature.
1458
1459
1460 The *Target node* option allows you to create the new VM on a
1461 different node. The only restriction is that the VM is on shared
1462 storage, and that storage is also available on the target node.
1463
1464 To avoid resource conflicts, all network interface MAC addresses get
1465 randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
1466 setting.
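
A clone can also be created on the CLI. As a sketch, the following would
create a full clone of VM 300 as a new VM 310 (the IDs and name are
illustrative):

----
# qm clone 300 310 --full --name cloned-vm
----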
1467
1468
1469 [[qm_templates]]
1470 Virtual Machine Templates
1471 -------------------------
1472
1473 One can convert a VM into a Template. Such templates are read-only,
1474 and you can use them to create linked clones.
1475
1476 NOTE: It is not possible to start templates, because this would modify
1477 the disk images. If you want to change the template, create a linked
1478 clone and modify that.
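
You can convert a VM into a template via the GUI, or on the CLI with:

----
# qm template <vmid>
----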
1479
1480 VM Generation ID
1481 ----------------
1482
1483 {pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
1484 'vmgenid' Specification
1485 https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
1486 for virtual machines.
This can be used by the guest operating system to detect any event that
results in a time shift, for example, restoring a backup or a snapshot
rollback.
1489
When creating new VMs, a 'vmgenid' will be automatically generated and saved
in their configuration files.
1492
To create and add a 'vmgenid' to an already existing VM, one can pass the
special value `1' to let {pve} autogenerate one, or manually set the 'UUID'
footnote:[Online GUID generator http://guid.one/] by using it as the value, for
example:
1497
1498 ----
1499 # qm set VMID -vmgenid 1
1500 # qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
1501 ----
1502
NOTE: The initial addition of a 'vmgenid' device to an existing VM may have the
same effects as a snapshot rollback, backup restore, etc., as the VM can
interpret this as a generation change.
1506
In the rare case that the 'vmgenid' mechanism is not wanted, one can pass `0'
for its value on VM creation, or retroactively delete the property from the
configuration with:
1510
1511 ----
1512 # qm set VMID -delete vmgenid
1513 ----
1514
The most prominent use case for 'vmgenid' is newer Microsoft Windows
operating systems, which use it to avoid problems in time-sensitive or
replicated services (such as databases or domain controllers
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole-VM clone operation.
1520
1521 [[qm_import_virtual_machines]]
1522 Importing Virtual Machines
1523 --------------------------
1524
Importing existing virtual machines from foreign hypervisors or other {pve}
clusters can be achieved through various methods; the most common ones are:
1527
1528 * Using the native import wizard, which utilizes the 'import' content type, such
1529 as provided by the ESXi special storage.
1530 * Performing a backup on the source and then restoring on the target. This
1531 method works best when migrating from another {pve} instance.
* Using the OVF-specific import command of the `qm` command-line tool.
1533
1534 If you import VMs to {pve} from other hypervisors, it’s recommended to
1535 familiarize yourself with the
1536 https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Concepts[concepts of {pve}].
1537
1538 Import Wizard
1539 ~~~~~~~~~~~~~
1540
1541 [thumbnail="screenshot/gui-import-wizard-general.png"]
1542
1543 {pve} provides an integrated VM importer using the storage plugin system for
1544 native integration into the API and web-based user interface. You can use this
1545 to import the VM as a whole, with most of its config mapped to {pve}'s config
1546 model and reduced downtime.
1547
NOTE: The import wizard was added during the {pve} 8.2 development cycle and is
in tech preview state. While it is already promising and working stably, it is
still under active development, focusing on adding further import sources, such
as OVF/OVA files, in the future.
1552
To use the import wizard you first have to set up a new storage for an import
source; you can do so on the web interface under _Datacenter -> Storage -> Add_.
1555
1556 Then you can select the new storage in the resource tree and use the 'Virtual
1557 Guests' content tab to see all available guests that can be imported.
1558
1559 [thumbnail="screenshot/gui-import-wizard-advanced.png"]
1560
Select one and use the 'Import' button (or double-click) to open the import
wizard. You can modify a subset of the available options here and then start the
import. Please note that you can do more advanced modifications after the import
has finished.
1565
TIP: The import wizard is currently (2024-03) available for ESXi and has been
tested with ESXi versions 6.5 through 8.0. Note that guests using vSAN storage
cannot be imported directly; their disks must first be moved to another
storage. While it is possible to use a vCenter as the import source, performance
is dramatically degraded (5 to 10 times slower).
1571
For a step-by-step guide and tips on how to adapt the virtual guest to the new
hypervisor, see our
https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Migration[migrate to {pve}
wiki article].
1576
1577 Import OVF/OVA Through CLI
1578 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1579
A VM export from a foreign hypervisor usually takes the form of one or more disk
images, with a configuration file describing the settings of the VM (RAM,
number of cores). +
The disk images can be in the vmdk format if the disks come from
VMware or VirtualBox, or in the qcow2 format if the disks come from a KVM
hypervisor.
1585 The most popular configuration format for VM exports is the OVF standard, but in
1586 practice interoperation is limited because many settings are not implemented in
1587 the standard itself, and hypervisors export the supplementary information
1588 in non-standard extensions.
1589
1590 Besides the problem of format, importing disk images from other hypervisors
1591 may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
installing the MergeIDE.zip utility, available from the Internet, before
exporting, and choosing a hard disk type of *IDE* before booting the imported
Windows VM.
1596
1597 Finally there is the question of paravirtualized drivers, which improve the
1598 speed of the emulated system and are specific to the hypervisor.
1599 GNU/Linux and other free Unix OSes have all the necessary drivers installed by
1600 default and you can switch to the paravirtualized drivers right after importing
1601 the VM. For Windows VMs, you need to install the Windows paravirtualized
1602 drivers by yourself.
1603
GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
that we cannot guarantee a successful import/export of Windows VMs in all
cases due to the problems above.
1607
1608 Step-by-step example of a Windows OVF import
1609 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1610
1611 Microsoft provides
1612 https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
1614 to demonstrate the OVF import feature.
1615
1616 Download the Virtual Machine zip
1617 ++++++++++++++++++++++++++++++++
1618
After reviewing the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.
1621
1622 Extract the disk image from the zip
1623 +++++++++++++++++++++++++++++++++++
1624
Using the `unzip` utility, or any archiver of your choice, unpack the zip,
and copy the ovf and vmdk files to your {pve} host via ssh/scp.
1627
1628 Import the Virtual Machine
1629 ++++++++++++++++++++++++++
1630
1631 This will create a new virtual machine, using cores, memory and
1632 VM name as read from the OVF manifest, and import the disks to the +local-lvm+
1633 storage. You have to configure the network manually.
1634
1635 ----
1636 # qm importovf 999 WinDev1709Eval.ovf local-lvm
1637 ----
1638
1639 The VM is ready to be started.
1640
1641 Adding an external disk image to a Virtual Machine
1642 ++++++++++++++++++++++++++++++++++++++++++++++++++
1643
1644 You can also add an existing disk image to a VM, either coming from a
1645 foreign hypervisor, or one that you created yourself.
1646
1647 Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:
1648
----
# vmdebootstrap --verbose \
    --size 10GiB --serial-console \
    --grub --no-extlinux \
    --package openssh-server \
    --package avahi-daemon \
    --package qemu-guest-agent \
    --hostname vm600 --enable-dhcp \
    --customize=./copy_pub_ssh.sh \
    --sparse --image vm600.raw
----
1658
1659 You can now create a new target VM, importing the image to the storage `pvedir`
1660 and attaching it to the VM's SCSI controller:
1661
1662 ----
1663 # qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
1664 --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
1665 --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
1666 ----
1667
1668 The VM is ready to be started.
1669
1670
1671 ifndef::wiki[]
1672 include::qm-cloud-init.adoc[]
1673 endif::wiki[]
1674
1675 ifndef::wiki[]
1676 include::qm-pci-passthrough.adoc[]
1677 endif::wiki[]
1678
1679 Hookscripts
1680 -----------
1681
1682 You can add a hook script to VMs with the config property `hookscript`.
1683
1684 ----
1685 # qm set 100 --hookscript local:snippets/hookscript.pl
1686 ----
1687
It will be called during various phases of the guest's lifetime.
For an example and documentation, see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
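
The script needs to be executable and stored on a storage that supports
'snippets'. As a minimal sketch of the mechanism (an illustration, not the
shipped example script; the script is called with the VMID and the current
phase as arguments):

----
#!/bin/sh
# Minimal hookscript sketch: argument 1 is the VMID, argument 2 the phase.
vmid="$1"
phase="$2"

case "$phase" in
    pre-start)  echo "VM $vmid is about to start" ;;
    post-start) echo "VM $vmid started" ;;
    pre-stop)   echo "VM $vmid will be stopped" ;;
    post-stop)  echo "VM $vmid stopped" ;;
esac

exit 0
----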
1691
1692 [[qm_hibernate]]
1693 Hibernation
1694 -----------
1695
1696 You can suspend a VM to disk with the GUI option `Hibernate` or with
1697
1698 ----
1699 # qm suspend ID --todisk
1700 ----
1701
That means that the current content of the memory will be saved to disk
and the VM gets stopped. On the next start, the memory content will be
loaded and the VM can continue where it left off.
1705
1706 [[qm_vmstatestorage]]
1707 .State storage selection
If no target storage for the memory is given, it will be automatically
chosen as the first match in the following order:
1710
1711 1. The storage `vmstatestorage` from the VM config.
1712 2. The first shared storage from any VM disk.
1713 3. The first non-shared storage from any VM disk.
1714 4. The storage `local` as a fallback.
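
For example, to hibernate a VM and explicitly select the storage for the
memory state (a sketch; `<storage>` is a placeholder for one of your
storages):

----
# qm suspend <vmid> --todisk --statestorage <storage>
----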
1715
1716 [[resource_mapping]]
1717 Resource Mapping
1718 ----------------
1719
1720 [thumbnail="screenshot/gui-datacenter-resource-mappings.png"]
1721
When using or referencing local resources (e.g. the address of a PCI device),
using the raw address or ID is sometimes problematic, for example:
1724
1725 * when using HA, a different device with the same id or path may exist on the
1726 target node, and if one is not careful when assigning such guests to HA
1727 groups, the wrong device could be used, breaking configurations.
1728
1729 * changing hardware can change ids and paths, so one would have to check all
1730 assigned devices and see if the path or id is still correct.
1731
To handle this better, one can define cluster-wide resource mappings, such that
a resource has a cluster-unique, user-selected identifier which can correspond
to different devices on different hosts. With this, HA won't start a guest with
the wrong device, and hardware changes can be detected.
1736
Creating such a mapping can be done with the {pve} web GUI under `Datacenter`
in the relevant tab in the `Resource Mappings` category, or on the CLI with
1739
1740 ----
1741 # pvesh create /cluster/mapping/<type> <options>
1742 ----
1743
1744 [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
1745
1746 Where `<type>` is the hardware type (currently either `pci` or `usb`) and
1747 `<options>` are the device mappings and other configuration parameters.
1748
1749 Note that the options must include a map property with all identifying
1750 properties of that hardware, so that it's possible to verify the hardware did
1751 not change and the correct device is passed through.
1752
For example, to add a PCI device as `device1` with the path `0000:01:00.0` that
has the device ID `0001` and the vendor ID `0002` on the node `node1`, and
`0000:02:00.0` on `node2`, you can add it with:
1756
1757 ----
1758 # pvesh create /cluster/mapping/pci --id device1 \
1759 --map node=node1,path=0000:01:00.0,id=0002:0001 \
1760 --map node=node2,path=0000:02:00.0,id=0002:0001
1761 ----
1762
1763 You must repeat the `map` parameter for each node where that device should have
1764 a mapping (note that you can currently only map one USB device per node per
1765 mapping).
1766
1767 Using the GUI makes this much easier, as the correct properties are
1768 automatically picked up and sent to the API.
1769
1770 [thumbnail="screenshot/gui-datacenter-mapping-usb-edit.png"]
1771
1772 It's also possible for PCI devices to provide multiple devices per node with
1773 multiple map properties for the nodes. If such a device is assigned to a guest,
1774 the first free one will be used when the guest is started. The order of the
1775 paths given is also the order in which they are tried, so arbitrary allocation
1776 policies can be implemented.
1777
This is useful for devices with SR-IOV, since it is sometimes not important
which exact virtual function is passed through.
1780
1781 You can assign such a device to a guest either with the GUI or with
1782
1783 ----
# qm set <vmid> -hostpci0 <name>
1785 ----
1786
1787 for PCI devices, or
1788
1789 ----
1790 # qm set <vmid> -usb0 <name>
1791 ----
1792
1793 for USB devices.
1794
Where `<vmid>` is the guest's ID and `<name>` is the chosen name for the created
mapping. All usual options for passing through the devices are allowed, such as
`mdev`.
1798
1799 To create mappings `Mapping.Modify` on `/mapping/<type>/<name>` is necessary
1800 (where `<type>` is the device type and `<name>` is the name of the mapping).
1801
1802 To use these mappings, `Mapping.Use` on `/mapping/<type>/<name>` is necessary
1803 (in addition to the normal guest privileges to edit the configuration).
1804
1805 Managing Virtual Machines with `qm`
1806 ------------------------------------
1807
1808 qm is the tool to manage QEMU/KVM virtual machines on {pve}. You can
1809 create and destroy virtual machines, and control execution
1810 (start/stop/suspend/resume). Besides that, you can use qm to set
1811 parameters in the associated config file. It is also possible to
1812 create and delete virtual disks.
1813
1814 CLI Usage Examples
1815 ~~~~~~~~~~~~~~~~~~
1816
Using an ISO file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage:
1819
1820 ----
1821 # qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
1822 ----
1823
1824 Start the new VM
1825
1826 ----
1827 # qm start 300
1828 ----
1829
1830 Send a shutdown request, then wait until the VM is stopped.
1831
1832 ----
1833 # qm shutdown 300 && qm wait 300
1834 ----
1835
1836 Same as above, but only wait for 40 seconds.
1837
1838 ----
1839 # qm shutdown 300 && qm wait 300 -timeout 40
1840 ----
1841
1842 If the VM does not shut down, force-stop it and overrule any running shutdown
1843 tasks. As stopping VMs may incur data loss, use it with caution.
1844
1845 ----
1846 # qm stop 300 -overrule-shutdown 1
1847 ----
1848
Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
'--purge' if you want to additionally remove the VM from replication jobs,
backup jobs and HA resource configurations.
1853
1854 ----
1855 # qm destroy 300 --purge
1856 ----
1857
1858 Move a disk image to a different storage.
1859
1860 ----
1861 # qm move-disk 300 scsi0 other-storage
1862 ----
1863
Reassign a disk image to a different VM. This will remove the disk `scsi1` from
the source VM and attach it as `scsi3` to the target VM. In the background,
the disk image is renamed so that the name matches the new owner.
1867
1868 ----
1869 # qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
1870 ----
1871
1872
1873 [[qm_configuration]]
1874 Configuration
1875 -------------
1876
1877 VM configuration files are stored inside the Proxmox cluster file
1878 system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
1879 Like other files stored inside `/etc/pve/`, they get automatically
1880 replicated to all other cluster nodes.
1881
NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster-wide.
1884
1885 .Example VM Configuration
1886 ----
1887 boot: order=virtio0;net0
1888 cores: 1
1889 sockets: 1
1890 memory: 512
1891 name: webmail
1892 ostype: l26
1893 net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
1894 virtio0: local:vm-100-disk-1,size=32G
1895 ----
1896
Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful for small corrections, but keep in mind that you need to
restart the VM to apply such changes.
1901
For that reason, it is usually better to use the `qm` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to a
running VM. This feature is called "hot plug", and there is no
need to restart the VM in that case.
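
For example, the following updates the configuration file and, where hot plug
is supported for the changed option, applies it to the running VM directly
(the value is illustrative):

----
# qm set 100 -memory 1024
----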
1907
1908
1909 File Format
1910 ~~~~~~~~~~~
1911
VM configuration files use a simple colon-separated key/value
format. Each line has the following format:
1914
1915 -----
1916 # this is a comment
1917 OPTION: value
1918 -----
1919
1920 Blank lines in those files are ignored, and lines starting with a `#`
1921 character are treated as comments and are also ignored.
1922
1923
1924 [[qm_snapshots]]
1925 Snapshots
1926 ~~~~~~~~~
1927
1928 When you create a snapshot, `qm` stores the configuration at snapshot
1929 time into a separate snapshot section within the same configuration
1930 file. For example, after creating a snapshot called ``testsnapshot'',
1931 your configuration file will look like this:
1932
1933 .VM configuration with snapshot
1934 ----
1935 memory: 512
1936 swap: 512
parent: testsnapshot
1938 ...
1939
[testsnapshot]
1941 memory: 512
1942 swap: 512
1943 snaptime: 1457170803
1944 ...
1945 ----
1946
There are a few snapshot-related properties like `parent` and
1948 `snaptime`. The `parent` property is used to store the parent/child
1949 relationship between snapshots. `snaptime` is the snapshot creation
1950 time stamp (Unix epoch).
1951
1952 You can optionally save the memory of a running VM with the option `vmstate`.
1953 For details about how the target storage gets chosen for the VM state, see
1954 xref:qm_vmstatestorage[State storage selection] in the chapter
1955 xref:qm_hibernate[Hibernation].
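
For example, snapshots can be managed on the CLI like this, where `--vmstate`
controls whether the memory state gets saved:

----
# qm snapshot 100 testsnapshot --vmstate 1
# qm rollback 100 testsnapshot
# qm delsnapshot 100 testsnapshot
----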
1956
1957 [[qm_options]]
1958 Options
1959 ~~~~~~~
1960
1961 include::qm.conf.5-opts.adoc[]
1962
1963
1964 Locks
1965 -----
1966
1967 Online migrations, snapshots and backups (`vzdump`) set a lock to prevent
1968 incompatible concurrent actions on the affected VMs. Sometimes you need to
1969 remove such a lock manually (for example after a power failure).
1970
1971 ----
1972 # qm unlock <vmid>
1973 ----
1974
1975 CAUTION: Only do that if you are sure the action which set the lock is
1976 no longer running.
1977
1978 ifdef::wiki[]
1979
1980 See Also
1981 ~~~~~~~~
1982
1983 * link:/wiki/Cloud-Init_Support[Cloud-Init Support]
1984
1985 endif::wiki[]
1986
1987
1988 ifdef::manvolnum[]
1989
1990 Files
1991 ------
1992
1993 `/etc/pve/qemu-server/<VMID>.conf`::
1994
1995 Configuration file for the VM '<VMID>'.
1996
1997
1998 include::pve-copyright.adoc[]
1999 endif::manvolnum[]