1 [[chapter_virtual_machines]]
2 ifdef::manvolnum[]
3 qm(1)
4 =====
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 qm - QEMU/KVM Virtual Machine Manager
11
12
13 SYNOPSIS
14 --------
15
16 include::qm.1-synopsis.adoc[]
17
18 DESCRIPTION
19 -----------
20 endif::manvolnum[]
21 ifndef::manvolnum[]
22 QEMU/KVM Virtual Machines
23 =========================
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 // deprecates
28 // http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
29 // http://pve.proxmox.com/wiki/KVM
30 // http://pve.proxmox.com/wiki/Qemu_Server
31
QEMU (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where QEMU is
running, QEMU is a user program which has access to a number of local resources
like partitions, files, and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.
37
38 A guest operating system running in the emulated computer accesses these
39 devices, and runs as if it were running on real hardware. For instance, you can pass
40 an ISO image as a parameter to QEMU, and the OS running in the emulated computer
41 will see a real CD-ROM inserted into a CD drive.
42
QEMU can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32- and 64-bit PC clone emulation, since it represents the
45 overwhelming majority of server hardware. The emulation of PC clones is also one
46 of the fastest due to the availability of processor extensions which greatly
47 speed up QEMU when the emulated architecture is the same as the host
48 architecture.
49
50 NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
51 It means that QEMU is running with the support of the virtualization processor
52 extensions, via the Linux KVM module. In the context of {pve} _QEMU_ and
53 _KVM_ can be used interchangeably, as QEMU in {pve} will always try to load the KVM
54 module.
55
56 QEMU inside {pve} runs as a root process, since this is required to access block
57 and PCI devices.
58
59
60 Emulated devices and paravirtualized devices
61 --------------------------------------------
62
63 The PC hardware emulated by QEMU includes a mainboard, network controllers,
64 SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
65 the `kvm(1)` man page) all of them emulated in software. All these devices
66 are the exact software equivalent of existing hardware devices, and if the OS
67 running in the guest has the proper drivers it will use the devices as if it
68 were running on real hardware. This allows QEMU to run _unmodified_ operating
69 systems.
70
71 This however has a performance cost, as running in software what was meant to
72 run in hardware involves a lot of extra work for the host CPU. To mitigate this,
73 QEMU can present to the guest operating system _paravirtualized devices_, where
74 the guest OS recognizes it is running inside QEMU and cooperates with the
75 hypervisor.
76
77 QEMU relies on the virtio virtualization standard, and is thus able to present
78 paravirtualized virtio devices, which includes a paravirtualized generic disk
79 controller, a paravirtualized network card, a paravirtualized serial port,
80 a paravirtualized SCSI controller, etc ...
81
82 TIP: It is *highly recommended* to use the virtio devices whenever you can, as
83 they provide a big performance improvement and are generally better maintained.
84 Using the virtio generic disk controller versus an emulated IDE controller will
85 double the sequential write throughput, as measured with `bonnie++(8)`. Using
86 the virtio network interface can deliver up to three times the throughput of an
87 emulated Intel E1000 network card, as measured with `iperf(1)`. footnote:[See
88 this benchmark on the KVM wiki https://www.linux-kvm.org/page/Using_VirtIO_NIC]
89
90
91 [[qm_virtual_machines_settings]]
92 Virtual Machines Settings
93 -------------------------
94
95 Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as they
could cause a performance slowdown or put your data at risk.
98
99
100 [[qm_general_settings]]
101 General Settings
102 ~~~~~~~~~~~~~~~~
103
104 [thumbnail="screenshot/gui-create-vm-general.png"]
105
106 General settings of a VM include
107
108 * the *Node* : the physical server on which the VM will run
109 * the *VM ID*: a unique number in this {pve} installation used to identify your VM
110 * *Name*: a free form text string you can use to describe the VM
111 * *Resource Pool*: a logical group of VMs
112
113
114 [[qm_os_settings]]
115 OS Settings
116 ~~~~~~~~~~~
117
118 [thumbnail="screenshot/gui-create-vm-os.png"]
119
When creating a virtual machine (VM), setting the proper Operating System (OS)
allows {pve} to optimize some low level parameters. For instance, Windows OSes
expect the BIOS clock to use the local time, while Unix based OSes expect the
BIOS clock to have the UTC time.
124
125 [[qm_system_settings]]
126 System Settings
127 ~~~~~~~~~~~~~~~
128
129 On VM creation you can change some basic system components of the new VM. You
130 can specify which xref:qm_display[display type] you want to use.
131 [thumbnail="screenshot/gui-create-vm-system.png"]
132 Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
133 If you plan to install the QEMU Guest Agent, or if your selected ISO image
134 already ships and installs it automatically, you may want to tick the 'QEMU
135 Agent' box, which lets {pve} know that it can use its features to show some
136 more information, and complete some actions (for example, shutdown or
137 snapshots) more intelligently.
138
{pve} allows booting VMs with different firmware and machine types, namely
xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you only want to switch
from the default SeaBIOS to OVMF if you plan to use
xref:qm_pci_passthrough[PCIe passthrough]. A VM's 'Machine Type' defines the
hardware layout of the VM's virtual motherboard. You can choose between the
default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be desired if
one wants to pass through PCIe hardware.
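
For example, the firmware and machine type can also be set from the command
line; the following is just an illustrative sketch with a placeholder VMID:

----
# qm set <vmid> -bios ovmf -machine q35
----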
148
149 [[qm_hard_disk]]
150 Hard Disk
151 ~~~~~~~~~
152
153 [[qm_hard_disk_bus]]
154 Bus/Controller
155 ^^^^^^^^^^^^^^
156 QEMU can emulate a number of storage controllers:
157
158 TIP: It is highly recommended to use the *VirtIO SCSI* or *VirtIO Block*
159 controller for performance reasons and because they are better maintained.
160
* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
162 controller. Even if this controller has been superseded by recent designs,
163 each and every OS you can think of has support for it, making it a great choice
164 if you want to run an OS released before 2003. You can connect up to 4 devices
165 on this controller.
166
167 * the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
168 design, allowing higher throughput and a greater number of devices to be
169 connected. You can connect up to 6 devices on this controller.
170
171 * the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default
an LSI 53C895A controller.
174 +
175 A SCSI controller of type _VirtIO SCSI single_ and enabling the
176 xref:qm_hard_disk_iothread[IO Thread] setting for the attached disks is
177 recommended if you aim for performance. This is the default for newly created
178 Linux VMs since {pve} 7.3. Each disk will have its own _VirtIO SCSI_ controller,
and QEMU will handle the disks' IO in a dedicated thread. Linux distributions
180 have support for this controller since 2012, and FreeBSD since 2014. For Windows
181 OSes, you need to provide an extra ISO containing the drivers during the
182 installation.
183 // https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
184
185 * The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
186 is an older type of paravirtualized controller. It has been superseded by the
187 VirtIO SCSI Controller, in terms of features.
188
189 [thumbnail="screenshot/gui-create-vm-hard-disk.png"]
190
191 [[qm_hard_disk_formats]]
192 Image Format
193 ^^^^^^^^^^^^
194 On each controller you attach a number of emulated hard disks, which are backed
195 by a file or a block device residing in the configured storage. The choice of
196 a storage type will determine the format of the hard disk image. Storages which
197 present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file-based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.
200
201 * the *QEMU image format* is a copy on write format which allows snapshots, and
202 thin provisioning of the disk image.
203 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
204 you would get when executing the `dd` command on a block device in Linux. This
205 format does not support thin provisioning or snapshots by itself, requiring
206 cooperation from the storage layer for these tasks. It may, however, be up to
207 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
208 https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
209 * the *VMware image format* only makes sense if you intend to import/export the
210 disk image to other hypervisors.
211
212 [[qm_hard_disk_cache]]
213 Cache Mode
214 ^^^^^^^^^^
215 Setting the *Cache* mode of the hard drive will impact how the host system will
216 notify the guest systems of block write completions. The *No cache* default
217 means that the guest system will be notified that a write is complete when each
218 block reaches the physical storage write queue, ignoring the host page cache.
219 This provides a good balance between safety and speed.
220
221 If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
222 you can set the *No backup* option on that disk.
223
224 If you want the {pve} storage replication mechanism to skip a disk when starting
225 a replication job, you can set the *Skip replication* option on that disk.
226 As of {pve} 5.0, replication requires the disk images to be on a storage of type
227 `zfspool`, so adding a disk image to other storages when the VM has replication
228 configured requires to skip replication for this disk image.
229
230 [[qm_hard_disk_discard]]
231 Trim/Discard
232 ^^^^^^^^^^^^
233 If your storage supports _thin provisioning_ (see the storage chapter in the
234 {pve} guide), you can activate the *Discard* option on a drive. With *Discard*
235 set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
236 https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
237 marks blocks as unused after deleting files, the controller will relay this
238 information to the storage, which will then shrink the disk image accordingly.
239 For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
240 option on the drive. Some guest operating systems may also require the
241 *SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
242 only supported on guests using Linux Kernel 5.0 or higher.
243
244 If you would like a drive to be presented to the guest as a solid-state drive
245 rather than a rotational hard disk, you can set the *SSD emulation* option on
246 that drive. There is no requirement that the underlying storage actually be
247 backed by SSDs; this feature can be used with physical media of any type.
248 Note that *SSD emulation* is not supported on *VirtIO Block* drives.
249
250
251 [[qm_hard_disk_iothread]]
252 IO Thread
253 ^^^^^^^^^
254 The option *IO Thread* can only be used when using a disk with the *VirtIO*
255 controller, or with the *SCSI* controller, when the emulated controller type is
256 *VirtIO SCSI single*. With *IO Thread* enabled, QEMU creates one I/O thread per
257 storage controller rather than handling all I/O in the main event loop or vCPU
258 threads. One benefit is better work distribution and utilization of the
259 underlying storage. Another benefit is reduced latency (hangs) in the guest for
260 very I/O-intensive host workloads, since neither the main thread nor a vCPU
261 thread can be blocked by disk I/O.
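
As a sketch combining the options described in this section (VMID, storage name
and disk size are placeholders), a disk using the _VirtIO SCSI single_
controller with IO Thread, Discard and SSD emulation enabled could be added
like this:

----
# qm set <vmid> -scsihw virtio-scsi-single -scsi0 <storage>:32,iothread=1,discard=on,ssd=1
----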
262
263 [[qm_cpu]]
264 CPU
265 ~~~
266
267 [thumbnail="screenshot/gui-create-vm-cpu.png"]
268
269 A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
270 This CPU can then contain one or many *cores*, which are independent
271 processing units. Whether you have a single CPU socket with 4 cores, or two CPU
272 sockets with two cores is mostly irrelevant from a performance point of view.
273 However some software licenses depend on the number of sockets a machine has,
274 in that case it makes sense to set the number of sockets to what the license
275 allows you.
276
Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multi-threaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, QEMU will create a new thread of
281 execution on the host system. If you're not sure about the workload of your VM,
282 it is usually a safe bet to set the number of *Total cores* to 2.
283
284 NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
285 is greater than the number of cores on the server (for example, 4 VMs each with
286 4 cores (= total 16) on a machine with only 8 cores). In that case the host
287 system will balance the QEMU execution threads between your server cores, just
288 like if you were running a standard multi-threaded application. However, {pve}
289 will prevent you from starting VMs with more virtual CPU cores than physically
290 available, as this will only bring the performance down due to the cost of
291 context switches.
292
293 [[qm_cpu_resource_limits]]
294 Resource Limits
295 ^^^^^^^^^^^^^^^
296
In addition to the number of virtual cores, you can configure how many resources
298 a VM can get in relation to the host CPU time and also in relation to other
299 VMs.
300 With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
301 the whole VM can use on the host. It is a floating point value representing CPU
302 time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
303 single process would fully use one single core it would have `100%` CPU Time
304 usage. If a VM with four cores utilizes all its cores fully it would
305 theoretically use `400%`. In reality the usage may be even a bit higher as QEMU
306 can have additional threads for VM peripherals besides the vCPU core ones.
This setting can be useful if a VM should have multiple vCPUs, as it runs a few
processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
which would profit from having 8 vCPUs, but at no time should all of those 8
cores run at full load - as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set the *cpulimit* to
`4.0` (=400%). If all cores do the same heavy work, they would all get 50% of a
real host core's CPU time. But, if only 4 were doing work, they could still get
almost 100% of a real core each.
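
Sticking with this example, such a configuration could look like the following
sketch (placeholder VMID):

----
# qm set <vmid> -cores 8 -cpulimit 4
----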
316
NOTE: VMs can, depending on their configuration, use additional threads, such
as for networking or IO operations but also live migration. Thus a VM can appear
to use more CPU time than just its virtual CPUs could use. To ensure that a
VM never uses more CPU time than its assigned virtual CPUs, set the *cpulimit*
setting to the same value as the total core count.
322
323 The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
324 shares or CPU weight), controls how much CPU time a VM gets compared to other
325 running VMs. It is a relative weight which defaults to `100` (or `1024` if the
326 host uses legacy cgroup v1). If you increase this for a VM it will be
327 prioritized by the scheduler in comparison to other VMs with lower weight. For
328 example, if VM 100 has set the default `100` and VM 200 was changed to `200`,
the latter VM 200 would receive twice the CPU bandwidth of the first VM 100.
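
Following that example, the weight could be changed as sketched below (the
value is illustrative):

----
# qm set 200 -cpuunits 200
----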
330
For more information see `man systemd.resource-control`, where `CPUQuota`
corresponds to `cpulimit` and `CPUWeight` corresponds to our `cpuunits`
setting. Visit its Notes section for references and implementation details.
334
335 The third CPU resource limiting setting, *affinity*, controls what host cores
336 the virtual machine will be permitted to execute on. E.g., if an affinity value
337 of `0-3,8-11` is provided, the virtual machine will be restricted to using the
338 host cores `0,1,2,3,8,9,10,` and `11`. Valid *affinity* values are written in
339 cpuset `List Format`. List Format is a comma-separated list of CPU numbers and
340 ranges of numbers, in ASCII decimal.
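
For example, the affinity value from above could be applied like this
(placeholder VMID):

----
# qm set <vmid> -affinity 0-3,8-11
----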
341
342 NOTE: CPU *affinity* uses the `taskset` command to restrict virtual machines to
343 a given set of cores. This restriction will not take effect for some types of
344 processes that may be created for IO. *CPU affinity is not a security feature.*
345
346 For more information regarding *affinity* see `man cpuset`. Here the
347 `List Format` corresponds to valid *affinity* values. Visit its `Formats`
348 section for more examples.
349
350 CPU Type
351 ^^^^^^^^
352
QEMU can emulate a number of different *CPU types*, from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, etc. Also,
a current generation can be upgraded through microcode updates with bug or
security fixes.
358
359 Usually you should select for your VM a processor type which closely matches the
360 CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host*, in which case the VM will have exactly the same CPU flags
363 as your host system.
364
This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type
or a different microcode version.
If the CPU flags passed to the guest are missing on the new host, the QEMU
process will stop. To remedy this, QEMU also has its own virtual CPU types,
which {pve} uses by default.
370
371 The backend default is 'kvm64' which works on essentially all x86_64 host CPUs
372 and the UI default when creating a new VM is 'x86-64-v2-AES', which requires a
373 host CPU starting from Westmere for Intel or at least a fourth generation
374 Opteron for AMD.
375
376 In short:
377
378 If you don’t care about live migration or have a homogeneous cluster where all
379 nodes have the same CPU and same microcode version, set the CPU type to host, as
380 in theory this will give your guests maximum performance.
381
382 If you care about live migration and security, and you have only Intel CPUs or
383 only AMD CPUs, choose the lowest generation CPU model of your cluster.
384
If you care about live migration without security, or have a mixed Intel/AMD
cluster, choose the lowest compatible virtual QEMU CPU type.
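
For example, on a homogeneous cluster the CPU type could simply be set to
'host' from the command line (a sketch with a placeholder VMID):

----
# qm set <vmid> -cpu host
----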
387
NOTE: Live migrations between Intel and AMD host CPUs are not guaranteed to work.
389
390
391 Intel CPU Types Since 2007 as Defined in QEMU
392 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
393
394 https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors[Intel processors]
395
* 'Nehalem' : https://en.wikipedia.org/wiki/Nehalem_(microarchitecture)[1st generation of the Intel Core processor]
397 +
* 'Nehalem-IBRS (v2)' : add Spectre v1 protection ('+spec-ctrl')
399 +
400 * 'Westmere' : https://en.wikipedia.org/wiki/Westmere_(microarchitecture)[1st generation of the Intel Core processor (Xeon E7-)]
401 +
402 * 'Westmere-IBRS (v2)' : add Spectre v1 protection ('+spec-ctrl')
403 +
404 * 'SandyBridge' : https://en.wikipedia.org/wiki/Sandy_Bridge[2nd generation of the Intel Core processor]
405 +
406 * 'SandyBridge-IBRS (v2)' : add Spectre v1 protection ('+spec-ctrl')
407 +
408 * 'IvyBridge' : https://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)[3rd generation of the Intel Core processor]
409 +
410 * 'IvyBridge-IBRS (v2)': add Spectre v1 protection ('+spec-ctrl')
411 +
412 * 'Haswell' : https://en.wikipedia.org/wiki/Haswell_(microarchitecture)[4th generation of the Intel Core processor]
413 +
414 * 'Haswell-noTSX (v2)' : disable TSX ('-hle', '-rtm')
415 +
416 * 'Haswell-IBRS (v3)' : re-add TSX, add Spectre v1 protection ('+hle', '+rtm',
417 '+spec-ctrl')
418 +
419 * 'Haswell-noTSX-IBRS (v4)' : disable TSX ('-hle', '-rtm')
420 +
421 * 'Broadwell': https://en.wikipedia.org/wiki/Broadwell_(microarchitecture)[5th generation of the Intel Core processor]
422 +
423 * 'Skylake': https://en.wikipedia.org/wiki/Skylake_(microarchitecture)[1st generation Xeon Scalable server processors]
424 +
425 * 'Skylake-IBRS (v2)' : add Spectre v1 protection, disable CLFLUSHOPT
426 ('+spec-ctrl', '-clflushopt')
427 +
428 * 'Skylake-noTSX-IBRS (v3)' : disable TSX ('-hle', '-rtm')
429 +
430 * 'Skylake-v4': add EPT switching ('+vmx-eptp-switching')
431 +
432 * 'Cascadelake': https://en.wikipedia.org/wiki/Cascade_Lake_(microprocessor)[2nd generation Xeon Scalable processor]
433 +
434 * 'Cascadelake-v2' : add arch_capabilities msr ('+arch-capabilities',
435 '+rdctl-no', '+ibrs-all', '+skip-l1dfl-vmentry', '+mds-no')
436 +
437 * 'Cascadelake-v3' : disable TSX ('-hle', '-rtm')
438 +
439 * 'Cascadelake-v4' : add EPT switching ('+vmx-eptp-switching')
440 +
441 * 'Cascadelake-v5' : add XSAVES ('+xsaves', '+vmx-xsaves')
442 +
443 * 'Cooperlake' : https://en.wikipedia.org/wiki/Cooper_Lake_(microprocessor)[3rd generation Xeon Scalable processors for 4 & 8 sockets servers]
444 +
445 * 'Cooperlake-v2' : add XSAVES ('+xsaves', '+vmx-xsaves')
446 +
447 * 'Icelake': https://en.wikipedia.org/wiki/Ice_Lake_(microprocessor)[3rd generation Xeon Scalable server processors]
448 +
449 * 'Icelake-v2' : disable TSX ('-hle', '-rtm')
450 +
451 * 'Icelake-v3' : add arch_capabilities msr ('+arch-capabilities', '+rdctl-no',
452 '+ibrs-all', '+skip-l1dfl-vmentry', '+mds-no', '+pschange-mc-no', '+taa-no')
453 +
454 * 'Icelake-v4' : add missing flags ('+sha-ni', '+avx512ifma', '+rdpid', '+fsrm',
455 '+vmx-rdseed-exit', '+vmx-pml', '+vmx-eptp-switching')
456 +
457 * 'Icelake-v5' : add XSAVES ('+xsaves', '+vmx-xsaves')
458 +
459 * 'Icelake-v6' : add "5-level EPT" ('+vmx-page-walk-5')
460 +
461 * 'SapphireRapids' : https://en.wikipedia.org/wiki/Sapphire_Rapids[4th generation Xeon Scalable server processors]
462
463
464 AMD CPU Types Since 2007 as Defined in QEMU
465 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
466
467 https://en.wikipedia.org/wiki/List_of_AMD_processors[AMD processors]
468
469 * 'Opteron_G3' : https://en.wikipedia.org/wiki/AMD_10h[K10]
470 +
471 * 'Opteron_G4' : https://en.wikipedia.org/wiki/Bulldozer_(microarchitecture)[Bulldozer]
472 +
473 * 'Opteron_G5' : https://en.wikipedia.org/wiki/Piledriver_(microarchitecture)[Piledriver]
474 +
475 * 'EPYC' : https://en.wikipedia.org/wiki/Zen_(first_generation)[1st generation of Zen processors]
476 +
477 * 'EPYC-IBPB (v2)' : add Spectre v1 protection ('+ibpb')
478 +
479 * 'EPYC-v3' : add missing flags ('+perfctr-core', '+clzero', '+xsaveerptr',
480 '+xsaves')
481 +
482 * 'EPYC-Rome' : https://en.wikipedia.org/wiki/Zen_2[2nd generation of Zen processors]
483 +
484 * 'EPYC-Rome-v2' : add Spectre v2, v4 protection ('+ibrs', '+amd-ssbd')
485 +
486 * 'EPYC-Milan' : https://en.wikipedia.org/wiki/Zen_3[3rd generation of Zen processors]
487 +
488 * 'EPYC-Milan-v2' : add missing flags ('+vaes', '+vpclmulqdq',
489 '+stibp-always-on', '+amd-psfd', '+no-nested-data-bp',
490 '+lfence-always-serializing', '+null-sel-clr-base')
491
492 QEMU CPU Types
493 ^^^^^^^^^^^^^^
494
QEMU also provides virtual CPU types, compatible with both Intel and AMD host
496 CPUs.
497
498 NOTE: To mitigate the Spectre vulnerability for virtual CPU types, you need to
499 add the relevant CPU flags, see
500 xref:qm_meltdown_spectre[Meltdown / Spectre related CPU flags].
501
502 Historically, {pve} had the 'kvm64' CPU model, with CPU flags at the level of
503 Pentium 4 enabled, so performance was not great for certain workloads.
504
505 In the summer of 2020, AMD, Intel, Red Hat, and SUSE collaborated to define
506 three x86-64 microarchitecture levels on top of the x86-64 baseline, with modern
507 flags enabled. For details, see the
508 https://gitlab.com/x86-psABIs/x86-64-ABI[x86-64-ABI specification].
509
510 NOTE: Some newer distributions like CentOS 9 are now built with 'x86-64-v2'
511 flags as a minimum requirement.
512
513 * 'kvm64 (x86-64-v1)': Compatible with Intel CPU >= Pentium 4, AMD CPU >=
514 Phenom.
515 +
516 * 'x86-64-v2': Compatible with Intel CPU >= Nehalem, AMD CPU >= Opteron_G3.
517 Added CPU flags compared to 'x86-64-v1': '+cx16', '+lahf-lm', '+popcnt', '+pni',
518 '+sse4.1', '+sse4.2', '+ssse3'.
519 +
520 * 'x86-64-v2-AES': Compatible with Intel CPU >= Westmere, AMD CPU >= Opteron_G4.
521 Added CPU flags compared to 'x86-64-v2': '+aes'.
522 +
523 * 'x86-64-v3': Compatible with Intel CPU >= Broadwell, AMD CPU >= EPYC. Added
524 CPU flags compared to 'x86-64-v2-AES': '+avx', '+avx2', '+bmi1', '+bmi2',
525 '+f16c', '+fma', '+movbe', '+xsave'.
526 +
527 * 'x86-64-v4': Compatible with Intel CPU >= Skylake, AMD CPU >= EPYC v4 Genoa.
528 Added CPU flags compared to 'x86-64-v3': '+avx512f', '+avx512bw', '+avx512cd',
529 '+avx512dq', '+avx512vl'.
530
531 Custom CPU Types
532 ^^^^^^^^^^^^^^^^
533
534 You can specify custom CPU types with a configurable set of features. These are
535 maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
536 an administrator. See `man cpu-models.conf` for format details.
537
538 Specified custom types can be selected by any user with the `Sys.Audit`
539 privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
540 or API, the name needs to be prefixed with 'custom-'.
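
For example, assuming a model named '<model-name>' has been defined in that
file, it could be assigned to a VM like this (a sketch with placeholders):

----
# qm set <vmid> -cpu custom-<model-name>
----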
541
542 [[qm_meltdown_spectre]]
543 Meltdown / Spectre related CPU flags
544 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
545
546 There are several CPU flags related to the Meltdown and Spectre vulnerabilities
547 footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
548 manually unless the selected CPU type of your VM already enables them by default.
549
550 There are two requirements that need to be fulfilled in order to use these
551 CPU flags:
552
553 * The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
554 * The guest operating system must be updated to a version which mitigates the
555 attacks and is able to utilize the CPU feature
556
557 Otherwise you need to set the desired CPU flag of the virtual CPU, either by
558 editing the CPU options in the WebUI, or by setting the 'flags' property of the
559 'cpu' option in the VM configuration file.
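
As a sketch, such an entry in the VM configuration file might look like the
following (the CPU type and flags are placeholders; multiple flags are
separated by ';'):

----
cpu: Westmere,flags=+pcid;+spec-ctrl
----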
560
561 For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
562 so-called ``microcode update'' footnote:[You can use `intel-microcode' /
563 `amd-microcode' from Debian non-free if your vendor does not provide such an
564 update. Note that not all affected CPUs can be updated to support spec-ctrl.]
565 for your CPU.
566
567
568 To check if the {pve} host is vulnerable, execute the following command as root:
569
570 ----
571 for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
572 ----
573
A community script is also available to detect if the host is still vulnerable.
575 footnote:[spectre-meltdown-checker https://meltdown.ovh/]
576
577 Intel processors
578 ^^^^^^^^^^^^^^^^
579
580 * 'pcid'
581 +
582 This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
583 called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
584 the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
585 mechanism footnote:[PCID is now a critical performance/security feature on x86
586 https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
587 +
588 To check if the {pve} host supports PCID, execute the following command as root:
589 +
590 ----
591 # grep ' pcid ' /proc/cpuinfo
592 ----
593 +
If this does not return empty, your host's CPU has support for 'pcid'.
595
596 * 'spec-ctrl'
597 +
598 Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
599 in cases where retpolines are not sufficient.
600 Included by default in Intel CPU models with -IBRS suffix.
601 Must be explicitly turned on for Intel CPU models without -IBRS suffix.
602 Requires an updated host CPU microcode (intel-microcode >= 20180425).
603 +
604 * 'ssbd'
605 +
606 Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
607 Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).
609
610
611 AMD processors
612 ^^^^^^^^^^^^^^
613
614 * 'ibpb'
615 +
616 Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
617 in cases where retpolines are not sufficient.
618 Included by default in AMD CPU models with -IBPB suffix.
619 Must be explicitly turned on for AMD CPU models without -IBPB suffix.
620 Requires the host CPU microcode to support this feature before it can be used for guest CPUs.
621
622
623
624 * 'virt-ssbd'
625 +
626 Required to enable the Spectre v4 (CVE-2018-3639) fix.
627 Not included by default in any AMD CPU model.
628 Must be explicitly turned on for all AMD CPU models.
629 This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" cpu model,
because this is a virtual feature which does not exist in the physical CPUs.
632
633
634 * 'amd-ssbd'
635 +
636 Required to enable the Spectre v4 (CVE-2018-3639) fix.
637 Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
638 This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility, as some kernels only know about virt-ssbd.
640
641
642 * 'amd-no-ssb'
643 +
644 Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
645 Not included by default in any AMD CPU model.
646 Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
647 and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
648 This is mutually exclusive with virt-ssbd and amd-ssbd.
649
650
651 NUMA
652 ^^^^
653 You can also optionally emulate a *NUMA*
654 footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
655 in your VMs. The basics of the NUMA architecture mean that instead of having a
656 global memory pool available to all your cores, the memory is spread into local
657 banks close to each socket.
658 This can bring speed improvements as the memory bus is not a bottleneck
659 anymore. If your system has a NUMA architecture footnote:[if the command
660 `numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture], we recommend activating the option, as this
662 will allow proper distribution of the VM resources on the host system.
663 This option is also required to hot-plug cores or RAM in a VM.
664
665 If the NUMA option is used, it is recommended to set the number of sockets to
666 the number of nodes of the host system.
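
For example, NUMA could be enabled together with a matching socket count as in
this sketch (placeholder VMID and values):

----
# qm set <vmid> -numa 1 -sockets 2 -cores 4
----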
667
668 vCPU hot-plug
669 ^^^^^^^^^^^^^
670
671 Modern operating systems introduced the capability to hot-plug and, to a
672 certain extent, hot-unplug CPUs in a running system. Virtualization allows us
673 to avoid a lot of the (physical) problems real hardware can cause in such
674 scenarios.
675 Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
677 be replicated with other, well tested and less complicated, features, see
678 xref:qm_cpu_resource_limits[Resource Limits].
679
In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with fewer than this total core count of CPUs you may use the
*vcpus* setting, which denotes how many vCPUs should be plugged in at VM start.
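
For example, a VM could be given 4 cores but start with only 2 of them plugged
in (a sketch with a placeholder VMID):

----
# qm set <vmid> -cores 4 -vcpus 2
----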
683
Currently, this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.
686
You can use a udev rule as follows to automatically set new CPUs as online in
688 the guest:
689
690 ----
691 SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
692 ----
693
694 Save this under /etc/udev/rules.d/ as a file ending in `.rules`.
695
Note: CPU hot-remove is machine dependent and requires guest cooperation. The
deletion command does not guarantee CPU removal to actually happen; typically,
it is a request forwarded to the guest OS using a target-dependent mechanism,
such as ACPI on x86/amd64.
700
701
702 [[qm_memory]]
703 Memory
704 ~~~~~~
705
For each VM you have the option to set a fixed amount of memory or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.
709
710 .Fixed Memory Allocation
711 [thumbnail="screenshot/gui-create-vm-memory.png"]
712
When setting memory and minimum memory to the same amount,
714 {pve} will simply allocate what you specify to your VM.
715
716 Even when using a fixed memory size, the ballooning device gets added to the
717 VM, because it delivers useful information such as how much memory the guest
718 really uses.
719 In general, you should leave *ballooning* enabled, but if you want to disable
720 it (like for debugging purposes), simply uncheck *Ballooning Device* or set
721
722 balloon: 0
723
724 in the configuration.
725
726 .Automatic Memory Allocation
727
728 // see autoballoon() in pvestatd.pm
729 When setting the minimum memory lower than memory, {pve} will make sure that the
730 minimum amount you specified is always available to the VM, and if RAM usage on
731 the host is below 80%, will dynamically add memory to the guest up to the
732 maximum memory specified.
733
734 When the host is running low on RAM, the VM will then release some memory
735 back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
737 done via a special `balloon` kernel driver running inside the guest, which will
738 grab or release memory pages from the host.
739 footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]
740
741 When multiple VMs use the autoallocate facility, it is possible to set a
742 *Shares* coefficient which indicates the relative amount of the free host memory
743 that each VM should take. Suppose for instance you have four VMs, three of them
744 running an HTTP server and the last one is a database server. To cache more
745 database blocks in the database server RAM, you would like to prioritize the
746 database VM when spare RAM is available. For this you assign a Shares property
747 of 3000 to the database VM, leaving the other VMs to the Shares default setting
748 of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
749 * 80/100 - 16 = 9GB RAM to be allocated to the VMs. The database VM will get 9 *
750 3000 / (3000 + 1000 + 1000 + 1000) = 4.5 GB extra RAM and each HTTP server will
751 get 1.5 GB.
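
Following that example, the database VM could be configured roughly like this
(placeholder VMID; memory values are in MiB and purely illustrative):

----
# qm set <vmid> -memory 8192 -balloon 2048 -shares 3000
----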
752
753 All Linux distributions released after 2010 have the balloon kernel driver
754 included. For Windows OSes, the balloon driver needs to be added manually and can
755 incur a slowdown of the guest, so we don't recommend using it on critical
756 systems.
757 // see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
758
759 When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
760 of RAM available to the host.
761
762
763 [[qm_network_device]]
764 Network Device
765 ~~~~~~~~~~~~~~
766
767 [thumbnail="screenshot/gui-create-vm-network.png"]
768
Each VM can have many _Network interface controllers_ (NICs), of four different
770 types:
771
772 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
773 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
774 performance. Like all VirtIO devices, the guest OS should have the proper driver
775 installed.
* the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
only be used when emulating older operating systems (released before 2002)
778 * the *vmxnet3* is another paravirtualized device, which should only be used
779 when importing a VM from another hypervisor.
780
781 {pve} will generate for each NIC a random *MAC address*, so that your VM is
782 addressable on Ethernet networks.
783
784 The NIC you added to the VM can follow one of two different models:
785
786 * in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
788 tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
789 have direct access to the Ethernet LAN on which the host is located.
790 * in the alternative *NAT mode*, each virtual NIC will only communicate with
791 the QEMU user networking stack, where a built-in router and DHCP server can
792 provide network access. This built-in DHCP will serve addresses in the private
793 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
794 should only be used for testing. This mode is only available via CLI or the API,
795 but not via the WebUI.
796
797 You can also skip adding a network device when creating a VM by selecting *No
798 network device*.
799
800 You can overwrite the *MTU* setting for each VM network device. The option
801 `mtu=1` represents a special case, in which the MTU value will be inherited
802 from the underlying bridge.
803 This option is only available for *VirtIO* network devices.
804
805 .Multiqueue
806 If you are using the VirtIO driver, you can optionally activate the
807 *Multiqueue* option. This option allows the guest OS to process networking
808 packets using multiple virtual CPUs, providing an increase in the total number
809 of packets transferred.
810
811 //http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
812 When using the VirtIO driver with {pve}, each NIC network queue is passed to the
813 host kernel, where the queue will be processed by a kernel thread spawned by the
814 vhost driver. With this option activated, it is possible to pass _multiple_
815 network queues to the host kernel for each NIC.
816
817 //https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
818 When using Multiqueue, it is recommended to set it to a value equal
819 to the number of Total Cores of your guest. You also need to set in
820 the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
821 command:
822
823 `ethtool -L ens1 combined X`
824
where X is the number of vCPUs of the VM.
826
827 You should note that setting the Multiqueue parameter to a value greater
828 than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
830 process a great number of incoming connections, such as when the VM is running
831 as a router, reverse proxy or a busy HTTP server doing long polling.
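
As a sketch (placeholder VMID and values), a VirtIO NIC on the default bridge
with four queues could be configured like this:

----
# qm set <vmid> -net0 virtio,bridge=vmbr0,queues=4
----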
832
833 [[qm_display]]
834 Display
835 ~~~~~~~
836
837 QEMU can virtualize a few types of VGA hardware. Some examples are:
838
839 * *std*, the default, emulates a card with Bochs VBE extensions.
840 * *cirrus*, this was once the default, it emulates a very old hardware module
841 with all its problems. This display type should only be used if really
842 necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
843 qemu: using cirrus considered harmful], for example, if using Windows XP or
844 earlier
845 * *vmware*, is a VMWare SVGA-II compatible adapter.
846 * *qxl*, is the QXL paravirtualized graphics card. Selecting this also
847 enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
848 VM.
* *virtio-gl*, often named VirGL, is a virtual 3D GPU for use inside VMs that
can offload workloads to the host GPU without requiring special (expensive)
models and drivers, and without binding the host GPU completely, allowing
reuse between multiple guests and/or the host.
853 +
854 NOTE: VirGL support needs some extra libraries that aren't installed by
855 default due to being relatively big and also not available as open source for
856 all GPU models/vendors. For most setups you'll just need to do:
857 `apt install libgl1 libegl1`
858
You can edit the amount of memory given to the virtual GPU by setting
the 'memory' option. This can enable higher resolutions inside the VM,
861 especially with SPICE/QXL.
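
For example (a sketch with placeholder values), the display type and its
memory could be set like this:

----
# qm set <vmid> -vga qxl,memory=32
----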
862
As the memory is reserved by the display device, selecting Multi-Monitor mode
864 for SPICE (such as `qxl2` for dual monitors) has some implications:
865
866 * Windows needs a device for each monitor, so if your 'ostype' is some
867 version of Windows, {pve} gives the VM an extra device per monitor.
868 Each device gets the specified amount of memory.
869
* Linux VMs can always enable more virtual monitors, but selecting
a Multi-Monitor mode multiplies the memory given to the device by
the number of monitors.
873
874 Selecting `serialX` as display 'type' disables the VGA output, and redirects
875 the Web Console to the selected serial port. A configured display 'memory'
876 setting will be ignored in that case.
877
878 [[qm_usb_passthrough]]
879 USB Passthrough
880 ~~~~~~~~~~~~~~~
881
882 There are two different types of USB passthrough devices:
883
884 * Host USB passthrough
885 * SPICE USB passthrough
886
887 Host USB passthrough works by giving a VM a USB device of the host.
888 This can either be done via the vendor- and product-id, or
889 via the host bus and port.
890
891 The vendor/product-id looks like this: *0123:abcd*,
892 where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same USB device
have the same id.
895
896 The bus/port looks like this: *1-2.3.4*, where *1* is the bus
897 and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).
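
For example, both variants could be configured like this (a sketch; the IDs
and the port path are placeholders):

----
# qm set <vmid> -usb0 host=0123:abcd
# qm set <vmid> -usb1 host=1-2.3.4
----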
900
901 If a device is present in a VM configuration when the VM starts up,
902 but the device is not present in the host, the VM can boot without problems.
903 As soon as the device/port is available in the host, it gets passed through.
904
905 WARNING: Using this kind of USB passthrough means that you cannot move
906 a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.
908
909 The second type of passthrough is SPICE USB passthrough. This is useful
910 if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client is,
912 directly to the VM (for example an input device or hardware dongle).
913
It is also possible to map devices on a cluster level, so that they can be
properly used with HA, hardware changes are detected, and non-root users
can configure them. See xref:resource_mapping[Resource Mapping]
for details on that.
918
919 [[qm_bios_and_uefi]]
920 BIOS and UEFI
921 ~~~~~~~~~~~~~
922
In order to properly emulate a computer, QEMU needs to use a firmware.
This firmware, on common PCs often known as BIOS or (U)EFI, is executed as one
of the first steps when booting a VM. It is responsible for doing basic
hardware initialization and for providing an interface to the firmware and
hardware for the operating system. By default QEMU uses *SeaBIOS* for this,
which is an open-source, x86 BIOS implementation. SeaBIOS is a good choice for
most standard setups.
930
Some operating systems (such as Windows 11) may require the use of a
UEFI-compatible implementation. In such cases, you must use *OVMF* instead,
933 which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]
934
There are other scenarios in which SeaBIOS may not be the ideal firmware to
936 boot from, for example if you want to do VGA passthrough. footnote:[Alex
937 Williamson has a good blog entry about this
938 https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
939
940 If you want to use OVMF, there are several things to consider:
941
942 In order to save things like the *boot order*, there needs to be an EFI Disk.
943 This disk will be included in backups and snapshots, and there can only be one.
944
945 You can create such a disk with the following command:
946
947 ----
948 # qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
949 ----
950
951 Where *<storage>* is the storage where you want to have the disk, and
952 *<format>* is a format which the storage supports. Alternatively, you can
953 create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
954 hardware section of a VM.
955
956 The *efitype* option specifies which version of the OVMF firmware should be
957 used. For new VMs, this should always be '4m', as it supports Secure Boot and
958 has more space allocated to support future development (this is the default in
959 the GUI).
960
*pre-enrolled-keys* specifies if the efidisk should come pre-loaded with
962 distribution-specific and Microsoft Standard Secure Boot keys. It also enables
963 Secure Boot by default (though it can still be disabled in the OVMF menu within
964 the VM).
965
966 NOTE: If you want to start using Secure Boot in an existing VM (that still uses
967 a '2m' efidisk), you need to recreate the efidisk. To do so, delete the old one
968 (`qm set <vmid> -delete efidisk0`) and add a new one as described above. This
969 will reset any custom configurations you have made in the OVMF menu!
970
971 When using OVMF with a virtual display (without VGA passthrough),
972 you need to set the client resolution in the OVMF menu (which you can reach
973 with a press of the ESC button during boot), or you have to choose
974 SPICE as the display type.
975
976 [[qm_tpm]]
977 Trusted Platform Module (TPM)
978 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
979
980 A *Trusted Platform Module* is a device which stores secret data - such as
981 encryption keys - securely and provides tamper-resistance functions for
982 validating system boot.
983
984 Certain operating systems (such as Windows 11) require such a device to be
985 attached to a machine (be it physical or virtual).
986
A TPM is added by specifying a *tpmstate* volume. This works similarly to an
988 efidisk, in that it cannot be changed (only removed) once created. You can add
989 one via the following command:
990
991 ----
992 # qm set <vmid> -tpmstate0 <storage>:1,version=<version>
993 ----
994
995 Where *<storage>* is the storage you want to put the state on, and *<version>*
996 is either 'v1.2' or 'v2.0'. You can also add one via the web interface, by
997 choosing 'Add' -> 'TPM State' in the hardware section of a VM.
998
999 The 'v2.0' TPM spec is newer and better supported, so unless you have a specific
1000 implementation that requires a 'v1.2' TPM, it should be preferred.
1001
1002 NOTE: Compared to a physical TPM, an emulated one does *not* provide any real
1003 security benefits. The point of a TPM is that the data on it cannot be modified
1004 easily, except via commands specified as part of the TPM spec. Since with an
1005 emulated device the data storage happens on a regular volume, it can potentially
1006 be edited by anyone with access to it.
1007
1008 [[qm_ivshmem]]
1009 Inter-VM shared memory
1010 ~~~~~~~~~~~~~~~~~~~~~~
1011
1012 You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
1013 share memory between the host and a guest, or also between multiple guests.
1014
1015 To add such a device, you can use `qm`:
1016
1017 ----
1018 # qm set <vmid> -ivshmem size=32,name=foo
1019 ----
1020
1021 Where the size is in MiB. The file will be located under
1022 `/dev/shm/pve-shm-$name` (the default name is the vmid).
1023
NOTE: Currently the device will get deleted as soon as any VM using it gets
shut down or stopped. Open connections will still persist, but new connections
to the exact same device cannot be made anymore.
1027
1028 A use case for such a device is the Looking Glass
1029 footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
1030 performance, low-latency display mirroring between host and guest.
1031
1032 [[qm_audio_device]]
1033 Audio Device
1034 ~~~~~~~~~~~~
1035
1036 To add an audio device run the following command:
1037
1038 ----
1039 qm set <vmid> -audio0 device=<device>
1040 ----
1041
1042 Supported audio devices are:
1043
1044 * `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
1045 * `intel-hda`: Intel HD Audio Controller, emulates ICH6
1046 * `AC97`: Audio Codec '97, useful for older operating systems like Windows XP
1047
1048 There are two backends available:
1049
1050 * 'spice'
1051 * 'none'
1052
1053 The 'spice' backend can be used in combination with xref:qm_display[SPICE] while
1054 the 'none' backend can be useful if an audio device is needed in the VM for some
1055 software to work. To use the physical audio device of the host use device
1056 passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
1057 xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft’s RDP
1058 have options to play sound.
1059
1060
1061 [[qm_virtio_rng]]
1062 VirtIO RNG
1063 ~~~~~~~~~~
1064
1065 A RNG (Random Number Generator) is a device providing entropy ('randomness') to
1066 a system. A virtual hardware-RNG can be used to provide such entropy from the
1067 host system to a guest VM. This helps to avoid entropy starvation problems in
1068 the guest (a situation where not enough entropy is available and the system may
slow down or run into problems), especially during the guest's boot process.
1070
1071 To add a VirtIO-based emulated RNG, run the following command:
1072
1073 ----
1074 qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
1075 ----
1076
1077 `source` specifies where entropy is read from on the host and has to be one of
1078 the following:
1079
1080 * `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
1081 * `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
1082 starvation on the host system)
1083 * `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
1084 are available, the one selected in
1085 `/sys/devices/virtual/misc/hw_random/rng_current` will be used)
1086
1087 A limit can be specified via the `max_bytes` and `period` parameters, they are
1088 read as `max_bytes` per `period` in milliseconds. However, it does not represent
1089 a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
1090 available on a 1 second timer, not that 1 KiB is streamed to the guest over the
1091 course of one second. Reducing the `period` can thus be used to inject entropy
1092 into the guest at a faster rate.
1093
1094 By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
1095 recommended to always use a limiter to avoid guests using too many host
1096 resources. If desired, a value of '0' for `max_bytes` can be used to disable
1097 all limits.
1098
1099 [[qm_bootorder]]
1100 Device Boot Order
1101 ~~~~~~~~~~~~~~~~~
1102
1103 QEMU can tell the guest which devices it should boot from, and in which order.
1104 This can be specified in the config via the `boot` property, for example:
1105
1106 ----
1107 boot: order=scsi0;net0;hostpci0
1108 ----
1109
1110 [thumbnail="screenshot/gui-qemu-edit-bootorder.png"]
1111
This way, the guest would first attempt to boot from the disk `scsi0`; if that
fails, it would go on to attempt network boot from `net0`, and in case that
1114 fails too, finally attempt to boot from a passed through PCIe device (seen as
1115 disk in case of NVMe, otherwise tries to launch into an option ROM).
1116
1117 On the GUI you can use a drag-and-drop editor to specify the boot order, and use
1118 the checkbox to enable or disable certain devices for booting altogether.
1119
1120 NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
1121 all of them must be marked as 'bootable' (that is, they must have the checkbox
1122 enabled or appear in the list in the config) for the guest to be able to boot.
1123 This is because recent SeaBIOS and OVMF versions only initialize disks if they
1124 are marked 'bootable'.
1125
1126 In any case, even devices not appearing in the list or having the checkmark
disabled will still be available to the guest, once its operating system has
1128 booted and initialized them. The 'bootable' flag only affects the guest BIOS and
1129 bootloader.
1130
1131
1132 [[qm_startup_and_shutdown]]
1133 Automatic Start and Shutdown of Virtual Machines
1134 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1135
1136 After creating your VMs, you probably want them to start automatically
1137 when the host system boots. For this you need to select the option 'Start at
1138 boot' from the 'Options' Tab of your VM in the web interface, or set it with
1139 the following command:
1140
1141 ----
1142 # qm set <vmid> -onboot 1
1143 ----
1144
1145 .Start and Shutdown Order
1146
1147 [thumbnail="screenshot/gui-qemu-edit-start-order.png"]
1148
In some cases you may want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters:
1153
* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the VM to be the first to be started. (We use the reverse
startup order for shutdown, so a machine with a start order of 1 would be the
last to be shut down). If multiple VMs have the same order defined on a host,
they will additionally be ordered by 'VMID' in ascending order.
1160 * *Startup delay*: Defines the interval between this VM start and subsequent
1161 VMs starts. For example, set it to 240 if you want to wait 240 seconds before
1162 starting other VMs.
1163 * *Shutdown timeout*: Defines the duration in seconds {pve} should wait
1164 for the VM to be offline after issuing a shutdown command. By default this
1165 value is set to 180, which means that {pve} will issue a shutdown request and
1166 wait 180 seconds for the machine to be offline. If the machine is still online
1167 after the timeout it will be stopped forcefully.
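
The three parameters above map to the `startup` property and could be set like
this (a sketch with placeholder values):

----
# qm set <vmid> -startup order=1,up=240,down=180
----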
1168
1169 NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
1170 'boot order' options currently. Those VMs will be skipped by the startup and
1171 shutdown algorithm as the HA manager itself ensures that VMs get started and
1172 stopped.
1173
1174 Please note that machines without a Start/Shutdown order parameter will always
1175 start after those where the parameter is set. Further, this parameter can only
1176 be enforced between virtual machines running on the same host, not
1177 cluster-wide.
1178
1179 If you require a delay between the host boot and the booting of the first VM,
1180 see the section on xref:first_guest_boot_delay[Proxmox VE Node Management].
1181
1182
1183 [[qm_qemu_agent]]
1184 QEMU Guest Agent
1185 ~~~~~~~~~~~~~~~~
1186
1187 The QEMU Guest Agent is a service which runs inside the VM, providing a
1188 communication channel between the host and the guest. It is used to exchange
1189 information and allows the host to issue commands to the guest.
1190
1191 For example, the IP addresses in the VM summary panel are fetched via the guest
1192 agent.
1193
1194 Or when starting a backup, the guest is told via the guest agent to sync
1195 outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.
1196
1197 For the guest agent to work properly the following steps must be taken:
1198
1199 * install the agent in the guest and make sure it is running
1200 * enable the communication via the agent in {pve}
1201
1202 Install Guest Agent
1203 ^^^^^^^^^^^^^^^^^^^
1204
1205 For most Linux distributions, the guest agent is available. The package is
1206 usually named `qemu-guest-agent`.
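
For example, inside a Debian-based guest the agent can be installed and started
like this (package and service names may differ on other distributions):

----
# apt install qemu-guest-agent
# systemctl enable --now qemu-guest-agent
----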
1207
1208 For Windows, it can be installed from the
1209 https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
1210 VirtIO driver ISO].
1211
1212 [[qm_qga_enable]]
1213 Enable Guest Agent Communication
1214 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1215
1216 Communication from {pve} with the guest agent can be enabled in the VM's
1217 *Options* panel. A fresh start of the VM is necessary for the changes to take
1218 effect.
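
Alternatively, the agent communication can be enabled on the command line, for
example:

----
# qm set <vmid> -agent enabled=1
----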
1219
1220 [[qm_qga_auto_trim]]
1221 Automatic TRIM Using QGA
1222 ^^^^^^^^^^^^^^^^^^^^^^^^
1223
1224 It is possible to enable the 'Run guest-trim' option. With this enabled,
1225 {pve} will issue a trim command to the guest after the following
1226 operations that have the potential to write out zeros to the storage:
1227
1228 * moving a disk to another storage
1229 * live migrating a VM to another node with local storage
1230
1231 On a thin provisioned storage, this can help to free up unused space.
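
The option can also be enabled on the command line, for example via the
`fstrim_cloned_disks` flag of the `agent` property:

----
# qm set <vmid> -agent enabled=1,fstrim_cloned_disks=1
----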
1232
1233 NOTE: There is a caveat with ext4 on Linux, because it uses an in-memory
1234 optimization to avoid issuing duplicate TRIM requests. Since the guest doesn't
1235 know about the change in the underlying storage, only the first guest-trim will
1236 run as expected. Subsequent ones, until the next reboot, will only consider
1237 parts of the filesystem that changed since then.
1238
1239 [[qm_qga_fsfreeze]]
1240 Filesystem Freeze & Thaw on Backup
1241 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1242
1243 By default, guest filesystems are synced via the 'fs-freeze' QEMU Guest Agent
1244 Command when a backup is performed, to provide consistency.
1245
1246 On Windows guests, some applications might handle consistent backups themselves
1247 by hooking into the Windows VSS (Volume Shadow Copy Service) layer, a
1248 'fs-freeze' then might interfere with that. For example, it has been observed
1249 that calling 'fs-freeze' with some SQL Servers triggers VSS to call the SQL
1250 Writer VSS module in a mode that breaks the SQL Server backup chain for
1251 differential backups.
1252
1253 For such setups you can configure {pve} to not issue a freeze-and-thaw cycle on
1254 backup by setting the `freeze-fs-on-backup` QGA option to `0`. This can also be
1255 done via the GUI with the 'Freeze/thaw guest filesystems on backup for
1256 consistency' option.
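
On the command line this can, for example, be done with:

----
# qm set <vmid> -agent enabled=1,freeze-fs-on-backup=0
----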
1257
1258 IMPORTANT: Disabling this option can potentially lead to backups with inconsistent
1259 filesystems and should therefore only be disabled if you know what you are
1260 doing.
1261
1262 Troubleshooting
1263 ^^^^^^^^^^^^^^^
1264
1265 .VM does not shut down
1266
1267 Make sure the guest agent is installed and running.
1268
1269 Once the guest agent is enabled, {pve} will send power commands like
'shutdown' via the guest agent. If the guest agent is not running, these commands
cannot be executed properly and the shutdown command will run into a timeout.
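
To check whether the agent is reachable from the host, you can, for example,
send a ping through it:

----
# qm guest cmd <vmid> ping
----

If the agent is running and the communication channel works, this should return
without an error.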
1272
1273 [[qm_spice_enhancements]]
1274 SPICE Enhancements
1275 ~~~~~~~~~~~~~~~~~~
1276
1277 SPICE Enhancements are optional features that can improve the remote viewer
1278 experience.
1279
1280 To enable them via the GUI go to the *Options* panel of the virtual machine. Run
1281 the following command to enable them via the CLI:
1282
1283 ----
1284 qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
1285 ----
1286
1287 NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
1288 must be set to SPICE (qxl).
1289
1290 Folder Sharing
1291 ^^^^^^^^^^^^^^
1292
1293 Share a local folder with the guest. The `spice-webdavd` daemon needs to be
1294 installed in the guest. It makes the shared folder available through a local
1295 WebDAV server located at http://localhost:9843.
1296
1297 For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded
1298 from the
1299 https://www.spice-space.org/download.html#windows-binaries[official SPICE website].
1300
1301 Most Linux distributions have a package called `spice-webdavd` that can be
1302 installed.
1303
1304 To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
1305 Select the folder to share and then enable the checkbox.
1306
1307 NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
1308
1309 CAUTION: Experimental! Currently this feature does not work reliably.
1310
1311 Video Streaming
1312 ^^^^^^^^^^^^^^^
1313
1314 Fast refreshing areas are encoded into a video stream. Two options exist:
1315
1316 * *all*: Any fast refreshing area will be encoded into a video stream.
1317 * *filter*: Additional filters are used to decide if video streaming should be
1318 used (currently only small window surfaces are skipped).
1319
A general recommendation on whether video streaming should be enabled, and
which option to choose, cannot be given. Your mileage may vary depending on the
specific circumstances.
1323
1324 Troubleshooting
1325 ^^^^^^^^^^^^^^^
1326
1327 .Shared folder does not show up
1328
1329 Make sure the WebDAV service is enabled and running in the guest. On Windows it
1330 is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be
1331 different depending on the distribution.
1332
1333 If the service is running, check the WebDAV server by opening
1334 http://localhost:9843 in a browser in the guest.
1335
1336 It can help to restart the SPICE session.
1337
1338 [[qm_migration]]
1339 Migration
1340 ---------
1341
1342 [thumbnail="screenshot/gui-qemu-migrate.png"]
1343
1344 If you have a cluster, you can migrate your VM to another host with
1345
1346 ----
1347 # qm migrate <vmid> <target>
1348 ----
1349
1350 There are generally two mechanisms for this
1351
1352 * Online Migration (aka Live Migration)
1353 * Offline Migration
1354
1355 Online Migration
1356 ~~~~~~~~~~~~~~~~
1357
1358 If your VM is running and no locally bound resources are configured (such as
1359 passed-through devices), you can initiate a live migration with the `--online`
flag of the `qm migrate` command invocation. The web interface defaults to
1361 live migration when the VM is running.
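
For example, to live migrate the VM `<vmid>` to the node `<target>`:

----
# qm migrate <vmid> <target> --online
----

If the VM uses local disks, the `--with-local-disks` flag may additionally be
needed, so that the disk images are sent to the target during the migration.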
1362
1363 How it works
1364 ^^^^^^^^^^^^
1365
1366 Online migration first starts a new QEMU process on the target host with the
1367 'incoming' flag, which performs only basic initialization with the guest vCPUs
1368 still paused and then waits for the guest memory and device state data streams
1369 of the source Virtual Machine.
All other resources, such as disks, are either shared or were already transferred
before the runtime state migration of the VM begins; so only the memory content
and device state remain to be transferred.
1373
1374 Once this connection is established, the source begins asynchronously sending
1375 the memory content to the target. If the guest memory on the source changes,
1376 those sections are marked dirty and another pass is made to send the guest
1377 memory data.
This loop is repeated until the data difference between the running source VM
and the incoming target VM is small enough to be sent in a few milliseconds. At
that point the source VM can be paused completely, without a user or program
noticing the pause, the remaining data can be sent to the target, and the
target VM's CPUs can be unpaused to make it the new running VM, all in well
under a second.
1384
1385 Requirements
1386 ^^^^^^^^^^^^
1387
For Live Migration to work, some requirements must be met:
1389
1390 * The VM has no local resources that cannot be migrated. For example,
1391 PCI or USB devices that are passed through currently block live-migration.
1392 Local Disks, on the other hand, can be migrated by sending them to the target
1393 just fine.
1394 * The hosts are located in the same {pve} cluster.
1395 * The hosts have a working (and reliable) network connection between them.
1396 * The target host must have the same, or higher versions of the
1397 {pve} packages. Although it can sometimes work the other way around, this
1398 cannot be guaranteed.
* The hosts have CPUs from the same vendor with similar capabilities. A different
vendor *might* work depending on the actual models and the VM's configured CPU
type, but it cannot be guaranteed - so please test before deploying
such a setup in production.
1403
1404 Offline Migration
1405 ~~~~~~~~~~~~~~~~~
1406
If you have local resources, you can still migrate your VMs offline as long as
all disks are on storages that are defined on both hosts.
1409 Migration then copies the disks to the target host over the network, as with
1410 online migration. Note that any hardware pass-through configuration may need to
1411 be adapted to the device location on the target host.
1412
1413 // TODO: mention hardware map IDs as better way to solve that, once available
1414
1415 [[qm_copy_and_clone]]
1416 Copies and Clones
1417 -----------------
1418
1419 [thumbnail="screenshot/gui-qemu-full-clone.png"]
1420
VM installation is usually done using an installation medium (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time-consuming task one might want to avoid.
1424
1425 An easy way to deploy many VMs of the same type is to copy an existing
1426 VM. We use the term 'clone' for such copies, and distinguish between
1427 'linked' and 'full' clones.
1428
1429 Full Clone::
1430
The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
1433 +
1434
1435 It is possible to select a *Target Storage*, so one can use this to
1436 migrate a VM to a totally different storage. You can also change the
1437 disk image *Format* if the storage driver supports several formats.
1438 +
1439
1440 NOTE: A full clone needs to read and copy all VM image data. This is
1441 usually much slower than creating a linked clone.
1442 +
1443
Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.
1447
1448
1449 Linked Clone::
1450
1451 Modern storage drivers support a way to generate fast linked
1452 clones. Such a clone is a writable copy whose initial contents are the
1453 same as the original data. Creating a linked clone is nearly
1454 instantaneous, and initially consumes no additional space.
1455 +
1456
1457 They are called 'linked' because the new image still refers to the
1458 original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
1460 location. This technique is called 'Copy-on-write'.
1461 +
1462
1463 This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
1465 templates can later be used to create linked clones efficiently.
1466 +
1467
1468 NOTE: You cannot delete an original template while linked clones
1469 exist.
1470 +
1471
1472 It is not possible to change the *Target storage* for linked clones,
1473 because this is a storage internal feature.
1474
1475
1476 The *Target node* option allows you to create the new VM on a
1477 different node. The only restriction is that the VM is on shared
1478 storage, and that storage is also available on the target node.
1479
1480 To avoid resource conflicts, all network interface MAC addresses get
1481 randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
1482 setting.
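
Clones can also be created on the command line with `qm clone`. A minimal
example, assuming source VMID 100 and new VMID 200, creating a full clone:

----
# qm clone 100 200 --full --name myclone
----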
1483
1484
1485 [[qm_templates]]
1486 Virtual Machine Templates
1487 -------------------------
1488
1489 One can convert a VM into a Template. Such templates are read-only,
1490 and you can use them to create linked clones.
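
On the command line, a VM can, for example, be converted with:

----
# qm template <vmid>
----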
1491
1492 NOTE: It is not possible to start templates, because this would modify
1493 the disk images. If you want to change the template, create a linked
1494 clone and modify that.
1495
1496 VM Generation ID
1497 ----------------
1498
1499 {pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
1500 'vmgenid' Specification
1501 https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
1502 for virtual machines.
1503 This can be used by the guest operating system to detect any event resulting
1504 in a time shift event, for example, restoring a backup or a snapshot rollback.
1505
When creating a new VM, a 'vmgenid' will be automatically generated and saved
in its configuration file.
1508
1509 To create and add a 'vmgenid' to an already existing VM one can pass the
1510 special value `1' to let {pve} autogenerate one or manually set the 'UUID'
1511 footnote:[Online GUID generator http://guid.one/] by using it as value, for
1512 example:
1513
1514 ----
1515 # qm set VMID -vmgenid 1
1516 # qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
1517 ----
1518
NOTE: The initial addition of a 'vmgenid' device to an existing VM may result
in the same effects as a snapshot rollback or backup restore would have, as the
VM can interpret this as a generation change.
1522
1523 In the rare case the 'vmgenid' mechanism is not wanted one can pass `0' for
1524 its value on VM creation, or retroactively delete the property in the
1525 configuration with:
1526
1527 ----
1528 # qm set VMID -delete vmgenid
1529 ----
1530
The most prominent use case for 'vmgenid' is newer Microsoft Windows
operating systems, which use it to avoid problems in time-sensitive or
replicated services (such as databases or domain controllers
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole VM clone operation.
1536
1537 Importing Virtual Machines and disk images
1538 ------------------------------------------
1539
A VM export from a foreign hypervisor usually takes the form of one or more disk
1541 images, with a configuration file describing the settings of the VM (RAM,
1542 number of cores). +
1543 The disk images can be in the vmdk format, if the disks come from
1544 VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
1545 The most popular configuration format for VM exports is the OVF standard, but in
1546 practice interoperation is limited because many settings are not implemented in
1547 the standard itself, and hypervisors export the supplementary information
1548 in non-standard extensions.
1549
1550 Besides the problem of format, importing disk images from other hypervisors
1551 may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
1554 installing the MergeIDE.zip utility available from the Internet before exporting
1555 and choosing a hard disk type of *IDE* before booting the imported Windows VM.
1556
1557 Finally there is the question of paravirtualized drivers, which improve the
1558 speed of the emulated system and are specific to the hypervisor.
1559 GNU/Linux and other free Unix OSes have all the necessary drivers installed by
1560 default and you can switch to the paravirtualized drivers right after importing
1561 the VM. For Windows VMs, you need to install the Windows paravirtualized
1562 drivers by yourself.
1563
GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
1565 that we cannot guarantee a successful import/export of Windows VMs in all
1566 cases due to the problems above.
1567
1568 Step-by-step example of a Windows OVF import
1569 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1570
1571 Microsoft provides
1572 https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
1574 to demonstrate the OVF import feature.
1575
1576 Download the Virtual Machine zip
1577 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1578
After reviewing the user agreement, choose the _Windows 10
1580 Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.
1581
1582 Extract the disk image from the zip
1583 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1584
1585 Using the `unzip` utility or any archiver of your choice, unpack the zip,
1586 and copy via ssh/scp the ovf and vmdk files to your {pve} host.
1587
1588 Import the Virtual Machine
1589 ^^^^^^^^^^^^^^^^^^^^^^^^^^
1590
1591 This will create a new virtual machine, using cores, memory and
1592 VM name as read from the OVF manifest, and import the disks to the +local-lvm+
1593 storage. You have to configure the network manually.
1594
1595 ----
1596 # qm importovf 999 WinDev1709Eval.ovf local-lvm
1597 ----
1598
1599 The VM is ready to be started.
1600
1601 Adding an external disk image to a Virtual Machine
1602 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1603
1604 You can also add an existing disk image to a VM, either coming from a
1605 foreign hypervisor, or one that you created yourself.
1606
1607 Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:
1608
1609 vmdebootstrap --verbose \
1610 --size 10GiB --serial-console \
1611 --grub --no-extlinux \
1612 --package openssh-server \
1613 --package avahi-daemon \
1614 --package qemu-guest-agent \
1615 --hostname vm600 --enable-dhcp \
1616 --customize=./copy_pub_ssh.sh \
1617 --sparse --image vm600.raw
1618
1619 You can now create a new target VM, importing the image to the storage `pvedir`
1620 and attaching it to the VM's SCSI controller:
1621
1622 ----
1623 # qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
1624 --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
1625 --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
1626 ----
1627
1628 The VM is ready to be started.
1629
1630
1631 ifndef::wiki[]
1632 include::qm-cloud-init.adoc[]
1633 endif::wiki[]
1634
1635 ifndef::wiki[]
1636 include::qm-pci-passthrough.adoc[]
1637 endif::wiki[]
1638
1639 Hookscripts
1640 -----------
1641
1642 You can add a hook script to VMs with the config property `hookscript`.
1643
1644 ----
1645 # qm set 100 --hookscript local:snippets/hookscript.pl
1646 ----
1647
It will be called during various phases of the guest's lifetime.
1649 For an example and documentation see the example script under
1650 `/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
1651
1652 [[qm_hibernate]]
1653 Hibernation
1654 -----------
1655
1656 You can suspend a VM to disk with the GUI option `Hibernate` or with
1657
1658 ----
1659 # qm suspend ID --todisk
1660 ----
1661
That means that the current content of the memory will be saved to disk
and the VM is stopped. On the next start, the memory content will be
loaded and the VM can continue where it left off.
1665
1666 [[qm_vmstatestorage]]
1667 .State storage selection
1668 If no target storage for the memory is given, it will be automatically
1669 chosen, the first of:
1670
1671 1. The storage `vmstatestorage` from the VM config.
1672 2. The first shared storage from any VM disk.
1673 3. The first non-shared storage from any VM disk.
1674 4. The storage `local` as a fallback.
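
If you want to pin the state storage explicitly, you can, for example, set the
`vmstatestorage` option of the VM:

----
# qm set <vmid> -vmstatestorage <storage>
----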
1675
1676 [[resource_mapping]]
1677 Resource Mapping
1678 ~~~~~~~~~~~~~~~~
1679
1680 When using or referencing local resources (e.g. address of a pci device), using
1681 the raw address or id is sometimes problematic, for example:
1682
1683 * when using HA, a different device with the same id or path may exist on the
1684 target node, and if one is not careful when assigning such guests to HA
1685 groups, the wrong device could be used, breaking configurations.
1686
1687 * changing hardware can change ids and paths, so one would have to check all
1688 assigned devices and see if the path or id is still correct.
1689
To handle this better, one can define cluster-wide resource mappings, such that
a resource has a cluster-unique, user-selected identifier which can correspond
1692 to different devices on different hosts. With this, HA won't start a guest with
1693 a wrong device, and hardware changes can be detected.
1694
1695 Creating such a mapping can be done with the {pve} web GUI under `Datacenter`
1696 in the relevant tab in the `Resource Mappings` category, or on the cli with
1697
1698 ----
1699 # pvesh create /cluster/mapping/<type> <options>
1700 ----
1701
1702 Where `<type>` is the hardware type (currently either `pci` or `usb`) and
1703 `<options>` are the device mappings and other configuration parameters.
1704
1705 Note that the options must include a map property with all identifying
1706 properties of that hardware, so that it's possible to verify the hardware did
1707 not change and the correct device is passed through.
1708
1709 For example to add a PCI device as `device1` with the path `0000:01:00.0` that
1710 has the device id `0001` and the vendor id `0002` on the node `node1`, and
1711 `0000:02:00.0` on `node2` you can add it with:
1712
1713 ----
1714 # pvesh create /cluster/mapping/pci --id device1 \
1715 --map node=node1,path=0000:01:00.0,id=0002:0001 \
1716 --map node=node2,path=0000:02:00.0,id=0002:0001
1717 ----
1718
1719 You must repeat the `map` parameter for each node where that device should have
1720 a mapping (note that you can currently only map one USB device per node per
1721 mapping).
1722
1723 Using the GUI makes this much easier, as the correct properties are
1724 automatically picked up and sent to the API.
1725
1726 It's also possible for PCI devices to provide multiple devices per node with
1727 multiple map properties for the nodes. If such a device is assigned to a guest,
1728 the first free one will be used when the guest is started. The order of the
1729 paths given is also the order in which they are tried, so arbitrary allocation
1730 policies can be implemented.
1731
This is useful for devices with SR-IOV, since sometimes it is not important
which exact virtual function is passed through.
1734
1735 You can assign such a device to a guest either with the GUI or with
1736
1737 ----
1738 # qm set ID -hostpci0 <name>
1739 ----
1740
1741 for PCI devices, or
1742
1743 ----
1744 # qm set <vmid> -usb0 <name>
1745 ----
1746
1747 for USB devices.
1748
Where `<vmid>` is the guest's ID and `<name>` is the chosen name for the created
1750 mapping. All usual options for passing through the devices are allowed, such as
1751 `mdev`.
1752
1753 To create mappings `Mapping.Modify` on `/mapping/<type>/<name>` is necessary
1754 (where `<type>` is the device type and `<name>` is the name of the mapping).
1755
1756 To use these mappings, `Mapping.Use` on `/mapping/<type>/<name>` is necessary
1757 (in addition to the normal guest privileges to edit the configuration).
1758
1759 Managing Virtual Machines with `qm`
1760 ------------------------------------
1761
1762 qm is the tool to manage QEMU/KVM virtual machines on {pve}. You can
1763 create and destroy virtual machines, and control execution
1764 (start/stop/suspend/resume). Besides that, you can use qm to set
1765 parameters in the associated config file. It is also possible to
1766 create and delete virtual disks.
1767
1768 CLI Usage Examples
1769 ~~~~~~~~~~~~~~~~~~
1770
1771 Using an iso file uploaded on the 'local' storage, create a VM
1772 with a 4 GB IDE disk on the 'local-lvm' storage
1773
1774 ----
1775 # qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
1776 ----
1777
1778 Start the new VM
1779
1780 ----
1781 # qm start 300
1782 ----
1783
1784 Send a shutdown request, then wait until the VM is stopped.
1785
1786 ----
1787 # qm shutdown 300 && qm wait 300
1788 ----
1789
1790 Same as above, but only wait for 40 seconds.
1791
1792 ----
1793 # qm shutdown 300 && qm wait 300 -timeout 40
1794 ----
1795
1796 Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
'--purge' if you want to additionally remove the VM from replication jobs,
1799 backup jobs and HA resource configurations.
1800
1801 ----
1802 # qm destroy 300 --purge
1803 ----
1804
1805 Move a disk image to a different storage.
1806
1807 ----
1808 # qm move-disk 300 scsi0 other-storage
1809 ----
1810
Reassign a disk image to a different VM. This will remove the disk `scsi1` from
the source VM and attach it as `scsi3` to the target VM. In the background
the disk image is renamed so that the name matches the new owner.
1814
1815 ----
1816 # qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
1817 ----
1818
1819
1820 [[qm_configuration]]
1821 Configuration
1822 -------------
1823
1824 VM configuration files are stored inside the Proxmox cluster file
1825 system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
1826 Like other files stored inside `/etc/pve/`, they get automatically
1827 replicated to all other cluster nodes.
1828
1829 NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
1830 unique cluster wide.
1831
1832 .Example VM Configuration
1833 ----
1834 boot: order=virtio0;net0
1835 cores: 1
1836 sockets: 1
1837 memory: 512
1838 name: webmail
1839 ostype: l26
1840 net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
1841 virtio0: local:vm-100-disk-1,size=32G
1842 ----
1843
1844 Those configuration files are simple text files, and you can edit them
1845 using a normal text editor (`vi`, `nano`, ...). This is sometimes
1846 useful to do small corrections, but keep in mind that you need to
1847 restart the VM to apply such changes.
1848
1849 For that reason, it is usually better to use the `qm` command to
1850 generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to a
running VM. This feature is called "hot plug", and there is no
1853 need to restart the VM in that case.
1854
1855
1856 File Format
1857 ~~~~~~~~~~~
1858
1859 VM configuration files use a simple colon separated key/value
1860 format. Each line has the following format:
1861
1862 -----
1863 # this is a comment
1864 OPTION: value
1865 -----
1866
1867 Blank lines in those files are ignored, and lines starting with a `#`
1868 character are treated as comments and are also ignored.
1869
1870
1871 [[qm_snapshots]]
1872 Snapshots
1873 ~~~~~~~~~
1874
1875 When you create a snapshot, `qm` stores the configuration at snapshot
1876 time into a separate snapshot section within the same configuration
1877 file. For example, after creating a snapshot called ``testsnapshot'',
1878 your configuration file will look like this:
1879
1880 .VM configuration with snapshot
1881 ----
1882 memory: 512
1883 swap: 512
parent: testsnapshot
1885 ...
1886
[testsnapshot]
1888 memory: 512
1889 swap: 512
1890 snaptime: 1457170803
1891 ...
1892 ----
1893
1894 There are a few snapshot related properties like `parent` and
1895 `snaptime`. The `parent` property is used to store the parent/child
1896 relationship between snapshots. `snaptime` is the snapshot creation
1897 time stamp (Unix epoch).
1898
1899 You can optionally save the memory of a running VM with the option `vmstate`.
1900 For details about how the target storage gets chosen for the VM state, see
1901 xref:qm_vmstatestorage[State storage selection] in the chapter
1902 xref:qm_hibernate[Hibernation].
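
Snapshots can also be managed on the command line, for example (the `--vmstate`
flag saves the memory of a running VM):

----
# qm snapshot <vmid> testsnapshot --description "a test" --vmstate 1
# qm listsnapshot <vmid>
# qm rollback <vmid> testsnapshot
# qm delsnapshot <vmid> testsnapshot
----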
1903
1904 [[qm_options]]
1905 Options
1906 ~~~~~~~
1907
1908 include::qm.conf.5-opts.adoc[]
1909
1910
1911 Locks
1912 -----
1913
1914 Online migrations, snapshots and backups (`vzdump`) set a lock to prevent
1915 incompatible concurrent actions on the affected VMs. Sometimes you need to
1916 remove such a lock manually (for example after a power failure).
1917
1918 ----
1919 # qm unlock <vmid>
1920 ----
1921
1922 CAUTION: Only do that if you are sure the action which set the lock is
1923 no longer running.
1924
1925
1926 ifdef::wiki[]
1927
1928 See Also
1929 ~~~~~~~~
1930
1931 * link:/wiki/Cloud-Init_Support[Cloud-Init Support]
1932
1933 endif::wiki[]
1934
1935
1936 ifdef::manvolnum[]
1937
1938 Files
1939 ------
1940
1941 `/etc/pve/qemu-server/<VMID>.conf`::
1942
1943 Configuration file for the VM '<VMID>'.
1944
1945
1946 include::pve-copyright.adoc[]
1947 endif::manvolnum[]