[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
:pve-toplevel:

NAME
----

qm - QEMU/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
QEMU/KVM Virtual Machines
=========================
:pve-toplevel:
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

QEMU (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where QEMU is
running, QEMU is a user program which has access to a number of local resources
like partitions, files and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can pass
an ISO image as a parameter to QEMU, and the OS running in the emulated computer
will see a real CD-ROM inserted into a CD drive.

QEMU can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up QEMU when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that QEMU is running with the support of the virtualization processor
extensions, via the Linux KVM module. In the context of {pve}, _QEMU_ and
_KVM_ can be used interchangeably, as QEMU in {pve} will always try to load the KVM
module.

QEMU inside {pve} runs as a root process, since this is required to access block
and PCI devices.


Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by QEMU includes a motherboard, network controllers,
SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers, it will use the devices as if it
were running on real hardware. This allows QEMU to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
QEMU can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside QEMU and cooperates with the
hypervisor.

QEMU relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, and so on.

TIP: It is *highly recommended* to use the virtio devices whenever you can, as
they provide a big performance improvement and are generally better maintained.
Using the virtio generic disk controller versus an emulated IDE controller will
double the sequential write throughput, as measured with `bonnie++(8)`. Using
the virtio network interface can deliver up to three times the throughput of an
emulated Intel E1000 network card, as measured with `iperf(1)`. footnote:[See
this benchmark on the KVM wiki https://www.linux-kvm.org/page/Using_VirtIO_NIC]


[[qm_virtual_machines_settings]]
Virtual Machines Settings
-------------------------

Generally speaking, {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as it
could incur a performance slowdown, or put your data at risk.


[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-general.png"]

General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs

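The same general settings can also be given on the command line when creating a
VM, for example (a minimal sketch; the pool name `production` is a placeholder):

----
# qm create 200 --name webserver --pool production
----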

[[qm_os_settings]]
OS Settings
~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-os.png"]

When creating a virtual machine (VM), setting the proper Operating System (OS)
allows {pve} to optimize some low level parameters. For instance, Windows OSes
expect the BIOS clock to use the local time, while Unix based OSes expect the
BIOS clock to have the UTC time.

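The OS type can likewise be set from the command line, for example (a minimal
sketch):

----
# qm set <vmid> --ostype win11   # Windows 11/2022
# qm set <vmid> --ostype l26     # Linux (2.6 kernel or newer)
----
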
[[qm_system_settings]]
System Settings
~~~~~~~~~~~~~~~

On VM creation you can change some basic system components of the new VM. You
can specify which xref:qm_display[display type] you want to use.
[thumbnail="screenshot/gui-create-vm-system.png"]
Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
If you plan to install the QEMU Guest Agent, or if your selected ISO image
already ships and installs it automatically, you may want to tick the 'QEMU
Agent' box, which lets {pve} know that it can use its features to show some
more information, and complete some actions (for example, shutdown or
snapshots) more intelligently.

{pve} allows you to boot VMs with different firmware and machine types, namely
xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
the default SeaBIOS to OVMF only if you plan to use
xref:qm_pci_passthrough[PCIe passthrough].

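These system settings can also be changed later via the CLI, for example (a
minimal sketch; OVMF additionally needs an EFI disk, see
xref:qm_bios_and_uefi[BIOS and UEFI]):

----
# qm set <vmid> --agent enabled=1
# qm set <vmid> --bios ovmf
----
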
[[qm_machine_type]]

Machine Type
^^^^^^^^^^^^

A VM's 'Machine Type' defines the hardware layout of the VM's virtual
motherboard. You can choose between the default
https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be
desired if you want to pass through PCIe hardware.

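For example, to switch an existing VM to the Q35 chipset (a minimal sketch):

----
# qm set <vmid> --machine q35
----
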
Machine Version
+++++++++++++++

Each machine type is versioned in QEMU and a given QEMU binary supports many
machine versions. New versions might bring support for new features, fixes or
general improvements. However, they also change properties of the virtual
hardware. To avoid sudden changes from the guest's perspective and ensure
compatibility of the VM state, live-migration and snapshots with RAM will keep
using the same machine version in the new QEMU instance.

For Windows guests, the machine version is pinned during creation, because
Windows is sensitive to changes in the virtual hardware - even between cold
boots. For example, the enumeration of network devices might be different with
different machine versions. Other OSes like Linux can usually deal with such
changes just fine. For those, the 'Latest' machine version is used by default.
This means that after a fresh start, the newest machine version supported by the
QEMU binary is used (e.g. the newest machine version QEMU 8.1 supports is
version 8.1 for each machine type).

[[qm_machine_update]]

Update to a Newer Machine Version
+++++++++++++++++++++++++++++++++

Very old machine versions might become deprecated in QEMU. For example, this is
the case for versions 1.4 to 1.7 for the i440fx machine type. It is expected
that support for these machine versions will be dropped at some point. If you
see a deprecation warning, you should change the machine version to a newer one.
Be sure to have a working backup first and be prepared for changes to how the
guest sees hardware. In some scenarios, re-installing certain drivers might be
required. You should also check for snapshots with RAM that were taken with
these machine versions (i.e. the `runningmachine` configuration entry).
Unfortunately, there is no way to change the machine version of a snapshot, so
you'd need to load the snapshot to salvage any data from it.
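
For example, to pin a VM to a specific newer machine version from the CLI (a
minimal sketch; the version given here must be supported by the installed QEMU
binary):

----
# qm set <vmid> --machine pc-q35-8.1
----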

[[qm_hard_disk]]
Hard Disk
~~~~~~~~~

[[qm_hard_disk_bus]]
Bus/Controller
^^^^^^^^^^^^^^
QEMU can emulate a number of storage controllers:

TIP: It is highly recommended to use the *VirtIO SCSI* or *VirtIO Block*
controller for performance reasons and because they are better maintained.

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices on this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default a
LSI 53C895A controller.
+
A SCSI controller of type _VirtIO SCSI single_ and enabling the
xref:qm_hard_disk_iothread[IO Thread] setting for the attached disks is
recommended if you aim for performance. This is the default for newly created
Linux VMs since {pve} 7.3. Each disk will have its own _VirtIO SCSI_ controller,
and QEMU will handle the disks' I/O in a dedicated thread. Linux distributions
have support for this controller since 2012, and FreeBSD since 2014. For Windows
OSes, you need to provide an extra ISO containing the drivers during the
installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.

* The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
is an older type of paravirtualized controller. It has been superseded by the
VirtIO SCSI controller in terms of features.

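For example, to select the recommended controller type and add a disk with
xref:qm_hard_disk_iothread[IO Thread] enabled (a minimal sketch; `local-lvm` is
a placeholder storage name and `32` the disk size in GiB):

----
# qm set <vmid> --scsihw virtio-scsi-single --scsi0 local-lvm:32,iothread=1
----
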
[thumbnail="screenshot/gui-create-vm-hard-disk.png"]

[[qm_hard_disk_formats]]
Image Format
^^^^^^^^^^^^
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

 * the *QEMU image format* is a copy on write format which allows snapshots, and
 thin provisioning of the disk image.
 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
 you would get when executing the `dd` command on a block device in Linux. This
 format does not support thin provisioning or snapshots by itself, requiring
 cooperation from the storage layer for these tasks. It may, however, be up to
 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
 https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
 * the *VMware image format* only makes sense if you intend to import/export the
 disk image to other hypervisors.

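For example, on a file based storage you can explicitly request the *QEMU image
format* (qcow2) when adding a disk (a minimal sketch; `local` is a placeholder
directory storage):

----
# qm set <vmid> --scsi1 local:32,format=qcow2
----
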
[[qm_hard_disk_cache]]
Cache Mode
^^^^^^^^^^
Setting the *Cache* mode of the hard drive will impact how the host system will
notify the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.

[[qm_hard_disk_discard]]
Trim/Discard
^^^^^^^^^^^^
If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), you can activate the *Discard* option on a drive. With *Discard*
set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
marks blocks as unused after deleting files, the controller will relay this
information to the storage, which will then shrink the disk image accordingly.
For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
option on the drive. Some guest operating systems may also require the
*SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
only supported on guests using Linux Kernel 5.0 or higher.

If you would like a drive to be presented to the guest as a solid-state drive
rather than a rotational hard disk, you can set the *SSD emulation* option on
that drive. There is no requirement that the underlying storage actually be
backed by SSDs; this feature can be used with physical media of any type.
Note that *SSD emulation* is not supported on *VirtIO Block* drives.

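Both flags can also be set per disk on the CLI, for example (a minimal sketch;
the volume name is a placeholder for an already existing disk):

----
# qm set <vmid> --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
----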

[[qm_hard_disk_iothread]]
IO Thread
^^^^^^^^^
The option *IO Thread* can only be used when using a disk with the *VirtIO*
controller, or with the *SCSI* controller, when the emulated controller type is
*VirtIO SCSI single*. With *IO Thread* enabled, QEMU creates one I/O thread per
storage controller rather than handling all I/O in the main event loop or vCPU
threads. One benefit is better work distribution and utilization of the
underlying storage. Another benefit is reduced latency (hangs) in the guest for
very I/O-intensive host workloads, since neither the main thread nor a vCPU
thread can be blocked by disk I/O.

[[qm_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-vm-cpu.png"]

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores is mostly irrelevant from a performance point of view.
However, some software licenses depend on the number of sockets a machine has;
in that case it makes sense to set the number of sockets to what the license
allows you.

Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multi-threaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, QEMU will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.

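For example, to give a VM two sockets with two cores each (a minimal sketch):

----
# qm set <vmid> --sockets 2 --cores 2
----
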
NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
is greater than the number of cores on the server (for example, 4 VMs each with
4 cores (= total 16) on a machine with only 8 cores). In that case the host
system will balance the QEMU execution threads between your server cores, just
as if you were running a standard multi-threaded application. However, {pve}
will prevent you from starting VMs with more virtual CPU cores than physically
available, as this will only bring the performance down due to the cost of
context switches.

[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^

*cpulimit*

In addition to the number of virtual cores, the total available ``Host CPU
Time'' for the VM can be set with the *cpulimit* option. It is a floating point
value representing CPU time in percent, so `1.0` is equal to `100%`, `2.5` to
`250%` and so on. If a single process would fully use one single core it would
have `100%` CPU Time usage. If a VM with four cores utilizes all its cores
fully it would theoretically use `400%`. In reality the usage may be even a bit
higher as QEMU can have additional threads for VM peripherals besides the vCPU
core ones.

This setting can be useful when a VM should have multiple vCPUs because it is
running some processes in parallel, but the VM as a whole should not be able to
run all vCPUs at 100% at the same time.

For example, suppose you have a virtual machine that would benefit from having 8
virtual CPUs, but you don't want the VM to be able to max out all 8 cores
running at full load - because that would overload the server and leave other
virtual machines and containers with too little CPU time. To solve this, you
could set *cpulimit* to `4.0` (=400%). This means that if the VM fully utilizes
all 8 virtual CPUs by running 8 processes simultaneously, each vCPU will receive
a maximum of 50% CPU time from the physical cores. However, if the VM workload
only fully utilizes 4 virtual CPUs, it could still receive up to 100% CPU time
from a physical core, for a total of 400%.

NOTE: VMs can, depending on their configuration, use additional threads, such
as for networking or IO operations but also live migration. Thus a VM can show
up as using more CPU time than just its virtual CPUs could use. To ensure that a
VM never uses more CPU time than vCPUs assigned, set the *cpulimit* to
the same value as the total core count.

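For the example above, the limit could be set as follows (a minimal sketch):

----
# qm set <vmid> --cpulimit 4
----
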
*cpuunits*

With the *cpuunits* option, nowadays often called CPU shares or CPU weight, you
can control how much CPU time a VM gets compared to other running VMs. It is a
relative weight which defaults to `100` (or `1024` if the host uses legacy
cgroup v1). If you increase this for a VM it will be prioritized by the
scheduler in comparison to other VMs with lower weight.

For example, if VM 100 has set the default `100` and VM 200 was changed to
`200`, the latter VM 200 would receive twice the CPU bandwidth of the first
VM 100.
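
This corresponds to the following CLI call (a minimal sketch):

----
# qm set 200 --cpuunits 200
----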

For more information see `man systemd.resource-control`; here `CPUQuota`
corresponds to `cpulimit` and `CPUWeight` to our `cpuunits` setting. Visit its
Notes section for references and implementation details.

*affinity*

With the *affinity* option, you can specify the physical CPU cores that are used
to run the VM's vCPUs. Peripheral VM processes, such as those for I/O, are not
affected by this setting. Note that the *CPU affinity is not a security
feature*.

Forcing a CPU *affinity* can make sense in certain cases but is accompanied by
an increase in complexity and maintenance effort, for example if you want to
add more VMs later or migrate VMs to nodes with fewer CPU cores. It can also
easily lead to asynchronous and therefore limited system performance if some
CPUs are fully utilized while others are almost idle.

The *affinity* is set through the `taskset` CLI tool. It accepts the host CPU
numbers (see `lscpu`) in the `List Format` from `man cpuset`. This ASCII decimal
list can contain numbers but also number ranges. For example, the *affinity*
`0-1,8-11` (expanded `0, 1, 8, 9, 10, 11`) would allow the VM to run on only
these six specific host cores.

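For example, to restrict a VM to exactly these six host cores (a minimal
sketch):

----
# qm set <vmid> --affinity 0-1,8-11
----
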
CPU Type
^^^^^^^^

QEMU can emulate a number of different *CPU types* from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, etc. Also,
a current generation can be upgraded through
xref:chapter_firmware_updates[microcode update] with bug or security fixes.

Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_ ) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host* in which case the VM will have exactly the same CPU flags
as your host system.

This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type
or a different microcode version.
If the CPU flags passed to the guest are missing, the QEMU process will stop. To
remedy this, QEMU also has its own virtual CPU types, which {pve} uses by default.

The backend default is 'kvm64' which works on essentially all x86_64 host CPUs
and the UI default when creating a new VM is 'x86-64-v2-AES', which requires a
host CPU starting from Westmere for Intel or at least a fourth generation
Opteron for AMD.

In short:

If you don't care about live migration or have a homogeneous cluster where all
nodes have the same CPU and same microcode version, set the CPU type to host, as
in theory this will give your guests maximum performance.

If you care about live migration and security, and you have only Intel CPUs or
only AMD CPUs, choose the lowest generation CPU model of your cluster.

If you care about live migration without security, or have a mixed Intel/AMD
cluster, choose the lowest compatible virtual QEMU CPU type.

NOTE: Live migrations between Intel and AMD host CPUs are not guaranteed to work.

See also
xref:chapter_qm_vcpu_list[List of AMD and Intel CPU Types as Defined in QEMU].
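
For example, to select the host CPU type or the UI default virtual CPU type
from the CLI (a minimal sketch):

----
# qm set <vmid> --cpu host
# qm set <vmid> --cpu x86-64-v2-AES
----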

QEMU CPU Types
^^^^^^^^^^^^^^

QEMU also provides virtual CPU types, compatible with both Intel and AMD host
CPUs.

NOTE: To mitigate the Spectre vulnerability for virtual CPU types, you need to
add the relevant CPU flags, see
xref:qm_meltdown_spectre[Meltdown / Spectre related CPU flags].

Historically, {pve} had the 'kvm64' CPU model, with CPU flags at the level of
Pentium 4 enabled, so performance was not great for certain workloads.

In the summer of 2020, AMD, Intel, Red Hat, and SUSE collaborated to define
three x86-64 microarchitecture levels on top of the x86-64 baseline, with modern
flags enabled. For details, see the
https://gitlab.com/x86-psABIs/x86-64-ABI[x86-64-ABI specification].

NOTE: Some newer distributions like CentOS 9 are now built with 'x86-64-v2'
flags as a minimum requirement.

* 'kvm64 (x86-64-v1)': Compatible with Intel CPU >= Pentium 4, AMD CPU >=
Phenom.
+
* 'x86-64-v2': Compatible with Intel CPU >= Nehalem, AMD CPU >= Opteron_G3.
Added CPU flags compared to 'x86-64-v1': '+cx16', '+lahf-lm', '+popcnt', '+pni',
'+sse4.1', '+sse4.2', '+ssse3'.
+
* 'x86-64-v2-AES': Compatible with Intel CPU >= Westmere, AMD CPU >= Opteron_G4.
Added CPU flags compared to 'x86-64-v2': '+aes'.
+
* 'x86-64-v3': Compatible with Intel CPU >= Broadwell, AMD CPU >= EPYC. Added
CPU flags compared to 'x86-64-v2-AES': '+avx', '+avx2', '+bmi1', '+bmi2',
'+f16c', '+fma', '+movbe', '+xsave'.
+
* 'x86-64-v4': Compatible with Intel CPU >= Skylake, AMD CPU >= EPYC v4 Genoa.
Added CPU flags compared to 'x86-64-v3': '+avx512f', '+avx512bw', '+avx512cd',
'+avx512dq', '+avx512vl'.

Custom CPU Types
^^^^^^^^^^^^^^^^

You can specify custom CPU types with a configurable set of features. These are
maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
an administrator. See `man cpu-models.conf` for format details.

Specified custom types can be selected by any user with the `Sys.Audit`
privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
or API, the name needs to be prefixed with 'custom-'.

[[qm_meltdown_spectre]]
Meltdown / Spectre related CPU flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are several CPU flags related to the Meltdown and Spectre vulnerabilities
footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
manually unless the selected CPU type of your VM already enables them by default.

There are two requirements that need to be fulfilled in order to use these
CPU flags:

* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
* The guest operating system must be updated to a version which mitigates the
 attacks and is able to utilize the CPU feature

Otherwise you need to set the desired CPU flag of the virtual CPU, either by
editing the CPU options in the web UI, or by setting the 'flags' property of the
'cpu' option in the VM configuration file.
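
For example, to enable the 'pcid' and 'spec-ctrl' flags on top of the 'kvm64'
CPU type, the 'cpu' line in the VM configuration file would look like this (a
minimal sketch):

----
cpu: kvm64,flags=+pcid;+spec-ctrl
----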

For Spectre v1, v2, and v4 fixes, your CPU or system vendor also needs to provide a
so-called ``microcode update'' for your CPU, see
xref:chapter_firmware_updates[chapter Firmware Updates]. Note that not all
affected CPUs can be updated to support spec-ctrl.


To check if the {pve} host is vulnerable, execute the following command as root:

----
for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
----

A community script is also available to detect if the host is still vulnerable.
footnote:[spectre-meltdown-checker https://meltdown.ovh/]

Intel processors
^^^^^^^^^^^^^^^^

* 'pcid'
+
This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
mechanism footnote:[PCID is now a critical performance/security feature on x86
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
+
To check if the {pve} host supports PCID, execute the following command as root:
+
----
# grep ' pcid ' /proc/cpuinfo
----
+
If this does not return empty, your host's CPU has support for 'pcid'.

* 'spec-ctrl'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in Intel CPU models with -IBRS suffix.
Must be explicitly turned on for Intel CPU models without -IBRS suffix.
Requires an updated host CPU microcode (intel-microcode >= 20180425).
+
* 'ssbd'
+
Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).


AMD processors
^^^^^^^^^^^^^^

* 'ibpb'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in AMD CPU models with -IBPB suffix.
Must be explicitly turned on for AMD CPU models without -IBPB suffix.
Requires the host CPU microcode to support this feature before it can be used for guest CPUs.


* 'virt-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model.
Must be explicitly turned on for all AMD CPU models.
This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" CPU model,
because this is a virtual feature which does not exist in the physical CPUs.


* 'amd-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.


* 'amd-no-ssb'
+
Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
Not included by default in any AMD CPU model.
Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
This is mutually exclusive with virt-ssbd and amd-ssbd.


NUMA
^^^^
You can also optionally emulate a *NUMA*
footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
in your VMs. The basics of the NUMA architecture mean that instead of having a
global memory pool available to all your cores, the memory is spread into local
banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
will allow proper distribution of the VM resources on the host system.
This option is also required to hot-plug cores or RAM in a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of nodes of the host system.
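
For example, on a host with two NUMA nodes you could enable NUMA and use two
virtual sockets (a minimal sketch):

----
# qm set <vmid> --numa 1 --sockets 2 --cores 4
----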

vCPU hot-plug
^^^^^^^^^^^^^

Modern operating systems introduced the capability to hot-plug and, to a
certain extent, hot-unplug CPUs in a running system. Virtualization allows us
to avoid a lot of the (physical) problems real hardware can cause in such
scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
be replicated with other, well tested and less complicated features, see
xref:qm_cpu_resource_limits[Resource Limits].

In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with less than this total core count of CPUs you may use the
*vcpus* setting; it denotes how many vCPUs should be plugged in at VM start.
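
For example, to configure a VM with 4 cores but plug in only 2 vCPUs at start
time (a minimal sketch):

----
# qm set <vmid> --cores 4 --vcpus 2
----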

Currently, this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.

You can use a udev rule as follows to automatically set new CPUs as online in
the guest:

----
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
----

Save this under /etc/udev/rules.d/ as a file ending in `.rules`.

Note: CPU hot-remove is machine dependent and requires guest cooperation. The
deletion command does not guarantee CPU removal to actually happen, typically
it's a request forwarded to the guest OS using a target dependent mechanism, such as
ACPI on x86/amd64.


[[qm_memory]]
Memory
~~~~~~

For each VM you have the option to set a fixed amount of memory, or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

.Fixed Memory Allocation
[thumbnail="screenshot/gui-create-vm-memory.png"]

When setting memory and minimum memory to the same amount,
{pve} will simply allocate what you specify to your VM.

Even when using a fixed memory size, the ballooning device gets added to the
VM, because it delivers useful information such as how much memory the guest
really uses.
In general, you should leave *ballooning* enabled, but if you want to disable
it (like for debugging purposes), simply uncheck *Ballooning Device* or set

 balloon: 0

in the configuration.

.Automatic Memory Allocation

// see autoballoon() in pvestatd.pm
When setting the minimum memory lower than memory, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified.

When the host is running low on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one is a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs on top of their configured
minimum memory amount. The database VM will benefit from 9.6 * 3000 / (3000 +
1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP server from 1.6 GB.
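
On the CLI, these settings correspond to the *memory* (maximum), *balloon*
(minimum) and *shares* options, for example for the database VM above (a
minimal sketch; values are in MiB):

----
# qm set <vmid> --memory 8192 --balloon 2048 --shares 3000
----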

All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.


[[qm_network_device]]
Network Device
~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-network.png"]

Each VM can have many _Network interface controllers_ (NIC), of four different
types:

 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
 * the *Realtek 8139* emulates an older 100 MB/s network card, and should
only be used when emulating older operating systems (released before 2002).
 * the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

The NIC you added to the VM can follow one of two different models:

 * in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
 * in the alternative *NAT mode*, each virtual NIC will only communicate with
the QEMU user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP will serve addresses in the private
10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
should only be used for testing. This mode is only available via CLI or the API,
but not via the web UI.

You can also skip adding a network device when creating a VM by selecting *No
network device*.
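
For example, to add a VirtIO NIC attached to the default bridge (a minimal
sketch; `vmbr0` is the default bridge created by the installer):

----
# qm set <vmid> --net0 virtio,bridge=vmbr0
----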

You can overwrite the *MTU* setting for each VM network device. The option
`mtu=1` represents a special case, in which the MTU value will be inherited
from the underlying bridge.
This option is only available for *VirtIO* network devices.

.Multiqueue
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal to the
number of vCPUs of your guest. Remember that the number of vCPUs is the number
of sockets times the number of cores configured for the VM. You also need to set
the number of multi-purpose channels on each VirtIO NIC in the VM with this
ethtool command:

`ethtool -L ens1 combined X`

where X is the number of vCPUs of the VM.

To configure a Windows guest for Multiqueue, install the
https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers[Redhat VirtIO Ethernet
Adapter drivers], then adapt the NIC's configuration as follows. Open the
device manager, right click the NIC under "Network adapters", and select
"Properties". Then open the "Advanced" tab and select "Receive Side Scaling"
from the list on the left. Make sure it is set to "Enabled". Next, navigate to
"Maximum number of RSS Queues" in the list and set it to the number of vCPUs of
your VM. Once you have verified that the settings are correct, click "OK" to confirm
them.

You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.

[[qm_display]]
Display
~~~~~~~

QEMU can virtualize a few types of VGA hardware. Some examples are:

* *std*, the default, emulates a card with Bochs VBE extensions.
* *cirrus*, this was once the default; it emulates a very old hardware module
with all its problems. This display type should only be used if really
necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
qemu: using cirrus considered harmful], for example, if using Windows XP or
earlier
* *vmware* is a VMware SVGA-II compatible adapter.
* *qxl* is the QXL paravirtualized graphics card. Selecting this also
enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
VM.
* *virtio-gl*, often named VirGL, is a virtual 3D GPU for use inside VMs that
 can offload workloads to the host GPU without requiring special (expensive)
 models and drivers and without binding the host GPU completely, allowing
 reuse between multiple guests and/or the host.
+
NOTE: VirGL support needs some extra libraries that aren't installed by
default due to being relatively big and also not available as open source for
all GPU models/vendors. For most setups you'll just need to do:
`apt install libgl1 libegl1`

You can edit the amount of memory given to the virtual GPU by setting
the 'memory' option. This can enable higher resolutions inside the VM,
especially with SPICE/QXL.

As the memory is reserved by the display device, selecting Multi-Monitor mode
for SPICE (such as `qxl2` for dual monitors) has some implications:

* Windows needs a device for each monitor, so if your 'ostype' is some
version of Windows, {pve} gives the VM an extra device per monitor.
Each device gets the specified amount of memory.

* Linux VMs can always enable more virtual monitors, but selecting
a Multi-Monitor mode multiplies the memory given to the device by
the number of monitors.

Selecting `serialX` as display 'type' disables the VGA output, and redirects
the Web Console to the selected serial port. A configured display 'memory'
setting will be ignored in that case.

.VNC clipboard
You can enable the VNC clipboard by setting `clipboard` to `vnc`.

----
# qm set <vmid> -vga <displaytype>,clipboard=vnc
----

In order to use the clipboard feature, you must first install the
SPICE guest tools. On Debian-based distributions, this can be achieved
by installing `spice-vdagent`. For other Operating Systems search for it
in the official repositories or see: https://www.spice-space.org/download.html

Once you have installed the SPICE guest tools, you can use the VNC clipboard
function (e.g. in the noVNC console panel). However, if you're using
SPICE, virtio or virgl, you'll need to choose which clipboard to use.
This is because the default *SPICE* clipboard will be replaced by the
*VNC* clipboard, if `clipboard` is set to `vnc`.

[[qm_usb_passthrough]]
USB Passthrough
~~~~~~~~~~~~~~~

There are two different types of USB passthrough devices:

* Host USB passthrough
* SPICE USB passthrough

Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.

The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same USB device
have the same id.

The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).
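
For example, to pass a device through by vendor/product-id or by bus/port, or
to add a SPICE USB port (a minimal sketch; the IDs are placeholders):

----
# qm set <vmid> --usb0 host=0123:abcd
# qm set <vmid> --usb1 host=1-2.3.4
# qm set <vmid> --usb2 spice
----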

If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. If you add one or more
SPICE USB ports to your VM, you can dynamically pass a local USB device from
your SPICE client through to the VM. This can be useful to redirect an input
device or hardware dongle temporarily.

It is also possible to map devices on a cluster level, so that they can be
properly used with HA and hardware changes are detected and non root users
can configure them. See xref:resource_mapping[Resource Mapping]
for details on that.

[[qm_bios_and_uefi]]
BIOS and UEFI
~~~~~~~~~~~~~

In order to properly emulate a computer, QEMU needs to use firmware,
which, on common PCs, is often known as BIOS or (U)EFI. It is executed as one of the
first steps when booting a VM, and is responsible for doing basic hardware
initialization and for providing an interface to the firmware and hardware for
the operating system. By default QEMU uses *SeaBIOS* for this, which is an
open-source, x86 BIOS implementation. SeaBIOS is a good choice for most
standard setups.

Some operating systems (such as Windows 11) may require use of a UEFI
compatible implementation. In such cases, you must use *OVMF* instead,
which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]

There are other scenarios in which SeaBIOS may not be the ideal firmware to
boot from, for example if you want to do VGA passthrough. footnote:[Alex
Williamson has a good blog entry about this
https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]

If you want to use OVMF, there are several things to consider:

In order to save things like the *boot order*, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

----
# qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
----

Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
hardware section of a VM.

The *efitype* option specifies which version of the OVMF firmware should be
used. For new VMs, this should always be '4m', as it supports Secure Boot and
has more space allocated to support future development (this is the default in
the GUI).

*pre-enroll-keys* specifies if the efidisk should come pre-loaded with
distribution-specific and Microsoft Standard Secure Boot keys. It also enables
Secure Boot by default (though it can still be disabled in the OVMF menu within
the VM).

NOTE: If you want to start using Secure Boot in an existing VM (that still uses
a '2m' efidisk), you need to recreate the efidisk. To do so, delete the old one
(`qm set <vmid> -delete efidisk0`) and add a new one as described above. This
will reset any custom configurations you have made in the OVMF menu!

When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
with a press of the ESC button during boot), or you have to choose
SPICE as the display type.

[[qm_tpm]]
Trusted Platform Module (TPM)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A *Trusted Platform Module* is a device which stores secret data - such as
encryption keys - securely and provides tamper-resistance functions for
validating system boot.

Certain operating systems (such as Windows 11) require such a device to be
attached to a machine (be it physical or virtual).

A TPM is added by specifying a *tpmstate* volume. This works similarly to an
efidisk, in that it cannot be changed (only removed) once created. You can add
one via the following command:

----
# qm set <vmid> -tpmstate0 <storage>:1,version=<version>
----

Where *<storage>* is the storage you want to put the state on, and *<version>*
is either 'v1.2' or 'v2.0'. You can also add one via the web interface, by
choosing 'Add' -> 'TPM State' in the hardware section of a VM.

The 'v2.0' TPM spec is newer and better supported, so unless you have a specific
implementation that requires a 'v1.2' TPM, it should be preferred.

NOTE: Compared to a physical TPM, an emulated one does *not* provide any real
security benefits. The point of a TPM is that the data on it cannot be modified
easily, except via commands specified as part of the TPM spec. Since with an
emulated device the data storage happens on a regular volume, it can potentially
be edited by anyone with access to it.

[[qm_ivshmem]]
Inter-VM shared memory
~~~~~~~~~~~~~~~~~~~~~~

You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
share memory between the host and a guest, or also between multiple guests.

To add such a device, you can use `qm`:

----
# qm set <vmid> -ivshmem size=32,name=foo
----

Where the size is in MiB. The file will be located under
`/dev/shm/pve-shm-$name` (the default name is the vmid).

NOTE: Currently the device will get deleted as soon as any VM using it is
shut down or stopped. Open connections will still persist, but new connections
to the exact same device cannot be made anymore.

A use case for such a device is the Looking Glass
footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
performance, low-latency display mirroring between host and guest.

[[qm_audio_device]]
Audio Device
~~~~~~~~~~~~

To add an audio device run the following command:

----
qm set <vmid> -audio0 device=<device>
----

Supported audio devices are:

* `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
* `intel-hda`: Intel HD Audio Controller, emulates ICH6
* `AC97`: Audio Codec '97, useful for older operating systems like Windows XP

There are two backends available:

* 'spice'
* 'none'

The 'spice' backend can be used in combination with xref:qm_display[SPICE] while
the 'none' backend can be useful if an audio device is needed in the VM for some
software to work. To use the physical audio device of the host use device
passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft's RDP
have options to play sound.


[[qm_virtio_rng]]
VirtIO RNG
~~~~~~~~~~

An RNG (Random Number Generator) is a device providing entropy ('randomness') to
a system. A virtual hardware-RNG can be used to provide such entropy from the
host system to a guest VM. This helps to avoid entropy starvation problems in
the guest (a situation where not enough entropy is available and the system may
slow down or run into problems), especially during the guest's boot process.

To add a VirtIO-based emulated RNG, run the following command:

----
qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
----

`source` specifies where entropy is read from on the host and has to be one of
the following:

* `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
* `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
 starvation on the host system)
* `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
 are available, the one selected in
 `/sys/devices/virtual/misc/hw_random/rng_current` will be used)

A limit can be specified via the `max_bytes` and `period` parameters; they are
read as `max_bytes` per `period` in milliseconds. However, it does not represent
a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
available on a 1 second timer, not that 1 KiB is streamed to the guest over the
course of one second. Reducing the `period` can thus be used to inject entropy
into the guest at a faster rate.

By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
recommended to always use a limiter to avoid guests using too many host
resources. If desired, a value of '0' for `max_bytes` can be used to disable
all limits.

777cf894 1083[[qm_bootorder]]
8cd6f474
TL
1084Device Boot Order
1085~~~~~~~~~~~~~~~~~
777cf894
SR
1086
1087QEMU can tell the guest which devices it should boot from, and in which order.
d6466262 1088This can be specified in the config via the `boot` property, for example:
777cf894
SR
1089
1090----
1091boot: order=scsi0;net0;hostpci0
1092----
1093
1094[thumbnail="screenshot/gui-qemu-edit-bootorder.png"]
1095
This way, the guest would first attempt to boot from the disk `scsi0`; if that
fails, it would go on to attempt network boot from `net0`, and if that fails
too, finally attempt to boot from a passed-through PCIe device (seen as a disk
in the case of NVMe, otherwise it tries to launch into an option ROM).
1100
1101On the GUI you can use a drag-and-drop editor to specify the boot order, and use
1102the checkbox to enable or disable certain devices for booting altogether.
1103
1104NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
1105all of them must be marked as 'bootable' (that is, they must have the checkbox
1106enabled or appear in the list in the config) for the guest to be able to boot.
1107This is because recent SeaBIOS and OVMF versions only initialize disks if they
1108are marked 'bootable'.
1109
In any case, even devices not appearing in the list or having the checkmark
disabled will still be available to the guest once its operating system has
booted and initialized them. The 'bootable' flag only affects the guest BIOS and
bootloader.
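The boot order can also be changed from the command line. As a sketch, the
following restricts booting to `scsi0` followed by `net0` (note the quoting,
since the shell would otherwise interpret the semicolon):

----
# qm set <vmid> --boot 'order=scsi0;net0'
----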
1114
1115
288e3f46
EK
1116[[qm_startup_and_shutdown]]
1117Automatic Start and Shutdown of Virtual Machines
1118~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1119
1120After creating your VMs, you probably want them to start automatically
1121when the host system boots. For this you need to select the option 'Start at
1122boot' from the 'Options' Tab of your VM in the web interface, or set it with
1123the following command:
1124
32e8b5b2
AL
1125----
1126# qm set <vmid> -onboot 1
1127----
288e3f46 1128
4dbeb548
DM
1129.Start and Shutdown Order
1130
1ff5e4e8 1131[thumbnail="screenshot/gui-qemu-edit-start-order.png"]
4dbeb548
DM
1132
In some cases you want to be able to fine-tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters (see the example after this list):
288e3f46 1137
d6466262 1138* *Start/Shutdown order*: Defines the start order priority. For example, set it
5afa9371
FG
1139to 1 if you want the VM to be the first to be started. (We use the reverse
1140startup order for shutdown, so a machine with a start order of 1 would be the
1141last to be shut down). If multiple VMs have the same order defined on a host,
1142they will additionally be ordered by 'VMID' in ascending order.
288e3f46 1143* *Startup delay*: Defines the interval between this VM start and subsequent
d6466262
TL
1144VMs starts. For example, set it to 240 if you want to wait 240 seconds before
1145starting other VMs.
288e3f46 1146* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
d6466262
TL
1147for the VM to be offline after issuing a shutdown command. By default this
1148value is set to 180, which means that {pve} will issue a shutdown request and
1149wait 180 seconds for the machine to be offline. If the machine is still online
1150after the timeout it will be stopped forcefully.
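These options map to the `startup` property in the VM configuration. As a
sketch, the following gives a VM the highest start priority, a 240 second delay
before the next VM is started, and a 180 second shutdown timeout:

----
# qm set <vmid> --startup order=1,up=240,down=180
----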
288e3f46 1151
2b2c6286
TL
1152NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
1153'boot order' options currently. Those VMs will be skipped by the startup and
1154shutdown algorithm as the HA manager itself ensures that VMs get started and
1155stopped.
1156
288e3f46 1157Please note that machines without a Start/Shutdown order parameter will always
7eed72d8 1158start after those where the parameter is set. Further, this parameter can only
d750c851 1159be enforced between virtual machines running on the same host, not
288e3f46 1160cluster-wide.
076d60ae 1161
0f7778ac
DW
1162If you require a delay between the host boot and the booting of the first VM,
1163see the section on xref:first_guest_boot_delay[Proxmox VE Node Management].
1164
c0f039aa
AL
1165
1166[[qm_qemu_agent]]
c730e973 1167QEMU Guest Agent
c0f039aa
AL
1168~~~~~~~~~~~~~~~~
1169
c730e973 1170The QEMU Guest Agent is a service which runs inside the VM, providing a
c0f039aa
AL
1171communication channel between the host and the guest. It is used to exchange
1172information and allows the host to issue commands to the guest.
1173
1174For example, the IP addresses in the VM summary panel are fetched via the guest
1175agent.
1176
Another example: when starting a backup, the guest is told via the guest agent
to sync outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.
1179
1180For the guest agent to work properly the following steps must be taken:
1181
1182* install the agent in the guest and make sure it is running
1183* enable the communication via the agent in {pve}
1184
1185Install Guest Agent
1186^^^^^^^^^^^^^^^^^^^
1187
1188For most Linux distributions, the guest agent is available. The package is
1189usually named `qemu-guest-agent`.
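On a Debian-based guest, for example, installing and enabling the agent might
look like this (package and service names can differ between distributions):

----
# apt install qemu-guest-agent
# systemctl enable --now qemu-guest-agent
----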
1190
1191For Windows, it can be installed from the
1192https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
1193VirtIO driver ISO].
1194
80df0d2e 1195[[qm_qga_enable]]
c0f039aa
AL
1196Enable Guest Agent Communication
1197^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1198
1199Communication from {pve} with the guest agent can be enabled in the VM's
1200*Options* panel. A fresh start of the VM is necessary for the changes to take
1201effect.
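Alternatively, the agent can be enabled on the command line, for example:

----
# qm set <vmid> --agent enabled=1
----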
1202
80df0d2e
TL
1203[[qm_qga_auto_trim]]
1204Automatic TRIM Using QGA
1205^^^^^^^^^^^^^^^^^^^^^^^^
1206
c0f039aa
AL
1207It is possible to enable the 'Run guest-trim' option. With this enabled,
1208{pve} will issue a trim command to the guest after the following
1209operations that have the potential to write out zeros to the storage:
1210
1211* moving a disk to another storage
1212* live migrating a VM to another node with local storage
1213
1214On a thin provisioned storage, this can help to free up unused space.
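The 'Run guest-trim' option corresponds to the `fstrim_cloned_disks` flag of the
`agent` property; a sketch of enabling it via the CLI:

----
# qm set <vmid> --agent enabled=1,fstrim_cloned_disks=1
----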
1215
95117b6c
FE
1216NOTE: There is a caveat with ext4 on Linux, because it uses an in-memory
1217optimization to avoid issuing duplicate TRIM requests. Since the guest doesn't
1218know about the change in the underlying storage, only the first guest-trim will
1219run as expected. Subsequent ones, until the next reboot, will only consider
1220parts of the filesystem that changed since then.
1221
80df0d2e 1222[[qm_qga_fsfreeze]]
62bf5d75
CH
1223Filesystem Freeze & Thaw on Backup
1224^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1225
1226By default, guest filesystems are synced via the 'fs-freeze' QEMU Guest Agent
1227Command when a backup is performed, to provide consistency.
1228
1229On Windows guests, some applications might handle consistent backups themselves
1230by hooking into the Windows VSS (Volume Shadow Copy Service) layer, a
1231'fs-freeze' then might interfere with that. For example, it has been observed
1232that calling 'fs-freeze' with some SQL Servers triggers VSS to call the SQL
1233Writer VSS module in a mode that breaks the SQL Server backup chain for
1234differential backups.
1235
1236For such setups you can configure {pve} to not issue a freeze-and-thaw cycle on
266dd87d
CH
1237backup by setting the `freeze-fs-on-backup` QGA option to `0`. This can also be
1238done via the GUI with the 'Freeze/thaw guest filesystems on backup for
1239consistency' option.
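As a CLI sketch, the same can be achieved with:

----
# qm set <vmid> --agent enabled=1,freeze-fs-on-backup=0
----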
62bf5d75 1240
80df0d2e 1241IMPORTANT: Disabling this option can potentially lead to backups with inconsistent
62bf5d75
CH
1242filesystems and should therefore only be disabled if you know what you are
1243doing.
1244
c0f039aa
AL
1245Troubleshooting
1246^^^^^^^^^^^^^^^
1247
1248.VM does not shut down
1249
1250Make sure the guest agent is installed and running.
1251
1252Once the guest agent is enabled, {pve} will send power commands like
1253'shutdown' via the guest agent. If the guest agent is not running, commands
1254cannot get executed properly and the shutdown command will run into a timeout.
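A quick way to verify that {pve} can reach the agent is to ping it, for example:

----
# qm agent <vmid> ping
----

If the agent is running and the communication channel works, this command
returns without an error.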
1255
22a0091c
AL
1256[[qm_spice_enhancements]]
1257SPICE Enhancements
1258~~~~~~~~~~~~~~~~~~
1259
1260SPICE Enhancements are optional features that can improve the remote viewer
1261experience.
1262
1263To enable them via the GUI go to the *Options* panel of the virtual machine. Run
1264the following command to enable them via the CLI:
1265
1266----
1267qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
1268----
1269
1270NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
1271must be set to SPICE (qxl).
1272
1273Folder Sharing
1274^^^^^^^^^^^^^^
1275
1276Share a local folder with the guest. The `spice-webdavd` daemon needs to be
1277installed in the guest. It makes the shared folder available through a local
1278WebDAV server located at http://localhost:9843.
1279
1280For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded
1281from the
1282https://www.spice-space.org/download.html#windows-binaries[official SPICE website].
1283
1284Most Linux distributions have a package called `spice-webdavd` that can be
1285installed.
1286
1287To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
1288Select the folder to share and then enable the checkbox.
1289
1290NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
1291
0dcd22f5
AL
1292CAUTION: Experimental! Currently this feature does not work reliably.
1293
22a0091c
AL
1294Video Streaming
1295^^^^^^^^^^^^^^^
1296
1297Fast refreshing areas are encoded into a video stream. Two options exist:
1298
1299* *all*: Any fast refreshing area will be encoded into a video stream.
1300* *filter*: Additional filters are used to decide if video streaming should be
1301 used (currently only small window surfaces are skipped).
1302
No general recommendation can be given on whether video streaming should be
enabled and which option to choose. Your mileage may vary depending on the
specific circumstances.
1306
1307Troubleshooting
1308^^^^^^^^^^^^^^^
1309
19a58e02 1310.Shared folder does not show up
22a0091c
AL
1311
1312Make sure the WebDAV service is enabled and running in the guest. On Windows it
1313is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be
1314different depending on the distribution.
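On a systemd-based Linux guest, a quick check might look like:

----
# systemctl status spice-webdavd
----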
1315
1316If the service is running, check the WebDAV server by opening
1317http://localhost:9843 in a browser in the guest.
1318
1319It can help to restart the SPICE session.
c73c190f
DM
1320
1321[[qm_migration]]
1322Migration
1323---------
1324
1ff5e4e8 1325[thumbnail="screenshot/gui-qemu-migrate.png"]
e4bcef0a 1326
c73c190f
DM
1327If you have a cluster, you can migrate your VM to another host with
1328
32e8b5b2
AL
1329----
1330# qm migrate <vmid> <target>
1331----
c73c190f 1332
8df8cfb7
DC
There are generally two mechanisms for this:
1334
1335* Online Migration (aka Live Migration)
1336* Offline Migration
1337
1338Online Migration
1339~~~~~~~~~~~~~~~~
1340
If your VM is running and no locally bound resources are configured (such as
devices that are passed through), you can initiate a live migration with the `--online`
flag in the `qm migrate` command invocation. The web interface defaults to
live migration when the VM is running.
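For example, a live migration that also transfers local disks might look like
this (VM ID and target node are placeholders):

----
# qm migrate <vmid> <target> --online --with-local-disks
----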
c73c190f 1345
8df8cfb7
DC
1346How it works
1347^^^^^^^^^^^^
1348
27780834
TL
1349Online migration first starts a new QEMU process on the target host with the
1350'incoming' flag, which performs only basic initialization with the guest vCPUs
1351still paused and then waits for the guest memory and device state data streams
1352of the source Virtual Machine.
All other resources, such as disks, are either shared or have already been sent
before runtime state migration of the VM begins, so only the memory content
and device state remain to be transferred.
1356
1357Once this connection is established, the source begins asynchronously sending
1358the memory content to the target. If the guest memory on the source changes,
1359those sections are marked dirty and another pass is made to send the guest
1360memory data.
This loop is repeated until the data difference between the running source VM
and the incoming target VM is small enough to be sent in a few milliseconds.
At that point, the source VM can be paused completely, without a user or program
noticing the pause, the remaining data can be sent to the target, and the
target VM's CPU can be unpaused to make it the new running VM, all in well under
a second.
8df8cfb7
DC
1367
1368Requirements
1369^^^^^^^^^^^^
1370
1371For Live Migration to work, there are some things required:
1372
27780834
TL
1373* The VM has no local resources that cannot be migrated. For example,
1374 PCI or USB devices that are passed through currently block live-migration.
1375 Local Disks, on the other hand, can be migrated by sending them to the target
1376 just fine.
1377* The hosts are located in the same {pve} cluster.
1378* The hosts have a working (and reliable) network connection between them.
1379* The target host must have the same, or higher versions of the
1380 {pve} packages. Although it can sometimes work the other way around, this
1381 cannot be guaranteed.
* The hosts have CPUs from the same vendor with similar capabilities. A
 different vendor *might* work depending on the actual models and the VM's CPU
 type configured, but it cannot be guaranteed - so please test before deploying
1385 such a setup in production.
8df8cfb7
DC
1386
1387Offline Migration
1388~~~~~~~~~~~~~~~~~
1389
27780834
TL
1390If you have local resources, you can still migrate your VMs offline as long as
all disks are on storages that are defined on both hosts.
1392Migration then copies the disks to the target host over the network, as with
9632a85d 1393online migration. Note that any hardware passthrough configuration may need to
27780834
TL
1394be adapted to the device location on the target host.
1395
1396// TODO: mention hardware map IDs as better way to solve that, once available
c73c190f 1397
eeb87f95
DM
1398[[qm_copy_and_clone]]
1399Copies and Clones
1400-----------------
9e55c76d 1401
1ff5e4e8 1402[thumbnail="screenshot/gui-qemu-full-clone.png"]
9e55c76d
DM
1403
VM installation is usually done using an installation medium (CD-ROM)
61018238 1405from the operating system vendor. Depending on the OS, this can be a
9e55c76d
DM
time-consuming task one might want to avoid.
1407
1408An easy way to deploy many VMs of the same type is to copy an existing
1409VM. We use the term 'clone' for such copies, and distinguish between
1410'linked' and 'full' clones.
1411
1412Full Clone::
1413
The result of such a copy is an independent VM. The
1415new VM does not share any storage resources with the original.
1416+
707e37a2 1417
9e55c76d
DM
1418It is possible to select a *Target Storage*, so one can use this to
1419migrate a VM to a totally different storage. You can also change the
1420disk image *Format* if the storage driver supports several formats.
1421+
707e37a2 1422
730fbca4 1423NOTE: A full clone needs to read and copy all VM image data. This is
9e55c76d 1424usually much slower than creating a linked clone.
707e37a2
DM
1425+
1426
Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
1429never includes any additional snapshots from the original VM.
1430
9e55c76d
DM
1431
1432Linked Clone::
1433
730fbca4 1434Modern storage drivers support a way to generate fast linked
9e55c76d
DM
1435clones. Such a clone is a writable copy whose initial contents are the
1436same as the original data. Creating a linked clone is nearly
1437instantaneous, and initially consumes no additional space.
1438+
707e37a2 1439
9e55c76d
DM
1440They are called 'linked' because the new image still refers to the
1441original. Unmodified data blocks are read from the original image, but
modifications are written to (and afterwards read from) a new
1443location. This technique is called 'Copy-on-write'.
1444+
707e37a2
DM
1445
1446This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
1448templates can later be used to create linked clones efficiently.
1449+
1450
730fbca4
OB
1451NOTE: You cannot delete an original template while linked clones
1452exist.
9e55c76d 1453+
707e37a2
DM
1454
1455It is not possible to change the *Target storage* for linked clones,
1456because this is a storage internal feature.
9e55c76d
DM
1457
1458
1459The *Target node* option allows you to create the new VM on a
1460different node. The only restriction is that the VM is on shared
1461storage, and that storage is also available on the target node.
1462
730fbca4 1463To avoid resource conflicts, all network interface MAC addresses get
9e55c76d
DM
1464randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
1465setting.
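Clones can also be created from the command line with `qm clone`. A sketch of a
full clone of VM 900 to a new VM 123, using hypothetical name and storage values:

----
# qm clone 900 123 --name webserver-clone --full --storage local-lvm
----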
1466
1467
707e37a2
DM
1468[[qm_templates]]
1469Virtual Machine Templates
1470-------------------------
1471
1472One can convert a VM into a Template. Such templates are read-only,
1473and you can use them to create linked clones.
1474
1475NOTE: It is not possible to start templates, because this would modify
1476the disk images. If you want to change the template, create a linked
1477clone and modify that.
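Converting a VM into a template can be done via the GUI or, as a sketch, on the
command line:

----
# qm template <vmid>
----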
1478
319d5325
DC
1479VM Generation ID
1480----------------
1481
941ff8d3 1482{pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
effa4818
TL
1483'vmgenid' Specification
1484https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
1485for virtual machines.
This can be used by the guest operating system to detect any event resulting
in a time shift, for example, restoring a backup or rolling back a snapshot.
319d5325 1488
effa4818
TL
1489When creating new VMs, a 'vmgenid' will be automatically generated and saved
1490in its configuration file.
319d5325 1491
effa4818
TL
1492To create and add a 'vmgenid' to an already existing VM one can pass the
1493special value `1' to let {pve} autogenerate one or manually set the 'UUID'
d6466262
TL
1494footnote:[Online GUID generator http://guid.one/] by using it as value, for
1495example:
319d5325 1496
effa4818 1497----
32e8b5b2
AL
1498# qm set VMID -vmgenid 1
1499# qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
effa4818 1500----
319d5325 1501
cfd48f55
TL
NOTE: The initial addition of a 'vmgenid' device to an existing VM may have the
same effects as a snapshot rollback, backup restore, etc., as the VM can
interpret this as a generation change.
1505
effa4818
TL
1506In the rare case the 'vmgenid' mechanism is not wanted one can pass `0' for
1507its value on VM creation, or retroactively delete the property in the
1508configuration with:
319d5325 1509
effa4818 1510----
32e8b5b2 1511# qm set VMID -delete vmgenid
effa4818 1512----
319d5325 1513
effa4818
TL
The most prominent use case for 'vmgenid' is newer Microsoft Windows
operating systems, which use it to avoid problems in time-sensitive or
replicated services (such as databases or domain controllers
cfd48f55
TL
1517footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
1518on snapshot rollback, backup restore or a whole VM clone operation.
319d5325 1519
ff303757
TL
1520[[qm_import_virtual_machines]]
1521Importing Virtual Machines
1522--------------------------
1523
1524Importing existing virtual machines from foreign hypervisors or other {pve}
clusters can be achieved through various methods; the most common ones are:
1526
1527* Using the native import wizard, which utilizes the 'import' content type, such
1528 as provided by the ESXi special storage.
1529* Performing a backup on the source and then restoring on the target. This
1530 method works best when migrating from another {pve} instance.
* Using the OVF-specific import command of the `qm` command-line tool.
1532
1533If you import VMs to {pve} from other hypervisors, it’s recommended to
1534familiarize yourself with the
1535https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Concepts[concepts of {pve}].
1536
1537Import Wizard
1538~~~~~~~~~~~~~
1539
1540[thumbnail="screenshot/gui-import-wizard-general.png"]
1541
1542{pve} provides an integrated VM importer using the storage plugin system for
1543native integration into the API and web-based user interface. You can use this
1544to import the VM as a whole, with most of its config mapped to {pve}'s config
1545model and reduced downtime.
1546
1547NOTE: The import wizard was added during the {pve} 8.2 development cycle and is
in tech preview state. While it's already promising and working stably, it's
still under active development, focusing on adding other import sources, such
as OVF/OVA files, in the future.
1551
To use the import wizard, you first have to set up a new storage for an import
source. You can do so on the web interface under _Datacenter -> Storage -> Add_.
1554
1555Then you can select the new storage in the resource tree and use the 'Virtual
1556Guests' content tab to see all available guests that can be imported.
1557
1558[thumbnail="screenshot/gui-import-wizard-advanced.png"]
1559
1560Select one and use the 'Import' button (or double-click) to open the import
1561wizard. You can modify a subset of the available options here and then start the
import. Please note that you can do more advanced modifications after the import
has finished.
1564
1565TIP: The import wizard is currently (2024-03) available for ESXi and has been
1566tested with ESXi versions 6.5 through 8.0. Note that guests using vSAN storage
cannot be imported directly; their disks must first be moved to another
1568storage. While it is possible to use a vCenter as the import source, performance
1569is dramatically degraded (5 to 10 times slower).
1570
1571For a step-by-step guide and tips for how to adapt the virtual guest to the new
hypervisor, see our
1573https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Migration[migrate to {pve}
1574wiki article].
1575
1576Import OVF/OVA Through CLI
1577~~~~~~~~~~~~~~~~~~~~~~~~~~
56368da8
EK
1578
A VM export from a foreign hypervisor usually takes the form of one or more disk
59552707 1580 images, with a configuration file describing the settings of the VM (RAM,
56368da8
EK
1581 number of cores). +
1582The disk images can be in the vmdk format, if the disks come from
59552707
DM
1583VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
1584The most popular configuration format for VM exports is the OVF standard, but in
1585practice interoperation is limited because many settings are not implemented in
1586the standard itself, and hypervisors export the supplementary information
56368da8
EK
1587in non-standard extensions.
1588
1589Besides the problem of format, importing disk images from other hypervisors
1590may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any changes to its hardware. This problem may be solved by
1593installing the MergeIDE.zip utility available from the Internet before exporting
1594and choosing a hard disk type of *IDE* before booting the imported Windows VM.
1595
59552707 1596Finally there is the question of paravirtualized drivers, which improve the
56368da8
EK
1597speed of the emulated system and are specific to the hypervisor.
1598GNU/Linux and other free Unix OSes have all the necessary drivers installed by
1599default and you can switch to the paravirtualized drivers right after importing
59552707 1600the VM. For Windows VMs, you need to install the Windows paravirtualized
56368da8
EK
1601drivers by yourself.
1602
GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
eb01c5cf 1604that we cannot guarantee a successful import/export of Windows VMs in all
56368da8
EK
1605cases due to the problems above.
1606
c069256d 1607Step-by-step example of a Windows OVF import
ff303757 1608^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
56368da8 1609
59552707 1610Microsoft provides
c069256d 1611https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
 to get started with Windows development. We are going to use one of these
c069256d 1613to demonstrate the OVF import feature.
56368da8 1614
c069256d 1615Download the Virtual Machine zip
ff303757 1616++++++++++++++++++++++++++++++++
56368da8 1617
After reviewing the user agreement, choose the _Windows 10
c069256d 1619Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.
56368da8
EK
1620
1621Extract the disk image from the zip
ff303757 1622+++++++++++++++++++++++++++++++++++
56368da8 1623
c069256d
EK
1624Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy the ovf and vmdk files to your {pve} host via ssh/scp.
56368da8 1626
c069256d 1627Import the Virtual Machine
ff303757 1628++++++++++++++++++++++++++
56368da8 1629
c069256d
EK
The following command will create a new virtual machine, using the cores, memory and
VM name as read from the OVF manifest, and import the disks to the +local-lvm+
1632 storage. You have to configure the network manually.
56368da8 1633
32e8b5b2
AL
1634----
1635# qm importovf 999 WinDev1709Eval.ovf local-lvm
1636----
56368da8 1637
c069256d 1638The VM is ready to be started.
56368da8 1639
c069256d 1640Adding an external disk image to a Virtual Machine
ff303757 1641++++++++++++++++++++++++++++++++++++++++++++++++++
56368da8 1642
144d5ede 1643You can also add an existing disk image to a VM, either coming from a
c069256d
EK
1644foreign hypervisor, or one that you created yourself.
1645
1646Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:
1647
1648 vmdebootstrap --verbose \
67d59a35 1649 --size 10GiB --serial-console \
c069256d
EK
1650 --grub --no-extlinux \
1651 --package openssh-server \
1652 --package avahi-daemon \
1653 --package qemu-guest-agent \
1654 --hostname vm600 --enable-dhcp \
1655 --customize=./copy_pub_ssh.sh \
1656 --sparse --image vm600.raw
1657
10a2a4aa
FE
1658You can now create a new target VM, importing the image to the storage `pvedir`
1659and attaching it to the VM's SCSI controller:
c069256d 1660
32e8b5b2
AL
1661----
1662# qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
10a2a4aa
FE
1663 --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
1664 --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
32e8b5b2 1665----
c069256d
EK
1666
1667The VM is ready to be started.
707e37a2 1668
7eb69fd2 1669
16b4185a 1670ifndef::wiki[]
7eb69fd2 1671include::qm-cloud-init.adoc[]
16b4185a
DM
1672endif::wiki[]
1673
6e4c46c4
DC
1674ifndef::wiki[]
1675include::qm-pci-passthrough.adoc[]
1676endif::wiki[]
16b4185a 1677
c2c8eb89 1678Hookscripts
91f416b7 1679-----------
c2c8eb89
DC
1680
1681You can add a hook script to VMs with the config property `hookscript`.
1682
32e8b5b2
AL
1683----
1684# qm set 100 --hookscript local:snippets/hookscript.pl
1685----
c2c8eb89
DC
1686
The script will be called during various phases of the guest's lifetime.
1688For an example and documentation see the example script under
1689`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
7eb69fd2 1690
88a31964
DC
1691[[qm_hibernate]]
1692Hibernation
1693-----------
1694
1695You can suspend a VM to disk with the GUI option `Hibernate` or with
1696
32e8b5b2
AL
1697----
1698# qm suspend ID --todisk
1699----
88a31964
DC
1700
That means that the current content of the memory will be saved to disk
and the VM is stopped. On the next start, the memory content will be
loaded and the VM can continue where it left off.
1704
1705[[qm_vmstatestorage]]
1706.State storage selection
If no target storage for the memory is given, it will be automatically
chosen as the first of the following:
1709
17101. The storage `vmstatestorage` from the VM config.
17112. The first shared storage from any VM disk.
17123. The first non-shared storage from any VM disk.
17134. The storage `local` as a fallback.
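To explicitly pin the state to a particular storage, the `vmstatestorage`
option can be set beforehand; a sketch with a hypothetical storage name:

----
# qm set <vmid> --vmstatestorage local-lvm
----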
1714
e2a867b2
DC
1715[[resource_mapping]]
1716Resource Mapping
bd0cc33d 1717----------------
e2a867b2 1718
481a0ee4
DC
1719[thumbnail="screenshot/gui-datacenter-resource-mappings.png"]
1720
e2a867b2
DC
When using or referencing local resources (e.g. the address of a PCI device), using
the raw address or ID is sometimes problematic, for example:
1723
1724* when using HA, a different device with the same id or path may exist on the
1725 target node, and if one is not careful when assigning such guests to HA
1726 groups, the wrong device could be used, breaking configurations.
1727
1728* changing hardware can change ids and paths, so one would have to check all
1729 assigned devices and see if the path or id is still correct.
1730
To handle this better, one can define cluster-wide resource mappings, such that
a resource has a cluster-unique, user-selected identifier which can correspond
1733to different devices on different hosts. With this, HA won't start a guest with
1734a wrong device, and hardware changes can be detected.
1735
1736Creating such a mapping can be done with the {pve} web GUI under `Datacenter`
1737in the relevant tab in the `Resource Mappings` category, or on the cli with
1738
1739----
d772991e 1740# pvesh create /cluster/mapping/<type> <options>
e2a867b2
DC
1741----
1742
4657b9ff
TL
1743[thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
1744
d772991e
TL
1745Where `<type>` is the hardware type (currently either `pci` or `usb`) and
1746`<options>` are the device mappings and other configuration parameters.
e2a867b2
DC
1747
1748Note that the options must include a map property with all identifying
1749properties of that hardware, so that it's possible to verify the hardware did
1750not change and the correct device is passed through.
1751
1752For example to add a PCI device as `device1` with the path `0000:01:00.0` that
1753has the device id `0001` and the vendor id `0002` on the node `node1`, and
1754`0000:02:00.0` on `node2` you can add it with:
1755
1756----
1757# pvesh create /cluster/mapping/pci --id device1 \
1758 --map node=node1,path=0000:01:00.0,id=0002:0001 \
1759 --map node=node2,path=0000:02:00.0,id=0002:0001
1760----
1761
1762You must repeat the `map` parameter for each node where that device should have
1763a mapping (note that you can currently only map one USB device per node per
1764mapping).
1765
1766Using the GUI makes this much easier, as the correct properties are
1767automatically picked up and sent to the API.
1768
481a0ee4
DC
1769[thumbnail="screenshot/gui-datacenter-mapping-usb-edit.png"]
1770
e2a867b2
DC
1771It's also possible for PCI devices to provide multiple devices per node with
1772multiple map properties for the nodes. If such a device is assigned to a guest,
1773the first free one will be used when the guest is started. The order of the
1774paths given is also the order in which they are tried, so arbitrary allocation
1775policies can be implemented.
1776
This is useful for devices with SR-IOV, since sometimes it is not important
1778which exact virtual function is passed through.
1779
1780You can assign such a device to a guest either with the GUI or with
1781
1782----
d772991e 1783# qm set ID -hostpci0 <name>
e2a867b2
DC
1784----
1785
1786for PCI devices, or
1787
1788----
d772991e 1789# qm set <vmid> -usb0 <name>
e2a867b2
DC
1790----
1791
1792for USB devices.
1793
Where `<vmid>` is the guest's ID and `<name>` is the chosen name for the created
e2a867b2
DC
1795mapping. All usual options for passing through the devices are allowed, such as
1796`mdev`.
1797
d772991e
TL
1798To create mappings `Mapping.Modify` on `/mapping/<type>/<name>` is necessary
1799(where `<type>` is the device type and `<name>` is the name of the mapping).
e2a867b2 1800
d772991e
TL
1801To use these mappings, `Mapping.Use` on `/mapping/<type>/<name>` is necessary
1802(in addition to the normal guest privileges to edit the configuration).
e2a867b2 1803
8c1189b6 1804Managing Virtual Machines with `qm`
dd042288 1805------------------------------------
f69cfd23 1806
c730e973 1807qm is the tool to manage QEMU/KVM virtual machines on {pve}. You can
f69cfd23
DM
1808create and destroy virtual machines, and control execution
1809(start/stop/suspend/resume). Besides that, you can use qm to set
1810parameters in the associated config file. It is also possible to
1811create and delete virtual disks.
1812
dd042288
EK
1813CLI Usage Examples
1814~~~~~~~~~~~~~~~~~~
1815
b01b1f2c
EK
Using an ISO file uploaded on the 'local' storage, create a VM
1817with a 4 GB IDE disk on the 'local-lvm' storage
dd042288 1818
32e8b5b2
AL
1819----
1820# qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
1821----
dd042288
EK
1822
1823Start the new VM
1824
32e8b5b2
AL
1825----
1826# qm start 300
1827----
dd042288
EK
1828
1829Send a shutdown request, then wait until the VM is stopped.
1830
32e8b5b2
AL
1831----
1832# qm shutdown 300 && qm wait 300
1833----
dd042288
EK
1834
1835Same as above, but only wait for 40 seconds.
1836
32e8b5b2
AL
1837----
1838# qm shutdown 300 && qm wait 300 -timeout 40
1839----
dd042288 1840
87927c65
DJ
1841Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
'--purge' if you want to additionally remove the VM from replication jobs,
1844backup jobs and HA resource configurations.
1845
32e8b5b2
AL
1846----
1847# qm destroy 300 --purge
1848----
87927c65 1849
66aecccb
AL
1850Move a disk image to a different storage.
1851
32e8b5b2
AL
1852----
1853# qm move-disk 300 scsi0 other-storage
1854----
66aecccb
AL
1855
Reassign a disk image to a different VM. This will remove the disk `scsi1` from
the source VM and attach it as `scsi3` to the target VM. In the background
the disk image is renamed so that the name matches the new owner.
1859
32e8b5b2
AL
1860----
1861# qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
1862----
87927c65 1863
f0a8ab95
DM
1864
1865[[qm_configuration]]
f69cfd23
DM
1866Configuration
1867-------------
1868
f0a8ab95
DM
1869VM configuration files are stored inside the Proxmox cluster file
1870system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
1871Like other files stored inside `/etc/pve/`, they get automatically
1872replicated to all other cluster nodes.
f69cfd23 1873
f0a8ab95
DM
1874NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster-wide.
1876
1877.Example VM Configuration
1878----
777cf894 1879boot: order=virtio0;net0
f0a8ab95
DM
1880cores: 1
1881sockets: 1
1882memory: 512
1883name: webmail
1884ostype: l26
f0a8ab95
DM
1885net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
1886virtio0: local:vm-100-disk-1,size=32G
1887----
1888
1889Those configuration files are simple text files, and you can edit them
1890using a normal text editor (`vi`, `nano`, ...). This is sometimes
1891useful to do small corrections, but keep in mind that you need to
1892restart the VM to apply such changes.
1893
1894For that reason, it is usually better to use the `qm` command to
1895generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to a
running VM. This feature is called "hot plug", and there is no
1898need to restart the VM in that case.
1899
1900
1901File Format
1902~~~~~~~~~~~
1903
1904VM configuration files use a simple colon separated key/value
1905format. Each line has the following format:
1906
1907-----
1908# this is a comment
1909OPTION: value
1910-----
1911
1912Blank lines in those files are ignored, and lines starting with a `#`
1913character are treated as comments and are also ignored.
1914
1915
1916[[qm_snapshots]]
1917Snapshots
1918~~~~~~~~~
1919
1920When you create a snapshot, `qm` stores the configuration at snapshot
1921time into a separate snapshot section within the same configuration
1922file. For example, after creating a snapshot called ``testsnapshot'',
1923your configuration file will look like this:
1924
1925.VM configuration with snapshot
1926----
1927memory: 512
1928swap: 512
parent: testsnapshot
1930...
1931
[testsnapshot]
1933memory: 512
1934swap: 512
1935snaptime: 1457170803
1936...
1937----
1938
1939There are a few snapshot related properties like `parent` and
1940`snaptime`. The `parent` property is used to store the parent/child
1941relationship between snapshots. `snaptime` is the snapshot creation
1942time stamp (Unix epoch).
f69cfd23 1943
88a31964
DC
1944You can optionally save the memory of a running VM with the option `vmstate`.
1945For details about how the target storage gets chosen for the VM state, see
1946xref:qm_vmstatestorage[State storage selection] in the chapter
1947xref:qm_hibernate[Hibernation].
f69cfd23 1948
80c0adcb 1949[[qm_options]]
a7f36905
DM
1950Options
1951~~~~~~~
1952
1953include::qm.conf.5-opts.adoc[]
1954
f69cfd23
DM
1955
1956Locks
1957-----
1958
d6466262
TL
1959Online migrations, snapshots and backups (`vzdump`) set a lock to prevent
1960incompatible concurrent actions on the affected VMs. Sometimes you need to
1961remove such a lock manually (for example after a power failure).
f69cfd23 1962
32e8b5b2
AL
1963----
1964# qm unlock <vmid>
1965----
f69cfd23 1966
0bcc62dd
DM
1967CAUTION: Only do that if you are sure the action which set the lock is
1968no longer running.
1969
16b4185a
DM
1970ifdef::wiki[]
1971
1972See Also
1973~~~~~~~~
1974
1975* link:/wiki/Cloud-Init_Support[Cloud-Init Support]
1976
1977endif::wiki[]
1978
1979
f69cfd23 1980ifdef::manvolnum[]
704f19fb
DM
1981
1982Files
1983------
1984
1985`/etc/pve/qemu-server/<VMID>.conf`::
1986
1987Configuration file for the VM '<VMID>'.
1988
1989
f69cfd23
DM
1990include::pve-copyright.adoc[]
1991endif::manvolnum[]