[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
:pve-toplevel:

NAME
----

qm - QEMU/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
QEMU/KVM Virtual Machines
=========================
:pve-toplevel:
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

QEMU (short for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where QEMU is
running, QEMU is a user program which has access to a number of local resources
like partitions, files, and network cards, which are then passed to an
emulated computer which sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can pass
an ISO image as a parameter to QEMU, and the OS running in the emulated computer
will see a real CD-ROM inserted into a CD drive.

QEMU can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up QEMU when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that QEMU is running with the support of the virtualization processor
extensions, via the Linux KVM module. In the context of {pve}, _QEMU_ and
_KVM_ can be used interchangeably, as QEMU in {pve} will always try to load the KVM
module.

QEMU inside {pve} runs as a root process, since this is required to access block
and PCI devices.

Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by QEMU includes a motherboard, network controllers,
SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers, it will use the devices as if it
were running on real hardware. This allows QEMU to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
QEMU can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside QEMU and cooperates with the
hypervisor.

QEMU relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc.

TIP: It is *highly recommended* to use the virtio devices whenever you can, as
they provide a big performance improvement and are generally better maintained.
Using the virtio generic disk controller versus an emulated IDE controller will
double the sequential write throughput, as measured with `bonnie++(8)`. Using
the virtio network interface can deliver up to three times the throughput of an
emulated Intel E1000 network card, as measured with `iperf(1)`. footnote:[See
this benchmark on the KVM wiki https://www.linux-kvm.org/page/Using_VirtIO_NIC]


[[qm_virtual_machines_settings]]
Virtual Machines Settings
-------------------------

Generally speaking, {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as they
could incur a performance slowdown or put your data at risk.


[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-general.png"]

General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs


[[qm_os_settings]]
OS Settings
~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-os.png"]

When creating a virtual machine (VM), setting the proper Operating System (OS)
allows {pve} to optimize some low level parameters. For instance, Windows OS
expects the BIOS clock to use the local time, while Unix based OS expects the
BIOS clock to have the UTC time.

[[qm_system_settings]]
System Settings
~~~~~~~~~~~~~~~

On VM creation you can change some basic system components of the new VM. You
can specify which xref:qm_display[display type] you want to use.
[thumbnail="screenshot/gui-create-vm-system.png"]
Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
If you plan to install the QEMU Guest Agent, or if your selected ISO image
already ships and installs it automatically, you may want to tick the 'QEMU
Agent' box, which lets {pve} know that it can use its features to show some
more information, and complete some actions (for example, shutdown or
snapshots) more intelligently.

{pve} allows booting VMs with different firmware and machine types, namely
xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
the default SeaBIOS to OVMF only if you plan to use
xref:qm_pci_passthrough[PCIe passthrough].

[[qm_machine_type]]

Machine Type
^^^^^^^^^^^^

A VM's 'Machine Type' defines the hardware layout of the VM's virtual
motherboard. You can choose between the default
https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be
desired if you want to pass through PCIe hardware.

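For example, you can switch an existing VM to the Q35 chipset from the CLI (a
minimal sketch; `100` stands in for your actual VM ID):

----
# qm set 100 --machine q35
----
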
Machine Version
+++++++++++++++

Each machine type is versioned in QEMU and a given QEMU binary supports many
machine versions. New versions might bring support for new features, fixes or
general improvements. However, they also change properties of the virtual
hardware. To avoid sudden changes from the guest's perspective and ensure
compatibility of the VM state, live-migration and snapshots with RAM will keep
using the same machine version in the new QEMU instance.

For Windows guests, the machine version is pinned during creation, because
Windows is sensitive to changes in the virtual hardware - even between cold
boots. For example, the enumeration of network devices might be different with
different machine versions. Other OSes like Linux can usually deal with such
changes just fine. For those, the 'Latest' machine version is used by default.
This means that after a fresh start, the newest machine version supported by the
QEMU binary is used (e.g. the newest machine version QEMU 8.1 supports is
version 8.1 for each machine type).

[[qm_machine_update]]

Update to a Newer Machine Version
+++++++++++++++++++++++++++++++++

Very old machine versions might become deprecated in QEMU. For example, this is
the case for versions 1.4 to 1.7 for the i440fx machine type. It is expected
that support for these machine versions will be dropped at some point. If you
see a deprecation warning, you should change the machine version to a newer one.
Be sure to have a working backup first and be prepared for changes to how the
guest sees hardware. In some scenarios, re-installing certain drivers might be
required. You should also check for snapshots with RAM that were taken with
these machine versions (i.e. the `runningmachine` configuration entry).
Unfortunately, there is no way to change the machine version of a snapshot, so
you'd need to load the snapshot to salvage any data from it.
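
If you need to pin a specific newer version instead of 'Latest', the `machine`
property accepts a full versioned machine string, for example (a sketch,
assuming VM ID 100 and an i440fx guest moving to version 8.1):

----
# qm set 100 --machine pc-i440fx-8.1
----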

[[qm_hard_disk]]
Hard Disk
~~~~~~~~~

[[qm_hard_disk_bus]]
Bus/Controller
^^^^^^^^^^^^^^
QEMU can emulate a number of storage controllers:

TIP: It is highly recommended to use the *VirtIO SCSI* or *VirtIO Block*
controller for performance reasons and because they are better maintained.

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices on this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default a
LSI 53C895A controller.
+
A SCSI controller of type _VirtIO SCSI single_ and enabling the
xref:qm_hard_disk_iothread[IO Thread] setting for the attached disks is
recommended if you aim for performance. This is the default for newly created
Linux VMs since {pve} 7.3. Each disk will have its own _VirtIO SCSI_ controller,
and QEMU will handle the disks' IO in a dedicated thread. Linux distributions
have support for this controller since 2012, and FreeBSD since 2014. For Windows
OSes, you need to provide an extra ISO containing the drivers during the
installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.

* The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
is an older type of paravirtualized controller. It has been superseded by the
VirtIO SCSI controller, in terms of features.

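As a concrete sketch of the recommended setup (assuming VM ID 100 and a storage
named `local-lvm`), the controller type and a new 32 GiB disk with IO Thread
could be configured like this:

----
# qm set 100 --scsihw virtio-scsi-single
# qm set 100 --scsi0 local-lvm:32,iothread=1
----
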
[thumbnail="screenshot/gui-create-vm-hard-disk.png"]

[[qm_hard_disk_formats]]
Image Format
^^^^^^^^^^^^
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

 * the *QEMU image format* is a copy on write format which allows snapshots, and
 thin provisioning of the disk image.
 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
 you would get when executing the `dd` command on a block device in Linux. This
 format does not support thin provisioning or snapshots by itself, requiring
 cooperation from the storage layer for these tasks. It may, however, be up to
 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
 https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
 * the *VMware image format* only makes sense if you intend to import/export the
 disk image to other hypervisors.

[[qm_hard_disk_cache]]
Cache Mode
^^^^^^^^^^
Setting the *Cache* mode of the hard drive will impact how the host system will
notify the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.

[[qm_hard_disk_discard]]
Trim/Discard
^^^^^^^^^^^^
If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), you can activate the *Discard* option on a drive. With *Discard*
set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
marks blocks as unused after deleting files, the controller will relay this
information to the storage, which will then shrink the disk image accordingly.
For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
option on the drive. Some guest operating systems may also require the
*SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
only supported on guests using Linux Kernel 5.0 or higher.

If you would like a drive to be presented to the guest as a solid-state drive
rather than a rotational hard disk, you can set the *SSD emulation* option on
that drive. There is no requirement that the underlying storage actually be
backed by SSDs; this feature can be used with physical media of any type.
Note that *SSD emulation* is not supported on *VirtIO Block* drives.

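For example, both options could be enabled on an existing disk like this (a
sketch; VM ID 100 and the volume name are placeholders that must match your
actual configuration):

----
# qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
----
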

[[qm_hard_disk_iothread]]
IO Thread
^^^^^^^^^
The option *IO Thread* can only be used when using a disk with the *VirtIO*
controller, or with the *SCSI* controller, when the emulated controller type is
*VirtIO SCSI single*. With *IO Thread* enabled, QEMU creates one I/O thread per
storage controller rather than handling all I/O in the main event loop or vCPU
threads. One benefit is better work distribution and utilization of the
underlying storage. Another benefit is reduced latency (hangs) in the guest for
very I/O-intensive host workloads, since neither the main thread nor a vCPU
thread can be blocked by disk I/O.


[[qm_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-vm-cpu.png"]

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each, is mostly irrelevant from a performance point of view.
However some software licenses depend on the number of sockets a machine has;
in that case it makes sense to set the number of sockets to what the license
allows you.

Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multi-threaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, QEMU will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.

NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
is greater than the number of cores on the server (for example, 4 VMs each with
4 cores (= total 16) on a machine with only 8 cores). In that case the host
system will balance the QEMU execution threads between your server cores, just
like if you were running a standard multi-threaded application. However, {pve}
will prevent you from starting VMs with more virtual CPU cores than physically
available, as this will only bring the performance down due to the cost of
context switches.

[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^

In addition to the number of virtual cores, you can configure how many resources
a VM can get in relation to the host CPU time and also in relation to other
VMs.
With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
single process would fully use one single core it would have `100%` CPU Time
usage. If a VM with four cores utilizes all its cores fully it would
theoretically use `400%`. In reality the usage may be even a bit higher as QEMU
can have additional threads for VM peripherals besides the vCPU core ones.
This setting can be useful if a VM should have multiple vCPUs, as it runs a few
processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
which would profit from having 8 vCPUs, but at no time should all of those 8 cores
run at full load - as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set the *cpulimit* limit to
`4.0` (=400%). If all cores do the same heavy work they would all get 50% of a
real host core's CPU time. But, if only 4 were doing work they could still get
almost 100% of a real core each.

NOTE: VMs can, depending on their configuration, use additional threads, such
as for networking or IO operations but also live migration. Thus a VM can show
up to use more CPU time than just its virtual CPUs could use. To ensure that a
VM never uses more CPU time than the number of virtual CPUs assigned, set the
*cpulimit* setting to the same value as the total core count.

The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
shares or CPU weight), controls how much CPU time a VM gets compared to other
running VMs. It is a relative weight which defaults to `100` (or `1024` if the
host uses legacy cgroup v1). If you increase this for a VM it will be
prioritized by the scheduler in comparison to other VMs with lower weight. For
example, if VM 100 has set the default `100` and VM 200 was changed to `200`,
the latter VM 200 would receive twice the CPU bandwidth of the first VM 100.

For more information see `man systemd.resource-control`, where `CPUQuota`
corresponds to `cpulimit` and `CPUWeight` corresponds to our `cpuunits`
setting; visit its Notes section for references and implementation details.

The third CPU resource limiting setting, *affinity*, controls what host cores
the virtual machine will be permitted to execute on. E.g., if an affinity value
of `0-3,8-11` is provided, the virtual machine will be restricted to using the
host cores `0,1,2,3,8,9,10,` and `11`. Valid *affinity* values are written in
cpuset `List Format`. List Format is a comma-separated list of CPU numbers and
ranges of numbers, in ASCII decimal.

NOTE: CPU *affinity* uses the `taskset` command to restrict virtual machines to
a given set of cores. This restriction will not take effect for some types of
processes that may be created for IO. *CPU affinity is not a security feature.*

For more information regarding *affinity* see `man cpuset`. Here the
`List Format` corresponds to valid *affinity* values. Visit its `Formats`
section for more examples.

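Taken together, the three limits discussed above could be applied to a VM like
this (a sketch, with VM ID 100 as a placeholder):

----
# qm set 100 --cpulimit 4
# qm set 100 --cpuunits 200
# qm set 100 --affinity 0-3,8-11
----
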
CPU Type
^^^^^^^^

QEMU can emulate a number of different *CPU types*, from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, etc. Also,
a current generation can be upgraded through
xref:chapter_firmware_updates[microcode update] with bug or security fixes.

Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host*, in which case the VM will have exactly the same CPU flags
as your host system.

This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type
or a different microcode version.
If the CPU flags passed to the guest are missing, the QEMU process will stop. To
remedy this, QEMU also has its own virtual CPU types, which {pve} uses by default.

The backend default is 'kvm64', which works on essentially all x86_64 host CPUs,
and the UI default when creating a new VM is 'x86-64-v2-AES', which requires a
host CPU starting from Westmere for Intel or at least a fourth generation
Opteron for AMD.

In short:

If you don't care about live migration or have a homogeneous cluster where all
nodes have the same CPU and same microcode version, set the CPU type to host, as
in theory this will give your guests maximum performance.

If you care about live migration and security, and you have only Intel CPUs or
only AMD CPUs, choose the lowest generation CPU model of your cluster.

If you care about live migration without security, or have a mixed Intel/AMD
cluster, choose the lowest compatible virtual QEMU CPU type.

NOTE: Live migrations between Intel and AMD host CPUs have no guarantee to work.

See also
xref:chapter_qm_vcpu_list[List of AMD and Intel CPU Types as Defined in QEMU].

QEMU CPU Types
^^^^^^^^^^^^^^

QEMU also provides virtual CPU types, compatible with both Intel and AMD host
CPUs.

NOTE: To mitigate the Spectre vulnerability for virtual CPU types, you need to
add the relevant CPU flags, see
xref:qm_meltdown_spectre[Meltdown / Spectre related CPU flags].

Historically, {pve} had the 'kvm64' CPU model, with CPU flags at the level of
Pentium 4 enabled, so performance was not great for certain workloads.

In the summer of 2020, AMD, Intel, Red Hat, and SUSE collaborated to define
three x86-64 microarchitecture levels on top of the x86-64 baseline, with modern
flags enabled. For details, see the
https://gitlab.com/x86-psABIs/x86-64-ABI[x86-64-ABI specification].

NOTE: Some newer distributions like CentOS 9 are now built with 'x86-64-v2'
flags as a minimum requirement.

* 'kvm64 (x86-64-v1)': Compatible with Intel CPU >= Pentium 4, AMD CPU >=
Phenom.
+
* 'x86-64-v2': Compatible with Intel CPU >= Nehalem, AMD CPU >= Opteron_G3.
Added CPU flags compared to 'x86-64-v1': '+cx16', '+lahf-lm', '+popcnt', '+pni',
'+sse4.1', '+sse4.2', '+ssse3'.
+
* 'x86-64-v2-AES': Compatible with Intel CPU >= Westmere, AMD CPU >= Opteron_G4.
Added CPU flags compared to 'x86-64-v2': '+aes'.
+
* 'x86-64-v3': Compatible with Intel CPU >= Broadwell, AMD CPU >= EPYC. Added
CPU flags compared to 'x86-64-v2-AES': '+avx', '+avx2', '+bmi1', '+bmi2',
'+f16c', '+fma', '+movbe', '+xsave'.
+
* 'x86-64-v4': Compatible with Intel CPU >= Skylake, AMD CPU >= EPYC v4 Genoa.
Added CPU flags compared to 'x86-64-v3': '+avx512f', '+avx512bw', '+avx512cd',
'+avx512dq', '+avx512vl'.

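A virtual CPU type can be selected from the CLI, for example (a sketch, with VM
ID 100 as a placeholder):

----
# qm set 100 --cpu x86-64-v2-AES
----
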
Custom CPU Types
^^^^^^^^^^^^^^^^

You can specify custom CPU types with a configurable set of features. These are
maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
an administrator. See `man cpu-models.conf` for format details.

Specified custom types can be selected by any user with the `Sys.Audit`
privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
or API, the name needs to be prefixed with 'custom-'.

[[qm_meltdown_spectre]]
Meltdown / Spectre related CPU flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are several CPU flags related to the Meltdown and Spectre vulnerabilities
footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
manually unless the selected CPU type of your VM already enables them by default.

There are two requirements that need to be fulfilled in order to use these
CPU flags:

* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
* The guest operating system must be updated to a version which mitigates the
  attacks and is able to utilize the CPU feature

Otherwise you need to set the desired CPU flag of the virtual CPU, either by
editing the CPU options in the WebUI, or by setting the 'flags' property of the
'cpu' option in the VM configuration file.

For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
so-called ``microcode update'' for your CPU, see
xref:chapter_firmware_updates[chapter Firmware Updates]. Note that not all
affected CPUs can be updated to support spec-ctrl.

To check if the {pve} host is vulnerable, execute the following command as root:

----
for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
----

A community script is also available to detect if the host is still vulnerable.
footnote:[spectre-meltdown-checker https://meltdown.ovh/]

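As a sketch of how such flags could be set on the 'cpu' option from the CLI (VM
ID 100 and the chosen model are placeholders; note the quoting, since `;`
separates multiple flags):

----
# qm set 100 --cpu 'cputype=kvm64,flags=+pcid;+spec-ctrl'
----
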
Intel processors
^^^^^^^^^^^^^^^^

* 'pcid'
+
This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
mechanism footnote:[PCID is now a critical performance/security feature on x86
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
+
To check if the {pve} host supports PCID, execute the following command as root:
+
----
# grep ' pcid ' /proc/cpuinfo
----
+
If this does not return empty, your host's CPU has support for 'pcid'.

* 'spec-ctrl'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in Intel CPU models with -IBRS suffix.
Must be explicitly turned on for Intel CPU models without -IBRS suffix.
Requires an updated host CPU microcode (intel-microcode >= 20180425).
+
* 'ssbd'
+
Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).

AMD processors
^^^^^^^^^^^^^^

* 'ibpb'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in AMD CPU models with -IBPB suffix.
Must be explicitly turned on for AMD CPU models without -IBPB suffix.
Requires the host CPU microcode to support this feature before it can be used for guest CPUs.

* 'virt-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model.
Must be explicitly turned on for all AMD CPU models.
This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" cpu model,
because this is a virtual feature which does not exist in the physical CPUs.

* 'amd-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.

* 'amd-no-ssb'
+
Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
Not included by default in any AMD CPU model.
Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
This is mutually exclusive with virt-ssbd and amd-ssbd.

NUMA
^^^^
You can also optionally emulate a *NUMA*
footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
in your VMs. The basics of the NUMA architecture mean that instead of having a
global memory pool available to all your cores, the memory is spread into local
banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend to activate the option, as this
will allow proper distribution of the VM resources on the host system.
This option is also required to hot-plug cores or RAM in a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of nodes of the host system.

vCPU hot-plug
^^^^^^^^^^^^^

Modern operating systems introduced the capability to hot-plug and, to a
certain extent, hot-unplug CPUs in a running system. Virtualization allows us
to avoid a lot of the (physical) problems real hardware can cause in such
scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
be replicated with other, well tested and less complicated, features, see
xref:qm_cpu_resource_limits[Resource Limits].

In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with less than this total core count of CPUs you may use the
*vcpus* setting, it denotes how many vCPUs should be plugged in at VM start.

Currently, this feature is only supported on Linux; a kernel newer than 3.10
is needed, a kernel newer than 4.7 is recommended.

You can use a udev rule as follows to automatically set new CPUs as online in
the guest:

----
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
----

Save this under /etc/udev/rules.d/ as a file ending in `.rules`.

NOTE: CPU hot-remove is machine dependent and requires guest cooperation. The
deletion command does not guarantee CPU removal to actually happen, typically
it's a request forwarded to the guest OS using a target dependent mechanism,
such as ACPI on x86/amd64.
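
For example, a VM prepared for hot-plugging could define 4 total cores but have
only 2 plugged in at start (a sketch, with VM ID 100 as a placeholder):

----
# qm set 100 --cores 4 --vcpus 2
----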


[[qm_memory]]
Memory
~~~~~~

For each VM you have the option to set a fixed amount of memory or ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

.Fixed Memory Allocation
[thumbnail="screenshot/gui-create-vm-memory.png"]

When setting memory and minimum memory to the same amount,
{pve} will simply allocate what you specify to your VM.

Even when using a fixed memory size, the ballooning device gets added to the
VM, because it delivers useful information such as how much memory the guest
really uses.
In general, you should leave *ballooning* enabled, but if you want to disable
it (like for debugging purposes), simply uncheck *Ballooning Device* or set

 balloon: 0

in the configuration.

.Automatic Memory Allocation

// see autoballoon() in pvestatd.pm
When setting the minimum memory lower than memory, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified.

When the host is running low on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one is a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
* 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will get 9.6 *
3000 / (3000 + 1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP server will
get 1.6 GB.

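Following that example, the database VM could be configured like this (a
sketch, assuming VM ID 100, 16 GiB maximum and 4 GiB minimum memory; values are
in MiB):

----
# qm set 100 --memory 16384 --balloon 4096 --shares 3000
----
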
All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.


[[qm_network_device]]
Network Device
~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-network.png"]

Each VM can have many _Network interface controllers_ (NIC), of four different
types:

 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
 * the *Realtek 8139* emulates an older 100 MB/s network card, and should
only be used when emulating older operating systems (released before 2002).
 * the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

The NIC you added to the VM can follow one of two different models:

 * in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
 * in the alternative *NAT mode*, each virtual NIC will only communicate with
the QEMU user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP will serve addresses in the private
10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
should only be used for testing. This mode is only available via CLI or the API,
but not via the WebUI.

You can also skip adding a network device when creating a VM by selecting *No
network device*.

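For example, a VirtIO NIC attached to the default bridge could be added like
this (a sketch, with VM ID 100 as a placeholder):

----
# qm set 100 --net0 virtio,bridge=vmbr0
----
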
You can overwrite the *MTU* setting for each VM network device. The option
`mtu=1` represents a special case, in which the MTU value will be inherited
from the underlying bridge.
This option is only available for *VirtIO* network devices.

.Multiqueue
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set in
the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
command:

`ethtool -L ens1 combined X`

where X is the number of vCPUs of the VM.

You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend to set this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.

[[qm_display]]
Display
~~~~~~~

QEMU can virtualize a few types of VGA hardware. Some examples are:

* *std*, the default, emulates a card with Bochs VBE extensions.
* *cirrus*, this was once the default, it emulates a very old hardware module
with all its problems. This display type should only be used if really
necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
qemu: using cirrus considered harmful], for example, if using Windows XP or
earlier.
* *vmware*, is a VMware SVGA-II compatible adapter.
* *qxl*, is the QXL paravirtualized graphics card. Selecting this also
enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
VM.
* *virtio-gl*, often named VirGL, is a virtual 3D GPU for use inside VMs that
  can offload workloads to the host GPU without requiring special (expensive)
  models and drivers, without binding the host GPU completely, allowing
  reuse between multiple guests and/or the host.
+
NOTE: VirGL support needs some extra libraries that aren't installed by
default due to being relatively big and also not available as open source for
all GPU models/vendors. For most setups you'll just need to do:
`apt install libgl1 libegl1`

You can edit the amount of memory given to the virtual GPU, by setting
the 'memory' option. This can enable higher resolutions inside the VM,
especially with SPICE/QXL.

As the memory is reserved by the display device, selecting Multi-Monitor mode
for SPICE (such as `qxl2` for dual monitors) has some implications:

* Windows needs a device for each monitor, so if your 'ostype' is some
version of Windows, {pve} gives the VM an extra device per monitor.
Each device gets the specified amount of memory.

* Linux VMs can always enable more virtual monitors, but selecting
a Multi-Monitor mode multiplies the memory given to the device with
the number of monitors.

Selecting `serialX` as display 'type' disables the VGA output, and redirects
the Web Console to the selected serial port. A configured display 'memory'
setting will be ignored in that case.
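
For example, the QXL display with extra memory for higher resolutions could be
configured like this (a sketch, with VM ID 100 as a placeholder; memory in MiB):

----
# qm set 100 --vga qxl,memory=32
----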

[[qm_usb_passthrough]]
USB Passthrough
~~~~~~~~~~~~~~~

There are two different types of USB passthrough devices:

* Host USB passthrough
* SPICE USB passthrough

Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.

The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same usb device
have the same id.

The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
usb controllers).

If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. If you add one or more
SPICE USB ports to your VM, you can dynamically pass a local USB device from
your SPICE client through to the VM. This can be useful to redirect an input
device or hardware dongle temporarily.

It is also possible to map devices on a cluster level, so that they can be
properly used with HA and hardware changes are detected and non root users
can configure them. See xref:resource_mapping[Resource Mapping]
for details on that.
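
A host device could be passed through either way, for example (a sketch, with
VM ID 100 and the example IDs/port from above as placeholders):

----
# qm set 100 --usb0 host=0123:abcd
# qm set 100 --usb1 host=1-2.3.4
----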

[[qm_bios_and_uefi]]
BIOS and UEFI
~~~~~~~~~~~~~

In order to properly emulate a computer, QEMU needs to use a firmware.
This firmware, on common PCs often known as BIOS or (U)EFI, is executed as one
of the first steps when booting a VM. It is responsible for doing basic hardware
initialization and for providing an interface to the firmware and hardware for
the operating system. By default QEMU uses *SeaBIOS* for this, which is an
open-source, x86 BIOS implementation. SeaBIOS is a good choice for most
standard setups.

Some operating systems (such as Windows 11) may require use of an UEFI
compatible implementation. In such cases, you must use *OVMF* instead,
which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]

There are other scenarios in which SeaBIOS may not be the ideal firmware to
boot from, for example if you want to do VGA passthrough. footnote:[Alex
Williamson has a good blog entry about this
https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]

If you want to use OVMF, there are several things to consider:

In order to save things like the *boot order*, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

----
# qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
----

Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
hardware section of a VM.

The *efitype* option specifies which version of the OVMF firmware should be
used. For new VMs, this should always be '4m', as it supports Secure Boot and
has more space allocated to support future development (this is the default in
the GUI).

*pre-enrolled-keys* specifies if the efidisk should come pre-loaded with
distribution-specific and Microsoft Standard Secure Boot keys. It also enables
Secure Boot by default (though it can still be disabled in the OVMF menu within
the VM).

NOTE: If you want to start using Secure Boot in an existing VM (that still uses
a '2m' efidisk), you need to recreate the efidisk. To do so, delete the old one
(`qm set <vmid> -delete efidisk0`) and add a new one as described above. This
will reset any custom configurations you have made in the OVMF menu!

When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
with a press of the ESC button during boot), or you have to choose
SPICE as the display type.

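Note that the VM's firmware also needs to be switched to OVMF explicitly (a
sketch, with VM ID 100 as a placeholder):

----
# qm set 100 --bios ovmf
----
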
[[qm_tpm]]
Trusted Platform Module (TPM)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A *Trusted Platform Module* is a device which stores secret data - such as
encryption keys - securely and provides tamper-resistance functions for
validating system boot.

Certain operating systems (such as Windows 11) require such a device to be
attached to a machine (be it physical or virtual).

A TPM is added by specifying a *tpmstate* volume. This works similar to an
efidisk, in that it cannot be changed (only removed) once created. You can add
one via the following command:

----
# qm set <vmid> -tpmstate0 <storage>:1,version=<version>
----

Where *<storage>* is the storage you want to put the state on, and *<version>*
is either 'v1.2' or 'v2.0'. You can also add one via the web interface, by
choosing 'Add' -> 'TPM State' in the hardware section of a VM.

The 'v2.0' TPM spec is newer and better supported, so unless you have a specific
implementation that requires a 'v1.2' TPM, it should be preferred.

NOTE: Compared to a physical TPM, an emulated one does *not* provide any real
security benefits. The point of a TPM is that the data on it cannot be modified
easily, except via commands specified as part of the TPM spec. Since with an
emulated device the data storage happens on a regular volume, it can potentially
be edited by anyone with access to it.

[[qm_ivshmem]]
Inter-VM shared memory
~~~~~~~~~~~~~~~~~~~~~~

You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
share memory between the host and a guest, or also between multiple guests.

To add such a device, you can use `qm`:

----
# qm set <vmid> -ivshmem size=32,name=foo
----

Where the size is in MiB. The file will be located under
`/dev/shm/pve-shm-$name` (the default name is the vmid).

NOTE: Currently the device will get deleted as soon as any VM using it gets
shut down or stopped. Open connections will still persist, but new connections
to the exact same device cannot be made anymore.

A use case for such a device is the Looking Glass
footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
performance, low-latency display mirroring between host and guest.

[[qm_audio_device]]
Audio Device
~~~~~~~~~~~~

To add an audio device run the following command:

----
qm set <vmid> -audio0 device=<device>
----

Supported audio devices are:

* `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
* `intel-hda`: Intel HD Audio Controller, emulates ICH6
* `AC97`: Audio Codec '97, useful for older operating systems like Windows XP

There are two backends available:

* 'spice'
* 'none'

The 'spice' backend can be used in combination with xref:qm_display[SPICE] while
the 'none' backend can be useful if an audio device is needed in the VM for some
software to work. To use the physical audio device of the host use device
passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft's RDP
have options to play sound.

[[qm_virtio_rng]]
VirtIO RNG
~~~~~~~~~~

A RNG (Random Number Generator) is a device providing entropy ('randomness') to
a system. A virtual hardware-RNG can be used to provide such entropy from the
host system to a guest VM. This helps to avoid entropy starvation problems in
the guest (a situation where not enough entropy is available and the system may
slow down or run into problems), especially during the guest's boot process.

To add a VirtIO-based emulated RNG, run the following command:

----
qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
----

`source` specifies where entropy is read from on the host and has to be one of
the following:

* `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
* `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
  starvation on the host system)
* `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
  are available, the one selected in
  `/sys/devices/virtual/misc/hw_random/rng_current` will be used)

A limit can be specified via the `max_bytes` and `period` parameters; they are
read as `max_bytes` per `period` in milliseconds. However, it does not represent
a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
available on a 1 second timer, not that 1 KiB is streamed to the guest over the
course of one second. Reducing the `period` can thus be used to inject entropy
into the guest at a faster rate.

By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
recommended to always use a limiter to avoid guests using too many host
resources. If desired, a value of '0' for `max_bytes` can be used to disable
all limits.

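For example, to read from `/dev/urandom` with the default limit stated
explicitly (a sketch, with VM ID 100 as a placeholder):

----
qm set 100 -rng0 source=/dev/urandom,max_bytes=1024,period=1000
----
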
[[qm_bootorder]]
Device Boot Order
~~~~~~~~~~~~~~~~~

QEMU can tell the guest which devices it should boot from, and in which order.
This can be specified in the config via the `boot` property, for example:

----
boot: order=scsi0;net0;hostpci0
----

[thumbnail="screenshot/gui-qemu-edit-bootorder.png"]

This way, the guest would first attempt to boot from the disk `scsi0`; if that
fails, it would go on to attempt network boot from `net0`, and in case that
fails too, finally attempt to boot from a passed through PCIe device (seen as
disk in case of NVMe, otherwise tries to launch into an option ROM).

On the GUI you can use a drag-and-drop editor to specify the boot order, and use
the checkbox to enable or disable certain devices for booting altogether.

NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
all of them must be marked as 'bootable' (that is, they must have the checkbox
enabled or appear in the list in the config) for the guest to be able to boot.
This is because recent SeaBIOS and OVMF versions only initialize disks if they
are marked 'bootable'.

In any case, even devices not appearing in the list or having the checkmark
disabled will still be available to the guest, once its operating system has
booted and initialized them. The 'bootable' flag only affects the guest BIOS and
bootloader.


288e3f46
EK
1074[[qm_startup_and_shutdown]]
1075Automatic Start and Shutdown of Virtual Machines
1076~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1077
1078After creating your VMs, you probably want them to start automatically
1079when the host system boots. For this you need to select the option 'Start at
1080boot' from the 'Options' Tab of your VM in the web interface, or set it with
1081the following command:
1082
32e8b5b2
AL
1083----
1084# qm set <vmid> -onboot 1
1085----
288e3f46 1086
4dbeb548
DM
1087.Start and Shutdown Order
1088
1ff5e4e8 1089[thumbnail="screenshot/gui-qemu-edit-start-order.png"]
4dbeb548
DM
1090
In some cases you may want to fine-tune the boot order of your
VMs, for instance if one of your VMs provides firewalling or DHCP
to other guest systems. For this you can use the following
parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the VM to be the first to be started. (We use the reverse
startup order for shutdown, so a machine with a start order of 1 would be the
last to be shut down). If multiple VMs have the same order defined on a host,
they will additionally be ordered by 'VMID' in ascending order.
* *Startup delay*: Defines the interval between this VM start and subsequent
VMs starts. For example, set it to 240 if you want to wait 240 seconds before
starting other VMs.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the VM to be offline after issuing a shutdown command. By default this
value is set to 180, which means that {pve} will issue a shutdown request and
wait 180 seconds for the machine to be offline. If the machine is still online
after the timeout it will be stopped forcefully.
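
These options correspond to the `startup` config property; a minimal sketch
using the example values from above:

----
# qm set <vmid> -startup order=1,up=240,down=180
----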

NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
'boot order' options currently. Those VMs will be skipped by the startup and
shutdown algorithm as the HA manager itself ensures that VMs get started and
stopped.

Please note that machines without a Start/Shutdown order parameter will always
start after those where the parameter is set. Further, this parameter can only
be enforced between virtual machines running on the same host, not
cluster-wide.

If you require a delay between the host boot and the booting of the first VM,
see the section on xref:first_guest_boot_delay[Proxmox VE Node Management].

[[qm_qemu_agent]]
QEMU Guest Agent
~~~~~~~~~~~~~~~~

The QEMU Guest Agent is a service which runs inside the VM, providing a
communication channel between the host and the guest. It is used to exchange
information and allows the host to issue commands to the guest.

For example, the IP addresses in the VM summary panel are fetched via the guest
agent.

Or when starting a backup, the guest is told via the guest agent to sync
outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.

For the guest agent to work properly the following steps must be taken:

* install the agent in the guest and make sure it is running
* enable the communication via the agent in {pve}

Install Guest Agent
^^^^^^^^^^^^^^^^^^^

For most Linux distributions, the guest agent is available. The package is
usually named `qemu-guest-agent`.

For Windows, it can be installed from the
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
VirtIO driver ISO].

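On a Debian-based guest, for example, installing and activating the agent might
look like this (a sketch; package and service names can vary by distribution):

----
# apt install qemu-guest-agent
# systemctl enable --now qemu-guest-agent
----
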
[[qm_qga_enable]]
Enable Guest Agent Communication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Communication from {pve} with the guest agent can be enabled in the VM's
*Options* panel. A fresh start of the VM is necessary for the changes to take
effect.

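On the CLI, this corresponds to the `agent` property; a minimal sketch:

----
# qm set <vmid> --agent enabled=1
----
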
[[qm_qga_auto_trim]]
Automatic TRIM Using QGA
^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to enable the 'Run guest-trim' option. With this enabled,
{pve} will issue a trim command to the guest after the following
operations that have the potential to write out zeros to the storage:

* moving a disk to another storage
* live migrating a VM to another node with local storage

On a thin provisioned storage, this can help to free up unused space.

NOTE: There is a caveat with ext4 on Linux, because it uses an in-memory
optimization to avoid issuing duplicate TRIM requests. Since the guest doesn't
know about the change in the underlying storage, only the first guest-trim will
run as expected. Subsequent ones, until the next reboot, will only consider
parts of the filesystem that changed since then.
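
On the CLI, 'Run guest-trim' corresponds to the `fstrim_cloned_disks` flag of
the `agent` property; a minimal sketch:

----
# qm set <vmid> --agent enabled=1,fstrim_cloned_disks=1
----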

[[qm_qga_fsfreeze]]
Filesystem Freeze & Thaw on Backup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, guest filesystems are synced via the 'fs-freeze' QEMU Guest Agent
Command when a backup is performed, to provide consistency.

On Windows guests, some applications might handle consistent backups themselves
by hooking into the Windows VSS (Volume Shadow Copy Service) layer; an
'fs-freeze' might then interfere with that. For example, it has been observed
that calling 'fs-freeze' with some SQL Servers triggers VSS to call the SQL
Writer VSS module in a mode that breaks the SQL Server backup chain for
differential backups.

For such setups you can configure {pve} to not issue a freeze-and-thaw cycle on
backup by setting the `freeze-fs-on-backup` QGA option to `0`. This can also be
done via the GUI with the 'Freeze/thaw guest filesystems on backup for
consistency' option.
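
For example, a minimal sketch of disabling the freeze-and-thaw cycle from the
CLI:

----
# qm set <vmid> --agent enabled=1,freeze-fs-on-backup=0
----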

IMPORTANT: Disabling this option can potentially lead to backups with
inconsistent filesystems and should therefore only be disabled if you know what
you are doing.

Troubleshooting
^^^^^^^^^^^^^^^

.VM does not shut down

Make sure the guest agent is installed and running.

Once the guest agent is enabled, {pve} will send power commands like
'shutdown' via the guest agent. If the guest agent is not running, commands
cannot get executed properly and the shutdown command will run into a timeout.

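You can check from the host whether the agent channel works; a minimal sketch
using the agent's ping command:

----
# qm agent <vmid> ping
----

If the agent is installed, running, and enabled, this should return without an
error.
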
[[qm_spice_enhancements]]
SPICE Enhancements
~~~~~~~~~~~~~~~~~~

SPICE Enhancements are optional features that can improve the remote viewer
experience.

To enable them via the GUI go to the *Options* panel of the virtual machine. Run
the following command to enable them via the CLI:

----
# qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
----

NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
must be set to SPICE (qxl).

Folder Sharing
^^^^^^^^^^^^^^

Share a local folder with the guest. The `spice-webdavd` daemon needs to be
installed in the guest. It makes the shared folder available through a local
WebDAV server located at http://localhost:9843.

For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded
from the
https://www.spice-space.org/download.html#windows-binaries[official SPICE website].

Most Linux distributions have a package called `spice-webdavd` that can be
installed.

To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
Select the folder to share and then enable the checkbox.

NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.

CAUTION: Experimental! Currently this feature does not work reliably.

Video Streaming
^^^^^^^^^^^^^^^

Fast refreshing areas are encoded into a video stream. Two options exist:

* *all*: Any fast refreshing area will be encoded into a video stream.
* *filter*: Additional filters are used to decide if video streaming should be
  used (currently only small window surfaces are skipped).

No general recommendation can be given on whether video streaming should be
enabled, or which option to choose; your mileage may vary depending on the
specific circumstances.

Troubleshooting
^^^^^^^^^^^^^^^

.Shared folder does not show up

Make sure the WebDAV service is enabled and running in the guest. On Windows it
is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be
different depending on the distribution.

If the service is running, check the WebDAV server by opening
http://localhost:9843 in a browser in the guest.

It can help to restart the SPICE session.

[[qm_migration]]
Migration
---------

[thumbnail="screenshot/gui-qemu-migrate.png"]

If you have a cluster, you can migrate your VM to another host with

----
# qm migrate <vmid> <target>
----

There are generally two mechanisms for this

* Online Migration (aka Live Migration)
* Offline Migration

Online Migration
~~~~~~~~~~~~~~~~

If your VM is running and no locally bound resources are configured (such as
devices that are passed through), you can initiate a live migration with the
`--online` flag in the `qm migrate` command invocation. The web interface
defaults to live migration when the VM is running.

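For example, a minimal sketch:

----
# qm migrate <vmid> <target> --online
----
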
How it works
^^^^^^^^^^^^

Online migration first starts a new QEMU process on the target host with the
'incoming' flag, which performs only basic initialization with the guest vCPUs
still paused, and then waits for the guest memory and device state data streams
of the source Virtual Machine.
All other resources, such as disks, are either shared or have already been sent
before the runtime state migration of the VM begins; so only the memory content
and device state remain to be transferred.

Once this connection is established, the source begins asynchronously sending
the memory content to the target. If the guest memory on the source changes,
those sections are marked dirty and another pass is made to send the guest
memory data.
This loop is repeated until the data difference between the running source VM
and the incoming target VM is small enough to be sent in a few milliseconds. At
that point the source VM can be paused completely, without a user or program
noticing the pause, the remaining data sent to the target, and the target VM's
CPU unpaused, making it the new running VM in well under a second.

Requirements
^^^^^^^^^^^^

For Live Migration to work, there are some things required:

* The VM has no local resources that cannot be migrated. For example,
  PCI or USB devices that are passed through currently block live-migration.
  Local disks, on the other hand, can be migrated by sending them to the target
  just fine.
* The hosts are located in the same {pve} cluster.
* The hosts have a working (and reliable) network connection between them.
* The target host must have the same, or higher versions of the
  {pve} packages. Although it can sometimes work the other way around, this
  cannot be guaranteed.
* The hosts have CPUs from the same vendor with similar capabilities. Different
  vendors *might* work depending on the actual models and the VM's configured
  CPU type, but it cannot be guaranteed - so please test before deploying
  such a setup in production.

Offline Migration
~~~~~~~~~~~~~~~~~

If you have local resources, you can still migrate your VMs offline as long as
all disks are on storage defined on both hosts.
Migration then copies the disks to the target host over the network, as with
online migration. Note that any hardware passthrough configuration may need to
be adapted to the device location on the target host.

// TODO: mention hardware map IDs as better way to solve that, once available

[[qm_copy_and_clone]]
Copies and Clones
-----------------

[thumbnail="screenshot/gui-qemu-full-clone.png"]

VM installation is usually done using an installation media (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time consuming task one might want to avoid.

An easy way to deploy many VMs of the same type is to copy an existing
VM. We use the term 'clone' for such copies, and distinguish between
'linked' and 'full' clones.

Full Clone::

The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
+

It is possible to select a *Target Storage*, so one can use this to
migrate a VM to a totally different storage. You can also change the
disk image *Format* if the storage driver supports several formats.
+

NOTE: A full clone needs to read and copy all VM image data. This is
usually much slower than creating a linked clone.
+

Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.

Linked Clone::

Modern storage drivers support a way to generate fast linked
clones. Such a clone is a writable copy whose initial contents are the
same as the original data. Creating a linked clone is nearly
instantaneous, and initially consumes no additional space.
+

They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
location. This technique is called 'Copy-on-write'.
+

This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
+

NOTE: You cannot delete an original template while linked clones
exist.
+

It is not possible to change the *Target storage* for linked clones,
because this is a storage internal feature.

The *Target node* option allows you to create the new VM on a
different node. The only restriction is that the VM is on shared
storage, and that storage is also available on the target node.

To avoid resource conflicts, all network interface MAC addresses get
randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
setting.

[[qm_templates]]
Virtual Machine Templates
-------------------------

One can convert a VM into a Template. Such templates are read-only,
and you can use them to create linked clones.

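The conversion can be done in the GUI or on the command line; a minimal sketch:

----
# qm template <vmid>
----
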
NOTE: It is not possible to start templates, because this would modify
the disk images. If you want to change the template, create a linked
clone and modify that.

VM Generation ID
----------------

{pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
'vmgenid' Specification
https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
for virtual machines.
This can be used by the guest operating system to detect any event that results
in a time shift, for example, restoring a backup or a snapshot rollback.

When creating new VMs, a 'vmgenid' will be automatically generated and saved
in its configuration file.

To create and add a 'vmgenid' to an already existing VM, one can pass the
special value `1' to let {pve} autogenerate one, or manually set the 'UUID'
footnote:[Online GUID generator http://guid.one/] by using it as the value, for
example:

----
# qm set VMID -vmgenid 1
# qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
----

NOTE: The initial addition of a 'vmgenid' device to an existing VM may have the
same effects as a snapshot rollback or backup restore, as the VM can interpret
this as a generation change.

In the rare case that the 'vmgenid' mechanism is not wanted, one can pass `0'
for its value on VM creation, or retroactively delete the property from the
configuration with:

----
# qm set VMID -delete vmgenid
----

The most prominent use case for 'vmgenid' are newer Microsoft Windows
operating systems, which use it to avoid problems in time-sensitive or
replicated services (such as databases or domain controllers
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole VM clone operation.

Importing Virtual Machines and disk images
------------------------------------------

A VM export from a foreign hypervisor usually takes the form of one or more disk
images, with a configuration file describing the settings of the VM (RAM,
number of cores). +
The disk images can be in the vmdk format, if the disks come from
VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
The most popular configuration format for VM exports is the OVF standard, but in
practice interoperation is limited because many settings are not implemented in
the standard itself, and hypervisors export the supplementary information
in non-standard extensions.

Besides the problem of format, importing disk images from other hypervisors
may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
installing the MergeIDE.zip utility available from the Internet before exporting
and choosing a hard disk type of *IDE* before booting the imported Windows VM.

Finally there is the question of paravirtualized drivers, which improve the
speed of the emulated system and are specific to the hypervisor.
GNU/Linux and other free Unix OSes have all the necessary drivers installed by
default and you can switch to the paravirtualized drivers right after importing
the VM. For Windows VMs, you need to install the Windows paravirtualized
drivers yourself.

GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
that we cannot guarantee a successful import/export of Windows VMs in all
cases due to the problems above.

Step-by-step example of a Windows OVF import
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.

Download the Virtual Machine zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After reviewing the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.

Extract the disk image from the zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy the ovf and vmdk files to your {pve} host via ssh/scp.

Import the Virtual Machine
^^^^^^^^^^^^^^^^^^^^^^^^^^

The following command will create a new virtual machine, using cores, memory
and VM name as read from the OVF manifest, and import the disks to the
+local-lvm+ storage. You have to configure the network manually.

----
# qm importovf 999 WinDev1709Eval.ovf local-lvm
----

The VM is ready to be started.

Adding an external disk image to a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.

Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:

 vmdebootstrap --verbose \
  --size 10GiB --serial-console \
  --grub --no-extlinux \
  --package openssh-server \
  --package avahi-daemon \
  --package qemu-guest-agent \
  --hostname vm600 --enable-dhcp \
  --customize=./copy_pub_ssh.sh \
  --sparse --image vm600.raw

You can now create a new target VM, importing the image to the storage `pvedir`
and attaching it to the VM's SCSI controller:

----
# qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
   --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
   --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
----

The VM is ready to be started.


ifndef::wiki[]
include::qm-cloud-init.adoc[]
endif::wiki[]

ifndef::wiki[]
include::qm-pci-passthrough.adoc[]
endif::wiki[]

Hookscripts
-----------

You can add a hook script to VMs with the config property `hookscript`.

----
# qm set 100 --hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime.
For an example and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

[[qm_hibernate]]
Hibernation
-----------

You can suspend a VM to disk with the GUI option `Hibernate` or with

----
# qm suspend ID --todisk
----

That means that the current content of the memory will be saved onto disk
and the VM gets stopped. On the next start, the memory content will be
loaded and the VM can continue where it left off.

[[qm_vmstatestorage]]
.State storage selection
If no target storage for the memory is given, it will be automatically
chosen, the first of:

1. The storage `vmstatestorage` from the VM config.
2. The first shared storage from any VM disk.
3. The first non-shared storage from any VM disk.
4. The storage `local` as a fallback.

[[resource_mapping]]
Resource Mapping
----------------

[thumbnail="screenshot/gui-datacenter-resource-mappings.png"]

When using or referencing local resources (for example, the address of a PCI
device), using the raw address or ID is sometimes problematic, for example:

* when using HA, a different device with the same ID or path may exist on the
  target node, and if one is not careful when assigning such guests to HA
  groups, the wrong device could be used, breaking configurations.

* changing hardware can change IDs and paths, so one would have to check all
  assigned devices and see if the path or ID is still correct.

To handle this better, one can define cluster-wide resource mappings, such that
a resource has a cluster-unique, user-selected identifier which can correspond
to different devices on different hosts. With this, HA won't start a guest with
a wrong device, and hardware changes can be detected.

Creating such a mapping can be done with the {pve} web GUI under `Datacenter`
in the relevant tab in the `Resource Mappings` category, or on the CLI with

----
# pvesh create /cluster/mapping/<type> <options>
----

[thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]

Where `<type>` is the hardware type (currently either `pci` or `usb`) and
`<options>` are the device mappings and other configuration parameters.

Note that the options must include a map property with all identifying
properties of that hardware, so that it's possible to verify the hardware did
not change and the correct device is passed through.

For example, to add a PCI device as `device1` with the path `0000:01:00.0` that
has the device id `0001` and the vendor id `0002` on the node `node1`, and
`0000:02:00.0` on `node2`, you can add it with:

----
# pvesh create /cluster/mapping/pci --id device1 \
 --map node=node1,path=0000:01:00.0,id=0002:0001 \
 --map node=node2,path=0000:02:00.0,id=0002:0001
----

You must repeat the `map` parameter for each node where that device should have
a mapping (note that you can currently only map one USB device per node per
mapping).

Using the GUI makes this much easier, as the correct properties are
automatically picked up and sent to the API.

[thumbnail="screenshot/gui-datacenter-mapping-usb-edit.png"]

It's also possible for PCI devices to provide multiple devices per node with
multiple map properties for the nodes. If such a device is assigned to a guest,
the first free one will be used when the guest is started. The order of the
paths given is also the order in which they are tried, so arbitrary allocation
policies can be implemented.

This is useful for devices with SR-IOV, since sometimes it is not important
which exact virtual function is passed through.

You can assign such a device to a guest either with the GUI or with

----
# qm set <vmid> -hostpci0 <name>
----

for PCI devices, or

----
# qm set <vmid> -usb0 <name>
----

for USB devices.

Where `<vmid>` is the guest's ID and `<name>` is the chosen name for the
created mapping. All usual options for passing through the devices are allowed,
such as `mdev`.

To create mappings, `Mapping.Modify` on `/mapping/<type>/<name>` is necessary
(where `<type>` is the device type and `<name>` is the name of the mapping).

To use these mappings, `Mapping.Use` on `/mapping/<type>/<name>` is necessary
(in addition to the normal guest privileges to edit the configuration).

Managing Virtual Machines with `qm`
-----------------------------------

qm is the tool to manage QEMU/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Using an ISO file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage:

----
# qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
----

Start the new VM:

----
# qm start 300
----

Send a shutdown request, then wait until the VM is stopped:

----
# qm shutdown 300 && qm wait 300
----

Same as above, but only wait for 40 seconds:

----
# qm shutdown 300 && qm wait 300 -timeout 40
----

Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
'--purge', if you want to additionally remove the VM from replication jobs,
backup jobs and HA resource configurations.

----
# qm destroy 300 --purge
----

Move a disk image to a different storage:

----
# qm move-disk 300 scsi0 other-storage
----

Reassign a disk image to a different VM. This will remove the disk `scsi1` from
the source VM and attach it as `scsi3` to the target VM. In the background,
the disk image is renamed so that the name matches the new owner.

----
# qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
----

[[qm_configuration]]
Configuration
-------------

VM configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
Like other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster wide.

.Example VM Configuration
----
boot: order=virtio0;net0
cores: 1
sockets: 1
memory: 512
name: webmail
ostype: l26
net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
virtio0: local:vm-100-disk-1,size=32G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful for making small corrections, but keep in mind that you need to
restart the VM to apply such changes.

For that reason, it is usually better to use the `qm` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to a
running VM. This feature is called "hot plug", and there is no
need to restart the VM in that case.

File Format
~~~~~~~~~~~

VM configuration files use a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.


[[qm_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `qm` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.VM configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).

You can optionally save the memory of a running VM with the option `vmstate`.
For details about how the target storage gets chosen for the VM state, see
xref:qm_vmstatestorage[State storage selection] in the chapter
xref:qm_hibernate[Hibernation].

[[qm_options]]
Options
~~~~~~~

include::qm.conf.5-opts.adoc[]

Locks
-----

Online migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected VMs. Sometimes you need to
remove such a lock manually (for example after a power failure).

----
# qm unlock <vmid>
----

CAUTION: Only do that if you are sure the action which set the lock is
no longer running.

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Cloud-Init_Support[Cloud-Init Support]

endif::wiki[]


ifdef::manvolnum[]

Files
-----

`/etc/pve/qemu-server/<VMID>.conf`::

Configuration file for the VM '<VMID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]