[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
:pve-toplevel:

NAME
----

qm - QEMU/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
QEMU/KVM Virtual Machines
=========================
:pve-toplevel:
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

QEMU (short form for Quick Emulator) is an open source hypervisor that emulates
a physical computer. From the perspective of the host system where QEMU is
running, QEMU is a user program which has access to a number of local resources
like partitions, files, and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can
pass an ISO image as a parameter to QEMU, and the OS running in the emulated
computer will see a real CD-ROM inserted into a CD drive.

QEMU can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also
one of the fastest due to the availability of processor extensions which
greatly speed up QEMU when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that QEMU is running with the support of the virtualization processor
extensions, via the Linux KVM module. In the context of {pve}, _QEMU_ and
_KVM_ can be used interchangeably, as QEMU in {pve} will always try to load the
KVM module.

QEMU inside {pve} runs as a root process, since this is required to access block
and PCI devices.


Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by QEMU includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, and serial ports (the complete list can be seen
in the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows QEMU to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
QEMU can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside QEMU and cooperates with the
hypervisor.

QEMU relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc.

TIP: It is *highly recommended* to use the virtio devices whenever you can, as
they provide a big performance improvement and are generally better maintained.
Using the virtio generic disk controller versus an emulated IDE controller will
double the sequential write throughput, as measured with `bonnie++(8)`. Using
the virtio network interface can deliver up to three times the throughput of an
emulated Intel E1000 network card, as measured with `iperf(1)`. footnote:[See
this benchmark on the KVM wiki https://www.linux-kvm.org/page/Using_VirtIO_NIC]


[[qm_virtual_machines_settings]]
Virtual Machines Settings
-------------------------

Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as they
could incur a performance slowdown or put your data at risk.


[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-general.png"]

General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs

[[qm_os_settings]]
OS Settings
~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-os.png"]

When creating a virtual machine (VM), setting the proper Operating System (OS)
allows {pve} to optimize some low level parameters. For instance, a Windows OS
expects the BIOS clock to use local time, while a Unix based OS expects the
BIOS clock to use UTC time.

[[qm_system_settings]]
System Settings
~~~~~~~~~~~~~~~

On VM creation you can change some basic system components of the new VM. You
can specify which xref:qm_display[display type] you want to use.
[thumbnail="screenshot/gui-create-vm-system.png"]
Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
If you plan to install the QEMU Guest Agent, or if your selected ISO image
already ships and installs it automatically, you may want to tick the 'QEMU
Agent' box, which lets {pve} know that it can use its features to show some
more information, and complete some actions (for example, shutdown or
snapshots) more intelligently.

{pve} allows VMs to boot with different firmware and machine types, namely
xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
the default SeaBIOS to OVMF only if you plan to use
xref:qm_pci_passthrough[PCIe passthrough]. A VM's 'Machine Type' defines the
hardware layout of the VM's virtual motherboard. You can choose between the
default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be desired if
one wants to pass through PCIe hardware.

[[qm_hard_disk]]
Hard Disk
~~~~~~~~~

[[qm_hard_disk_bus]]
Bus/Controller
^^^^^^^^^^^^^^
QEMU can emulate a number of storage controllers:

TIP: It is highly recommended to use the *VirtIO SCSI* or *VirtIO Block*
controller for performance reasons and because they are better maintained.

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices on this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default a
LSI 53C895A controller.
+
A SCSI controller of type _VirtIO SCSI single_ and enabling the
xref:qm_hard_disk_iothread[IO Thread] setting for the attached disks is
recommended if you aim for performance. This is the default for newly created
Linux VMs since {pve} 7.3. Each disk will have its own _VirtIO SCSI_ controller,
and QEMU will handle the disk's IO in a dedicated thread. Linux distributions
have support for this controller since 2012, and FreeBSD since 2014. For Windows
OSes, you need to provide an extra ISO containing the drivers during the
installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.

* The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
is an older type of paravirtualized controller. It has been superseded by the
VirtIO SCSI Controller in terms of features.
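
For example, the controller type and a newly allocated disk can be set from the
command line; a minimal sketch, assuming a hypothetical VM 100 and a storage
named `local-lvm`:

----
# select the recommended VirtIO SCSI single controller type
qm set 100 -scsihw virtio-scsi-single
# allocate a new 32 GiB disk on local-lvm and attach it as scsi0
qm set 100 -scsi0 local-lvm:32
----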

[thumbnail="screenshot/gui-create-vm-hard-disk.png"]

[[qm_hard_disk_formats]]
Image Format
^^^^^^^^^^^^
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

 * the *QEMU image format* is a copy on write format which allows snapshots, and
 thin provisioning of the disk image.
 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
 you would get when executing the `dd` command on a block device in Linux. This
 format does not support thin provisioning or snapshots by itself, requiring
 cooperation from the storage layer for these tasks. It may, however, be up to
 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
 https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
 * the *VMware image format* only makes sense if you intend to import/export the
 disk image to other hypervisors.

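On a file based storage, the format can be chosen explicitly when adding a
disk; a minimal sketch, assuming a hypothetical VM 100 and a directory storage
named `local`:

----
# allocate a 32 GiB disk in the QEMU image format (qcow2)
qm set 100 -scsi1 local:32,format=qcow2
----
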
[[qm_hard_disk_cache]]
Cache Mode
^^^^^^^^^^
Setting the *Cache* mode of the hard drive will impact how the host system will
notify the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires replication to be skipped for this disk image.
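
These are all per-disk options; a minimal sketch, assuming a hypothetical VM
100 whose disk `scsi0` is the existing volume `local-lvm:vm-100-disk-0`:

----
# exclude the disk from backups and replication
qm set 100 -scsi0 local-lvm:vm-100-disk-0,backup=0,replicate=0
----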

[[qm_hard_disk_discard]]
Trim/Discard
^^^^^^^^^^^^
If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), you can activate the *Discard* option on a drive. With *Discard*
set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
marks blocks as unused after deleting files, the controller will relay this
information to the storage, which will then shrink the disk image accordingly.
For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
option on the drive. Some guest operating systems may also require the
*SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
only supported on guests using Linux Kernel 5.0 or higher.

If you would like a drive to be presented to the guest as a solid-state drive
rather than a rotational hard disk, you can set the *SSD emulation* option on
that drive. There is no requirement that the underlying storage actually be
backed by SSDs; this feature can be used with physical media of any type.
Note that *SSD emulation* is not supported on *VirtIO Block* drives.
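
A minimal sketch combining both options, again assuming a hypothetical VM 100
with the existing volume `local-lvm:vm-100-disk-0`:

----
# present the disk as an SSD and relay TRIM commands to the storage
qm set 100 -scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
----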

[[qm_hard_disk_iothread]]
IO Thread
^^^^^^^^^
The option *IO Thread* can only be used when using a disk with the *VirtIO*
controller, or with the *SCSI* controller, when the emulated controller type is
*VirtIO SCSI single*. With *IO Thread* enabled, QEMU creates one I/O thread per
storage controller rather than handling all I/O in the main event loop or vCPU
threads. One benefit is better work distribution and utilization of the
underlying storage. Another benefit is reduced latency (hangs) in the guest for
very I/O-intensive host workloads, since neither the main thread nor a vCPU
thread can be blocked by disk I/O.
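
A minimal sketch enabling the option for an existing disk of a hypothetical VM
100:

----
# VirtIO SCSI single controller plus a dedicated I/O thread for the disk
qm set 100 -scsihw virtio-scsi-single -scsi0 local-lvm:vm-100-disk-0,iothread=1
----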

[[qm_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-vm-cpu.png"]

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores, is mostly irrelevant from a performance point of view.
However, some software licenses depend on the number of sockets a machine has;
in that case it makes sense to set the number of sockets to what the license
allows you.

Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multi-threaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, QEMU will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.

NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
is greater than the number of cores on the server (for example, 4 VMs each with
4 cores (= total 16) on a machine with only 8 cores). In that case the host
system will balance the QEMU execution threads between your server cores, just
as if you were running a standard multi-threaded application. However, {pve}
will prevent you from starting VMs with more virtual CPU cores than physically
available, as this will only bring performance down due to the cost of
context switches.
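
Sockets and cores are plain VM options; a minimal sketch for a hypothetical VM
100:

----
# one socket with four cores, i.e. four vCPUs in total
qm set 100 -sockets 1 -cores 4
----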

[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^

In addition to the number of virtual cores, you can configure how many resources
a VM can get in relation to the host CPU time and also in relation to other
VMs.
With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
single process would fully use one single core it would have `100%` CPU Time
usage. If a VM with four cores utilizes all its cores fully it would
theoretically use `400%`. In reality the usage may be even a bit higher, as QEMU
can have additional threads for VM peripherals besides the vCPU core ones.
This setting can be useful if a VM should have multiple vCPUs, as it runs a few
processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
which would profit from having 8 vCPUs, but at no time should all of those 8
cores run at full load - as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set the *cpulimit* limit to
`4.0` (=400%). If all cores do the same heavy work they would all get 50% of a
real host core's CPU time. But if only 4 were doing work, they could still get
almost 100% of a real core each.
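
A minimal sketch matching this example, for a hypothetical VM 100 with 8 vCPUs:

----
# cap the whole VM at the equivalent of four fully used host cores
qm set 100 -cpulimit 4
----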

NOTE: VMs can, depending on their configuration, use additional threads, such
as for networking or IO operations but also live migration. Thus a VM can show
up to use more CPU time than just its virtual CPUs could use. To ensure that a
VM never uses more CPU time than its virtual CPUs are assigned, set the
*cpulimit* setting to the same value as the total core count.

The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
shares or CPU weight), controls how much CPU time a VM gets compared to other
running VMs. It is a relative weight which defaults to `100` (or `1024` if the
host uses legacy cgroup v1). If you increase this for a VM it will be
prioritized by the scheduler in comparison to other VMs with lower weight. For
example, if VM 100 has set the default `100` and VM 200 was changed to `200`,
the latter VM 200 would receive twice the CPU bandwidth of the first VM 100.
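
A minimal sketch reproducing that weighting:

----
# double the scheduling weight of VM 200 relative to a default VM
qm set 200 -cpuunits 200
----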

For more information see `man systemd.resource-control`, where `CPUQuota`
corresponds to `cpulimit` and `CPUWeight` corresponds to our `cpuunits`
setting; visit its Notes section for references and implementation details.

The third CPU resource limiting setting, *affinity*, controls what host cores
the virtual machine will be permitted to execute on. E.g., if an affinity value
of `0-3,8-11` is provided, the virtual machine will be restricted to using the
host cores `0,1,2,3,8,9,10,` and `11`. Valid *affinity* values are written in
cpuset `List Format`. List Format is a comma-separated list of CPU numbers and
ranges of numbers, in ASCII decimal.

NOTE: CPU *affinity* uses the `taskset` command to restrict virtual machines to
a given set of cores. This restriction will not take effect for some types of
processes that may be created for IO. *CPU affinity is not a security feature.*

For more information regarding *affinity* see `man cpuset`. Here the
`List Format` corresponds to valid *affinity* values. Visit its `Formats`
section for more examples.
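
A minimal sketch pinning a hypothetical VM 100 to the cores from the example
above:

----
# restrict the VM's threads to host cores 0-3 and 8-11
qm set 100 -affinity 0-3,8-11
----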

CPU Type
^^^^^^^^

QEMU can emulate a number of different *CPU types* from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, etc. Also,
a current generation can be upgraded through
xref:chapter_firmware_updates[microcode update] with bug or security fixes.

Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host* in which case the VM will have exactly the same CPU flags
as your host system.

This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type
or a different microcode version.
If the CPU flags passed to the guest are missing on that system, the QEMU
process will stop. To remedy this, QEMU also has its own virtual CPU types,
which {pve} uses by default.

The backend default is 'kvm64' which works on essentially all x86_64 host CPUs
and the UI default when creating a new VM is 'x86-64-v2-AES', which requires a
host CPU starting from Westmere for Intel or at least a fourth generation
Opteron for AMD.

In short:

If you don't care about live migration or have a homogeneous cluster where all
nodes have the same CPU and same microcode version, set the CPU type to host, as
in theory this will give your guests maximum performance.

If you care about live migration and security, and you have only Intel CPUs or
only AMD CPUs, choose the lowest generation CPU model of your cluster.

If you care about live migration without security, or have a mixed Intel/AMD
cluster, choose the lowest compatible virtual QEMU CPU type.

NOTE: Live migrations between Intel and AMD host CPUs have no guarantee to work.

See also
xref:chapter_qm_vcpu_list[List of AMD and Intel CPU Types as Defined in QEMU].
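
A minimal sketch setting the type for a hypothetical VM 100:

----
# maximum performance on a homogeneous cluster
qm set 100 -cpu host
# or a portable virtual type for mixed clusters
qm set 100 -cpu x86-64-v2-AES
----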

QEMU CPU Types
^^^^^^^^^^^^^^

QEMU also provides virtual CPU types, compatible with both Intel and AMD host
CPUs.

NOTE: To mitigate the Spectre vulnerability for virtual CPU types, you need to
add the relevant CPU flags, see
xref:qm_meltdown_spectre[Meltdown / Spectre related CPU flags].

Historically, {pve} had the 'kvm64' CPU model, with CPU flags at the level of
Pentium 4 enabled, so performance was not great for certain workloads.

In the summer of 2020, AMD, Intel, Red Hat, and SUSE collaborated to define
three x86-64 microarchitecture levels on top of the x86-64 baseline, with modern
flags enabled. For details, see the
https://gitlab.com/x86-psABIs/x86-64-ABI[x86-64-ABI specification].

NOTE: Some newer distributions like CentOS 9 are now built with 'x86-64-v2'
flags as a minimum requirement.

* 'kvm64 (x86-64-v1)': Compatible with Intel CPU >= Pentium 4, AMD CPU >=
Phenom.
+
* 'x86-64-v2': Compatible with Intel CPU >= Nehalem, AMD CPU >= Opteron_G3.
Added CPU flags compared to 'x86-64-v1': '+cx16', '+lahf-lm', '+popcnt', '+pni',
'+sse4.1', '+sse4.2', '+ssse3'.
+
* 'x86-64-v2-AES': Compatible with Intel CPU >= Westmere, AMD CPU >= Opteron_G4.
Added CPU flags compared to 'x86-64-v2': '+aes'.
+
* 'x86-64-v3': Compatible with Intel CPU >= Broadwell, AMD CPU >= EPYC. Added
CPU flags compared to 'x86-64-v2-AES': '+avx', '+avx2', '+bmi1', '+bmi2',
'+f16c', '+fma', '+movbe', '+xsave'.
+
* 'x86-64-v4': Compatible with Intel CPU >= Skylake, AMD CPU >= EPYC v4 Genoa.
Added CPU flags compared to 'x86-64-v3': '+avx512f', '+avx512bw', '+avx512cd',
'+avx512dq', '+avx512vl'.

Custom CPU Types
^^^^^^^^^^^^^^^^

You can specify custom CPU types with a configurable set of features. These are
maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
an administrator. See `man cpu-models.conf` for format details.

Specified custom types can be selected by any user with the `Sys.Audit`
privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
or API, the name needs to be prefixed with 'custom-'.

[[qm_meltdown_spectre]]
Meltdown / Spectre related CPU flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are several CPU flags related to the Meltdown and Spectre vulnerabilities
footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
manually unless the selected CPU type of your VM already enables them by default.

There are two requirements that need to be fulfilled in order to use these
CPU flags:

* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
* The guest operating system must be updated to a version which mitigates the
  attacks and is able to utilize the CPU feature

Otherwise you need to set the desired CPU flag of the virtual CPU, either by
editing the CPU options in the WebUI, or by setting the 'flags' property of the
'cpu' option in the VM configuration file.
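
A minimal sketch setting flags from the CLI for a hypothetical VM 100 (the
quotes are needed because flags are separated by semicolons):

----
# enable the pcid and spec-ctrl flags on top of the kvm64 type
qm set 100 -cpu 'kvm64,flags=+pcid;+spec-ctrl'
----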

For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
so-called ``microcode update'' for your CPU, see
xref:chapter_firmware_updates[chapter Firmware Updates]. Note that not all
affected CPUs can be updated to support spec-ctrl.


To check if the {pve} host is vulnerable, execute the following command as root:

----
for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
----

A community script is also available to detect if the host is still vulnerable.
footnote:[spectre-meltdown-checker https://meltdown.ovh/]

Intel processors
^^^^^^^^^^^^^^^^

* 'pcid'
+
This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
mechanism footnote:[PCID is now a critical performance/security feature on x86
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
+
To check if the {pve} host supports PCID, execute the following command as root:
+
----
# grep ' pcid ' /proc/cpuinfo
----
+
If this does not return empty, your host's CPU has support for 'pcid'.

* 'spec-ctrl'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in Intel CPU models with -IBRS suffix.
Must be explicitly turned on for Intel CPU models without -IBRS suffix.
Requires an updated host CPU microcode (intel-microcode >= 20180425).
+
* 'ssbd'
+
Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).


AMD processors
^^^^^^^^^^^^^^

* 'ibpb'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in AMD CPU models with -IBPB suffix.
Must be explicitly turned on for AMD CPU models without -IBPB suffix.
Requires the host CPU microcode to support this feature before it can be used for guest CPUs.


* 'virt-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model.
Must be explicitly turned on for all AMD CPU models.
This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" cpu model,
because this is a virtual feature which does not exist in the physical CPUs.


* 'amd-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.


* 'amd-no-ssb'
+
Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
Not included by default in any AMD CPU model.
Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
This is mutually exclusive with virt-ssbd and amd-ssbd.

NUMA
^^^^
You can also optionally emulate a *NUMA*
footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
in your VMs. The basics of the NUMA architecture mean that instead of having a
global memory pool available to all your cores, the memory is spread into local
banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
will allow proper distribution of the VM resources on the host system.
This option is also required to hot-plug cores or RAM in a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of nodes of the host system.
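
A minimal sketch for a host with two NUMA nodes and a hypothetical VM 100:

----
# two sockets to mirror the host's two NUMA nodes, with NUMA enabled
qm set 100 -sockets 2 -cores 2 -numa 1
----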

vCPU hot-plug
^^^^^^^^^^^^^

Modern operating systems introduced the capability to hot-plug and, to a
certain extent, hot-unplug CPUs in a running system. Virtualization allows us
to avoid a lot of the (physical) problems real hardware can cause in such
scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
be replicated with other, well tested and less complicated, features, see
xref:qm_cpu_resource_limits[Resource Limits].

In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with less than this total core count of CPUs you may use the
*vcpus* setting, which denotes how many vCPUs should be plugged in at VM start.

Currently this feature is only supported on Linux; a kernel newer than 3.10
is needed, a kernel newer than 4.7 is recommended.

You can use a udev rule as follows to automatically set new CPUs as online in
the guest:

----
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
----

Save this under /etc/udev/rules.d/ as a file ending in `.rules`.
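
A minimal sketch of hot-plugging, assuming a hypothetical VM 100 whose
'hotplug' option includes 'cpu':

----
# start the VM with 2 of 4 possible vCPUs plugged in
qm set 100 -cores 4 -vcpus 2
# later, while the VM is running, plug in a third vCPU
qm set 100 -vcpus 3
----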

Note: CPU hot-remove is machine dependent and requires guest cooperation. The
deletion command does not guarantee CPU removal to actually happen; typically
it's a request forwarded to the guest OS using a target dependent mechanism,
such as ACPI on x86/amd64.


[[qm_memory]]
Memory
~~~~~~

For each VM you have the option to set a fixed size memory or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

.Fixed Memory Allocation
[thumbnail="screenshot/gui-create-vm-memory.png"]

When setting memory and minimum memory to the same amount
{pve} will simply allocate what you specify to your VM.

Even when using a fixed memory size, the ballooning device gets added to the
VM, because it delivers useful information such as how much memory the guest
really uses.
In general, you should leave *ballooning* enabled, but if you want to disable
it (like for debugging purposes), simply uncheck *Ballooning Device* or set

 balloon: 0

in the configuration.

.Automatic Memory Allocation

// see autoballoon() in pvestatd.pm
When setting the minimum memory lower than memory, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified.

When the host is running low on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one is a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will
get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP
server will get 1.6 GB.
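
A minimal sketch for the database VM in this scenario (hypothetical VM ID 104,
values in MiB):

----
# 16 GiB maximum, 8 GiB guaranteed minimum, triple weight for spare RAM
qm set 104 -memory 16384 -balloon 8192 -shares 3000
----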

All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.


[[qm_network_device]]
Network Device
~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-network.png"]

Each VM can have many _Network interface controllers_ (NICs), of four different
types:

 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
 * the *Realtek 8139* emulates an older 100 MBit/s network card, and should
only be used when emulating older operating systems (released before 2002).
 * the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

The NIC you added to the VM can follow one of two different models:

 * in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
 * in the alternative *NAT mode*, each virtual NIC will only communicate with
the QEMU user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP will serve addresses in the private
10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
should only be used for testing. This mode is only available via CLI or the API,
but not via the WebUI.

You can also skip adding a network device when creating a VM by selecting *No
network device*.

You can overwrite the *MTU* setting for each VM network device. The option
`mtu=1` represents a special case, in which the MTU value will be inherited
from the underlying bridge.
This option is only available for *VirtIO* network devices.

.Multiqueue
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set in
the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
command:

`ethtool -L ens1 combined X`

where X is the number of vCPUs of the VM.

You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.
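
A minimal sketch enabling multiqueue for a four-core guest (hypothetical VM
100):

----
# VirtIO NIC on vmbr0 with four packet queues, matching the guest's core count
qm set 100 -net0 virtio,bridge=vmbr0,queues=4
----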

[[qm_display]]
Display
~~~~~~~

QEMU can virtualize a few types of VGA hardware. Some examples are:

* *std*, the default, emulates a card with Bochs VBE extensions.
* *cirrus*, this was once the default, it emulates a very old hardware module
with all its problems. This display type should only be used if really
necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
qemu: using cirrus considered harmful], for example, if using Windows XP or
earlier
* *vmware* is a VMware SVGA-II compatible adapter.
* *qxl* is the QXL paravirtualized graphics card. Selecting this also
enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
VM.
* *virtio-gl*, often named VirGL, is a virtual 3D GPU for use inside VMs that
 can offload workloads to the host GPU without requiring special (expensive)
 models and drivers and without binding the host GPU completely, allowing
 reuse between multiple guests and/or the host.
+
NOTE: VirGL support needs some extra libraries that aren't installed by
default due to being relatively big and also not available as open source for
all GPU models/vendors. For most setups you'll just need to do:
`apt install libgl1 libegl1`

You can edit the amount of memory given to the virtual GPU by setting
the 'memory' option. This can enable higher resolutions inside the VM,
especially with SPICE/QXL.

As the memory is reserved by the display device, selecting Multi-Monitor mode
for SPICE (such as `qxl2` for dual monitors) has some implications:

* Windows needs a device for each monitor, so if your 'ostype' is some
version of Windows, {pve} gives the VM an extra device per monitor.
Each device gets the specified amount of memory.

* Linux VMs can always enable more virtual monitors, but selecting
a Multi-Monitor mode multiplies the memory given to the device with
the number of monitors.

Selecting `serialX` as display 'type' disables the VGA output, and redirects
the Web Console to the selected serial port. A configured display 'memory'
setting will be ignored in that case.
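
A minimal sketch selecting dual-monitor QXL with extra display memory for a
hypothetical VM 100:

----
# QXL with two monitors; each device gets 32 MiB of display memory
qm set 100 -vga qxl2,memory=32
----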

[[qm_usb_passthrough]]
USB Passthrough
~~~~~~~~~~~~~~~

There are two different types of USB passthrough devices:

* Host USB passthrough
* SPICE USB passthrough

Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.

The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same USB device
have the same id.

The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).

If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. This is useful
if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client is,
directly to the VM (for example an input device or hardware dongle).
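
A minimal sketch of both variants for a hypothetical VM 100 (the device ids
are placeholders):

----
# pass through a host device by vendor/product id
qm set 100 -usb0 host=0123:abcd
# or by bus/port path
qm set 100 -usb1 host=1-2.3.4
# add a SPICE USB port instead
qm set 100 -usb2 spice
----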

It is also possible to map devices on a cluster level, so that they can be
properly used with HA, hardware changes are detected, and non-root users
can configure them. See xref:resource_mapping[Resource Mapping]
for details on that.

[[qm_bios_and_uefi]]
BIOS and UEFI
~~~~~~~~~~~~~

In order to properly emulate a computer, QEMU needs to use a firmware.
This firmware, on common PCs often known as BIOS or (U)EFI, is executed as one
of the first steps when booting a VM. It is responsible for doing basic hardware
initialization and for providing an interface to the firmware and hardware for
the operating system. By default QEMU uses *SeaBIOS* for this, which is an
open-source, x86 BIOS implementation. SeaBIOS is a good choice for most
standard setups.

Some operating systems (such as Windows 11) may require use of a UEFI
compatible implementation. In such cases, you must use *OVMF* instead,
which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]

There are other scenarios in which the SeaBIOS may not be the ideal firmware to
boot from, for example if you want to do VGA passthrough. footnote:[Alex
Williamson has a good blog entry about this
https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
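
The firmware is selected via the 'bios' option; a minimal sketch for a
hypothetical VM 100:

----
# switch the VM from the default SeaBIOS to OVMF (UEFI)
qm set 100 -bios ovmf
----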

If you want to use OVMF, there are several things to consider:

In order to save things like the *boot order*, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

----
# qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
----

Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
hardware section of a VM.

The *efitype* option specifies which version of the OVMF firmware should be
used. For new VMs, this should always be '4m', as it supports Secure Boot and
has more space allocated to support future development (this is the default in
the GUI).

*pre-enrolled-keys* specifies if the efidisk should come pre-loaded with
distribution-specific and Microsoft Standard Secure Boot keys. It also enables
Secure Boot by default (though it can still be disabled in the OVMF menu within
the VM).

NOTE: If you want to start using Secure Boot in an existing VM (that still uses
a '2m' efidisk), you need to recreate the efidisk. To do so, delete the old one
(`qm set <vmid> -delete efidisk0`) and add a new one as described above. This
will reset any custom configurations you have made in the OVMF menu!

When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
with a press of the ESC button during boot), or you have to choose
SPICE as the display type.

[[qm_tpm]]
Trusted Platform Module (TPM)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A *Trusted Platform Module* is a device which stores secret data - such as
encryption keys - securely and provides tamper-resistance functions for
validating system boot.

Certain operating systems (such as Windows 11) require such a device to be
attached to a machine (be it physical or virtual).

A TPM is added by specifying a *tpmstate* volume. This works similarly to an
efidisk, in that it cannot be changed (only removed) once created. You can add
one via the following command:

----
# qm set <vmid> -tpmstate0 <storage>:1,version=<version>
----

Where *<storage>* is the storage you want to put the state on, and *<version>*
is either 'v1.2' or 'v2.0'. You can also add one via the web interface, by
choosing 'Add' -> 'TPM State' in the hardware section of a VM.

The 'v2.0' TPM spec is newer and better supported, so unless you have a specific
implementation that requires a 'v1.2' TPM, it should be preferred.

NOTE: Compared to a physical TPM, an emulated one does *not* provide any real
security benefits. The point of a TPM is that the data on it cannot be modified
easily, except via commands specified as part of the TPM spec. Since with an
emulated device the data storage happens on a regular volume, it can potentially
be edited by anyone with access to it.

[[qm_ivshmem]]
Inter-VM shared memory
~~~~~~~~~~~~~~~~~~~~~~

You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
share memory between the host and a guest, or also between multiple guests.

To add such a device, you can use `qm`:

----
# qm set <vmid> -ivshmem size=32,name=foo
----

Where the size is in MiB. The file will be located under
`/dev/shm/pve-shm-$name` (the default name is the vmid).

NOTE: Currently the device will get deleted as soon as any VM using it gets
shut down or stopped. Open connections will still persist, but new connections
to the exact same device cannot be made anymore.

A use case for such a device is the Looking Glass
footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
performance, low-latency display mirroring between host and guest.

[[qm_audio_device]]
Audio Device
~~~~~~~~~~~~

To add an audio device run the following command:

----
qm set <vmid> -audio0 device=<device>
----

Supported audio devices are:

* `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
* `intel-hda`: Intel HD Audio Controller, emulates ICH6
* `AC97`: Audio Codec '97, useful for older operating systems like Windows XP

There are two backends available:

* 'spice'
* 'none'

The 'spice' backend can be used in combination with xref:qm_display[SPICE] while
the 'none' backend can be useful if an audio device is needed in the VM for some
software to work. To use the physical audio device of the host use device
passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft's RDP
have options to play sound.

[[qm_virtio_rng]]
VirtIO RNG
~~~~~~~~~~

A RNG (Random Number Generator) is a device providing entropy ('randomness') to
a system. A virtual hardware-RNG can be used to provide such entropy from the
host system to a guest VM. This helps to avoid entropy starvation problems in
the guest (a situation where not enough entropy is available and the system may
slow down or run into problems), especially during the guest's boot process.

To add a VirtIO-based emulated RNG, run the following command:

----
qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
----

`source` specifies where entropy is read from on the host and has to be one of
the following:

* `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
* `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
  starvation on the host system)
* `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
  are available, the one selected in
  `/sys/devices/virtual/misc/hw_random/rng_current` will be used)

A limit can be specified via the `max_bytes` and `period` parameters; they are
read as `max_bytes` per `period` in milliseconds. However, it does not represent
a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
available on a 1 second timer, not that 1 KiB is streamed to the guest over the
course of one second. Reducing the `period` can thus be used to inject entropy
into the guest at a faster rate.

By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
recommended to always use a limiter to avoid guests using too many host
resources. If desired, a value of '0' for `max_bytes` can be used to disable
all limits.

[[qm_bootorder]]
Device Boot Order
~~~~~~~~~~~~~~~~~

QEMU can tell the guest which devices it should boot from, and in which order.
This can be specified in the config via the `boot` property, for example:

----
boot: order=scsi0;net0;hostpci0
----

[thumbnail="screenshot/gui-qemu-edit-bootorder.png"]

This way, the guest would first attempt to boot from the disk `scsi0`; if that
fails, it would go on to attempt network boot from `net0`, and in case that
fails too, finally attempt to boot from a passed through PCIe device (seen as
disk in case of NVMe, otherwise tries to launch into an option ROM).

On the GUI you can use a drag-and-drop editor to specify the boot order, and use
the checkbox to enable or disable certain devices for booting altogether.

NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
all of them must be marked as 'bootable' (that is, they must have the checkbox
enabled or appear in the list in the config) for the guest to be able to boot.
This is because recent SeaBIOS and OVMF versions only initialize disks if they
are marked 'bootable'.

In any case, even devices not appearing in the list or having the checkmark
disabled will still be available to the guest, once its operating system has
booted and initialized them. The 'bootable' flag only affects the guest BIOS and
bootloader.
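
The same property can be set from the command line; a minimal sketch for a
hypothetical VM 100 (quotes are needed because of the semicolon):

----
# try the first SCSI disk, then network boot via net0
qm set 100 -boot 'order=scsi0;net0'
----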


[[qm_startup_and_shutdown]]
Automatic Start and Shutdown of Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After creating your VMs, you probably want them to start automatically
when the host system boots. For this you need to select the option 'Start at
boot' from the 'Options' Tab of your VM in the web interface, or set it with
the following command:

----
# qm set <vmid> -onboot 1
----

.Start and Shutdown Order

[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following parameters,
which are combined in the single `startup` option (see the sketch after
this list):

* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the VM to be the first to be started. (We use the reverse
startup order for shutdown, so a machine with a start order of 1 would be the
last to be shut down). If multiple VMs have the same order defined on a host,
they will additionally be ordered by 'VMID' in ascending order.
* *Startup delay*: Defines the interval between this VM start and subsequent
VMs starts. For example, set it to 240 if you want to wait 240 seconds before
starting other VMs.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the VM to be offline after issuing a shutdown command. By default this
value is set to 180, which means that {pve} will issue a shutdown request and
wait 180 seconds for the machine to be offline. If the machine is still online
after the timeout it will be stopped forcefully.
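
A minimal sketch for a hypothetical VM 100:

----
# start first, wait 240s before the next VM starts, allow 180s for shutdown
qm set 100 -startup order=1,up=240,down=180
----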

NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
'boot order' options currently. Those VMs will be skipped by the startup and
shutdown algorithm as the HA manager itself ensures that VMs get started and
stopped.

Please note that machines without a Start/Shutdown order parameter will always
start after those where the parameter is set. Further, this parameter can only
be enforced between virtual machines running on the same host, not
cluster-wide.

If you require a delay between the host boot and the booting of the first VM,
see the section on xref:first_guest_boot_delay[Proxmox VE Node Management].


[[qm_qemu_agent]]
QEMU Guest Agent
~~~~~~~~~~~~~~~~

The QEMU Guest Agent is a service which runs inside the VM, providing a
communication channel between the host and the guest. It is used to exchange
information and allows the host to issue commands to the guest.

For example, the IP addresses in the VM summary panel are fetched via the guest
agent.

Or when starting a backup, the guest is told via the guest agent to sync
outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.

For the guest agent to work properly the following steps must be taken:

* install the agent in the guest and make sure it is running
* enable the communication via the agent in {pve}

Install Guest Agent
^^^^^^^^^^^^^^^^^^^

For most Linux distributions, the guest agent is available. The package is
usually named `qemu-guest-agent`.
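
On a Debian-based guest, for example, the agent can be installed and enabled
with:

----
# apt install qemu-guest-agent
# systemctl enable --now qemu-guest-agent
----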

For Windows, it can be installed from the
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
VirtIO driver ISO].

[[qm_qga_enable]]
Enable Guest Agent Communication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Communication from {pve} with the guest agent can be enabled in the VM's
*Options* panel. A fresh start of the VM is necessary for the changes to take
effect.

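On the CLI, this corresponds to setting the `agent` option, for example:

----
# qm set <vmid> --agent enabled=1
----
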
[[qm_qga_auto_trim]]
Automatic TRIM Using QGA
^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to enable the 'Run guest-trim' option. With this enabled,
{pve} will issue a trim command to the guest after the following
operations that have the potential to write out zeros to the storage:

* moving a disk to another storage
* live migrating a VM to another node with local storage

On a thin provisioned storage, this can help to free up unused space.
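
On the CLI, 'Run guest-trim' corresponds to the `fstrim_cloned_disks` flag of
the `agent` option, for example:

----
# qm set <vmid> --agent enabled=1,fstrim_cloned_disks=1
----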

NOTE: There is a caveat with ext4 on Linux, because it uses an in-memory
optimization to avoid issuing duplicate TRIM requests. Since the guest doesn't
know about the change in the underlying storage, only the first guest-trim will
run as expected. Subsequent ones, until the next reboot, will only consider
parts of the filesystem that changed since then.

[[qm_qga_fsfreeze]]
Filesystem Freeze & Thaw on Backup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, guest filesystems are synced via the 'fs-freeze' QEMU Guest Agent
Command when a backup is performed, to provide consistency.

On Windows guests, some applications might handle consistent backups themselves
by hooking into the Windows VSS (Volume Shadow Copy Service) layer; an
'fs-freeze' might then interfere with that. For example, it has been observed
that calling 'fs-freeze' with some SQL Servers triggers VSS to call the SQL
Writer VSS module in a mode that breaks the SQL Server backup chain for
differential backups.

For such setups you can configure {pve} to not issue a freeze-and-thaw cycle on
backup by setting the `freeze-fs-on-backup` QGA option to `0`. This can also be
done via the GUI with the 'Freeze/thaw guest filesystems on backup for
consistency' option.
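
For example:

----
# qm set <vmid> --agent enabled=1,freeze-fs-on-backup=0
----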

IMPORTANT: Disabling this option can potentially lead to backups with
inconsistent filesystems and should therefore only be done if you know what
you are doing.

Troubleshooting
^^^^^^^^^^^^^^^

.VM does not shut down

Make sure the guest agent is installed and running.

Once the guest agent is enabled, {pve} will send power commands like
'shutdown' via the guest agent. If the guest agent is not running, commands
cannot get executed properly and the shutdown command will run into a timeout.
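
You can verify from the host that the agent is reachable, for example:

----
# qm agent <vmid> ping
----

If the agent is running, the command returns without an error message.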

[[qm_spice_enhancements]]
SPICE Enhancements
~~~~~~~~~~~~~~~~~~

SPICE Enhancements are optional features that can improve the remote viewer
experience.

To enable them via the GUI go to the *Options* panel of the virtual machine. Run
the following command to enable them via the CLI:

----
qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
----

NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
must be set to SPICE (qxl).

Folder Sharing
^^^^^^^^^^^^^^

Share a local folder with the guest. The `spice-webdavd` daemon needs to be
installed in the guest. It makes the shared folder available through a local
WebDAV server located at http://localhost:9843.

For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded
from the
https://www.spice-space.org/download.html#windows-binaries[official SPICE website].

Most Linux distributions have a package called `spice-webdavd` that can be
installed.

To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
Select the folder to share and then enable the checkbox.

NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.

CAUTION: Experimental! Currently this feature does not work reliably.

Video Streaming
^^^^^^^^^^^^^^^

Fast refreshing areas are encoded into a video stream. Two options exist:

* *all*: Any fast refreshing area will be encoded into a video stream.
* *filter*: Additional filters are used to decide if video streaming should be
  used (currently only small window surfaces are skipped).

A general recommendation on whether video streaming should be enabled, and
which option to choose, cannot be given. Your mileage may vary depending on the
specific circumstances.

Troubleshooting
^^^^^^^^^^^^^^^

.Shared folder does not show up

Make sure the WebDAV service is enabled and running in the guest. On Windows it
is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be
different depending on the distribution.

If the service is running, check the WebDAV server by opening
http://localhost:9843 in a browser in the guest.

It can help to restart the SPICE session.

[[qm_migration]]
Migration
---------

[thumbnail="screenshot/gui-qemu-migrate.png"]

If you have a cluster, you can migrate your VM to another host with

----
# qm migrate <vmid> <target>
----

There are generally two mechanisms for this:

* Online Migration (aka Live Migration)
* Offline Migration

Online Migration
~~~~~~~~~~~~~~~~

If your VM is running and no locally bound resources are configured (such as
passed-through devices), you can initiate a live migration with the `--online`
flag in the `qm migrate` command invocation. The web interface defaults to
live migration when the VM is running.
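
For example:

----
# qm migrate <vmid> <target> --online
----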

How it works
^^^^^^^^^^^^

Online migration first starts a new QEMU process on the target host with the
'incoming' flag, which performs only basic initialization with the guest vCPUs
still paused and then waits for the guest memory and device state data streams
of the source Virtual Machine.
All other resources, such as disks, are either shared or were already sent
before the runtime state migration of the VM begins; so only the memory content
and device state remain to be transferred.

Once this connection is established, the source begins asynchronously sending
the memory content to the target. If the guest memory on the source changes,
those sections are marked dirty and another pass is made to send the guest
memory data.
This loop is repeated until the data difference between the running source VM
and the incoming target VM is small enough to be sent in a few milliseconds,
because then the source VM can be paused completely, without a user or program
noticing the pause, so that the remaining data can be sent to the target, and
then unpause the target VM's CPU to make it the new running VM in well under a
second.

Requirements
^^^^^^^^^^^^

For Live Migration to work, there are some things required:

* The VM has no local resources that cannot be migrated. For example,
  PCI or USB devices that are passed through currently block live-migration.
  Local disks, on the other hand, can be migrated by sending them to the
  target just fine (see the example after this list).
* The hosts are located in the same {pve} cluster.
* The hosts have a working (and reliable) network connection between them.
* The target host must have the same, or higher versions of the
  {pve} packages. Although it can sometimes work the other way around, this
  cannot be guaranteed.
* The hosts have CPUs from the same vendor with similar capabilities. A
  different vendor *might* work depending on the actual models and the VM's
  CPU type configured, but it cannot be guaranteed - so please test before
  deploying such a setup in production.
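
For example, to live migrate a VM together with its local disks, the
`--with-local-disks` flag can be passed:

----
# qm migrate <vmid> <target> --online --with-local-disks
----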

Offline Migration
~~~~~~~~~~~~~~~~~

If you have local resources, you can still migrate your VMs offline as long as
all disks are on storages defined on both hosts.
Migration then copies the disks to the target host over the network, as with
online migration. Note that any hardware pass-through configuration may need to
be adapted to the device location on the target host.

// TODO: mention hardware map IDs as better way to solve that, once available

[[qm_copy_and_clone]]
Copies and Clones
-----------------

[thumbnail="screenshot/gui-qemu-full-clone.png"]

VM installation is usually done using an installation medium (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time consuming task one might want to avoid.

An easy way to deploy many VMs of the same type is to copy an existing
VM. We use the term 'clone' for such copies, and distinguish between
'linked' and 'full' clones.

Full Clone::

The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
+

It is possible to select a *Target Storage*, so one can use this to
migrate a VM to a totally different storage. You can also change the
disk image *Format* if the storage driver supports several formats.
+

NOTE: A full clone needs to read and copy all VM image data. This is
usually much slower than creating a linked clone.
+

Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.


Linked Clone::

Modern storage drivers support a way to generate fast linked
clones. Such a clone is a writable copy whose initial contents are the
same as the original data. Creating a linked clone is nearly
instantaneous, and initially consumes no additional space.
+

They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
location. This technique is called 'Copy-on-write'.
+

This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
+

NOTE: You cannot delete an original template while linked clones
exist.
+

It is not possible to change the *Target storage* for linked clones,
because this is a storage internal feature.


The *Target node* option allows you to create the new VM on a
different node. The only restriction is that the VM is on shared
storage, and that storage is also available on the target node.

To avoid resource conflicts, all network interface MAC addresses get
randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
setting.
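
Clones can be created in the GUI or with `qm clone`. For example, to create a
full clone of VM 100 with a new VMID of 123 (name and target storage here are
just illustrative values):

----
# qm clone 100 123 --full --name cloned-vm --storage local-lvm
----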


[[qm_templates]]
Virtual Machine Templates
-------------------------

One can convert a VM into a Template. Such templates are read-only,
and you can use them to create linked clones.

NOTE: It is not possible to start templates, because this would modify
the disk images. If you want to change the template, create a linked
clone and modify that.
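
The conversion can be done in the GUI or on the CLI with, for example:

----
# qm template <vmid>
----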

VM Generation ID
----------------

{pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
'vmgenid' Specification
https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
for virtual machines.
This can be used by the guest operating system to detect any event resulting
in a time shift, for example, restoring a backup or a snapshot rollback.

When creating new VMs, a 'vmgenid' will be automatically generated and saved
in its configuration file.

To create and add a 'vmgenid' to an already existing VM one can pass the
special value `1' to let {pve} autogenerate one or manually set the 'UUID'
footnote:[Online GUID generator http://guid.one/] by using it as value, for
example:

----
# qm set VMID -vmgenid 1
# qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
----

NOTE: The initial addition of a 'vmgenid' device to an existing VM may have
the same effects as a snapshot rollback, backup restore, etc., as the VM can
interpret this as a generation change.

In the rare case the 'vmgenid' mechanism is not wanted, one can pass `0' for
its value on VM creation, or retroactively delete the property in the
configuration with:

----
# qm set VMID -delete vmgenid
----

The most prominent use case for 'vmgenid' are newer Microsoft Windows
operating systems, which use it to avoid problems in time sensitive or
replicated services (such as databases or domain controllers
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole VM clone operation.

Importing Virtual Machines and disk images
------------------------------------------

A VM export from a foreign hypervisor usually takes the form of one or more
disk images, with a configuration file describing the settings of the VM (RAM,
number of cores). +
The disk images can be in the vmdk format, if the disks come from
VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
The most popular configuration format for VM exports is the OVF standard, but
in practice interoperation is limited because many settings are not implemented
in the standard itself, and hypervisors export the supplementary information
in non-standard extensions.

Besides the problem of format, importing disk images from other hypervisors
may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly concerned by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
installing the MergeIDE.zip utility available from the Internet before exporting
and choosing a hard disk type of *IDE* before booting the imported Windows VM.

Finally there is the question of paravirtualized drivers, which improve the
speed of the emulated system and are specific to the hypervisor.
GNU/Linux and other free Unix OSes have all the necessary drivers installed by
default and you can switch to the paravirtualized drivers right after importing
the VM. For Windows VMs, you need to install the Windows paravirtualized
drivers by yourself.

GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
that we cannot guarantee a successful import/export of Windows VMs in all
cases due to the problems above.

Step-by-step example of a Windows OVF import
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.

Download the Virtual Machine zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After reviewing the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.

Extract the disk image from the zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy the ovf and vmdk files via ssh/scp to your {pve} host.

Import the Virtual Machine
^^^^^^^^^^^^^^^^^^^^^^^^^^

This will create a new virtual machine, using cores, memory and
VM name as read from the OVF manifest, and import the disks to the +local-lvm+
storage. You have to configure the network manually.

----
# qm importovf 999 WinDev1709Eval.ovf local-lvm
----

The VM is ready to be started.

Adding an external disk image to a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.

Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:

 vmdebootstrap --verbose \
  --size 10GiB --serial-console \
  --grub --no-extlinux \
  --package openssh-server \
  --package avahi-daemon \
  --package qemu-guest-agent \
  --hostname vm600 --enable-dhcp \
  --customize=./copy_pub_ssh.sh \
  --sparse --image vm600.raw

You can now create a new target VM, importing the image to the storage `pvedir`
and attaching it to the VM's SCSI controller:

----
# qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
   --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
   --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
----

The VM is ready to be started.


ifndef::wiki[]
include::qm-cloud-init.adoc[]
endif::wiki[]

ifndef::wiki[]
include::qm-pci-passthrough.adoc[]
endif::wiki[]

Hookscripts
-----------

You can add a hook script to VMs with the config property `hookscript`.

----
# qm set 100 --hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime.
For an example and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
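
A minimal sketch of such a hook script, here written as a shell script (the
path and file name are just examples; the script gets the VMID and the current
phase passed as arguments):

----
#!/bin/bash
# Example: /var/lib/vz/snippets/hookscript.sh -- must be executable
vmid="$1"   # ID of the guest this event belongs to
phase="$2"  # one of: pre-start, post-start, pre-stop, post-stop

case "$phase" in
    pre-start)  echo "VM $vmid is about to be started" ;;
    post-start) echo "VM $vmid was started" ;;
    pre-stop)   echo "VM $vmid will be stopped" ;;
    post-stop)  echo "VM $vmid was stopped" ;;
esac

# A non-zero exit code in the 'pre-start' phase aborts the VM start.
exit 0
----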

[[qm_hibernate]]
Hibernation
-----------

You can suspend a VM to disk with the GUI option `Hibernate` or with

----
# qm suspend ID --todisk
----

That means that the current content of the memory will be saved onto disk
and the VM gets stopped. On the next start, the memory content will be
loaded and the VM can continue where it was left off.

[[qm_vmstatestorage]]
.State storage selection
If no target storage for the memory is given, it will be automatically
chosen, the first of:

1. The storage `vmstatestorage` from the VM config.
2. The first shared storage from any VM disk.
3. The first non-shared storage from any VM disk.
4. The storage `local` as a fallback.
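
The first candidate can be set explicitly with the `vmstatestorage` property,
for example:

----
# qm set <vmid> --vmstatestorage <storage>
----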

[[resource_mapping]]
Resource Mapping
----------------

[thumbnail="screenshot/gui-datacenter-resource-mappings.png"]

When using or referencing local resources (e.g. the address of a PCI device),
using the raw address or id is sometimes problematic, for example:

* when using HA, a different device with the same id or path may exist on the
  target node, and if one is not careful when assigning such guests to HA
  groups, the wrong device could be used, breaking configurations.

* changing hardware can change ids and paths, so one would have to check all
  assigned devices and see if the path or id is still correct.

To handle this better, one can define cluster-wide resource mappings, such that
a resource has a cluster-unique, user-selected identifier which can correspond
to different devices on different hosts. With this, HA won't start a guest with
a wrong device, and hardware changes can be detected.

Creating such a mapping can be done with the {pve} web GUI under `Datacenter`
in the relevant tab in the `Resource Mappings` category, or on the CLI with

----
# pvesh create /cluster/mapping/<type> <options>
----

[thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]

Where `<type>` is the hardware type (currently either `pci` or `usb`) and
`<options>` are the device mappings and other configuration parameters.

Note that the options must include a map property with all identifying
properties of that hardware, so that it's possible to verify the hardware did
not change and the correct device is passed through.

For example, to add a PCI device as `device1` with the path `0000:01:00.0` that
has the device id `0001` and the vendor id `0002` on the node `node1`, and
`0000:02:00.0` on `node2` you can add it with:

----
# pvesh create /cluster/mapping/pci --id device1 \
 --map node=node1,path=0000:01:00.0,id=0002:0001 \
 --map node=node2,path=0000:02:00.0,id=0002:0001
----

You must repeat the `map` parameter for each node where that device should have
a mapping (note that you can currently only map one USB device per node per
mapping).

Using the GUI makes this much easier, as the correct properties are
automatically picked up and sent to the API.

[thumbnail="screenshot/gui-datacenter-mapping-usb-edit.png"]

It's also possible for PCI devices to provide multiple devices per node with
multiple map properties for the nodes. If such a device is assigned to a guest,
the first free one will be used when the guest is started. The order of the
paths given is also the order in which they are tried, so arbitrary allocation
policies can be implemented.

This is useful for devices with SR-IOV, since sometimes it is not important
which exact virtual function is passed through.

You can assign such a device to a guest either with the GUI or with

----
# qm set ID -hostpci0 <name>
----

for PCI devices, or

----
# qm set <vmid> -usb0 <name>
----

for USB devices.

Where `<vmid>` is the guest's id and `<name>` is the chosen name for the created
mapping. All usual options for passing through the devices are allowed, such as
`mdev`.

To create mappings, `Mapping.Modify` on `/mapping/<type>/<name>` is necessary
(where `<type>` is the device type and `<name>` is the name of the mapping).

To use these mappings, `Mapping.Use` on `/mapping/<type>/<name>` is necessary
(in addition to the normal guest privileges to edit the configuration).

Managing Virtual Machines with `qm`
-----------------------------------

qm is the tool to manage QEMU/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Using an iso file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage.

----
# qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
----

Start the new VM.

----
# qm start 300
----

Send a shutdown request, then wait until the VM is stopped.

----
# qm shutdown 300 && qm wait 300
----

Same as above, but only wait for 40 seconds.

----
# qm shutdown 300 && qm wait 300 -timeout 40
----

Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
'--purge', if you want to additionally remove the VM from replication jobs,
backup jobs and HA resource configurations.

----
# qm destroy 300 --purge
----

Move a disk image to a different storage.

----
# qm move-disk 300 scsi0 other-storage
----

Reassign a disk image to a different VM. This will remove the disk `scsi1` from
the source VM and attach it as `scsi3` to the target VM. In the background
the disk image is being renamed so that the name matches the new owner.

----
# qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
----


[[qm_configuration]]
Configuration
-------------

VM configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
Like other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster wide.

.Example VM Configuration
----
boot: order=virtio0;net0
cores: 1
sockets: 1
memory: 512
name: webmail
ostype: l26
net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
virtio0: local:vm-100-disk-1,size=32G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful to do small corrections, but keep in mind that you need to
restart the VM to apply such changes.

For that reason, it is usually better to use the `qm` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
a running VM. This feature is called "hot plug", and there is no
need to restart the VM in that case.
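
For example, to print the current configuration of VM 100 and then increase its
memory:

----
# qm config 100
# qm set 100 -memory 1024
----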


File Format
~~~~~~~~~~~

VM configuration files use a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.


[[qm_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `qm` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.VM configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).

You can optionally save the memory of a running VM with the option `vmstate`.
For details about how the target storage gets chosen for the VM state, see
xref:qm_vmstatestorage[State storage selection] in the chapter
xref:qm_hibernate[Hibernation].
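
Snapshots can also be managed with `qm` on the command line. For example,
create a snapshot called `testsnapshot`, list all snapshots, roll back to it
and finally delete it again:

----
# qm snapshot 100 testsnapshot
# qm listsnapshot 100
# qm rollback 100 testsnapshot
# qm delsnapshot 100 testsnapshot
----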

[[qm_options]]
Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected VMs. Sometimes you need to
remove such a lock manually (for example after a power failure).

----
# qm unlock <vmid>
----

CAUTION: Only do that if you are sure the action which set the lock is
no longer running.

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Cloud-Init_Support[Cloud-Init Support]

endif::wiki[]


ifdef::manvolnum[]

Files
------

`/etc/pve/qemu-server/<VMID>.conf`::

Configuration file for the VM '<VMID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]