1 [[chapter_virtual_machines]]
10 qm - Qemu/KVM Virtual Machine Manager
16 include::qm.1-synopsis.adoc[]
22 Qemu/KVM Virtual Machines
23 =========================
28 // http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
29 // http://pve.proxmox.com/wiki/KVM
30 // http://pve.proxmox.com/wiki/Qemu_Server
32 Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
33 physical computer. From the perspective of the host system where Qemu is
34 running, Qemu is a user program which has access to a number of local resources
35 like partitions, files, and network cards, which are then passed to an
36 emulated computer which sees them as if they were real devices.
38 A guest operating system running in the emulated computer accesses these
39 devices, and runs as if it were running on real hardware. For instance, you can pass
40 an iso image as a parameter to Qemu, and the OS running in the emulated computer
41 will see a real CDROM inserted in a CD drive.
43 Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
44 only concerned with 32 and 64 bit PC clone emulation, since it represents the
45 overwhelming majority of server hardware. The emulation of PC clones is also one
46 of the fastest due to the availability of processor extensions which greatly
47 speed up Qemu when the emulated architecture is the same as the host architecture.
50 NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
51 It means that Qemu is running with the support of the virtualization processor
52 extensions, via the Linux kvm module. In the context of {pve} _Qemu_ and
53 _KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the kvm module if available.
56 Qemu inside {pve} runs as a root process, since this is required to access block and PCI devices.
60 Emulated devices and paravirtualized devices
61 --------------------------------------------
63 The PC hardware emulated by Qemu includes a mainboard, network controllers,
64 scsi, ide and sata controllers, serial ports (the complete list can be seen in
65 the `kvm(1)` man page) all of them emulated in software. All these devices
66 are the exact software equivalent of existing hardware devices, and if the OS
67 running in the guest has the proper drivers it will use the devices as if it
68 were running on real hardware. This allows Qemu to run _unmodified_ operating systems.
71 This however has a performance cost, as running in software what was meant to
72 run in hardware involves a lot of extra work for the host CPU. To mitigate this,
73 Qemu can present to the guest operating system _paravirtualized devices_, where
74 the guest OS recognizes it is running inside Qemu and cooperates with the hypervisor.
77 Qemu relies on the virtio virtualization standard, and is thus able to present
78 paravirtualized virtio devices, which include a paravirtualized generic disk
79 controller, a paravirtualized network card, a paravirtualized serial port,
80 a paravirtualized SCSI controller, etc ...
82 It is highly recommended to use the virtio devices whenever you can, as they
83 provide a big performance improvement. Using the virtio generic disk controller
84 versus an emulated IDE controller will double the sequential write throughput,
85 as measured with `bonnie++(8)`. Using the virtio network interface can deliver
86 up to three times the throughput of an emulated Intel E1000 network card, as
87 measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
88 http://www.linux-kvm.org/page/Using_VirtIO_NIC]
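For illustration, this is roughly how paravirtualized devices show up in a VM configuration file (a hypothetical excerpt with assumed disk and MAC address values; the individual settings are covered in the following sections):

----
scsihw: virtio-scsi-pci
scsi0: local-lvm:vm-100-disk-0,size=32G
net0: virtio=EE:D2:28:5F:B6:3E,bridge=vmbr0
----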
91 [[qm_virtual_machines_settings]]
92 Virtual Machines Settings
93 -------------------------
95 Generally speaking {pve} tries to choose sane defaults for virtual machines
96 (VM). Make sure you understand the meaning of the settings you change, as it
97 could incur a performance slowdown, or put your data at risk.
100 [[qm_general_settings]]
104 [thumbnail="screenshot/gui-create-vm-general.png"]
106 General settings of a VM include
108 * the *Node* : the physical server on which the VM will run
109 * the *VM ID*: a unique number in this {pve} installation used to identify your VM
110 * *Name*: a free form text string you can use to describe the VM
111 * *Resource Pool*: a logical group of VMs
118 [thumbnail="screenshot/gui-create-vm-os.png"]
120 When creating a virtual machine (VM), setting the proper Operating System (OS)
121 allows {pve} to optimize some low level parameters. For instance, a Windows OS
122 expects the BIOS clock to use the local time, while a Unix based OS expects the
123 BIOS clock to have the UTC time.
125 [[qm_system_settings]]
129 On VM creation you can change some basic system components of the new VM. You
130 can specify which xref:qm_display[display type] you want to use.
131 [thumbnail="screenshot/gui-create-vm-system.png"]
132 Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
133 If you plan to install the QEMU Guest Agent, or if your selected ISO image
134 already ships and installs it automatically, you may want to tick the 'Qemu
135 Agent' box, which lets {pve} know that it can use its features to show some
136 more information, and complete some actions (for example, shutdown or
137 snapshots) more intelligently.
139 {pve} allows you to boot VMs with different firmware and machine types, namely
140 xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
141 the default SeaBIOS to OVMF only if you plan to use
142 xref:qm_pci_passthrough[PCIe pass through]. A VM's 'Machine Type' defines the
143 hardware layout of the VM's virtual motherboard. You can choose between the
144 default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
145 https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
146 chipset, which also provides a virtual PCIe bus, and thus may be desired if
147 one wants to pass through PCIe hardware.
156 Qemu can emulate a number of storage controllers:
158 * the *IDE* controller has a design which goes back to the 1984 PC/AT disk
159 controller. Even if this controller has been superseded by recent designs,
160 each and every OS you can think of has support for it, making it a great choice
161 if you want to run an OS released before 2003. You can connect up to 4 devices on this controller.
164 * the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
165 design, allowing higher throughput and a greater number of devices to be
166 connected. You can connect up to 6 devices on this controller.
168 * the *SCSI* controller, designed in 1985, is commonly found on server grade
169 hardware, and can connect up to 14 storage devices. {pve} emulates by default an
170 LSI 53C895A controller.
172 A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim for
173 performance and is automatically selected for newly created Linux VMs since
174 {pve} 4.3. Linux distributions have support for this controller since 2012, and
175 FreeBSD since 2014. For Windows OSes, you need to provide an extra iso
176 containing the drivers during the installation.
177 // https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
178 If you aim at maximum performance, you can select a SCSI controller of type
179 _VirtIO SCSI single_ which will allow you to select the *IO Thread* option.
180 When selecting _VirtIO SCSI single_ Qemu will create a new controller for
181 each disk, instead of adding all disks to the same controller.
183 * The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
184 is an older type of paravirtualized controller. It has been superseded by the
185 VirtIO SCSI Controller, in terms of features.
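As a sketch of how the controller choice looks on the command line (assuming a hypothetical VM 100 and a storage named `local-lvm`), the controller type and a first SCSI disk could be set with:

----
qm set 100 -scsihw virtio-scsi-pci -scsi0 local-lvm:32
----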
187 [thumbnail="screenshot/gui-create-vm-hard-disk.png"]
189 [[qm_hard_disk_formats]]
192 On each controller you attach a number of emulated hard disks, which are backed
193 by a file or a block device residing in the configured storage. The choice of
194 a storage type will determine the format of the hard disk image. Storages which
195 present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
196 whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
197 either the *raw disk image format* or the *QEMU image format*.
199 * the *QEMU image format* is a copy on write format which allows snapshots, and
200 thin provisioning of the disk image.
201 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
202 you would get when executing the `dd` command on a block device in Linux. This
203 format does not support thin provisioning or snapshots by itself, requiring
204 cooperation from the storage layer for these tasks. It may, however, be up to
205 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
206 http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
207 * the *VMware image format* only makes sense if you intend to import/export the
208 disk image to other hypervisors.
210 [[qm_hard_disk_cache]]
213 Setting the *Cache* mode of the hard drive will impact how the host system will
214 notify the guest systems of block write completions. The *No cache* default
215 means that the guest system will be notified that a write is complete when each
216 block reaches the physical storage write queue, ignoring the host page cache.
217 This provides a good balance between safety and speed.
219 If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
220 you can set the *No backup* option on that disk.
222 If you want the {pve} storage replication mechanism to skip a disk when starting
223 a replication job, you can set the *Skip replication* option on that disk.
224 As of {pve} 5.0, replication requires the disk images to be on a storage of type
225 `zfspool`, so adding a disk image to other storages when the VM has replication
226 configured requires skipping replication for this disk image.
228 [[qm_hard_disk_discard]]
231 If your storage supports _thin provisioning_ (see the storage chapter in the
232 {pve} guide), you can activate the *Discard* option on a drive. With *Discard*
233 set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
234 https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
235 marks blocks as unused after deleting files, the controller will relay this
236 information to the storage, which will then shrink the disk image accordingly.
237 For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
238 option on the drive. Some guest operating systems may also require the
239 *SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
240 only supported on guests using Linux Kernel 5.0 or higher.
242 If you would like a drive to be presented to the guest as a solid-state drive
243 rather than a rotational hard disk, you can set the *SSD emulation* option on
244 that drive. There is no requirement that the underlying storage actually be
245 backed by SSDs; this feature can be used with physical media of any type.
246 Note that *SSD emulation* is not supported on *VirtIO Block* drives.
249 [[qm_hard_disk_iothread]]
252 The option *IO Thread* can only be used when using a disk with the
253 *VirtIO* controller, or with the *SCSI* controller, when the emulated controller
254 type is *VirtIO SCSI single*.
255 With this enabled, Qemu creates one I/O thread per storage controller,
256 instead of a single thread for all I/O, so it increases performance when
257 multiple disks are used and each disk has its own storage controller.
258 Note that backups do not currently work with *IO Thread* enabled.
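A minimal sketch of enabling *IO Thread* together with the _VirtIO SCSI single_ controller type, using an assumed VM ID and storage name:

----
qm set 100 -scsihw virtio-scsi-single -scsi0 local-lvm:32,iothread=1
----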
265 [thumbnail="screenshot/gui-create-vm-cpu.png"]
267 A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
268 This CPU can then contain one or many *cores*, which are independent
269 processing units. Whether you have a single CPU socket with 4 cores, or two CPU
270 sockets with two cores is mostly irrelevant from a performance point of view.
271 However, some software licenses depend on the number of sockets a machine has;
272 in that case it makes sense to set the number of sockets to what the license allows.
275 Increasing the number of virtual cpus (cores and sockets) will usually provide a
276 performance improvement, though that is heavily dependent on the use of the VM.
277 Multithreaded applications will of course benefit from a large number of
278 virtual cpus, as for each virtual cpu you add, Qemu will create a new thread of
279 execution on the host system. If you're not sure about the workload of your VM,
280 it is usually a safe bet to set the number of *Total cores* to 2.
282 NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
283 is greater than the number of cores on the server (e.g., 4 VMs each with 4
284 cores on a machine with only 8 cores). In that case the host system will
285 balance the Qemu execution threads between your server cores, just like if you
286 were running a standard multithreaded application. However, {pve} will prevent
287 you from assigning more virtual CPU cores than physically available, as this will
288 only bring the performance down due to the cost of context switches.
290 [[qm_cpu_resource_limits]]
294 In addition to the number of virtual cores, you can configure how many resources
295 a VM can get in relation to the host CPU time and also in relation to other VMs.
297 With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
298 the whole VM can use on the host. It is a floating point value representing CPU
299 time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
300 single process would fully use one single core it would have `100%` CPU Time
301 usage. If a VM with four cores utilizes all its cores fully it would
302 theoretically use `400%`. In reality the usage may be even a bit higher as Qemu
303 can have additional threads for VM peripherals besides the vCPU core ones.
304 This setting can be useful if a VM should have multiple vCPUs, as it runs a few
305 processes in parallel, but the VM as a whole should not be able to run all
306 vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
307 which would profit from having 8 vCPUs, but at no time should all of those 8 cores
308 run at full load - as this would make the server so overloaded that
309 other VMs and CTs would get too little CPU. So, we set the *cpulimit* to
310 `4.0` (=400%). If all cores do the same heavy work they would all get 50% of a
311 real host core's CPU time. But, if only 4 of them did work they could still get
312 almost 100% of a real core each.
314 NOTE: VMs can, depending on their configuration, use additional threads e.g.,
315 for networking or IO operations but also live migration. Thus a VM can show up
316 to use more CPU time than just its virtual CPUs could use. To ensure that a VM
317 never uses more CPU time than its assigned virtual CPUs, set the *cpulimit* setting
318 to the same value as the total core count.
320 The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
321 shares or CPU weight), controls how much CPU time a VM gets in regards to other
322 VMs running. It is a relative weight which defaults to `1024`; if you increase
323 this for a VM it will be prioritized by the scheduler in comparison to other
324 VMs with lower weight. E.g., if VM 100 has set the default 1024 and VM 200 was
325 changed to `2048`, the latter VM 200 would receive twice the CPU bandwidth of VM 100.
328 For more information see `man systemd.resource-control`. Here `CPUQuota`
329 corresponds to `cpulimit` and `CPUShares` corresponds to our `cpuunits`
330 setting; visit its Notes section for references and implementation details.
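To make the examples above concrete, the following hypothetical commands cap an 8-core VM 100 at 400% host CPU time and double the CPU weight of VM 200:

----
qm set 100 -cores 8 -cpulimit 4
qm set 200 -cpuunits 2048
----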
335 Qemu can emulate a number of different *CPU types* from 486 to the latest Xeon
336 processors. Each new processor generation adds new features, like hardware
337 assisted 3d rendering, random number generation, memory protection, etc ...
338 Usually you should select for your VM a processor type which closely matches the
339 CPU of the host system, as it means that the host CPU features (also called _CPU
340 flags_ ) will be available in your VMs. If you want an exact match, you can set
341 the CPU type to *host*, in which case the VM will have exactly the same CPU flags as your host system.
344 This has a downside though. If you want to do a live migration of VMs between
345 different hosts, your VM might end up on a new system with a different CPU type.
346 If the CPU flags passed to the guest are missing, the qemu process will stop. To
347 remedy this, Qemu also has its own CPU type *kvm64*, which {pve} uses by default.
348 kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flag set,
349 but is guaranteed to work everywhere.
351 In short, if you care about live migration and moving VMs between nodes, leave
352 the kvm64 default. If you don’t care about live migration or have a homogeneous
353 cluster where all nodes have the same CPU, set the CPU type to host, as in
354 theory this will give your guests maximum performance.
356 Meltdown / Spectre related CPU flags
357 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
359 There are several CPU flags related to the Meltdown and Spectre vulnerabilities
360 footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
361 manually unless the selected CPU type of your VM already enables them by default.
363 There are two requirements that need to be fulfilled in order to use these
366 * The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
367 * The guest operating system must be updated to a version which mitigates the
368 attacks and is able to utilize the CPU feature
370 Otherwise you need to set the desired CPU flag of the virtual CPU, either by
371 editing the CPU options in the WebUI, or by setting the 'flags' property of the
372 'cpu' option in the VM configuration file.
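For example, a sketch of enabling the 'pcid' and 'spec-ctrl' flags on top of the kvm64 CPU type via the CLI could look like this (the semicolon needs shell quoting):

----
qm set <vmid> -cpu 'kvm64,flags=+pcid;+spec-ctrl'
----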
374 For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
375 so-called ``microcode update'' footnote:[You can use `intel-microcode' /
376 `amd-microcode' from Debian non-free if your vendor does not provide such an
377 update. Note that not all affected CPUs can be updated to support spec-ctrl.]
381 To check if the {pve} host is vulnerable, execute the following command as root:
384 for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
387 A community script is also available to detect if the host is still vulnerable.
388 footnote:[spectre-meltdown-checker https://meltdown.ovh/]
395 This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
396 called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
397 the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
398 mechanism footnote:[PCID is now a critical performance/security feature on x86
399 https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
401 To check if the {pve} host supports PCID, execute the following command as root:
404 # grep ' pcid ' /proc/cpuinfo
407 If this does not return empty, your host's CPU has support for 'pcid'.
411 Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
412 in cases where retpolines are not sufficient.
413 Included by default in Intel CPU models with -IBRS suffix.
414 Must be explicitly turned on for Intel CPU models without -IBRS suffix.
415 Requires an updated host CPU microcode (intel-microcode >= 20180425).
419 Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
420 Must be explicitly turned on for all Intel CPU models.
421 Requires an updated host CPU microcode (intel-microcode >= 20180703).
429 Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
430 in cases where retpolines are not sufficient.
431 Included by default in AMD CPU models with -IBPB suffix.
432 Must be explicitly turned on for AMD CPU models without -IBPB suffix.
433 Requires the host CPU microcode to support this feature before it can be used for guest CPUs.
439 Required to enable the Spectre v4 (CVE-2018-3639) fix.
440 Not included by default in any AMD CPU model.
441 Must be explicitly turned on for all AMD CPU models.
442 This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
443 Note that this must be explicitly enabled when using the "host" cpu model,
444 because this is a virtual feature which does not exist in the physical CPUs.
449 Required to enable the Spectre v4 (CVE-2018-3639) fix.
450 Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
451 This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
452 virt-ssbd should nonetheless also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.
457 Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
458 Not included by default in any AMD CPU model.
459 Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
460 and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
461 This is mutually exclusive with virt-ssbd and amd-ssbd.
466 You can also optionally emulate a *NUMA*
467 footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
468 in your VMs. The basics of the NUMA architecture mean that instead of having a
469 global memory pool available to all your cores, the memory is spread into local
470 banks close to each socket.
471 This can bring speed improvements as the memory bus is not a bottleneck
472 anymore. If your system has a NUMA architecture footnote:[if the command
473 `numactl --hardware | grep available` returns more than one node, then your host
474 system has a NUMA architecture] we recommend activating the option, as this
475 will allow proper distribution of the VM resources on the host system.
476 This option is also required to hot-plug cores or RAM in a VM.
478 If the NUMA option is used, it is recommended to set the number of sockets to
479 the number of nodes of the host system.
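For instance, enabling NUMA for a VM on a hypothetical two-node host could look like this:

----
qm set <vmid> -numa 1 -sockets 2 -cores 4
----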
484 Modern operating systems introduced the capability to hot-plug and, to a
485 certain extent, hot-unplug CPUs in a running system. Virtualisation allows us
486 to avoid a lot of the (physical) problems real hardware can cause in such scenarios.
488 Still, this is a rather new and complicated feature, so its use should be
489 restricted to cases where it's absolutely needed. Most of the functionality can
490 be replicated with other, well tested and less complicated, features, see
491 xref:qm_cpu_resource_limits[Resource Limits].
493 In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
494 To start a VM with less than this total core count of CPUs you may use the
495 *vcpus* setting; it denotes how many vCPUs should be plugged in at VM start.
497 Currently this feature is only supported on Linux; a kernel newer than 3.10
498 is needed, and a kernel newer than 4.7 is recommended.
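As a sketch, a VM prepared for CPU hot plugging with 4 configured cores, of which only 2 are plugged in at start, could be set up like this:

----
qm set <vmid> -cores 4 -vcpus 2
----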
500 You can use a udev rule as follows to automatically set new CPUs as online in the guest:
504 SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
507 Save this under /etc/udev/rules.d/ as a file ending in `.rules`.
509 Note: CPU hot-remove is machine dependent and requires guest cooperation.
510 The deletion command does not guarantee CPU removal to actually happen,
511 typically it's a request forwarded to the guest using a target dependent mechanism,
512 e.g., ACPI on x86/amd64.
519 For each VM you have the option to set a fixed size memory or ask
520 {pve} to dynamically allocate memory based on the current RAM usage of the host.
523 .Fixed Memory Allocation
524 [thumbnail="screenshot/gui-create-vm-memory.png"]
526 When setting memory and minimum memory to the same amount
527 {pve} will simply allocate what you specify to your VM.
529 Even when using a fixed memory size, the ballooning device gets added to the
530 VM, because it delivers useful information such as how much memory the guest really uses.
532 In general, you should leave *ballooning* enabled, but if you want to disable
533 it (e.g. for debugging purposes), simply uncheck
534 *Ballooning Device* or set `balloon: 0` in the configuration.
540 .Automatic Memory Allocation
542 // see autoballoon() in pvestatd.pm
543 When setting the minimum memory lower than memory, {pve} will make sure that the
544 minimum amount you specified is always available to the VM, and if RAM usage on
545 the host is below 80%, will dynamically add memory to the guest up to the
546 maximum memory specified.
548 When the host is running low on RAM, the VM will then release some memory
549 back to the host, swapping running processes if needed and starting the oom
550 killer as a last resort. The passing around of memory between host and guest is
551 done via a special `balloon` kernel driver running inside the guest, which will
552 grab or release memory pages from the host.
553 footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]
555 When multiple VMs use the autoallocate facility, it is possible to set a
556 *Shares* coefficient which indicates the relative amount of the free host memory
557 that each VM should take. Suppose for instance you have four VMs, three of them
558 running an HTTP server and the last one is a database server. To cache more
559 database blocks in the database server RAM, you would like to prioritize the
560 database VM when spare RAM is available. For this you assign a Shares property
561 of 3000 to the database VM, leaving the other VMs to the Shares default setting
562 of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
563 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will get 9.6 *
564 3000 / (3000 + 1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP server will get 1.6 GB.
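Translated into hypothetical commands (values in MiB), the database VM from this example could be configured like this:

----
qm set <vmid> -memory 16384 -balloon 8192 -shares 3000
----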
567 All Linux distributions released after 2010 have the balloon kernel driver
568 included. For Windows OSes, the balloon driver needs to be added manually and can
569 incur a slowdown of the guest, so we don't recommend using it on critical systems.
571 // see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
573 When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
574 of RAM available to the host.
577 [[qm_network_device]]
581 [thumbnail="screenshot/gui-create-vm-network.png"]
583 Each VM can have many _Network interface controllers_ (NIC), of four different types:
586 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
587 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
588 performance. Like all VirtIO devices, the guest OS should have the proper driver installed.
590 * the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
591 only be used when emulating older operating systems (released before 2002)
592 * the *vmxnet3* is another paravirtualized device, which should only be used
593 when importing a VM from another hypervisor.
595 {pve} will generate for each NIC a random *MAC address*, so that your VM is
596 addressable on Ethernet networks.
598 The NIC you added to the VM can follow one of two different models:
600 * in the default *Bridged mode* each virtual NIC is backed on the host by a
601 _tap device_ (a software loopback device simulating an Ethernet NIC). This
602 tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
603 have direct access to the Ethernet LAN on which the host is located.
604 * in the alternative *NAT mode*, each virtual NIC will only communicate with
605 the Qemu user networking stack, where a built-in router and DHCP server can
606 provide network access. This built-in DHCP will serve addresses in the private
607 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
608 should only be used for testing. This mode is only available via CLI or the API,
609 but not via the WebUI.
611 You can also skip adding a network device when creating a VM by selecting *No network device*.
615 If you are using the VirtIO driver, you can optionally activate the
616 *Multiqueue* option. This option allows the guest OS to process networking
617 packets using multiple virtual CPUs, providing an increase in the total number
618 of packets transferred.
620 //http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
621 When using the VirtIO driver with {pve}, each NIC network queue is passed to the
622 host kernel, where the queue will be processed by a kernel thread spawned by the
623 vhost driver. With this option activated, it is possible to pass _multiple_
624 network queues to the host kernel for each NIC.
626 //https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
627 When using Multiqueue, it is recommended to set it to a value equal
628 to the number of Total Cores of your guest. You also need to set in
629 the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool command:
632 `ethtool -L ens1 combined X`
634 where X is the number of vcpus of the VM.
636 You should note that setting the Multiqueue parameter to a value greater
637 than one will increase the CPU load on the host and guest systems as the
638 traffic increases. We recommend setting this option only when the VM has to
639 process a great number of incoming connections, such as when the VM is running
640 as a router, reverse proxy or a busy HTTP server doing long polling.
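A hypothetical example of enabling Multiqueue with 4 queues on a VirtIO NIC, matching a guest with 4 total cores:

----
qm set <vmid> -net0 virtio,bridge=vmbr0,queues=4
----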
646 QEMU can virtualize a few types of VGA hardware. Some examples are:
648 * *std*, the default, emulates a card with Bochs VBE extensions.
649 * *cirrus*, this was once the default, it emulates a very old hardware module
650 with all its problems. This display type should only be used if really
651 necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
652 qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier
653 * *vmware*, is a VMWare SVGA-II compatible adapter.
654 * *qxl*, is the QXL paravirtualized graphics card. Selecting this also
655 enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the VM.
658 You can edit the amount of memory given to the virtual GPU, by setting
659 the 'memory' option. This can enable higher resolutions inside the VM,
660 especially with SPICE/QXL.
662 As the memory is reserved by the display device, selecting Multi-Monitor mode
663 for SPICE (e.g., `qxl2` for dual monitors) has some implications:
665 * Windows needs a device for each monitor, so if your 'ostype' is some
666 version of Windows, {pve} gives the VM an extra device per monitor.
667 Each device gets the specified amount of memory.
669 * Linux VMs can always enable more virtual monitors, but selecting
670 a Multi-Monitor mode multiplies the memory given to the device with
671 the number of monitors.
673 Selecting `serialX` as display 'type' disables the VGA output, and redirects
674 the Web Console to the selected serial port. A configured display 'memory'
675 setting will be ignored in that case.
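For example, selecting a dual-monitor SPICE/QXL display with increased display memory could be done with a command like this (a sketch; values assumed):

----
qm set <vmid> -vga qxl2,memory=32
----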
677 [[qm_usb_passthrough]]
681 There are two different types of USB passthrough devices:
683 * Host USB passthrough
684 * SPICE USB passthrough
686 Host USB passthrough works by giving a VM a USB device of the host.
687 This can either be done via the vendor- and product-id, or
688 via the host bus and port.
690 The vendor/product-id looks like this: *0123:abcd*,
691 where *0123* is the id of the vendor, and *abcd* is the id
692 of the product, meaning two pieces of the same usb device have the same id.
695 The bus/port looks like this: *1-2.3.4*, where *1* is the bus
696 and *2.3.4* is the port path. This represents the physical
697 ports of your host (depending on the internal order of the usb controllers).
700 If a device is present in a VM configuration when the VM starts up,
701 but the device is not present in the host, the VM can boot without problems.
702 As soon as the device/port is available in the host, it gets passed through.
704 WARNING: Using this kind of USB passthrough means that you cannot move
705 a VM online to another host, since the hardware is only available
706 on the host the VM is currently residing on.
708 The second type of passthrough is SPICE USB passthrough. This is useful
709 if you use a SPICE client which supports it. If you add a SPICE USB port
710 to your VM, you can pass through a USB device from where your SPICE client is,
711 directly to the VM (for example an input device or hardware dongle).
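A sketch of both passthrough variants on the CLI, using a hypothetical vendor/product id:

----
qm set <vmid> -usb0 host=0123:abcd
qm set <vmid> -usb1 spice
----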
718 In order to properly emulate a computer, QEMU needs to use a firmware, which,
719 on common PCs, is often known as BIOS or (U)EFI. It is executed as one of the
720 first steps when booting a VM, and is responsible for doing basic hardware
721 initialization and for providing an interface to the firmware and hardware for
722 the operating system. By default QEMU uses *SeaBIOS* for this, which is an
723 open-source, x86 BIOS implementation. SeaBIOS is a good choice for most
726 There are, however, some scenarios in which a BIOS is not a good firmware
727 to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
728 http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
729 In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
731 If you want to use OVMF, there are several things to consider:
733 In order to save things like the *boot order*, there needs to be an EFI Disk.
734 This disk will be included in backups and snapshots, and there can only be one.
736 You can create such a disk with the following command:
738 qm set <vmid> -efidisk0 <storage>:1,format=<format>
740 Where *<storage>* is the storage where you want to have the disk, and
741 *<format>* is a format which the storage supports. Alternatively, you can
742 create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
743 hardware section of a VM.
745 When using OVMF with a virtual display (without VGA passthrough),
746 you need to set the client resolution in the OVMF menu (which you can reach
747 with a press of the ESC button during boot), or you have to choose
748 SPICE as the display type.
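Switching an existing VM from SeaBIOS to OVMF can be done with the following command; remember to also add an EFI disk as described above:

----
qm set <vmid> -bios ovmf
----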
751 Inter-VM shared memory
752 ~~~~~~~~~~~~~~~~~~~~~~
754 You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
755 share memory between the host and a guest, or also between multiple guests.
757 To add such a device, you can use `qm`:
759 qm set <vmid> -ivshmem size=32,name=foo
761 Where the size is in MiB. The file will be located under
762 `/dev/shm/pve-shm-$name` (the default name is the vmid).
764 NOTE: Currently the device will get deleted as soon as any VM using it is shut
765 down or stopped. Open connections will still persist, but new connections
766 to the exact same device cannot be made anymore.
768 A use case for such a device is the Looking Glass
769 footnote:[Looking Glass: https://looking-glass.hostfission.com/] project,
770 which enables high performance, low-latency display mirroring between host and guest.
777 To add an audio device run the following command:
780 qm set <vmid> -audio0 device=<device>
783 Supported audio devices are:
785 * `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
786 * `intel-hda`: Intel HD Audio Controller, emulates ICH6
787 * `AC97`: Audio Codec '97, useful for older operating systems like Windows XP
789 NOTE: The audio device works only in combination with SPICE. Remote protocols
790 like Microsoft's RDP have options to play sound. To use the physical audio
791 device of the host use device passthrough (see
792 xref:qm_pci_passthrough[PCI Passthrough] and
793 xref:qm_usb_passthrough[USB Passthrough]).
795 [[qm_startup_and_shutdown]]
796 Automatic Start and Shutdown of Virtual Machines
797 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
799 After creating your VMs, you probably want them to start automatically
800 when the host system boots. For this you need to select the option 'Start at
801 boot' from the 'Options' Tab of your VM in the web interface, or set it with
802 the following command:
804 qm set <vmid> -onboot 1
806 .Start and Shutdown Order
808 [thumbnail="screenshot/gui-qemu-edit-start-order.png"]
810 In some cases you want to be able to fine tune the boot order of your
811 VMs, for instance if one of your VMs is providing firewalling or DHCP
812 to other guest systems. For this you can use the following parameters:
815 * *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
816 you want the VM to be the first to be started. (We use the reverse startup
817 order for shutdown, so a machine with a start order of 1 would be the last to
818 be shut down). If multiple VMs have the same order defined on a host, they will
819 additionally be ordered by 'VMID' in ascending order.
820 * *Startup delay*: Defines the interval between this VM start and subsequent
821 VMs starts. E.g., set it to 240 if you want to wait 240 seconds before starting other VMs.
823 * *Shutdown timeout*: Defines the duration in seconds {pve} should wait
824 for the VM to be offline after issuing a shutdown command.
825 By default this value is set to 180, which means that {pve} will issue a
826 shutdown request and wait 180 seconds for the machine to be offline. If
827 the machine is still online after the timeout it will be stopped forcefully.
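These parameters can also be set on the CLI; for example, a sketch for a firewall VM that should start first, give the following VMs 240 seconds, and get a 180 second shutdown timeout:

----
qm set <vmid> -startup order=1,up=240,down=180
----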
829 NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
830 'boot order' options currently. Those VMs will be skipped by the startup and
831 shutdown algorithm as the HA manager itself ensures that VMs get started and stopped.
834 Please note that machines without a Start/Shutdown order parameter will always
835 start after those where the parameter is set. Further, this parameter can only
836 be enforced between virtual machines running on the same host, not cluster-wide.
839 [[qm_spice_enhancements]]
843 SPICE Enhancements are optional features that can improve the remote viewer experience.
846 To enable them via the GUI go to the *Options* panel of the virtual machine. Run
847 the following command to enable them via the CLI:
850 qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
853 NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
854 must be set to SPICE (qxl).
859 Share a local folder with the guest. The `spice-webdavd` daemon needs to be
860 installed in the guest. It makes the shared folder available through a local
861 WebDAV server located at http://localhost:9843.
863 For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded from the
865 https://www.spice-space.org/download.html#windows-binaries[official SPICE website].
867 Most Linux distributions have a package called `spice-webdavd` that can be installed.
870 To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
871 Select the folder to share and then enable the checkbox.
873 NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
878 Fast refreshing areas are encoded into a video stream. Two options exist:
880 * *all*: Any fast refreshing area will be encoded into a video stream.
881 * *filter*: Additional filters are used to decide if video streaming should be
882 used (currently only small window surfaces are skipped).
884 A general recommendation of whether video streaming should be enabled and which
885 option to choose cannot be given. Your mileage may vary depending on the specific circumstances.
891 .Shared folder does not show up
893 Make sure the WebDAV service is enabled and running in the guest. On Windows it
894 is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be
895 different depending on the distribution.
897 If the service is running, check the WebDAV server by opening
898 http://localhost:9843 in a browser in the guest.
900 It can help to restart the SPICE session.
906 [thumbnail="screenshot/gui-qemu-migrate.png"]
908 If you have a cluster, you can migrate your VM to another host with
910 qm migrate <vmid> <target>
912 There are generally two mechanisms for this
914 * Online Migration (aka Live Migration)
* Offline Migration
920 When your VM is running and it has no local resources defined (such as disks
921 on local storage, passed through devices, etc.) you can initiate a live
922 migration with the -online flag.
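For example, a live migration could be started like this:

----
qm migrate <vmid> <target> -online
----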
927 This starts a Qemu Process on the target host with the 'incoming' flag, which
928 means that the process starts and waits for the memory data and device states
929 from the source Virtual Machine (since all other resources, e.g. disks,
930 are shared, the memory content and device state are the only things left to transmit).
933 Once this connection is established, the source begins to send the memory
934 content asynchronously to the target. If the memory on the source changes,
935 those sections are marked dirty and there will be another pass of sending data.
936 This happens until the amount of data to send is so small that it can
937 pause the VM on the source, send the remaining data to the target and start
938 the VM on the target in under a second.
943 For Live Migration to work, there are some things required:
945 * The VM has no local resources (e.g. passed through devices, local disks, etc.)
946 * The hosts are in the same {pve} cluster.
947 * The hosts have a working (and reliable) network connection.
948 * The target host must have the same or higher versions of the
949 {pve} packages. (It *might* work the other way, but this is never guaranteed)
954 If you have local resources, you can still offline migrate your VMs,
955 as long as all disks are on storages which are defined on both hosts.
956 Then the migration will copy the disks over the network to the target host.
958 [[qm_copy_and_clone]]
962 [thumbnail="screenshot/gui-qemu-full-clone.png"]
964 VM installation is usually done using an installation medium (CD-ROM)
965 from the operating system vendor. Depending on the OS, this can be a
966 time consuming task one might want to avoid.
968 An easy way to deploy many VMs of the same type is to copy an existing
969 VM. We use the term 'clone' for such copies, and distinguish between
970 'linked' and 'full' clones.
974 The result of such a copy is an independent VM. The
975 new VM does not share any storage resources with the original.
978 It is possible to select a *Target Storage*, so one can use this to
979 migrate a VM to a totally different storage. You can also change the
980 disk image *Format* if the storage driver supports several formats.
983 NOTE: A full clone needs to read and copy all VM image data. This is
984 usually much slower than creating a linked clone.
987 Some storage types allow copying a specific *Snapshot*, which
988 defaults to the 'current' VM data. This also means that the final copy
989 never includes any additional snapshots from the original VM.
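A sketch of creating a full clone on the CLI, with a hypothetical new VMID and target storage:

----
qm clone <vmid> 999 -name myclone -full -storage local-lvm
----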
994 Modern storage drivers support a way to generate fast linked
995 clones. Such a clone is a writable copy whose initial contents are the
996 same as the original data. Creating a linked clone is nearly
997 instantaneous, and initially consumes no additional space.
1000 They are called 'linked' because the new image still refers to the
1001 original. Unmodified data blocks are read from the original image, but
1002 modifications are written (and afterwards read) from a new
1003 location. This technique is called 'Copy-on-write'.
1006 This requires that the original volume is read-only. With {pve} one
1007 can convert any VM into a read-only <<qm_templates, Template>>. Such
1008 templates can later be used to create linked clones efficiently.
1011 NOTE: You cannot delete an original template while linked clones exist.
1015 It is not possible to change the *Target storage* for linked clones,
1016 because this is a storage internal feature.
1019 The *Target node* option allows you to create the new VM on a
1020 different node. The only restriction is that the VM is on shared
1021 storage, and that storage is also available on the target node.
1023 To avoid resource conflicts, all network interface MAC addresses get
1024 randomized, and we generate a new 'UUID' for the VM BIOS (smbios1) setting.
1029 Virtual Machine Templates
1030 -------------------------
1032 One can convert a VM into a Template. Such templates are read-only,
1033 and you can use them to create linked clones.
1035 NOTE: It is not possible to start templates, because this would modify
1036 the disk images. If you want to change the template, create a linked
1037 clone and modify that.
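Converting an existing VM into a template can be done from the GUI or, as a sketch, on the CLI:

----
qm template <vmid>
----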
1042 {pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
1043 'vmgenid' Specification
1044 https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
1045 for virtual machines.
1046 This can be used by the guest operating system to detect any event resulting
1047 in a time shift, for example, restoring a backup or a snapshot rollback.
1049 When creating new VMs, a 'vmgenid' will be automatically generated and saved
1050 in its configuration file.
1052 To create and add a 'vmgenid' to an already existing VM one can pass the
1053 special value `1' to let {pve} autogenerate one or manually set the 'UUID'
1054 footnote:[Online GUID generator http://guid.one/] by using it as value, for example:
1058 qm set VMID -vmgenid 1
1059 qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
1062 NOTE: The initial addition of a 'vmgenid' device to an existing VM may result
1063 in the same effects as a snapshot rollback, backup restore, etc., has,
1064 as the VM can interpret this as a generation change.
1066 In the rare case the 'vmgenid' mechanism is not wanted one can pass `0' for
1067 its value on VM creation, or retroactively delete the property in the configuration with:
1071 qm set VMID -delete vmgenid
1074 The most prominent use case for 'vmgenid' are newer Microsoft Windows
1075 operating systems, which use it to avoid problems in time sensitive or
1076 replicated services (e.g., databases, domain controller
1077 footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
1078 on snapshot rollback, backup restore or a whole VM clone operation.
1080 Importing Virtual Machines and disk images
1081 ------------------------------------------
1083 A VM export from a foreign hypervisor usually takes the form of one or more disk
1084 images, with a configuration file describing the settings of the VM (RAM, number of cores).
1086 The disk images can be in the vmdk format, if the disks come from
1087 VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
1088 The most popular configuration format for VM exports is the OVF standard, but in
1089 practice interoperation is limited because many settings are not implemented in
1090 the standard itself, and hypervisors export the supplementary information
1091 in non-standard extensions.
1093 Besides the problem of format, importing disk images from other hypervisors
1094 may fail if the emulated hardware changes too much from one hypervisor to
1095 another. Windows VMs are particularly concerned by this, as the OS is very
1096 picky about any changes of hardware. This problem may be solved by
1097 installing the MergeIDE.zip utility available from the Internet before exporting
1098 and choosing a hard disk type of *IDE* before booting the imported Windows VM.
1100 Finally there is the question of paravirtualized drivers, which improve the
1101 speed of the emulated system and are specific to the hypervisor.
1102 GNU/Linux and other free Unix OSes have all the necessary drivers installed by
1103 default and you can switch to the paravirtualized drivers right after importing
1104 the VM. For Windows VMs, you need to install the Windows paravirtualized
1105 drivers by yourself.
1107 GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
1108 that we cannot guarantee a successful import/export of Windows VMs in all
1109 cases due to the problems above.
1111 Step-by-step example of a Windows OVF import
1112 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1115 Microsoft provides https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
1116 to get started with Windows development. We are going to use one of these
1117 to demonstrate the OVF import feature.
1119 Download the Virtual Machine zip
1120 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1122 After getting informed about the user agreement, choose the _Windows 10
1123 Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.
1125 Extract the disk image from the zip
1126 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1128 Using the `unzip` utility or any archiver of your choice, unpack the zip,
1129 and copy via ssh/scp the ovf and vmdk files to your {pve} host.
1131 Import the Virtual Machine
1132 ^^^^^^^^^^^^^^^^^^^^^^^^^^
1134 This will create a new virtual machine, using cores, memory and
1135 VM name as read from the OVF manifest, and import the disks to the +local-lvm+
1136 storage. You have to configure the network manually.
1138 qm importovf 999 WinDev1709Eval.ovf local-lvm
1140 The VM is ready to be started.
1142 Adding an external disk image to a Virtual Machine
1143 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1145 You can also add an existing disk image to a VM, either coming from a
1146 foreign hypervisor, or one that you created yourself.
1148 Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:
1150 vmdebootstrap --verbose \
1151 --size 10GiB --serial-console \
1152 --grub --no-extlinux \
1153 --package openssh-server \
1154 --package avahi-daemon \
1155 --package qemu-guest-agent \
1156 --hostname vm600 --enable-dhcp \
1157 --customize=./copy_pub_ssh.sh \
1158 --sparse --image vm600.raw
1160 You can now create a new target VM for this image.
1162 qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
1163 --bootdisk scsi0 --scsihw virtio-scsi-pci --ostype l26
1165 Add the disk image as +unused0+ to the VM, using the storage +pvedir+:
1167 qm importdisk 600 vm600.raw pvedir
1169 Finally attach the unused disk to the SCSI controller of the VM:
1171 qm set 600 --scsi0 pvedir:600/vm-600-disk-1.raw
1173 The VM is ready to be started.
1177 include::qm-cloud-init.adoc[]
1181 include::qm-pci-passthrough.adoc[]
1187 You can add a hook script to VMs with the config property `hookscript`.
1189 qm set 100 -hookscript local:snippets/hookscript.pl
1191 It will be called during various phases of the guest's lifetime.
1192 For an example and documentation see the example script under
1193 `/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
1195 Managing Virtual Machines with `qm`
1196 ------------------------------------
1198 qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
1199 create and destroy virtual machines, and control execution
1200 (start/stop/suspend/resume). Besides that, you can use qm to set
1201 parameters in the associated config file. It is also possible to
1202 create and delete virtual disks.
1207 Using an iso file uploaded on the 'local' storage, create a VM
1208 with a 4 GB IDE disk on the 'local-lvm' storage
1210 qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
1216 Send a shutdown request, then wait until the VM is stopped.
1218 qm shutdown 300 && qm wait 300
1220 Same as above, but only wait for 40 seconds.
1222 qm shutdown 300 && qm wait 300 -timeout 40
1225 [[qm_configuration]]
1229 VM configuration files are stored inside the Proxmox cluster file
1230 system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
1231 Like other files stored inside `/etc/pve/`, they get automatically
1232 replicated to all other cluster nodes.
1234 NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
1235 unique cluster wide.
1237 .Example VM Configuration
1245 net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
1246 virtio0: local:vm-100-disk-1,size=32G
1249 Those configuration files are simple text files, and you can edit them
1250 using a normal text editor (`vi`, `nano`, ...). This is sometimes
1251 useful to do small corrections, but keep in mind that you need to
1252 restart the VM to apply such changes.
1254 For that reason, it is usually better to use the `qm` command to
1255 generate and modify those files, or do the whole thing using the GUI.
1256 Our toolkit is smart enough to instantaneously apply most changes to
1257 running VMs. This feature is called "hot plug", and there is no
1258 need to restart the VM in that case.
1264 VM configuration files use a simple colon separated key/value
1265 format. Each line has the following format:
1272 Blank lines in those files are ignored, and lines starting with a `#`
1273 character are treated as comments and are also ignored.
1280 When you create a snapshot, `qm` stores the configuration at snapshot
1281 time into a separate snapshot section within the same configuration
1282 file. For example, after creating a snapshot called ``testsnapshot'',
1283 your configuration file will look like this:
1285 .VM configuration with snapshot
1295 snaptime: 1457170803
1299 There are a few snapshot related properties like `parent` and
1300 `snaptime`. The `parent` property is used to store the parent/child
1301 relationship between snapshots. `snaptime` is the snapshot creation
1302 time stamp (Unix epoch).
1309 include::qm.conf.5-opts.adoc[]
1315 Online migrations, snapshots and backups (`vzdump`) set a lock to
1316 prevent incompatible concurrent actions on the affected VMs. Sometimes
1317 you need to remove such a lock manually (e.g., after a power failure).
1321 CAUTION: Only do that if you are sure the action which set the lock is no longer running.
1330 * link:/wiki/Cloud-Init_Support[Cloud-Init Support]
1340 `/etc/pve/qemu-server/<VMID>.conf`::
1342 Configuration file for the VM '<VMID>'.
1345 include::pve-copyright.adoc[]