[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
:pve-toplevel:

NAME
----

qm - Qemu/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
:pve-toplevel:
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

Qemu (short for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can
pass an ISO image as a parameter to Qemu, and the OS running in the emulated
computer will see a real CD-ROM inserted into a CD drive.

Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up Qemu when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux kvm module. In the context of {pve}, _Qemu_ and
_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the
kvm module.

Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.


Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers, it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
Qemu can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.

Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which includes a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc ...

It is highly recommended to use the virtio devices whenever you can, as they
provide a big performance improvement. Using the virtio generic disk controller
versus an emulated IDE controller will double the sequential write throughput,
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
http://www.linux-kvm.org/page/Using_VirtIO_NIC]


[[qm_virtual_machines_settings]]
Virtual Machines Settings
-------------------------

Generally speaking, {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as
doing so could incur a performance slowdown, or put your data at risk.


[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-general.png"]

General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs


[[qm_os_settings]]
OS Settings
~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-os.png"]

When creating a VM, setting the proper Operating System (OS) allows {pve} to
optimize some low level parameters. For instance, Windows OSes expect the BIOS
clock to use the local time, while Unix based OSes expect the BIOS clock to have
the UTC time.


[[qm_hard_disk]]
Hard Disk
~~~~~~~~~

Qemu can emulate a number of storage controllers:

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices on this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default an
LSI 53C895A controller.
+
A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim
for performance, and is automatically selected for newly created Linux VMs since
{pve} 4.3. Linux distributions have support for this controller since 2012, and
FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO
containing the drivers during the installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
If you aim at maximum performance, you can select a SCSI controller of type
_VirtIO SCSI single_, which will allow you to select the *IO Thread* option.
When selecting _VirtIO SCSI single_, Qemu will create a new controller for
each disk, instead of adding all disks to the same controller.

* The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
is an older type of paravirtualized controller. It has been superseded by the
VirtIO SCSI controller, in terms of features.

[thumbnail="screenshot/gui-create-vm-hard-disk.png"]
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

 * the *QEMU image format* is a copy-on-write format which allows snapshots, and
 thin provisioning of the disk image.
 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
 you would get when executing the `dd` command on a block device in Linux. This
 format does not support thin provisioning or snapshots by itself, requiring
 cooperation from the storage layer for these tasks. It may, however, be up to
 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
 http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
 * the *VMware image format* only makes sense if you intend to import/export the
 disk image to other hypervisors.

Setting the *Cache* mode of the hard drive will impact how the host system will
notify the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.

If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), and your VM has a *SCSI* controller, you can activate the *Discard*
option on the hard disks connected to that controller. With *Discard* enabled,
when the filesystem of a VM marks blocks as unused after removing files, the
emulated SCSI controller will relay this information to the storage, which will
then shrink the disk image accordingly.

If you would like a drive to be presented to the guest as a solid-state drive
rather than a rotational hard disk, you can set the *SSD emulation* option on
that drive. There is no requirement that the underlying storage actually be
backed by SSDs; this feature can be used with physical media of any type.

.IO Thread
The option *IO Thread* can only be used when using a disk with the
*VirtIO* controller, or with the *SCSI* controller, when the emulated controller
type is *VirtIO SCSI single*.
With this enabled, Qemu creates one I/O thread per storage controller,
instead of a single thread for all I/O. This increases performance when
multiple disks are used and each disk has its own storage controller.
Note that backups do not currently work with *IO Thread* enabled.
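
These disk options can also be set on the command line. As a minimal sketch
(assuming a storage named `local-lvm`; adjust names and sizes to your setup),
the following attaches a new 32 GiB disk with *Discard*, *SSD emulation* and
*IO Thread* enabled, using the _VirtIO SCSI single_ controller type:

 qm set <vmid> --scsihw virtio-scsi-single --scsi0 local-lvm:32,discard=on,ssd=1,iothread=1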


[[qm_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-vm-cpu.png"]

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each, is mostly irrelevant from a performance point of
view. However, some software licenses depend on the number of sockets a machine
has; in that case it makes sense to set the number of sockets to what the
license allows you to use.

Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multithreaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.

NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
is greater than the number of cores on the server (e.g., 4 VMs with 4 cores
each on a machine with only 8 cores). In that case the host system will
balance the Qemu execution threads between your server cores, just as if you
were running a standard multithreaded application. However, {pve} will prevent
you from assigning more virtual CPU cores than physically available, as this
will only bring the performance down due to the cost of context switches.

[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^

In addition to the number of virtual cores, you can configure how many resources
a VM can get in relation to the host CPU time, and also in relation to other
VMs.
With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
single process would fully use one single core, it would have `100%` CPU Time
usage. If a VM with four cores utilizes all its cores fully, it would
theoretically use `400%`. In reality the usage may be even a bit higher, as Qemu
can have additional threads for VM peripherals besides the vCPU core ones.
This setting can be useful if a VM should have multiple vCPUs, as it runs a few
processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
which would profit from having 8 vCPUs, but at no time should all of those 8
cores run at full load - as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set the *cpulimit* limit to
`4.0` (=400%). If all cores do the same heavy work, they would all get 50% of a
real host core's CPU time. But, if only 4 were doing work, they could still get
almost 100% of a real core each.

NOTE: VMs can, depending on their configuration, use additional threads, e.g.,
for networking or IO operations but also live migration. Thus a VM can show up
to use more CPU time than just its virtual CPUs could use. To ensure that a VM
never uses more CPU time than its assigned virtual CPUs, set the *cpulimit*
setting to the same value as the total core count.

The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
shares or CPU weight), controls how much CPU time a VM gets in regards to other
running VMs. It is a relative weight which defaults to `1024`; if you increase
this for a VM, it will be prioritized by the scheduler in comparison to other
VMs with lower weight. E.g., if VM 100 has set the default 1024 and VM 200 was
changed to `2048`, the latter VM 200 would receive twice the CPU bandwidth of
the first VM 100.

For more information see `man systemd.resource-control`. Here `CPUQuota`
corresponds to `cpulimit` and `CPUShares` corresponds to our `cpuunits`
setting; visit its Notes section for references and implementation details.
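
Both settings can be changed on a VM with `qm set`. For example, to apply the
`4.0` limit from the example above and double the default weight (illustrative
values):

 qm set <vmid> --cpulimit 4 --cpuunits 2048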

CPU Type
^^^^^^^^

Qemu can emulate a number of different *CPU types* from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3D rendering, random number generation, memory protection, etc ...
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host*, in which case the VM will have exactly the same CPU
flags as your host system.

This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the qemu process will stop. To
remedy this, Qemu has also its own CPU type *kvm64*, which {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flag set,
but is guaranteed to work everywhere.

In short, if you care about live migration and moving VMs between nodes, leave
the kvm64 default. If you don't care about live migration or have a homogeneous
cluster where all nodes have the same CPU, set the CPU type to host, as in
theory this will give your guests maximum performance.
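
For example, to switch an existing VM to the *host* type from the command line:

 qm set <vmid> --cpu host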

Meltdown / Spectre related CPU flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are several CPU flags related to the Meltdown and Spectre vulnerabilities
footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
manually, unless the selected CPU type of your VM already enables them by
default.

There are two requirements that need to be fulfilled in order to use these
CPU flags:

* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
* The guest operating system must be updated to a version which mitigates the
  attacks and is able to utilize the CPU feature

Otherwise you need to set the desired CPU flag of the virtual CPU, either by
editing the CPU options in the WebUI, or by setting the 'flags' property of the
'cpu' option in the VM configuration file.
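
As a sketch, the following enables the 'pcid' and 'spec-ctrl' flags on top of
the default kvm64 type (the quotes keep the shell from interpreting the
semicolon):

 qm set <vmid> --cpu 'kvm64,flags=+pcid;+spec-ctrl'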

For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
so-called ``microcode update'' footnote:[You can use `intel-microcode' /
`amd-microcode' from Debian non-free if your vendor does not provide such an
update. Note that not all affected CPUs can be updated to support spec-ctrl.]
for your CPU.


To check if the {pve} host is vulnerable, execute the following command as root:

----
for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
----

A community script is also available to detect if the host is still vulnerable.
footnote:[spectre-meltdown-checker https://meltdown.ovh/]

Intel processors
^^^^^^^^^^^^^^^^

* 'pcid'
+
This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
the kernel memory from the user space. Without PCID, KPTI is quite an expensive
mechanism footnote:[PCID is now a critical performance/security feature on x86
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
+
To check if the {pve} host supports PCID, execute the following command as root:
+
----
# grep ' pcid ' /proc/cpuinfo
----
+
If this does not return empty, your host's CPU has support for 'pcid'.

* 'spec-ctrl'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in Intel CPU models with the -IBRS suffix.
Must be explicitly turned on for Intel CPU models without the -IBRS suffix.
Requires an updated host CPU microcode (intel-microcode >= 20180425).

* 'ssbd'
+
Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default
in any Intel CPU model.
Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).


AMD processors
^^^^^^^^^^^^^^

* 'ibpb'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in AMD CPU models with the -IBPB suffix.
Must be explicitly turned on for AMD CPU models without the -IBPB suffix.
Requires the host CPU microcode to support this feature before it can be used
for guest CPUs.

* 'virt-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model.
Must be explicitly turned on for all AMD CPU models.
This should be provided to guests, even if amd-ssbd is also provided, for
maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" CPU model,
because this is a virtual feature which does not exist in the physical CPUs.

* 'amd-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model. Must be explicitly turned on for
all AMD CPU models.
This provides higher performance than virt-ssbd, therefore a host supporting it
should always expose it to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility, as
some kernels only know about virt-ssbd.

* 'amd-no-ssb'
+
Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
Not included by default in any AMD CPU model.
Future CPU hardware generations will not be vulnerable to CVE-2018-3639,
and thus the guest should be told not to enable its mitigations, by exposing
amd-no-ssb.
This is mutually exclusive with virt-ssbd and amd-ssbd.

NUMA
^^^^
You can also optionally emulate a *NUMA*
footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
in your VMs. The basics of the NUMA architecture mean that instead of having a
global memory pool available to all your cores, the memory is spread into local
banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
allows proper distribution of the VM resources on the host system.
This option is also required to hot-plug cores or RAM in a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of sockets of the host system.
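
For example, a sketch enabling NUMA on a VM with two virtual sockets, matching
a two socket host:

 qm set <vmid> --numa 1 --sockets 2 --cores 4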

vCPU hot-plug
^^^^^^^^^^^^^

Modern operating systems introduced the capability to hot-plug and, to a
certain extent, hot-unplug CPUs in a running system. Virtualization allows us
to avoid a lot of the (physical) problems real hardware can cause in such
scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it is absolutely needed. Most of the functionality can
be replicated with other, well tested and less complicated features; see
xref:qm_cpu_resource_limits[Resource Limits].

In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with less than this total core count of CPUs, you may use the
*vcpus* setting, which denotes how many vCPUs should be plugged in at VM start.
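
For example, a VM that can hot-plug up to 4 vCPUs but boots with only 2 could
be configured as follows (a sketch):

 qm set <vmid> --sockets 1 --cores 4 --vcpus 2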

Currently this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.

You can use a udev rule as follows to automatically set new CPUs as online in
the guest:

----
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
----

Save this under /etc/udev/rules.d/ as a file ending in `.rules`.

Note: CPU hot-remove is machine dependent and requires guest cooperation.
The deletion command does not guarantee CPU removal to actually happen;
typically it's a request forwarded to the guest, using a target dependent
mechanism, e.g., ACPI on x86/amd64.


[[qm_memory]]
Memory
~~~~~~

For each VM you have the option to set a fixed size memory or ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

.Fixed Memory Allocation
[thumbnail="screenshot/gui-create-vm-memory.png"]

When setting memory and minimum memory to the same amount,
{pve} will simply allocate what you specify to your VM.

Even when using a fixed memory size, the ballooning device gets added to the
VM, because it delivers useful information such as how much memory the guest
really uses.
In general, you should leave *ballooning* enabled, but if you want to disable
it (e.g. for debugging purposes), simply uncheck
*Ballooning Device* or set

 balloon: 0

in the configuration.

.Automatic Memory Allocation

// see autoballoon() in pvestatd.pm
When setting the minimum memory lower than memory, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified.

When the host is running low on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one is a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will
get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8GB extra RAM and each HTTP
server will get 1.6GB.
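
As a CLI sketch, the database VM from this example could be configured with a
maximum of 16GB, a minimum of 4GB and a Shares property of 3000 (adjust the
sizes to your setup):

 qm set <vmid> --memory 16384 --balloon 4096 --shares 3000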

All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.


[[qm_network_device]]
Network Device
~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-network.png"]

Each VM can have many _Network interface controllers_ (NIC), of four different
types:

 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
 * the *Realtek 8139* emulates an older 100 MBit/s network card, and should
only be used when emulating older operating systems (released before 2002).
 * the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

The NIC you added to the VM can follow one of two different models:

 * in the default *Bridged mode*, each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
 * in the alternative *NAT mode*, each virtual NIC will only communicate with
the Qemu user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP server will serve addresses in the
private 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode,
and should only be used for testing. This mode is only available via CLI or the
API, but not via the WebUI.

You can also skip adding a network device when creating a VM by selecting *No
network device*.

.Multiqueue
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set the number of
multi-purpose channels on each VirtIO NIC inside the VM, with the ethtool
command:

`ethtool -L ens1 combined X`

where X is the number of vCPUs of the VM.

You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.
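
For example, a sketch enabling four queues on a VirtIO NIC attached to the
default bridge, for a guest with 4 total cores:

 qm set <vmid> --net0 virtio,bridge=vmbr0,queues=4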

[[qm_display]]
Display
~~~~~~~

QEMU can virtualize a few types of VGA hardware. Some examples are:

* *std*, the default, emulates a card with Bochs VBE extensions.
* *cirrus*, this was once the default; it emulates a very old hardware module
with all its problems. This display type should only be used if really
necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier.
* *vmware* is a VMware SVGA-II compatible adapter.
* *qxl* is the QXL paravirtualized graphics card. Selecting this also
enables SPICE for the VM.

You can edit the amount of memory given to the virtual GPU by setting
the 'memory' option. This can enable higher resolutions inside the VM,
especially with SPICE/QXL.
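
For instance, a sketch selecting the QXL adapter with 32 MiB of display memory:

 qm set <vmid> --vga qxl,memory=32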

As the memory is reserved by the display device, selecting Multi-Monitor mode
for SPICE (e.g., `qxl2` for dual monitors) has some implications:

* Windows needs a device for each monitor, so if your 'ostype' is some
version of Windows, {pve} gives the VM an extra device per monitor.
Each device gets the specified amount of memory.

* Linux VMs can always enable more virtual monitors, but selecting
a Multi-Monitor mode multiplies the memory given to the device by
the number of monitors.

Selecting `serialX` as display 'type' disables the VGA output, and redirects
the Web Console to the selected serial port. A configured display 'memory'
setting will be ignored in that case.

[[qm_usb_passthrough]]
USB Passthrough
~~~~~~~~~~~~~~~

There are two different types of USB passthrough devices:

* Host USB passthrough
* SPICE USB passthrough

Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.

The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same USB device
have the same id.

The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).
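
As a sketch, the following adds one device by vendor/product-id and a SPICE USB
port (reusing the example id from above):

 qm set <vmid> --usb0 host=0123:abcd
 qm set <vmid> --usb1 spice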

If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. This is useful
if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client is,
directly to the VM (for example an input device or hardware dongle).


[[qm_bios_and_uefi]]
BIOS and UEFI
~~~~~~~~~~~~~

In order to properly emulate a computer, QEMU needs to use a firmware.
By default QEMU uses *SeaBIOS* for this, which is an open-source, x86 BIOS
implementation. SeaBIOS is a good choice for most standard setups.

There are, however, some scenarios in which a BIOS is not a good firmware
to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
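
You can switch the firmware of a VM to OVMF via the 'BIOS' option, for example
on the command line:

 qm set <vmid> --bios ovmf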

If you want to use OVMF, there are several things to consider:

In order to save things like the *boot order*, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

 qm set <vmid> -efidisk0 <storage>:1,format=<format>

Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
hardware section of a VM.

When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
by pressing the ESC button during boot), or you have to choose
SPICE as the display type.

[[qm_startup_and_shutdown]]
Automatic Start and Shutdown of Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After creating your VMs, you probably want them to start automatically
when the host system boots. For this you need to select the option 'Start at
boot' from the 'Options' tab of your VM in the web interface, or set it with
the following command:

 qm set <vmid> -onboot 1

.Start and Shutdown Order

[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters (an example of setting them follows the list):

* *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
you want the VM to be the first to be started. (We use the reverse startup
order for shutdown, so a machine with a start order of 1 would be the last to
be shut down.) If multiple VMs have the same order defined on a host, they will
additionally be ordered by 'VMID' in ascending order.
* *Startup delay*: Defines the interval between this VM start and subsequent
VM starts. E.g. set it to 240 if you want to wait 240 seconds before starting
other VMs.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the VM to be offline after issuing a shutdown command.
By default this value is set to 180, which means that {pve} will issue a
shutdown request and wait 180 seconds for the machine to be offline. If
the machine is still online after the timeout it will be stopped forcefully.
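
All three parameters map to the 'startup' option, so a VM that should start
first, give other VMs 240 seconds to follow, and get 180 seconds to shut down
could be configured like this:

 qm set <vmid> -startup order=1,up=240,down=180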

NOTE: VMs managed by the HA stack do not currently follow the 'start on boot'
and 'boot order' options. Those VMs will be skipped by the startup and
shutdown algorithm as the HA manager itself ensures that VMs get started and
stopped.

Please note that machines without a Start/Shutdown order parameter will always
start after those where the parameter is set. Further, this parameter can only
be enforced between virtual machines running on the same host, not
cluster-wide.


[[qm_migration]]
Migration
---------

[thumbnail="screenshot/gui-qemu-migrate.png"]

If you have a cluster, you can migrate your VM to another host with

 qm migrate <vmid> <target>

There are generally two mechanisms for this:

* Online Migration (aka Live Migration)
* Offline Migration

Online Migration
~~~~~~~~~~~~~~~~

When your VM is running and it has no local resources defined (such as disks
on local storage, passed through devices, etc.) you can initiate a live
migration with the -online flag.
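
For example:

 qm migrate <vmid> <target> -online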

How it works
^^^^^^^^^^^^

This starts a Qemu process on the target host with the 'incoming' flag, which
means that the process starts and waits for the memory data and device states
from the source Virtual Machine (since all other resources, e.g. disks,
are shared, the memory content and device state are the only things left
to transmit).

Once this connection is established, the source begins to send the memory
content asynchronously to the target. If the memory on the source changes,
those sections are marked dirty and there will be another pass of sending data.
This happens until the amount of data to send is so small that it can
pause the VM on the source, send the remaining data to the target, and start
the VM on the target in under a second.

Requirements
^^^^^^^^^^^^

For Live Migration to work, there are some things required:

* The VM has no local resources (e.g. passed through devices, local disks, etc.)
* The hosts are in the same {pve} cluster.
* The hosts have a working (and reliable) network connection.
* The target host must have the same or higher versions of the
  {pve} packages. (It *might* work the other way, but this is never guaranteed.)

Offline Migration
~~~~~~~~~~~~~~~~~

If you have local resources, you can still offline migrate your VMs,
as long as all disks are on storages which are defined on both hosts.
Then the migration will copy the disks over the network to the target host.

[[qm_copy_and_clone]]
Copies and Clones
-----------------

[thumbnail="screenshot/gui-qemu-full-clone.png"]

VM installation is usually done using an installation medium (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time consuming task one might want to avoid.

An easy way to deploy many VMs of the same type is to copy an existing
VM. We use the term 'clone' for such copies, and distinguish between
'linked' and 'full' clones.

Full Clone::

The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
+

It is possible to select a *Target Storage*, so one can use this to
migrate a VM to a totally different storage. You can also change the
disk image *Format* if the storage driver supports several formats.
+

NOTE: A full clone needs to read and copy all VM image data. This is
usually much slower than creating a linked clone.
+

Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.


Linked Clone::

Modern storage drivers support a way to generate fast linked
clones. Such a clone is a writable copy whose initial contents are the
same as the original data. Creating a linked clone is nearly
instantaneous, and initially consumes no additional space.
+

They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
location. This technique is called 'Copy-on-write'.
+

This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
+

NOTE: You cannot delete the original template while linked clones
exist.
+

It is not possible to change the *Target storage* for linked clones,
because this is a storage internal feature.


The *Target node* option allows you to create the new VM on a
different node. The only restriction is that the VM is on shared
storage, and that storage is also available on the target node.

To avoid resource conflicts, all network interface MAC addresses get
randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
setting.
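
Clones can also be created on the command line. As a sketch, the following
creates a full clone of VM 100 as new VM 123 (the IDs are illustrative):

 qm clone 100 123 --full --name myclone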


[[qm_templates]]
Virtual Machine Templates
-------------------------

One can convert a VM into a Template. Such templates are read-only,
and you can use them to create linked clones.

NOTE: It is not possible to start templates, because this would modify
the disk images. If you want to change the template, create a linked
clone and modify that.
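
To convert a VM into a template from the command line:

 qm template <vmid>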

VM Generation ID
----------------

{pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
'vmgenid' Specification
https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
for virtual machines.
This can be used by the guest operating system to detect any event resulting
in a time shift, for example, restoring a backup or a snapshot rollback.

When creating new VMs, a 'vmgenid' will be automatically generated and saved
in its configuration file.

To create and add a 'vmgenid' to an already existing VM, one can pass the
special value `1' to let {pve} autogenerate one, or manually set the 'UUID'
footnote:[Online GUID generator http://guid.one/] by using it as the value,
e.g.:

----
 qm set VMID -vmgenid 1
 qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
----

NOTE: The initial addition of a 'vmgenid' device to an existing VM may have the
same effect as a snapshot rollback, backup restore, etc., as the VM can
interpret it as a generation change.

In the rare case the 'vmgenid' mechanism is not wanted, one can pass `0' for
its value on VM creation, or retroactively delete the property in the
configuration with:

----
 qm set VMID -delete vmgenid
----

The most prominent use case for 'vmgenid' is newer Microsoft Windows
operating systems, which use it to avoid problems in time sensitive or
replicated services (e.g., databases, domain controller
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole VM clone operation.

Importing Virtual Machines and disk images
------------------------------------------

A VM export from a foreign hypervisor usually takes the form of one or more disk
images, with a configuration file describing the settings of the VM (RAM,
number of cores). +
The disk images can be in the vmdk format, if the disks come from
VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
The most popular configuration format for VM exports is the OVF standard, but in
practice interoperation is limited because many settings are not implemented in
the standard itself, and hypervisors export the supplementary information
in non-standard extensions.

Besides the problem of format, importing disk images from other hypervisors
may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
installing the MergeIDE.zip utility available from the Internet before exporting
and choosing a hard disk type of *IDE* before booting the imported Windows VM.

Finally there is the question of paravirtualized drivers, which improve the
speed of the emulated system and are specific to the hypervisor.
GNU/Linux and other free Unix OSes have all the necessary drivers installed by
default and you can switch to the paravirtualized drivers right after importing
the VM. For Windows VMs, you need to install the Windows paravirtualized
drivers by yourself.

GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
that we cannot guarantee a successful import/export of Windows VMs in all
cases due to the problems above.

Step-by-step example of a Windows OVF import
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.

Download the Virtual Machine zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After reviewing the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.

Extract the disk image from the zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy the ovf and vmdk files to your {pve} host via ssh/scp.

Import the Virtual Machine
^^^^^^^^^^^^^^^^^^^^^^^^^^

This will create a new virtual machine, using cores, memory and
VM name as read from the OVF manifest, and import the disks to the +local-lvm+
storage. You have to configure the network manually.

 qm importovf 999 WinDev1709Eval.ovf local-lvm

The VM is ready to be started.

Adding an external disk image to a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.

Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:

 vmdebootstrap --verbose \
  --size 10GiB --serial-console \
  --grub --no-extlinux \
  --package openssh-server \
  --package avahi-daemon \
  --package qemu-guest-agent \
  --hostname vm600 --enable-dhcp \
  --customize=./copy_pub_ssh.sh \
  --sparse --image vm600.raw

You can now create a new target VM for this image.

 qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
   --bootdisk scsi0 --scsihw virtio-scsi-pci --ostype l26

Add the disk image as +unused0+ to the VM, using the storage +pvedir+:

 qm importdisk 600 vm600.raw pvedir

Finally attach the unused disk to the SCSI controller of the VM:

 qm set 600 --scsi0 pvedir:600/vm-600-disk-1.raw

The VM is ready to be started.


ifndef::wiki[]
include::qm-cloud-init.adoc[]
endif::wiki[]

ifndef::wiki[]
include::qm-pci-passthrough.adoc[]
endif::wiki[]


Managing Virtual Machines with `qm`
------------------------------------

qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Using an ISO file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage:

 qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso

Start the new VM:

 qm start 300

Send a shutdown request, then wait until the VM is stopped.

 qm shutdown 300 && qm wait 300

Same as above, but only wait for 40 seconds.

 qm shutdown 300 && qm wait 300 -timeout 40


[[qm_configuration]]
Configuration
-------------

VM configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
Like other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster wide.

.Example VM Configuration
----
cores: 1
sockets: 1
memory: 512
name: webmail
ostype: l26
bootdisk: virtio0
net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
virtio0: local:vm-100-disk-1,size=32G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful to do small corrections, but keep in mind that you need to
restart the VM to apply such changes.

For that reason, it is usually better to use the `qm` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to a
running VM. This feature is called "hot plug", and there is no
need to restart the VM in that case.
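
Which device classes may be hot plugged is controlled by the 'hotplug' option;
a sketch showing its default selection:

 qm set <vmid> -hotplug network,disk,usb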


File Format
~~~~~~~~~~~

VM configuration files use a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.


[[qm_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `qm` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.VM configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).
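
Snapshots are managed with `qm` as well. For example, to create, list, roll
back to and delete the snapshot from above (VMID 100 is illustrative):

 qm snapshot 100 testsnapshot
 qm listsnapshot 100
 qm rollback 100 testsnapshot
 qm delsnapshot 100 testsnapshot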


[[qm_options]]
Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations, snapshots and backups (`vzdump`) set a lock to
prevent incompatible concurrent actions on the affected VMs. Sometimes
you need to remove such a lock manually (e.g., after a power failure).

 qm unlock <vmid>

CAUTION: Only do that if you are sure the action which set the lock is
no longer running.


ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Cloud-Init_Support[Cloud-Init Support]

endif::wiki[]


ifdef::manvolnum[]

Files
------

`/etc/pve/qemu-server/<VMID>.conf`::

Configuration file for the VM '<VMID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]