1[[chapter_virtual_machines]]
2ifdef::manvolnum[]
3qm(1)
4=====
5:pve-toplevel:
6
7NAME
8----
9
10qm - Qemu/KVM Virtual Machine Manager
11
12
13SYNOPSIS
14--------
15
16include::qm.1-synopsis.adoc[]
17
18DESCRIPTION
19-----------
20endif::manvolnum[]
21ifndef::manvolnum[]
22Qemu/KVM Virtual Machines
23=========================
24:pve-toplevel:
25endif::manvolnum[]
26
27// deprecates
28// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
29// http://pve.proxmox.com/wiki/KVM
30// http://pve.proxmox.com/wiki/Qemu_Server
31
Qemu (short for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files, and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.
37
38A guest operating system running in the emulated computer accesses these
39devices, and runs as if it were running on real hardware. For instance, you can pass
40an ISO image as a parameter to Qemu, and the OS running in the emulated computer
41will see a real CD-ROM inserted into a CD drive.
42
Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
45overwhelming majority of server hardware. The emulation of PC clones is also one
46of the fastest due to the availability of processor extensions which greatly
47speed up Qemu when the emulated architecture is the same as the host
48architecture.
49
50NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
51It means that Qemu is running with the support of the virtualization processor
52extensions, via the Linux KVM module. In the context of {pve} _Qemu_ and
53_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the KVM
54module.
55
56Qemu inside {pve} runs as a root process, since this is required to access block
57and PCI devices.
58
59
60Emulated devices and paravirtualized devices
61--------------------------------------------
62
63The PC hardware emulated by Qemu includes a mainboard, network controllers,
64SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
65the `kvm(1)` man page) all of them emulated in software. All these devices
66are the exact software equivalent of existing hardware devices, and if the OS
67running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.
70
71This however has a performance cost, as running in software what was meant to
72run in hardware involves a lot of extra work for the host CPU. To mitigate this,
73Qemu can present to the guest operating system _paravirtualized devices_, where
74the guest OS recognizes it is running inside Qemu and cooperates with the
75hypervisor.
76
77Qemu relies on the virtio virtualization standard, and is thus able to present
78paravirtualized virtio devices, which includes a paravirtualized generic disk
79controller, a paravirtualized network card, a paravirtualized serial port,
80a paravirtualized SCSI controller, etc ...
81
82It is highly recommended to use the virtio devices whenever you can, as they
83provide a big performance improvement. Using the virtio generic disk controller
84versus an emulated IDE controller will double the sequential write throughput,
85as measured with `bonnie++(8)`. Using the virtio network interface can deliver
86up to three times the throughput of an emulated Intel E1000 network card, as
87measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
88https://www.linux-kvm.org/page/Using_VirtIO_NIC]
89
90
91[[qm_virtual_machines_settings]]
92Virtual Machines Settings
93-------------------------
94
Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as a
misconfiguration could cause a performance slowdown or put your data at risk.
98
99
100[[qm_general_settings]]
101General Settings
102~~~~~~~~~~~~~~~~
103
104[thumbnail="screenshot/gui-create-vm-general.png"]
105
106General settings of a VM include
107
108* the *Node* : the physical server on which the VM will run
109* the *VM ID*: a unique number in this {pve} installation used to identify your VM
110* *Name*: a free form text string you can use to describe the VM
111* *Resource Pool*: a logical group of VMs
112
113
114[[qm_os_settings]]
115OS Settings
116~~~~~~~~~~~
117
118[thumbnail="screenshot/gui-create-vm-os.png"]
119
When creating a virtual machine (VM), setting the proper Operating System (OS)
allows {pve} to optimize some low level parameters. For instance, Windows OSes
expect the BIOS clock to use the local time, while Unix-based OSes expect the
BIOS clock to have the UTC time.
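
The OS type is stored in the `ostype` option of the VM configuration and can
also be set from the command line. A minimal sketch (the VM ID is a
placeholder):

----
# mark the guest as a modern Linux (2.6 - 5.X kernel)
qm set <vmid> -ostype l26

# or as Windows 10/2016/2019
qm set <vmid> -ostype win10
----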
124
125[[qm_system_settings]]
126System Settings
127~~~~~~~~~~~~~~~
128
129On VM creation you can change some basic system components of the new VM. You
130can specify which xref:qm_display[display type] you want to use.
131[thumbnail="screenshot/gui-create-vm-system.png"]
132Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
133If you plan to install the QEMU Guest Agent, or if your selected ISO image
134already ships and installs it automatically, you may want to tick the 'Qemu
135Agent' box, which lets {pve} know that it can use its features to show some
136more information, and complete some actions (for example, shutdown or
137snapshots) more intelligently.
138
{pve} allows you to boot VMs with different firmware and machine types, namely
xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you only want to switch
from the default SeaBIOS to OVMF if you plan to use
xref:qm_pci_passthrough[PCIe passthrough]. A VM's 'Machine Type' defines the
hardware layout of the VM's virtual motherboard. You can choose between the
default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be desired if
one wants to pass through PCIe hardware.
148
149[[qm_hard_disk]]
150Hard Disk
151~~~~~~~~~
152
153[[qm_hard_disk_bus]]
154Bus/Controller
155^^^^^^^^^^^^^^
156Qemu can emulate a number of storage controllers:
157
* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.
163
164* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
165design, allowing higher throughput and a greater number of devices to be
166connected. You can connect up to 6 devices on this controller.
167
* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates an
LSI 53C895A controller by default.
171+
172A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim for
173performance and is automatically selected for newly created Linux VMs since
174{pve} 4.3. Linux distributions have support for this controller since 2012, and
175FreeBSD since 2014. For Windows OSes, you need to provide an extra iso
176containing the drivers during the installation.
177// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
178If you aim at maximum performance, you can select a SCSI controller of type
179_VirtIO SCSI single_ which will allow you to select the *IO Thread* option.
180When selecting _VirtIO SCSI single_ Qemu will create a new controller for
181each disk, instead of adding all disks to the same controller.
182
183* The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
184is an older type of paravirtualized controller. It has been superseded by the
185VirtIO SCSI Controller, in terms of features.
186
187[thumbnail="screenshot/gui-create-vm-hard-disk.png"]
188
189[[qm_hard_disk_formats]]
190Image Format
191^^^^^^^^^^^^
192On each controller you attach a number of emulated hard disks, which are backed
193by a file or a block device residing in the configured storage. The choice of
194a storage type will determine the format of the hard disk image. Storages which
195present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
197either the *raw disk image format* or the *QEMU image format*.
198
199 * the *QEMU image format* is a copy on write format which allows snapshots, and
200 thin provisioning of the disk image.
201 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
202 you would get when executing the `dd` command on a block device in Linux. This
203 format does not support thin provisioning or snapshots by itself, requiring
204 cooperation from the storage layer for these tasks. It may, however, be up to
205 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
206 https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
207 * the *VMware image format* only makes sense if you intend to import/export the
208 disk image to other hypervisors.
209
210[[qm_hard_disk_cache]]
211Cache Mode
212^^^^^^^^^^
213Setting the *Cache* mode of the hard drive will impact how the host system will
214notify the guest systems of block write completions. The *No cache* default
215means that the guest system will be notified that a write is complete when each
216block reaches the physical storage write queue, ignoring the host page cache.
217This provides a good balance between safety and speed.
218
219If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
220you can set the *No backup* option on that disk.
221
222If you want the {pve} storage replication mechanism to skip a disk when starting
223 a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.
227
228[[qm_hard_disk_discard]]
229Trim/Discard
230^^^^^^^^^^^^
231If your storage supports _thin provisioning_ (see the storage chapter in the
232{pve} guide), you can activate the *Discard* option on a drive. With *Discard*
233set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
234https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
235marks blocks as unused after deleting files, the controller will relay this
236information to the storage, which will then shrink the disk image accordingly.
237For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
238option on the drive. Some guest operating systems may also require the
239*SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
240only supported on guests using Linux Kernel 5.0 or higher.
241
242If you would like a drive to be presented to the guest as a solid-state drive
243rather than a rotational hard disk, you can set the *SSD emulation* option on
244that drive. There is no requirement that the underlying storage actually be
245backed by SSDs; this feature can be used with physical media of any type.
246Note that *SSD emulation* is not supported on *VirtIO Block* drives.
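
As an illustrative sketch (the storage name `local-lvm` and the size are
placeholders), both options are set as properties of the drive:

----
# allocate a new 32 GiB SCSI disk with Discard and SSD emulation enabled
qm set <vmid> -scsi0 local-lvm:32,discard=on,ssd=1
----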
247
248
249[[qm_hard_disk_iothread]]
250IO Thread
251^^^^^^^^^
The *IO Thread* option can only be used when the disk is attached to the
*VirtIO* controller, or to the *SCSI* controller when the emulated controller
type is *VirtIO SCSI single*.
255With this enabled, Qemu creates one I/O thread per storage controller,
256rather than a single thread for all I/O. This can increase performance when
257multiple disks are used and each disk has its own storage controller.
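
For example, the following sketch (storage name and disk size are placeholders)
selects the _VirtIO SCSI single_ controller type and attaches a disk with
*IO Thread* enabled:

----
# switch the SCSI controller type and add a disk with its own I/O thread
qm set <vmid> -scsihw virtio-scsi-single -scsi0 local-lvm:32,iothread=1
----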
258
259
260[[qm_cpu]]
261CPU
262~~~
263
264[thumbnail="screenshot/gui-create-vm-cpu.png"]
265
266A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
267This CPU can then contain one or many *cores*, which are independent
268processing units. Whether you have a single CPU socket with 4 cores, or two CPU
269sockets with two cores is mostly irrelevant from a performance point of view.
However, some software licenses depend on the number of sockets a machine has;
in that case it makes sense to set the number of sockets to what the license
allows you to use.
273
274Increasing the number of virtual CPUs (cores and sockets) will usually provide a
275performance improvement though that is heavily dependent on the use of the VM.
276Multi-threaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
278execution on the host system. If you're not sure about the workload of your VM,
279it is usually a safe bet to set the number of *Total cores* to 2.
280
281NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
282is greater than the number of cores on the server (e.g., 4 VMs with each 4
283cores on a machine with only 8 cores). In that case the host system will
284balance the Qemu execution threads between your server cores, just like if you
285were running a standard multi-threaded application. However, {pve} will prevent
286you from starting VMs with more virtual CPU cores than physically available, as
287this will only bring the performance down due to the cost of context switches.
288
289[[qm_cpu_resource_limits]]
290Resource Limits
291^^^^^^^^^^^^^^^
292
In addition to the number of virtual cores, you can configure how many resources
a VM can get in relation to the host CPU time and also in relation to other
VMs.
With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
single process would fully use one single core it would have `100%` CPU Time
usage. If a VM with four cores utilizes all its cores fully it would
theoretically use `400%`. In reality the usage may be even a bit higher as Qemu
can have additional threads for VM peripherals besides the vCPU core ones.
This setting can be useful if a VM should have multiple vCPUs, as it runs a few
processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
which would profit from having 8 vCPUs, but at no time should all of those 8
cores run at full load - as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set the *cpulimit* limit to
`4.0` (=400%). If all cores do the same heavy work they would all get 50% of a
real host core's CPU time. But, if only 4 were doing work they could still get
almost 100% of a real core each.
312
NOTE: VMs can, depending on their configuration, use additional threads, e.g.,
for networking or IO operations but also live migration. Thus a VM can appear to
use more CPU time than just its virtual CPUs could use. To ensure that a VM
never uses more CPU time than its assigned virtual CPUs, set the *cpulimit*
setting to the same value as the total core count.
318
The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
shares or CPU weight), controls how much CPU time a VM gets in relation to other
running VMs. It is a relative weight which defaults to `1024`; if you increase
this for a VM it will be prioritized by the scheduler in comparison to other
VMs with lower weight. E.g., if VM 100 has the default 1024 and VM 200 was
changed to `2048`, VM 200 would receive twice the CPU bandwidth of
VM 100.
326
For more information see `man systemd.resource-control`; there `CPUQuota`
corresponds to `cpulimit` and `CPUShares` corresponds to our `cpuunits`
setting. Visit its Notes section for references and implementation details.
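
Both settings can be changed with `qm set`. A short sketch, using the 8-vCPU
example from above (the VM ID is a placeholder):

----
# cap the whole VM at the equivalent of 4 host cores (400%)
qm set <vmid> -cpulimit 4

# double the scheduling weight compared to the default of 1024
qm set <vmid> -cpuunits 2048
----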
330
331CPU Type
332^^^^^^^^
333
Qemu can emulate a number of different *CPU types* from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3D rendering, random number generation, memory protection, etc.
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host* in which case the VM will have exactly the same CPU flags
as your host system.
342
This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the qemu process will stop. To
remedy this Qemu also has its own CPU type *kvm64*, which {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flag set,
but is guaranteed to work everywhere.
349
350In short, if you care about live migration and moving VMs between nodes, leave
351the kvm64 default. If you don’t care about live migration or have a homogeneous
352cluster where all nodes have the same CPU, set the CPU type to host, as in
353theory this will give your guests maximum performance.
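
For example, the CPU type can be set from the command line (a minimal sketch,
VM ID is a placeholder):

----
# maximum performance, but only migrate between identical hosts
qm set <vmid> -cpu host

# safest choice for live migration across mixed hardware (the default)
qm set <vmid> -cpu kvm64
----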
354
355Custom CPU Types
356^^^^^^^^^^^^^^^^
357
358You can specify custom CPU types with a configurable set of features. These are
359maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
360an administrator. See `man cpu-models.conf` for format details.
361
362Specified custom types can be selected by any user with the `Sys.Audit`
363privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
364or API, the name needs to be prefixed with 'custom-'.
365
366Meltdown / Spectre related CPU flags
367^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
368
369There are several CPU flags related to the Meltdown and Spectre vulnerabilities
370footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
371manually unless the selected CPU type of your VM already enables them by default.
372
373There are two requirements that need to be fulfilled in order to use these
374CPU flags:
375
376* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
377* The guest operating system must be updated to a version which mitigates the
378 attacks and is able to utilize the CPU feature
379
380Otherwise you need to set the desired CPU flag of the virtual CPU, either by
381editing the CPU options in the WebUI, or by setting the 'flags' property of the
382'cpu' option in the VM configuration file.
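
In the configuration file this ends up in the `flags` property of the `cpu`
option; flags are prefixed with `+` to enable or `-` to disable them and are
separated by semicolons. A sketch (the chosen flags are just an example):

----
cpu: kvm64,flags=+pcid;+spec-ctrl
----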
383
384For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
385so-called ``microcode update'' footnote:[You can use `intel-microcode' /
386`amd-microcode' from Debian non-free if your vendor does not provide such an
387update. Note that not all affected CPUs can be updated to support spec-ctrl.]
388for your CPU.
389
390
391To check if the {pve} host is vulnerable, execute the following command as root:
392
393----
394for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
395----
396
A community script is also available to detect if the host is still vulnerable.
footnote:[spectre-meltdown-checker https://meltdown.ovh/]
399
400Intel processors
401^^^^^^^^^^^^^^^^
402
403* 'pcid'
404+
405This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
406called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
407the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
408mechanism footnote:[PCID is now a critical performance/security feature on x86
409https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
410+
411To check if the {pve} host supports PCID, execute the following command as root:
412+
413----
414# grep ' pcid ' /proc/cpuinfo
415----
416+
If this does not return empty, your host's CPU has support for 'pcid'.
418
419* 'spec-ctrl'
420+
421Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
422in cases where retpolines are not sufficient.
423Included by default in Intel CPU models with -IBRS suffix.
424Must be explicitly turned on for Intel CPU models without -IBRS suffix.
425Requires an updated host CPU microcode (intel-microcode >= 20180425).
426+
427* 'ssbd'
428+
429Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
430Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).
432
433
434AMD processors
435^^^^^^^^^^^^^^
436
437* 'ibpb'
438+
439Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
440in cases where retpolines are not sufficient.
441Included by default in AMD CPU models with -IBPB suffix.
442Must be explicitly turned on for AMD CPU models without -IBPB suffix.
443Requires the host CPU microcode to support this feature before it can be used for guest CPUs.
444
445
446
447* 'virt-ssbd'
448+
449Required to enable the Spectre v4 (CVE-2018-3639) fix.
450Not included by default in any AMD CPU model.
451Must be explicitly turned on for all AMD CPU models.
This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" cpu model,
because this is a virtual feature which does not exist in the physical CPUs.
455
456
457* 'amd-ssbd'
458+
459Required to enable the Spectre v4 (CVE-2018-3639) fix.
460Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.
463
464
465* 'amd-no-ssb'
466+
467Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
468Not included by default in any AMD CPU model.
469Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
470and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
471This is mutually exclusive with virt-ssbd and amd-ssbd.
472
473
474NUMA
475^^^^
476You can also optionally emulate a *NUMA*
477footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
478in your VMs. The basics of the NUMA architecture mean that instead of having a
479global memory pool available to all your cores, the memory is spread into local
480banks close to each socket.
481This can bring speed improvements as the memory bus is not a bottleneck
482anymore. If your system has a NUMA architecture footnote:[if the command
483`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
485will allow proper distribution of the VM resources on the host system.
486This option is also required to hot-plug cores or RAM in a VM.
487
488If the NUMA option is used, it is recommended to set the number of sockets to
489the number of nodes of the host system.
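
For example, on a host with two NUMA nodes, a configuration along these lines
(the core count is a placeholder) keeps the virtual topology close to the
physical one:

----
# two virtual sockets with 4 cores each, NUMA enabled
qm set <vmid> -sockets 2 -cores 4 -numa 1
----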
490
491vCPU hot-plug
492^^^^^^^^^^^^^
493
494Modern operating systems introduced the capability to hot-plug and, to a
495certain extent, hot-unplug CPUs in a running system. Virtualization allows us
496to avoid a lot of the (physical) problems real hardware can cause in such
497scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it is absolutely needed. Most of the functionality can
500be replicated with other, well tested and less complicated, features, see
501xref:qm_cpu_resource_limits[Resource Limits].
502
In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with less than this total core count of CPUs you may use the
*vcpus* setting; it denotes how many vCPUs should be plugged in at VM start.

Currently this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.
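
For example, the following sketch creates a topology with 4 hot-pluggable cores,
of which only 2 are plugged in at boot:

----
qm set <vmid> -cores 4 -vcpus 2
----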
509
510You can use a udev rule as follow to automatically set new CPUs as online in
511the guest:
512
513----
514SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
515----
516
517Save this under /etc/udev/rules.d/ as a file ending in `.rules`.
518
Note: CPU hot-remove is machine dependent and requires guest cooperation.
The deletion command does not guarantee that CPU removal actually happens;
typically it is a request forwarded to the guest using a target-dependent
mechanism, e.g., ACPI on x86/amd64.
523
524
525[[qm_memory]]
526Memory
527~~~~~~
528
For each VM you have the option to set a fixed memory size or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.
532
533.Fixed Memory Allocation
534[thumbnail="screenshot/gui-create-vm-memory.png"]
535
536When setting memory and minimum memory to the same amount
537{pve} will simply allocate what you specify to your VM.
538
539Even when using a fixed memory size, the ballooning device gets added to the
540VM, because it delivers useful information such as how much memory the guest
541really uses.
542In general, you should leave *ballooning* enabled, but if you want to disable
543it (e.g. for debugging purposes), simply uncheck
544*Ballooning Device* or set
545
546 balloon: 0
547
548in the configuration.
549
550.Automatic Memory Allocation
551
552// see autoballoon() in pvestatd.pm
553When setting the minimum memory lower than memory, {pve} will make sure that the
554minimum amount you specified is always available to the VM, and if RAM usage on
555the host is below 80%, will dynamically add memory to the guest up to the
556maximum memory specified.
557
558When the host is running low on RAM, the VM will then release some memory
559back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
561done via a special `balloon` kernel driver running inside the guest, which will
562grab or release memory pages from the host.
563footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]
564
565When multiple VMs use the autoallocate facility, it is possible to set a
566*Shares* coefficient which indicates the relative amount of the free host memory
567that each VM should take. Suppose for instance you have four VMs, three of them
568running an HTTP server and the last one is a database server. To cache more
569database blocks in the database server RAM, you would like to prioritize the
570database VM when spare RAM is available. For this you assign a Shares property
571of 3000 to the database VM, leaving the other VMs to the Shares default setting
572of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
573* 80/100 - 16 = 9GB RAM to be allocated to the VMs. The database VM will get 9 *
5743000 / (3000 + 1000 + 1000 + 1000) = 4.5 GB extra RAM and each HTTP server will
575get 1.5 GB.
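
Translated into a configuration, the database VM from the example above could
look like this (the memory sizes are placeholders):

----
# up to 16 GiB, guaranteed minimum 4 GiB, higher priority for spare host RAM
qm set <vmid> -memory 16384 -balloon 4096 -shares 3000
----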
576
577All Linux distributions released after 2010 have the balloon kernel driver
578included. For Windows OSes, the balloon driver needs to be added manually and can
579incur a slowdown of the guest, so we don't recommend using it on critical
580systems.
581// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
582
583When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
584of RAM available to the host.
585
586
587[[qm_network_device]]
588Network Device
589~~~~~~~~~~~~~~
590
591[thumbnail="screenshot/gui-create-vm-network.png"]
592
593Each VM can have many _Network interface controllers_ (NIC), of four different
594types:
595
596 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
597 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
598performance. Like all VirtIO devices, the guest OS should have the proper driver
599installed.
 * the *Realtek 8139* emulates an older 100 MBit/s network card, and should
only be used when emulating older operating systems (released before 2002)
602 * the *vmxnet3* is another paravirtualized device, which should only be used
603when importing a VM from another hypervisor.
604
605{pve} will generate for each NIC a random *MAC address*, so that your VM is
606addressable on Ethernet networks.
607
608The NIC you added to the VM can follow one of two different models:
609
 * in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
614 * in the alternative *NAT mode*, each virtual NIC will only communicate with
615the Qemu user networking stack, where a built-in router and DHCP server can
616provide network access. This built-in DHCP will serve addresses in the private
61710.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
618should only be used for testing. This mode is only available via CLI or the API,
619but not via the WebUI.
620
621You can also skip adding a network device when creating a VM by selecting *No
622network device*.
623
624.Multiqueue
625If you are using the VirtIO driver, you can optionally activate the
626*Multiqueue* option. This option allows the guest OS to process networking
627packets using multiple virtual CPUs, providing an increase in the total number
628of packets transferred.
629
630//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
631When using the VirtIO driver with {pve}, each NIC network queue is passed to the
632host kernel, where the queue will be processed by a kernel thread spawned by the
633vhost driver. With this option activated, it is possible to pass _multiple_
634network queues to the host kernel for each NIC.
635
636//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set the number of
multi-purpose channels on each VirtIO NIC in the VM with the ethtool
command:

`ethtool -L ens1 combined X`

where X is the number of vCPUs of the VM.
645
646You should note that setting the Multiqueue parameter to a value greater
647than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
649process a great number of incoming connections, such as when the VM is running
650as a router, reverse proxy or a busy HTTP server doing long polling.
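
The queue count is set as a property of the network device. A sketch for a
guest with 4 vCPUs (the bridge name is a placeholder):

----
qm set <vmid> -net0 virtio,bridge=vmbr0,queues=4
----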
651
652[[qm_display]]
653Display
654~~~~~~~
655
656QEMU can virtualize a few types of VGA hardware. Some examples are:
657
658* *std*, the default, emulates a card with Bochs VBE extensions.
659* *cirrus*, this was once the default, it emulates a very old hardware module
660with all its problems. This display type should only be used if really
661necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
662qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier
663* *vmware*, is a VMWare SVGA-II compatible adapter.
664* *qxl*, is the QXL paravirtualized graphics card. Selecting this also
665enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
666VM.
* *virtio-gl*, often named VirGL, is a virtual 3D GPU for use inside VMs that
 can offload workloads to the host GPU without requiring special (expensive)
 models and drivers, and without binding the host GPU completely, allowing
 reuse between multiple guests and/or the host.
671+
672NOTE: VirGL support needs some extra libraries that aren't installed by
673default due to being relatively big and also not available as open source for
674all GPU models/vendors. For most setups you'll just need to do:
675`apt install libgl1 libegl1`
676
677You can edit the amount of memory given to the virtual GPU, by setting
678the 'memory' option. This can enable higher resolutions inside the VM,
679especially with SPICE/QXL.
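
For example (the amount is a placeholder), to give a SPICE/QXL display 32 MiB
of memory:

----
qm set <vmid> -vga qxl,memory=32
----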
680
As the memory is reserved by the display device, selecting Multi-Monitor mode
for SPICE (e.g., `qxl2` for dual monitors) has some implications:
683
684* Windows needs a device for each monitor, so if your 'ostype' is some
685version of Windows, {pve} gives the VM an extra device per monitor.
686Each device gets the specified amount of memory.
687
* Linux VMs can always enable more virtual monitors, but selecting
689a Multi-Monitor mode multiplies the memory given to the device with
690the number of monitors.
691
692Selecting `serialX` as display 'type' disables the VGA output, and redirects
693the Web Console to the selected serial port. A configured display 'memory'
694setting will be ignored in that case.
695
696[[qm_usb_passthrough]]
697USB Passthrough
698~~~~~~~~~~~~~~~
699
700There are two different types of USB passthrough devices:
701
702* Host USB passthrough
703* SPICE USB passthrough
704
705Host USB passthrough works by giving a VM a USB device of the host.
706This can either be done via the vendor- and product-id, or
707via the host bus and port.
708
The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same USB device
have the same id.
713
The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).
718
719If a device is present in a VM configuration when the VM starts up,
720but the device is not present in the host, the VM can boot without problems.
721As soon as the device/port is available in the host, it gets passed through.
722
WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.
726
727The second type of passthrough is SPICE USB passthrough. This is useful
728if you use a SPICE client which supports it. If you add a SPICE USB port
729to your VM, you can passthrough a USB device from where your SPICE client is,
730directly to the VM (for example an input device or hardware dongle).
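
A few example `usb0` configurations (the ids and the port path are
placeholders):

----
# pass through by vendor/product id
qm set <vmid> -usb0 host=0123:abcd

# pass through a specific physical port
qm set <vmid> -usb0 host=1-2.3.4

# add a SPICE USB port instead
qm set <vmid> -usb0 spice
----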
731
732
733[[qm_bios_and_uefi]]
734BIOS and UEFI
735~~~~~~~~~~~~~
736
In order to properly emulate a computer, QEMU needs to use a firmware, which,
on common PCs, is often known as BIOS or (U)EFI and is executed as one of the
first steps when booting a VM. It is responsible for doing basic hardware
initialization and for providing an interface to the firmware and hardware for
the operating system. By default QEMU uses *SeaBIOS* for this, which is an
open-source, x86 BIOS implementation. SeaBIOS is a good choice for most
standard setups.
744
Some operating systems (such as Windows 11) may require the use of a UEFI
compatible implementation. In such cases, you must use *OVMF* instead,
which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]
748
749There are other scenarios in which a BIOS is not a good firmware to boot from,
750e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very
751good blog entry about this https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
752
753If you want to use OVMF, there are several things to consider:
754
755In order to save things like the *boot order*, there needs to be an EFI Disk.
756This disk will be included in backups and snapshots, and there can only be one.
757
758You can create such a disk with the following command:
759
760----
761# qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
762----
763
764Where *<storage>* is the storage where you want to have the disk, and
765*<format>* is a format which the storage supports. Alternatively, you can
766create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
767hardware section of a VM.
768
769The *efitype* option specifies which version of the OVMF firmware should be
770used. For new VMs, this should always be '4m', as it supports Secure Boot and
771has more space allocated to support future development (this is the default in
772the GUI).
773
*pre-enrolled-keys* specifies if the efidisk should come pre-loaded with
distribution-specific and Microsoft Standard Secure Boot keys. It also enables
Secure Boot by default (though it can still be disabled in the OVMF menu within
the VM).
778
779NOTE: If you want to start using Secure Boot in an existing VM (that still uses
780a '2m' efidisk), you need to recreate the efidisk. To do so, delete the old one
781(`qm set <vmid> -delete efidisk0`) and add a new one as described above. This
782will reset any custom configurations you have made in the OVMF menu!
783
784When using OVMF with a virtual display (without VGA passthrough),
785you need to set the client resolution in the OVMF menu (which you can reach
786with a press of the ESC button during boot), or you have to choose
787SPICE as the display type.
788
789[[qm_tpm]]
790Trusted Platform Module (TPM)
791~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
792
793A *Trusted Platform Module* is a device which stores secret data - such as
794encryption keys - securely and provides tamper-resistance functions for
795validating system boot.
796
797Certain operating systems (e.g. Windows 11) require such a device to be attached
798to a machine (be it physical or virtual).
799
A TPM is added by specifying a *tpmstate* volume. This works similarly to an
efidisk, in that it cannot be changed (only removed) once created. You can add
one via the following command:
803
804----
805# qm set <vmid> -tpmstate0 <storage>:1,version=<version>
806----
807
808Where *<storage>* is the storage you want to put the state on, and *<version>*
809is either 'v1.2' or 'v2.0'. You can also add one via the web interface, by
810choosing 'Add' -> 'TPM State' in the hardware section of a VM.
811
812The 'v2.0' TPM spec is newer and better supported, so unless you have a specific
813implementation that requires a 'v1.2' TPM, it should be preferred.
814
815NOTE: Compared to a physical TPM, an emulated one does *not* provide any real
816security benefits. The point of a TPM is that the data on it cannot be modified
817easily, except via commands specified as part of the TPM spec. Since with an
818emulated device the data storage happens on a regular volume, it can potentially
819be edited by anyone with access to it.
820
821[[qm_ivshmem]]
822Inter-VM shared memory
823~~~~~~~~~~~~~~~~~~~~~~
824
825You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
826share memory between the host and a guest, or also between multiple guests.
827
828To add such a device, you can use `qm`:
829
830----
831# qm set <vmid> -ivshmem size=32,name=foo
832----
833
834Where the size is in MiB. The file will be located under
835`/dev/shm/pve-shm-$name` (the default name is the vmid).
836
NOTE: Currently the device will get deleted as soon as any VM using it gets
shut down or stopped. Open connections will still persist, but new connections
to the exact same device cannot be made anymore.
840
841A use case for such a device is the Looking Glass
842footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
843performance, low-latency display mirroring between host and guest.
844
845[[qm_audio_device]]
846Audio Device
847~~~~~~~~~~~~
848
849To add an audio device run the following command:
850
851----
852qm set <vmid> -audio0 device=<device>
853----
854
855Supported audio devices are:
856
857* `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
858* `intel-hda`: Intel HD Audio Controller, emulates ICH6
859* `AC97`: Audio Codec '97, useful for older operating systems like Windows XP
860
861There are two backends available:
862
863* 'spice'
864* 'none'
865
866The 'spice' backend can be used in combination with xref:qm_display[SPICE] while
867the 'none' backend can be useful if an audio device is needed in the VM for some
868software to work. To use the physical audio device of the host use device
869passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
870xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft’s RDP
871have options to play sound.
872
873
874[[qm_virtio_rng]]
875VirtIO RNG
876~~~~~~~~~~
877
An RNG (Random Number Generator) is a device providing entropy ('randomness') to
a system. A virtual hardware-RNG can be used to provide such entropy from the
host system to a guest VM. This helps to avoid entropy starvation problems in
the guest (a situation where not enough entropy is available and the system may
slow down or run into problems), especially during the guest's boot process.
883
884To add a VirtIO-based emulated RNG, run the following command:
885
886----
887qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
888----
889
890`source` specifies where entropy is read from on the host and has to be one of
891the following:
892
893* `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
894* `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
895 starvation on the host system)
896* `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
897 are available, the one selected in
898 `/sys/devices/virtual/misc/hw_random/rng_current` will be used)
899
900A limit can be specified via the `max_bytes` and `period` parameters, they are
901read as `max_bytes` per `period` in milliseconds. However, it does not represent
902a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
903available on a 1 second timer, not that 1 KiB is streamed to the guest over the
904course of one second. Reducing the `period` can thus be used to inject entropy
905into the guest at a faster rate.
906
907By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
908recommended to always use a limiter to avoid guests using too many host
909resources. If desired, a value of '0' for `max_bytes` can be used to disable
910all limits.
911
912[[qm_bootorder]]
913Device Boot Order
914~~~~~~~~~~~~~~~~~
915
916QEMU can tell the guest which devices it should boot from, and in which order.
917This can be specified in the config via the `boot` property, e.g.:
918
919----
920boot: order=scsi0;net0;hostpci0
921----
922
923[thumbnail="screenshot/gui-qemu-edit-bootorder.png"]
924
925This way, the guest would first attempt to boot from the disk `scsi0`, if that
926fails, it would go on to attempt network boot from `net0`, and in case that
927fails too, finally attempt to boot from a passed through PCIe device (seen as
928disk in case of NVMe, otherwise tries to launch into an option ROM).
929
930On the GUI you can use a drag-and-drop editor to specify the boot order, and use
931the checkbox to enable or disable certain devices for booting altogether.
932
933NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
934all of them must be marked as 'bootable' (that is, they must have the checkbox
935enabled or appear in the list in the config) for the guest to be able to boot.
936This is because recent SeaBIOS and OVMF versions only initialize disks if they
937are marked 'bootable'.
938
In any case, even devices not appearing in the list or having the checkmark
disabled will still be available to the guest once its operating system has
booted and initialized them. The 'bootable' flag only affects the guest BIOS and
bootloader.
943
944
945[[qm_startup_and_shutdown]]
946Automatic Start and Shutdown of Virtual Machines
947~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
948
949After creating your VMs, you probably want them to start automatically
950when the host system boots. For this you need to select the option 'Start at
951boot' from the 'Options' Tab of your VM in the web interface, or set it with
952the following command:
953
954----
955# qm set <vmid> -onboot 1
956----
957
958.Start and Shutdown Order
959
960[thumbnail="screenshot/gui-qemu-edit-start-order.png"]
961
In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters (see the example after the list):
966
967* *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
968you want the VM to be the first to be started. (We use the reverse startup
969order for shutdown, so a machine with a start order of 1 would be the last to
970be shut down). If multiple VMs have the same order defined on a host, they will
971additionally be ordered by 'VMID' in ascending order.
* *Startup delay*: Defines the interval between this VM start and subsequent
VM starts. E.g. set it to 240 if you want to wait 240 seconds before starting
other VMs.
975* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
976for the VM to be offline after issuing a shutdown command.
977By default this value is set to 180, which means that {pve} will issue a
978shutdown request and wait 180 seconds for the machine to be offline. If
979the machine is still online after the timeout it will be stopped forcefully.
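
On the command line these three values map to the `startup` option, for example
(the values are placeholders):

----
# start first, wait 30 seconds before the next VM, allow 60 seconds for shutdown
qm set <vmid> -startup order=1,up=30,down=60
----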
980
981NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
982'boot order' options currently. Those VMs will be skipped by the startup and
983shutdown algorithm as the HA manager itself ensures that VMs get started and
984stopped.
985
986Please note that machines without a Start/Shutdown order parameter will always
987start after those where the parameter is set. Further, this parameter can only
988be enforced between virtual machines running on the same host, not
989cluster-wide.
990
991If you require a delay between the host boot and the booting of the first VM,
992see the section on xref:first_guest_boot_delay[Proxmox VE Node Management].
993
994
995[[qm_qemu_agent]]
996Qemu Guest Agent
997~~~~~~~~~~~~~~~~
998
999The Qemu Guest Agent is a service which runs inside the VM, providing a
1000communication channel between the host and the guest. It is used to exchange
1001information and allows the host to issue commands to the guest.
1002
1003For example, the IP addresses in the VM summary panel are fetched via the guest
1004agent.
1005
1006Or when starting a backup, the guest is told via the guest agent to sync
1007outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.
1008
1009For the guest agent to work properly the following steps must be taken:
1010
1011* install the agent in the guest and make sure it is running
1012* enable the communication via the agent in {pve}
1013
1014Install Guest Agent
1015^^^^^^^^^^^^^^^^^^^
1016
1017For most Linux distributions, the guest agent is available. The package is
1018usually named `qemu-guest-agent`.
1019
1020For Windows, it can be installed from the
1021https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
1022VirtIO driver ISO].
1023
1024Enable Guest Agent Communication
1025^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1026
1027Communication from {pve} with the guest agent can be enabled in the VM's
1028*Options* panel. A fresh start of the VM is necessary for the changes to take
1029effect.
1030
1031It is possible to enable the 'Run guest-trim' option. With this enabled,
1032{pve} will issue a trim command to the guest after the following
1033operations that have the potential to write out zeros to the storage:
1034
1035* moving a disk to another storage
1036* live migrating a VM to another node with local storage
1037
1038On a thin provisioned storage, this can help to free up unused space.
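
Both settings can also be applied via the CLI; a sketch:

----
# enable the agent and let {pve} issue a trim after disk move / migration
qm set <vmid> -agent enabled=1,fstrim_cloned_disks=1
----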
1039
1040Troubleshooting
1041^^^^^^^^^^^^^^^
1042
1043.VM does not shut down
1044
1045Make sure the guest agent is installed and running.
1046
1047Once the guest agent is enabled, {pve} will send power commands like
1048'shutdown' via the guest agent. If the guest agent is not running, commands
1049cannot get executed properly and the shutdown command will run into a timeout.
1050
1051[[qm_spice_enhancements]]
1052SPICE Enhancements
1053~~~~~~~~~~~~~~~~~~
1054
1055SPICE Enhancements are optional features that can improve the remote viewer
1056experience.
1057
1058To enable them via the GUI go to the *Options* panel of the virtual machine. Run
1059the following command to enable them via the CLI:
1060
1061----
1062qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
1063----
1064
1065NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
1066must be set to SPICE (qxl).
1067
1068Folder Sharing
1069^^^^^^^^^^^^^^
1070
1071Share a local folder with the guest. The `spice-webdavd` daemon needs to be
1072installed in the guest. It makes the shared folder available through a local
1073WebDAV server located at http://localhost:9843.
1074
1075For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded
1076from the
1077https://www.spice-space.org/download.html#windows-binaries[official SPICE website].
1078
1079Most Linux distributions have a package called `spice-webdavd` that can be
1080installed.
1081
1082To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
1083Select the folder to share and then enable the checkbox.
1084
1085NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
1086
1087CAUTION: Experimental! Currently this feature does not work reliably.
1088
1089Video Streaming
1090^^^^^^^^^^^^^^^
1091
1092Fast refreshing areas are encoded into a video stream. Two options exist:
1093
1094* *all*: Any fast refreshing area will be encoded into a video stream.
1095* *filter*: Additional filters are used to decide if video streaming should be
1096 used (currently only small window surfaces are skipped).
1097
A general recommendation on whether video streaming should be enabled and which
option to choose cannot be given. Your mileage may vary depending on the specific
circumstances.
1101
1102Troubleshooting
1103^^^^^^^^^^^^^^^
1104
1105.Shared folder does not show up
1106
1107Make sure the WebDAV service is enabled and running in the guest. On Windows it
1108is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be
1109different depending on the distribution.
1110
1111If the service is running, check the WebDAV server by opening
1112http://localhost:9843 in a browser in the guest.
1113
1114It can help to restart the SPICE session.
1115
1116[[qm_migration]]
1117Migration
1118---------
1119
1120[thumbnail="screenshot/gui-qemu-migrate.png"]
1121
1122If you have a cluster, you can migrate your VM to another host with
1123
1124----
1125# qm migrate <vmid> <target>
1126----
1127
There are generally two mechanisms for this:
1129
1130* Online Migration (aka Live Migration)
1131* Offline Migration
1132
1133Online Migration
1134~~~~~~~~~~~~~~~~
1135
1136When your VM is running and it has no local resources defined (such as disks
1137on local storage, passed through devices, etc.) you can initiate a live
1138migration with the -online flag.
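
For example (the target node name is a placeholder):

----
# live migrate VM 100 to the node named 'targetnode'
qm migrate 100 targetnode -online
----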
1139
1140How it works
1141^^^^^^^^^^^^
1142
1143This starts a Qemu Process on the target host with the 'incoming' flag, which
1144means that the process starts and waits for the memory data and device states
1145from the source Virtual Machine (since all other resources, e.g. disks,
1146are shared, the memory content and device state are the only things left
1147to transmit).
1148
1149Once this connection is established, the source begins to send the memory
1150content asynchronously to the target. If the memory on the source changes,
1151those sections are marked dirty and there will be another pass of sending data.
1152This happens until the amount of data to send is so small that it can
1153pause the VM on the source, send the remaining data to the target and start
1154the VM on the target in under a second.
1155
1156Requirements
1157^^^^^^^^^^^^
1158
1159For Live Migration to work, there are some things required:
1160
1161* The VM has no local resources (e.g. passed through devices, local disks, etc.)
1162* The hosts are in the same {pve} cluster.
1163* The hosts have a working (and reliable) network connection.
1164* The target host must have the same or higher versions of the
1165 {pve} packages. (It *might* work the other way, but this is never guaranteed)
1166* The hosts have CPUs from the same vendor. (It *might* work otherwise, but this
1167 is never guaranteed)
1168
1169Offline Migration
1170~~~~~~~~~~~~~~~~~
1171
If you have local resources, you can still offline migrate your VMs,
as long as all disks are on storages which are defined on both hosts.
Then the migration will copy the disks over the network to the target host.
1175
1176[[qm_copy_and_clone]]
1177Copies and Clones
1178-----------------
1179
1180[thumbnail="screenshot/gui-qemu-full-clone.png"]
1181
VM installation is usually done using an installation medium (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time consuming task one might want to avoid.
1185
1186An easy way to deploy many VMs of the same type is to copy an existing
1187VM. We use the term 'clone' for such copies, and distinguish between
1188'linked' and 'full' clones.
1189
1190Full Clone::
1191
The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
1194+
1195
1196It is possible to select a *Target Storage*, so one can use this to
1197migrate a VM to a totally different storage. You can also change the
1198disk image *Format* if the storage driver supports several formats.
1199+
1200
1201NOTE: A full clone needs to read and copy all VM image data. This is
1202usually much slower than creating a linked clone.
1203+
1204
Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.
1208
1209
1210Linked Clone::
1211
1212Modern storage drivers support a way to generate fast linked
1213clones. Such a clone is a writable copy whose initial contents are the
1214same as the original data. Creating a linked clone is nearly
1215instantaneous, and initially consumes no additional space.
1216+
1217
They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
location. This technique is called 'Copy-on-write'.
1222+
1223
This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
1227+
1228
1229NOTE: You cannot delete an original template while linked clones
1230exist.
1231+
1232
1233It is not possible to change the *Target storage* for linked clones,
1234because this is a storage internal feature.
1235
1236
1237The *Target node* option allows you to create the new VM on a
1238different node. The only restriction is that the VM is on shared
1239storage, and that storage is also available on the target node.
1240
1241To avoid resource conflicts, all network interface MAC addresses get
1242randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
1243setting.
1244
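On the CLI, both clone types are created with `qm clone`. A minimal sketch,
using made-up IDs and names; `--full` requests a full clone, while leaving it
out creates a linked clone if the source is a template on a capable storage:

----
# qm clone 999 123 --name debian-clone --full --storage local-lvm
----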

[[qm_templates]]
Virtual Machine Templates
-------------------------

One can convert a VM into a Template. Such templates are read-only,
and you can use them to create linked clones.

NOTE: It is not possible to start templates, because this would modify
the disk images. If you want to change the template, create a linked
clone and modify that.

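The conversion can be done from the GUI or on the CLI; a minimal sketch,
assuming VMID 999:

----
# qm template 999
----
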
VM Generation ID
----------------

{pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
'vmgenid' Specification
https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
for virtual machines.
This can be used by the guest operating system to detect any event that
results in a time shift, for example, restoring a backup or a snapshot
rollback.

When creating new VMs, a 'vmgenid' will be automatically generated and saved
in its configuration file.

To create and add a 'vmgenid' to an already existing VM, one can either pass
the special value `1' to let {pve} autogenerate one, or manually set a 'UUID'
footnote:[Online GUID generator http://guid.one/] as the value, for example:

----
# qm set VMID -vmgenid 1
# qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
----

NOTE: The initial addition of a 'vmgenid' device to an existing VM may have
the same effects as a snapshot rollback, backup restore, etc., as the VM can
interpret this as a generation change.

In the rare case that the 'vmgenid' mechanism is not wanted, one can pass `0'
as its value on VM creation, or retroactively delete the property from the
configuration with:

----
# qm set VMID -delete vmgenid
----
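
For example, to opt out of the mechanism directly at creation time (the VMID
and the other options are placeholders for illustration):

----
# qm create 999 --memory 512 --net0 virtio,bridge=vmbr0 --vmgenid 0
----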

The most prominent use case for 'vmgenid' is newer Microsoft Windows
operating systems, which use it to avoid problems in time-sensitive or
replicated services (e.g., databases, domain controllers
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole VM clone operation.

Importing Virtual Machines and disk images
------------------------------------------

A VM export from a foreign hypervisor usually takes the form of one or more
disk images, with a configuration file describing the settings of the VM (RAM,
number of cores). +
The disk images can be in the vmdk format, if the disks come from
VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
The most popular configuration format for VM exports is the OVF standard, but
in practice interoperability is limited, because many settings are not
implemented in the standard itself, and hypervisors export the supplementary
information in non-standard extensions.

Besides the problem of format, importing disk images from other hypervisors
may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly concerned by this, as the OS is very
picky about any changes of hardware. This problem may be solved by installing
the MergeIDE.zip utility, available from the Internet, before exporting, and
by choosing a hard disk type of *IDE* before booting the imported Windows VM.

Finally there is the question of paravirtualized drivers, which improve the
speed of the emulated system and are specific to the hypervisor.
GNU/Linux and other free Unix OSes have all the necessary drivers installed by
default and you can switch to the paravirtualized drivers right after importing
the VM. For Windows VMs, you need to install the Windows paravirtualized
drivers by yourself.

GNU/Linux and other free Unix-like OSes can usually be imported without
hassle. Note that we cannot guarantee a successful import/export of Windows
VMs in all cases due to the problems above.

Step-by-step example of a Windows OVF import
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.

Download the Virtual Machine zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After getting informed about the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.

Extract the disk image from the zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy the ovf and vmdk files to your {pve} host via ssh/scp.

Import the Virtual Machine
^^^^^^^^^^^^^^^^^^^^^^^^^^

The following command will create a new virtual machine, using cores, memory
and the VM name as read from the OVF manifest, and import the disks to the
+local-lvm+ storage. You have to configure the network manually.

----
# qm importovf 999 WinDev1709Eval.ovf local-lvm
----

The VM is ready to be started.

Adding an external disk image to a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.

Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:

 vmdebootstrap --verbose \
  --size 10GiB --serial-console \
  --grub --no-extlinux \
  --package openssh-server \
  --package avahi-daemon \
  --package qemu-guest-agent \
  --hostname vm600 --enable-dhcp \
  --customize=./copy_pub_ssh.sh \
  --sparse --image vm600.raw

You can now create a new target VM, importing the image to the storage `pvedir`
and attaching it to the VM's SCSI controller:

----
# qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
   --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
   --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
----

The VM is ready to be started.


ifndef::wiki[]
include::qm-cloud-init.adoc[]
endif::wiki[]

ifndef::wiki[]
include::qm-pci-passthrough.adoc[]
endif::wiki[]

Hookscripts
-----------

You can add a hook script to VMs with the config property `hookscript`.

----
# qm set 100 --hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime.
For an example and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

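The script must be available through a storage that allows the 'snippets'
content type. A minimal sketch, assuming the default directory storage
`local` (whose snippets live under `/var/lib/vz/snippets`) and a hypothetical
script name:

----
# cp my-hookscript.pl /var/lib/vz/snippets/
# chmod +x /var/lib/vz/snippets/my-hookscript.pl
# qm set 100 --hookscript local:snippets/my-hookscript.pl
----
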
[[qm_hibernate]]
Hibernation
-----------

You can suspend a VM to disk with the GUI option `Hibernate` or with

----
# qm suspend ID --todisk
----

This means that the current content of the memory will be saved to disk
and the VM gets stopped. On the next start, the memory content will be
loaded and the VM can continue where it left off.

[[qm_vmstatestorage]]
.State storage selection
If no target storage for the memory is given, it will be automatically
chosen, as the first of the following:

1. The storage `vmstatestorage` from the VM config (see the example below).
2. The first shared storage from any VM disk.
3. The first non-shared storage from any VM disk.
4. The storage `local` as a fallback.

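The `vmstatestorage` option can be set in advance with `qm set`; a minimal
sketch, assuming VMID 100 and a storage named `local-lvm`:

----
# qm set 100 --vmstatestorage local-lvm
----
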
Managing Virtual Machines with `qm`
------------------------------------

qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Using an ISO file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage.

----
# qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
----

Start the new VM.

----
# qm start 300
----

Send a shutdown request, then wait until the VM is stopped.

----
# qm shutdown 300 && qm wait 300
----

Same as above, but only wait for 40 seconds.

----
# qm shutdown 300 && qm wait 300 -timeout 40
----

Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
'--purge' if you want to additionally remove the VM from replication jobs,
backup jobs and HA resource configurations.

----
# qm destroy 300 --purge
----

Move a disk image to a different storage.

----
# qm move-disk 300 scsi0 other-storage
----

Reassign a disk image to a different VM. This will remove the disk `scsi1`
from the source VM and attach it as `scsi3` to the target VM. In the
background, the disk image is renamed so that the name matches the new owner.

----
# qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
----


[[qm_configuration]]
Configuration
-------------

VM configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
Like other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster wide.

.Example VM Configuration
----
boot: order=virtio0;net0
cores: 1
sockets: 1
memory: 512
name: webmail
ostype: l26
net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
virtio0: local:vm-100-disk-1,size=32G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful to do small corrections, but keep in mind that you need to
restart the VM to apply such changes.

For that reason, it is usually better to use the `qm` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running VMs. This feature is called "hot plug", and there is no
need to restart the VM in that case.

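For example, instead of editing the file directly, you can change an option
with `qm set` and inspect the resulting configuration with `qm config`; the
VMID 100 below is a placeholder:

----
# qm set 100 --onboot 1
# qm config 100
----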

File Format
~~~~~~~~~~~

VM configuration files use a simple colon-separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.


[[qm_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `qm` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.VM configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot-related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).

You can optionally save the memory of a running VM with the option `vmstate`.
For details about how the target storage gets chosen for the VM state, see
xref:qm_vmstatestorage[State storage selection] in the chapter
xref:qm_hibernate[Hibernation].

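Snapshots can also be managed on the CLI; a minimal sketch, assuming VMID 100
and a snapshot named `testsnapshot` (`--vmstate` additionally saves the RAM
content):

----
# qm snapshot 100 testsnapshot --vmstate
# qm listsnapshot 100
# qm rollback 100 testsnapshot
# qm delsnapshot 100 testsnapshot
----
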
[[qm_options]]
Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations, snapshots and backups (`vzdump`) set a lock to
prevent incompatible concurrent actions on the affected VMs. Sometimes
you need to remove such a lock manually (e.g., after a power failure).

----
# qm unlock <vmid>
----

CAUTION: Only do that if you are sure the action which set the lock is
no longer running.


ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Cloud-Init_Support[Cloud-Init Support]

endif::wiki[]


ifdef::manvolnum[]

Files
------

`/etc/pve/qemu-server/<VMID>.conf`::

Configuration file for the VM '<VMID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]