1 [[chapter_virtual_machines]]
2 ifdef::manvolnum[]
3 qm(1)
4 =====
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 qm - QEMU/KVM Virtual Machine Manager
11
12
13 SYNOPSIS
14 --------
15
16 include::qm.1-synopsis.adoc[]
17
18 DESCRIPTION
19 -----------
20 endif::manvolnum[]
21 ifndef::manvolnum[]
22 QEMU/KVM Virtual Machines
23 =========================
24 :pve-toplevel:
25 endif::manvolnum[]
26
27 // deprecates
28 // http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
29 // http://pve.proxmox.com/wiki/KVM
30 // http://pve.proxmox.com/wiki/Qemu_Server
31
QEMU (short for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where QEMU is
running, QEMU is a user program which has access to a number of local resources
like partitions, files and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.
37
38 A guest operating system running in the emulated computer accesses these
39 devices, and runs as if it were running on real hardware. For instance, you can pass
40 an ISO image as a parameter to QEMU, and the OS running in the emulated computer
41 will see a real CD-ROM inserted into a CD drive.
42
QEMU can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
45 overwhelming majority of server hardware. The emulation of PC clones is also one
46 of the fastest due to the availability of processor extensions which greatly
47 speed up QEMU when the emulated architecture is the same as the host
48 architecture.
49
50 NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
51 It means that QEMU is running with the support of the virtualization processor
52 extensions, via the Linux KVM module. In the context of {pve} _QEMU_ and
53 _KVM_ can be used interchangeably, as QEMU in {pve} will always try to load the KVM
54 module.
55
56 QEMU inside {pve} runs as a root process, since this is required to access block
57 and PCI devices.
58
59
60 Emulated devices and paravirtualized devices
61 --------------------------------------------
62
63 The PC hardware emulated by QEMU includes a mainboard, network controllers,
64 SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
65 the `kvm(1)` man page) all of them emulated in software. All these devices
66 are the exact software equivalent of existing hardware devices, and if the OS
67 running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows QEMU to run _unmodified_ operating
69 systems.
70
71 This however has a performance cost, as running in software what was meant to
72 run in hardware involves a lot of extra work for the host CPU. To mitigate this,
73 QEMU can present to the guest operating system _paravirtualized devices_, where
74 the guest OS recognizes it is running inside QEMU and cooperates with the
75 hypervisor.
76
77 QEMU relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
79 controller, a paravirtualized network card, a paravirtualized serial port,
80 a paravirtualized SCSI controller, etc ...
81
82 It is highly recommended to use the virtio devices whenever you can, as they
83 provide a big performance improvement. Using the virtio generic disk controller
84 versus an emulated IDE controller will double the sequential write throughput,
85 as measured with `bonnie++(8)`. Using the virtio network interface can deliver
86 up to three times the throughput of an emulated Intel E1000 network card, as
87 measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
88 https://www.linux-kvm.org/page/Using_VirtIO_NIC]
89
90
91 [[qm_virtual_machines_settings]]
92 Virtual Machines Settings
93 -------------------------
94
95 Generally speaking {pve} tries to choose sane defaults for virtual machines
96 (VM). Make sure you understand the meaning of the settings you change, as it
could incur a performance slowdown, or put your data at risk.
98
99
100 [[qm_general_settings]]
101 General Settings
102 ~~~~~~~~~~~~~~~~
103
104 [thumbnail="screenshot/gui-create-vm-general.png"]
105
106 General settings of a VM include
107
* the *Node*: the physical server on which the VM will run
109 * the *VM ID*: a unique number in this {pve} installation used to identify your VM
110 * *Name*: a free form text string you can use to describe the VM
111 * *Resource Pool*: a logical group of VMs
112
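These settings can also be given on the command line. A minimal sketch, assuming
the VM ID `100`, the name `webserver` and a resource pool named `production` are
placeholders for your own values, and that the command is run on the node the VM
should be created on:

----
# qm create 100 --name webserver --pool production
----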
113
114 [[qm_os_settings]]
115 OS Settings
116 ~~~~~~~~~~~
117
118 [thumbnail="screenshot/gui-create-vm-os.png"]
119
When creating a virtual machine (VM), setting the proper Operating System (OS)
allows {pve} to optimize some low-level parameters. For instance, Windows OSes
expect the BIOS clock to use local time, while Unix-based OSes expect the
BIOS clock to be set to UTC.
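
The OS type can also be set from the command line. A minimal sketch, with the VM
ID `100` as a placeholder, declaring either a Windows 11 guest or a modern Linux
guest:

----
# qm set 100 --ostype win11
# qm set 100 --ostype l26
----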
124
125 [[qm_system_settings]]
126 System Settings
127 ~~~~~~~~~~~~~~~
128
129 On VM creation you can change some basic system components of the new VM. You
130 can specify which xref:qm_display[display type] you want to use.
131 [thumbnail="screenshot/gui-create-vm-system.png"]
132 Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
133 If you plan to install the QEMU Guest Agent, or if your selected ISO image
134 already ships and installs it automatically, you may want to tick the 'QEMU
135 Agent' box, which lets {pve} know that it can use its features to show some
136 more information, and complete some actions (for example, shutdown or
137 snapshots) more intelligently.
138
{pve} allows you to boot VMs with different firmware and machine types, namely
140 xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
141 the default SeaBIOS to OVMF only if you plan to use
xref:qm_pci_passthrough[PCIe passthrough]. A VM's 'Machine Type' defines the
143 hardware layout of the VM's virtual motherboard. You can choose between the
144 default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
145 https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
146 chipset, which also provides a virtual PCIe bus, and thus may be desired if
147 one wants to pass through PCIe hardware.
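
Both settings can also be changed from the command line. A minimal sketch, with
the VM ID `100` as a placeholder, that switches a VM to OVMF firmware and the Q35
machine type, for example in preparation for PCIe passthrough (OVMF additionally
needs an EFI disk, see xref:qm_bios_and_uefi[BIOS and UEFI]):

----
# qm set 100 --bios ovmf --machine q35
----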
148
149 [[qm_hard_disk]]
150 Hard Disk
151 ~~~~~~~~~
152
153 [[qm_hard_disk_bus]]
154 Bus/Controller
155 ^^^^^^^^^^^^^^
156 QEMU can emulate a number of storage controllers:
157
* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
159 controller. Even if this controller has been superseded by recent designs,
160 each and every OS you can think of has support for it, making it a great choice
161 if you want to run an OS released before 2003. You can connect up to 4 devices
162 on this controller.
163
164 * the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
165 design, allowing higher throughput and a greater number of devices to be
166 connected. You can connect up to 6 devices on this controller.
167
168 * the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. By default, {pve} emulates
an LSI 53C895A controller.
171 +
172 A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim for
173 performance and is automatically selected for newly created Linux VMs since
174 {pve} 4.3. Linux distributions have support for this controller since 2012, and
FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO
176 containing the drivers during the installation.
177 // https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
178 If you aim at maximum performance, you can select a SCSI controller of type
179 _VirtIO SCSI single_ which will allow you to select the *IO Thread* option.
180 When selecting _VirtIO SCSI single_ QEMU will create a new controller for
181 each disk, instead of adding all disks to the same controller.
182
183 * The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
184 is an older type of paravirtualized controller. It has been superseded by the
185 VirtIO SCSI Controller, in terms of features.
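
The controller type and the disks attached to it can also be configured from the
command line. A minimal sketch, assuming the VM ID `100` and a storage named
`local-lvm` as placeholders, which selects the _VirtIO SCSI single_ controller
and adds a new 32 GiB disk on that storage:

----
# qm set 100 --scsihw virtio-scsi-single --scsi0 local-lvm:32
----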
186
187 [thumbnail="screenshot/gui-create-vm-hard-disk.png"]
188
189 [[qm_hard_disk_formats]]
190 Image Format
191 ^^^^^^^^^^^^
192 On each controller you attach a number of emulated hard disks, which are backed
193 by a file or a block device residing in the configured storage. The choice of
194 a storage type will determine the format of the hard disk image. Storages which
195 present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
197 either the *raw disk image format* or the *QEMU image format*.
198
199 * the *QEMU image format* is a copy on write format which allows snapshots, and
200 thin provisioning of the disk image.
201 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
202 you would get when executing the `dd` command on a block device in Linux. This
203 format does not support thin provisioning or snapshots by itself, requiring
204 cooperation from the storage layer for these tasks. It may, however, be up to
205 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
206 https://events.static.linuxfound.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
207 * the *VMware image format* only makes sense if you intend to import/export the
208 disk image to other hypervisors.
209
210 [[qm_hard_disk_cache]]
211 Cache Mode
212 ^^^^^^^^^^
213 Setting the *Cache* mode of the hard drive will impact how the host system will
214 notify the guest systems of block write completions. The *No cache* default
215 means that the guest system will be notified that a write is complete when each
216 block reaches the physical storage write queue, ignoring the host page cache.
217 This provides a good balance between safety and speed.
218
219 If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
220 you can set the *No backup* option on that disk.
221
222 If you want the {pve} storage replication mechanism to skip a disk when starting
223 a replication job, you can set the *Skip replication* option on that disk.
224 As of {pve} 5.0, replication requires the disk images to be on a storage of type
225 `zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.
227
228 [[qm_hard_disk_discard]]
229 Trim/Discard
230 ^^^^^^^^^^^^
231 If your storage supports _thin provisioning_ (see the storage chapter in the
232 {pve} guide), you can activate the *Discard* option on a drive. With *Discard*
233 set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
234 https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
235 marks blocks as unused after deleting files, the controller will relay this
236 information to the storage, which will then shrink the disk image accordingly.
237 For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
238 option on the drive. Some guest operating systems may also require the
239 *SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
240 only supported on guests using Linux Kernel 5.0 or higher.
241
242 If you would like a drive to be presented to the guest as a solid-state drive
243 rather than a rotational hard disk, you can set the *SSD emulation* option on
244 that drive. There is no requirement that the underlying storage actually be
245 backed by SSDs; this feature can be used with physical media of any type.
246 Note that *SSD emulation* is not supported on *VirtIO Block* drives.
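
Both options are flags of the disk definition. A minimal sketch, assuming the VM
ID `100` and an existing volume `local-lvm:vm-100-disk-0` as placeholders, that
enables *Discard* and *SSD emulation* on an already attached disk:

----
# qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
----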
247
248
249 [[qm_hard_disk_iothread]]
250 IO Thread
251 ^^^^^^^^^
252 The option *IO Thread* can only be used when using a disk with the *VirtIO*
253 controller, or with the *SCSI* controller, when the emulated controller type is
254 *VirtIO SCSI single*. With *IO Thread* enabled, QEMU creates one I/O thread per
255 storage controller, rather than handling all I/O in the main event loop or vCPU
256 threads. One benefit is better work distribution and utilization of the
257 underlying storage. Another benefit is reduced latency (hangs) in the guest for
258 very I/O-intensive host workloads, since neither the main thread nor a vCPU
259 thread can be blocked by disk I/O.
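
A minimal sketch of enabling the option from the command line, with the VM ID
`100` and the volume `local-lvm:vm-100-disk-0` as placeholders; the controller
type is switched to _VirtIO SCSI single_ so that the *IO Thread* option takes
effect:

----
# qm set 100 --scsihw virtio-scsi-single --scsi0 local-lvm:vm-100-disk-0,iothread=1
----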
260
261 [[qm_cpu]]
262 CPU
263 ~~~
264
265 [thumbnail="screenshot/gui-create-vm-cpu.png"]
266
267 A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
268 This CPU can then contain one or many *cores*, which are independent
269 processing units. Whether you have a single CPU socket with 4 cores, or two CPU
270 sockets with two cores is mostly irrelevant from a performance point of view.
However, some software licenses depend on the number of sockets a machine has;
in that case it makes sense to set the number of sockets to what the license
allows you.
274
Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multi-threaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, QEMU will create a new thread of
279 execution on the host system. If you're not sure about the workload of your VM,
280 it is usually a safe bet to set the number of *Total cores* to 2.
281
282 NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
283 is greater than the number of cores on the server (for example, 4 VMs each with
284 4 cores (= total 16) on a machine with only 8 cores). In that case the host
285 system will balance the QEMU execution threads between your server cores, just
286 like if you were running a standard multi-threaded application. However, {pve}
287 will prevent you from starting VMs with more virtual CPU cores than physically
288 available, as this will only bring the performance down due to the cost of
289 context switches.
290
291 [[qm_cpu_resource_limits]]
292 Resource Limits
293 ^^^^^^^^^^^^^^^
294
295 In addition to the number of virtual cores, you can configure how much resources
296 a VM can get in relation to the host CPU time and also in relation to other
297 VMs.
298 With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
299 the whole VM can use on the host. It is a floating point value representing CPU
300 time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
301 single process would fully use one single core it would have `100%` CPU Time
302 usage. If a VM with four cores utilizes all its cores fully it would
303 theoretically use `400%`. In reality the usage may be even a bit higher as QEMU
304 can have additional threads for VM peripherals besides the vCPU core ones.
305 This setting can be useful if a VM should have multiple vCPUs, as it runs a few
306 processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
which would profit from having 8 vCPUs, but at no time should all of those 8
cores run at full load - as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set the *cpulimit* to
`4.0` (=400%). If all cores do the same heavy work, they would all get 50% of a
real host core's CPU time. But, if only 4 of them did work, they could still get
almost 100% of a real core each.
314
315 NOTE: VMs can, depending on their configuration, use additional threads, such
as for networking or IO operations, but also live migration. Thus a VM can appear
to use more CPU time than just its virtual CPUs could use. To ensure that a
VM never uses more CPU time than the number of virtual CPUs assigned, set the
*cpulimit* setting to the same value as the total core count.
320
321 The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
322 shares or CPU weight), controls how much CPU time a VM gets compared to other
323 running VMs. It is a relative weight which defaults to `100` (or `1024` if the
324 host uses legacy cgroup v1). If you increase this for a VM it will be
325 prioritized by the scheduler in comparison to other VMs with lower weight. For
326 example, if VM 100 has set the default `100` and VM 200 was changed to `200`,
the latter VM 200 would receive twice the CPU bandwidth of the first VM 100.
328
For more information see `man systemd.resource-control`; there, `CPUQuota`
corresponds to `cpulimit` and `CPUWeight` corresponds to our `cpuunits`
setting. Visit its Notes section for references and implementation details.
332
333 The third CPU resource limiting setting, *affinity*, controls what host cores
334 the virtual machine will be permitted to execute on. E.g., if an affinity value
335 of `0-3,8-11` is provided, the virtual machine will be restricted to using the
host cores `0,1,2,3,8,9,10`, and `11`. Valid *affinity* values are written in
337 cpuset `List Format`. List Format is a comma-separated list of CPU numbers and
338 ranges of numbers, in ASCII decimal.
339
340 NOTE: CPU *affinity* uses the `taskset` command to restrict virtual machines to
341 a given set of cores. This restriction will not take effect for some types of
342 processes that may be created for IO. *CPU affinity is not a security feature.*
343
344 For more information regarding *affinity* see `man cpuset`. Here the
345 `List Format` corresponds to valid *affinity* values. Visit its `Formats`
346 section for more examples.
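
All three settings can also be applied from the command line. A combined sketch,
with the VM ID `100` as a placeholder and the example values used above:

----
# qm set 100 --cpulimit 4 --cpuunits 200 --affinity 0-3,8-11
----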
347
348 CPU Type
349 ^^^^^^^^
350
QEMU can emulate a number of different *CPU types*, from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3D rendering, random number generation, memory protection, etc.
354 Usually you should select for your VM a processor type which closely matches the
355 CPU of the host system, as it means that the host CPU features (also called _CPU
356 flags_ ) will be available in your VMs. If you want an exact match, you can set
357 the CPU type to *host* in which case the VM will have exactly the same CPU flags
358 as your host system.
359
360 This has a downside though. If you want to do a live migration of VMs between
361 different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the QEMU process will stop. To
remedy this, QEMU also has its own CPU type *kvm64*, which {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flag set,
but is guaranteed to work everywhere.
366
367 In short, if you care about live migration and moving VMs between nodes, leave
368 the kvm64 default. If you don’t care about live migration or have a homogeneous
369 cluster where all nodes have the same CPU, set the CPU type to host, as in
370 theory this will give your guests maximum performance.
371
372 Custom CPU Types
373 ^^^^^^^^^^^^^^^^
374
375 You can specify custom CPU types with a configurable set of features. These are
376 maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
377 an administrator. See `man cpu-models.conf` for format details.
378
379 Specified custom types can be selected by any user with the `Sys.Audit`
380 privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
381 or API, the name needs to be prefixed with 'custom-'.
382
383 Meltdown / Spectre related CPU flags
384 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
385
386 There are several CPU flags related to the Meltdown and Spectre vulnerabilities
387 footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
388 manually unless the selected CPU type of your VM already enables them by default.
389
390 There are two requirements that need to be fulfilled in order to use these
391 CPU flags:
392
393 * The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
394 * The guest operating system must be updated to a version which mitigates the
395 attacks and is able to utilize the CPU feature
396
397 Otherwise you need to set the desired CPU flag of the virtual CPU, either by
398 editing the CPU options in the WebUI, or by setting the 'flags' property of the
399 'cpu' option in the VM configuration file.
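
For example, a sketch of enabling the 'pcid' and 'spec-ctrl' flags on top of the
default kvm64 CPU type, with the VM ID `100` as a placeholder (the value is
quoted because of the `;` separator between flags):

----
# qm set 100 --cpu 'kvm64,flags=+pcid;+spec-ctrl'
----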
400
401 For Spectre v1,v2,v4 fixes, your CPU or system vendor also needs to provide a
402 so-called ``microcode update'' footnote:[You can use `intel-microcode' /
403 `amd-microcode' from Debian non-free if your vendor does not provide such an
404 update. Note that not all affected CPUs can be updated to support spec-ctrl.]
405 for your CPU.
406
407
408 To check if the {pve} host is vulnerable, execute the following command as root:
409
410 ----
411 for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
412 ----
413
A community script is also available to detect if the host is still vulnerable.
415 footnote:[spectre-meltdown-checker https://meltdown.ovh/]
416
417 Intel processors
418 ^^^^^^^^^^^^^^^^
419
420 * 'pcid'
421 +
422 This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
423 called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
424 the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
425 mechanism footnote:[PCID is now a critical performance/security feature on x86
426 https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
427 +
428 To check if the {pve} host supports PCID, execute the following command as root:
429 +
430 ----
431 # grep ' pcid ' /proc/cpuinfo
432 ----
433 +
If this does not return empty, your host's CPU has support for 'pcid'.
435
436 * 'spec-ctrl'
437 +
438 Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
439 in cases where retpolines are not sufficient.
440 Included by default in Intel CPU models with -IBRS suffix.
441 Must be explicitly turned on for Intel CPU models without -IBRS suffix.
442 Requires an updated host CPU microcode (intel-microcode >= 20180425).
443 +
444 * 'ssbd'
445 +
446 Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
447 Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).
449
450
451 AMD processors
452 ^^^^^^^^^^^^^^
453
454 * 'ibpb'
455 +
456 Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
457 in cases where retpolines are not sufficient.
458 Included by default in AMD CPU models with -IBPB suffix.
459 Must be explicitly turned on for AMD CPU models without -IBPB suffix.
460 Requires the host CPU microcode to support this feature before it can be used for guest CPUs.
461
462
463
464 * 'virt-ssbd'
465 +
466 Required to enable the Spectre v4 (CVE-2018-3639) fix.
467 Not included by default in any AMD CPU model.
468 Must be explicitly turned on for all AMD CPU models.
469 This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" CPU model,
because this is a virtual feature which does not exist in the physical CPUs.
472
473
474 * 'amd-ssbd'
475 +
476 Required to enable the Spectre v4 (CVE-2018-3639) fix.
477 Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
478 This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility, as some kernels only know about virt-ssbd.
480
481
482 * 'amd-no-ssb'
483 +
484 Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
485 Not included by default in any AMD CPU model.
Future CPU hardware generations will not be vulnerable to CVE-2018-3639,
and thus the guest should be told not to enable its mitigations by exposing amd-no-ssb.
488 This is mutually exclusive with virt-ssbd and amd-ssbd.
489
490
491 NUMA
492 ^^^^
493 You can also optionally emulate a *NUMA*
494 footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
495 in your VMs. The basics of the NUMA architecture mean that instead of having a
496 global memory pool available to all your cores, the memory is spread into local
497 banks close to each socket.
498 This can bring speed improvements as the memory bus is not a bottleneck
499 anymore. If your system has a NUMA architecture footnote:[if the command
500 `numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
502 will allow proper distribution of the VM resources on the host system.
503 This option is also required to hot-plug cores or RAM in a VM.
504
505 If the NUMA option is used, it is recommended to set the number of sockets to
506 the number of nodes of the host system.
507
508 vCPU hot-plug
509 ^^^^^^^^^^^^^
510
511 Modern operating systems introduced the capability to hot-plug and, to a
512 certain extent, hot-unplug CPUs in a running system. Virtualization allows us
513 to avoid a lot of the (physical) problems real hardware can cause in such
514 scenarios.
515 Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
517 be replicated with other, well tested and less complicated, features, see
518 xref:qm_cpu_resource_limits[Resource Limits].
519
In {pve} the maximal number of plugged-in CPUs is always `cores * sockets`.
To start a VM with fewer than this total core count of CPUs you may use the
*vcpus* setting; it denotes how many vCPUs should be plugged in at VM start.

Currently, this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.
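
A minimal sketch of the corresponding CLI call, with the VM ID `100` as a
placeholder: the VM gets 2 sockets with 4 cores each (8 hot-pluggable CPUs in
total), but only 4 vCPUs are plugged in at start:

----
# qm set 100 --sockets 2 --cores 4 --vcpus 4
----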
526
You can use a udev rule as follows to automatically set new CPUs as online in
528 the guest:
529
530 ----
531 SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
532 ----
533
534 Save this under /etc/udev/rules.d/ as a file ending in `.rules`.
535
NOTE: CPU hot-remove is machine dependent and requires guest cooperation. The
deletion command does not guarantee that CPU removal actually happens; typically
it's a request forwarded to the guest OS using a target-dependent mechanism, such
as ACPI on x86/amd64.
540
541
542 [[qm_memory]]
543 Memory
544 ~~~~~~
545
For each VM you have the option to set a fixed amount of memory or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.
549
550 .Fixed Memory Allocation
551 [thumbnail="screenshot/gui-create-vm-memory.png"]
552
553 When setting memory and minimum memory to the same amount
554 {pve} will simply allocate what you specify to your VM.
555
556 Even when using a fixed memory size, the ballooning device gets added to the
557 VM, because it delivers useful information such as how much memory the guest
558 really uses.
559 In general, you should leave *ballooning* enabled, but if you want to disable
560 it (like for debugging purposes), simply uncheck *Ballooning Device* or set
561
562 balloon: 0
563
564 in the configuration.
565
566 .Automatic Memory Allocation
567
568 // see autoballoon() in pvestatd.pm
569 When setting the minimum memory lower than memory, {pve} will make sure that the
570 minimum amount you specified is always available to the VM, and if RAM usage on
571 the host is below 80%, will dynamically add memory to the guest up to the
572 maximum memory specified.
573
574 When the host is running low on RAM, the VM will then release some memory
back to the host, swapping out running processes if needed and starting the OOM
killer as a last resort. The passing around of memory between host and guest is
577 done via a special `balloon` kernel driver running inside the guest, which will
578 grab or release memory pages from the host.
579 footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]
580
581 When multiple VMs use the autoallocate facility, it is possible to set a
582 *Shares* coefficient which indicates the relative amount of the free host memory
583 that each VM should take. Suppose for instance you have four VMs, three of them
584 running an HTTP server and the last one is a database server. To cache more
585 database blocks in the database server RAM, you would like to prioritize the
586 database VM when spare RAM is available. For this you assign a Shares property
587 of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will get
9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8GB extra RAM and each HTTP server
will get 1.6GB.
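
The corresponding CLI options are `memory` (maximum, in MiB), `balloon` (minimum,
in MiB) and `shares`. A minimal sketch for a database VM similar to the example
above, with the VM ID `100` and the chosen sizes as placeholders:

----
# qm set 100 --memory 8192 --balloon 2048 --shares 3000
----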
592
593 All Linux distributions released after 2010 have the balloon kernel driver
594 included. For Windows OSes, the balloon driver needs to be added manually and can
595 incur a slowdown of the guest, so we don't recommend using it on critical
596 systems.
597 // see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
598
599 When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
600 of RAM available to the host.
601
602
603 [[qm_network_device]]
604 Network Device
605 ~~~~~~~~~~~~~~
606
607 [thumbnail="screenshot/gui-create-vm-network.png"]
608
609 Each VM can have many _Network interface controllers_ (NIC), of four different
610 types:
611
612 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
613 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
614 performance. Like all VirtIO devices, the guest OS should have the proper driver
615 installed.
* the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
only be used when emulating older operating systems (released before 2002)
618 * the *vmxnet3* is another paravirtualized device, which should only be used
619 when importing a VM from another hypervisor.
620
621 {pve} will generate for each NIC a random *MAC address*, so that your VM is
622 addressable on Ethernet networks.
623
624 The NIC you added to the VM can follow one of two different models:
625
626 * in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
628 tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
629 have direct access to the Ethernet LAN on which the host is located.
630 * in the alternative *NAT mode*, each virtual NIC will only communicate with
631 the QEMU user networking stack, where a built-in router and DHCP server can
632 provide network access. This built-in DHCP will serve addresses in the private
633 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
634 should only be used for testing. This mode is only available via CLI or the API,
635 but not via the WebUI.
636
637 You can also skip adding a network device when creating a VM by selecting *No
638 network device*.
639
640 You can overwrite the *MTU* setting for each VM network device. The option
641 `mtu=1` represents a special case, in which the MTU value will be inherited
642 from the underlying bridge.
643 This option is only available for *VirtIO* network devices.
644
645 .Multiqueue
646 If you are using the VirtIO driver, you can optionally activate the
647 *Multiqueue* option. This option allows the guest OS to process networking
648 packets using multiple virtual CPUs, providing an increase in the total number
649 of packets transferred.
650
651 //http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
652 When using the VirtIO driver with {pve}, each NIC network queue is passed to the
653 host kernel, where the queue will be processed by a kernel thread spawned by the
654 vhost driver. With this option activated, it is possible to pass _multiple_
655 network queues to the host kernel for each NIC.
656
657 //https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
658 When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set the number
of multi-purpose channels on each VirtIO NIC inside the VM with the ethtool
command:
662
663 `ethtool -L ens1 combined X`
664
where X is the number of vCPUs of the VM.
666
667 You should note that setting the Multiqueue parameter to a value greater
668 than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
670 process a great number of incoming connections, such as when the VM is running
671 as a router, reverse proxy or a busy HTTP server doing long polling.
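
A minimal sketch of configuring a VirtIO NIC with four queues and the MTU
inherited from the bridge, assuming the VM ID `100` and the default bridge
`vmbr0` as placeholders:

----
# qm set 100 --net0 virtio,bridge=vmbr0,queues=4,mtu=1
----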
672
673 [[qm_display]]
674 Display
675 ~~~~~~~
676
677 QEMU can virtualize a few types of VGA hardware. Some examples are:
678
679 * *std*, the default, emulates a card with Bochs VBE extensions.
680 * *cirrus*, this was once the default, it emulates a very old hardware module
681 with all its problems. This display type should only be used if really
682 necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
683 qemu: using cirrus considered harmful], for example, if using Windows XP or
684 earlier
685 * *vmware*, is a VMWare SVGA-II compatible adapter.
686 * *qxl*, is the QXL paravirtualized graphics card. Selecting this also
687 enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
688 VM.
* *virtio-gl*, often named VirGL, is a virtual 3D GPU for use inside VMs that
can offload workloads to the host GPU without requiring special (expensive)
models and drivers, and without binding the host GPU completely, allowing
reuse between multiple guests and/or the host.
693 +
694 NOTE: VirGL support needs some extra libraries that aren't installed by
695 default due to being relatively big and also not available as open source for
696 all GPU models/vendors. For most setups you'll just need to do:
697 `apt install libgl1 libegl1`
698
You can edit the amount of memory given to the virtual GPU by setting
700 the 'memory' option. This can enable higher resolutions inside the VM,
701 especially with SPICE/QXL.
702
As the memory is reserved by the display device, selecting Multi-Monitor mode
704 for SPICE (such as `qxl2` for dual monitors) has some implications:
705
706 * Windows needs a device for each monitor, so if your 'ostype' is some
707 version of Windows, {pve} gives the VM an extra device per monitor.
708 Each device gets the specified amount of memory.
709
* Linux VMs can always enable more virtual monitors, but selecting
a Multi-Monitor mode multiplies the memory given to the device by
the number of monitors.
713
714 Selecting `serialX` as display 'type' disables the VGA output, and redirects
715 the Web Console to the selected serial port. A configured display 'memory'
716 setting will be ignored in that case.
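
A minimal sketch of selecting a SPICE dual-monitor display with 32 MiB of video
memory from the command line, with the VM ID `100` as a placeholder:

----
# qm set 100 --vga qxl2,memory=32
----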
717
718 [[qm_usb_passthrough]]
719 USB Passthrough
720 ~~~~~~~~~~~~~~~
721
722 There are two different types of USB passthrough devices:
723
724 * Host USB passthrough
725 * SPICE USB passthrough
726
727 Host USB passthrough works by giving a VM a USB device of the host.
728 This can either be done via the vendor- and product-id, or
729 via the host bus and port.
730
731 The vendor/product-id looks like this: *0123:abcd*,
732 where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same USB device
734 have the same id.
735
736 The bus/port looks like this: *1-2.3.4*, where *1* is the bus
737 and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).
740
741 If a device is present in a VM configuration when the VM starts up,
742 but the device is not present in the host, the VM can boot without problems.
743 As soon as the device/port is available in the host, it gets passed through.
744
745 WARNING: Using this kind of USB passthrough means that you cannot move
746 a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.
748
749 The second type of passthrough is SPICE USB passthrough. This is useful
750 if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client is,
752 directly to the VM (for example an input device or hardware dongle).
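
Both variants can also be configured from the command line. A minimal sketch with
the VM ID `100` as a placeholder, using the vendor/product and bus/port examples
from above plus an additional SPICE USB port:

----
# qm set 100 --usb0 host=0123:abcd
# qm set 100 --usb1 host=1-2.3.4
# qm set 100 --usb2 spice
----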
753
754
755 [[qm_bios_and_uefi]]
756 BIOS and UEFI
757 ~~~~~~~~~~~~~
758
In order to properly emulate a computer, QEMU needs to use a firmware, which,
on common PCs, is often known as BIOS or (U)EFI. It is executed as one of the
first steps when booting a VM, and is responsible for doing basic hardware
initialization and for providing an interface to the firmware and hardware for
763 the operating system. By default QEMU uses *SeaBIOS* for this, which is an
764 open-source, x86 BIOS implementation. SeaBIOS is a good choice for most
765 standard setups.
766
Some operating systems (such as Windows 11) may require the use of a UEFI
compatible implementation instead. In such cases, you must use *OVMF*,
769 which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]
770
There are other scenarios in which SeaBIOS may not be the ideal firmware to
772 boot from, for example if you want to do VGA passthrough. footnote:[Alex
773 Williamson has a good blog entry about this
774 https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
775
776 If you want to use OVMF, there are several things to consider:
777
778 In order to save things like the *boot order*, there needs to be an EFI Disk.
779 This disk will be included in backups and snapshots, and there can only be one.
780
781 You can create such a disk with the following command:
782
783 ----
784 # qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
785 ----
786
787 Where *<storage>* is the storage where you want to have the disk, and
788 *<format>* is a format which the storage supports. Alternatively, you can
789 create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
790 hardware section of a VM.
791
792 The *efitype* option specifies which version of the OVMF firmware should be
793 used. For new VMs, this should always be '4m', as it supports Secure Boot and
794 has more space allocated to support future development (this is the default in
795 the GUI).
796
*pre-enrolled-keys* specifies if the efidisk should come pre-loaded with
798 distribution-specific and Microsoft Standard Secure Boot keys. It also enables
799 Secure Boot by default (though it can still be disabled in the OVMF menu within
800 the VM).
801
802 NOTE: If you want to start using Secure Boot in an existing VM (that still uses
803 a '2m' efidisk), you need to recreate the efidisk. To do so, delete the old one
804 (`qm set <vmid> -delete efidisk0`) and add a new one as described above. This
805 will reset any custom configurations you have made in the OVMF menu!
806
807 When using OVMF with a virtual display (without VGA passthrough),
808 you need to set the client resolution in the OVMF menu (which you can reach
809 with a press of the ESC button during boot), or you have to choose
810 SPICE as the display type.
811
812 [[qm_tpm]]
813 Trusted Platform Module (TPM)
814 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
815
816 A *Trusted Platform Module* is a device which stores secret data - such as
817 encryption keys - securely and provides tamper-resistance functions for
818 validating system boot.
819
820 Certain operating systems (such as Windows 11) require such a device to be
821 attached to a machine (be it physical or virtual).
822
A TPM is added by specifying a *tpmstate* volume. This works similarly to an
824 efidisk, in that it cannot be changed (only removed) once created. You can add
825 one via the following command:
826
827 ----
828 # qm set <vmid> -tpmstate0 <storage>:1,version=<version>
829 ----
830
831 Where *<storage>* is the storage you want to put the state on, and *<version>*
832 is either 'v1.2' or 'v2.0'. You can also add one via the web interface, by
833 choosing 'Add' -> 'TPM State' in the hardware section of a VM.
834
835 The 'v2.0' TPM spec is newer and better supported, so unless you have a specific
836 implementation that requires a 'v1.2' TPM, it should be preferred.
837
838 NOTE: Compared to a physical TPM, an emulated one does *not* provide any real
839 security benefits. The point of a TPM is that the data on it cannot be modified
840 easily, except via commands specified as part of the TPM spec. Since with an
841 emulated device the data storage happens on a regular volume, it can potentially
842 be edited by anyone with access to it.
843
844 [[qm_ivshmem]]
845 Inter-VM shared memory
846 ~~~~~~~~~~~~~~~~~~~~~~
847
848 You can add an Inter-VM shared memory device (`ivshmem`), which allows one to
849 share memory between the host and a guest, or also between multiple guests.
850
851 To add such a device, you can use `qm`:
852
853 ----
854 # qm set <vmid> -ivshmem size=32,name=foo
855 ----
856
857 Where the size is in MiB. The file will be located under
858 `/dev/shm/pve-shm-$name` (the default name is the vmid).
859
NOTE: Currently the device will get deleted as soon as any VM using it is
shut down or stopped. Open connections will still persist, but new connections
862 to the exact same device cannot be made anymore.
863
864 A use case for such a device is the Looking Glass
865 footnote:[Looking Glass: https://looking-glass.io/] project, which enables high
866 performance, low-latency display mirroring between host and guest.
867
868 [[qm_audio_device]]
869 Audio Device
870 ~~~~~~~~~~~~
871
872 To add an audio device run the following command:
873
874 ----
875 qm set <vmid> -audio0 device=<device>
876 ----
877
878 Supported audio devices are:
879
880 * `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9
881 * `intel-hda`: Intel HD Audio Controller, emulates ICH6
882 * `AC97`: Audio Codec '97, useful for older operating systems like Windows XP
883
884 There are two backends available:
885
886 * 'spice'
887 * 'none'
888
889 The 'spice' backend can be used in combination with xref:qm_display[SPICE] while
890 the 'none' backend can be useful if an audio device is needed in the VM for some
891 software to work. To use the physical audio device of the host use device
892 passthrough (see xref:qm_pci_passthrough[PCI Passthrough] and
893 xref:qm_usb_passthrough[USB Passthrough]). Remote protocols like Microsoft’s RDP
894 have options to play sound.
895
896
897 [[qm_virtio_rng]]
898 VirtIO RNG
899 ~~~~~~~~~~
900
An RNG (Random Number Generator) is a device providing entropy ('randomness') to
902 a system. A virtual hardware-RNG can be used to provide such entropy from the
903 host system to a guest VM. This helps to avoid entropy starvation problems in
904 the guest (a situation where not enough entropy is available and the system may
slow down or run into problems), especially during the guest's boot process.
906
907 To add a VirtIO-based emulated RNG, run the following command:
908
909 ----
910 qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
911 ----
912
913 `source` specifies where entropy is read from on the host and has to be one of
914 the following:
915
916 * `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
917 * `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
918 starvation on the host system)
919 * `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
920 are available, the one selected in
921 `/sys/devices/virtual/misc/hw_random/rng_current` will be used)
922
A limit can be specified via the `max_bytes` and `period` parameters; they are
read as `max_bytes` per `period` in milliseconds. However, it does not represent
925 a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
926 available on a 1 second timer, not that 1 KiB is streamed to the guest over the
927 course of one second. Reducing the `period` can thus be used to inject entropy
928 into the guest at a faster rate.
929
930 By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
931 recommended to always use a limiter to avoid guests using too many host
932 resources. If desired, a value of '0' for `max_bytes` can be used to disable
933 all limits.
934
935 [[qm_bootorder]]
936 Device Boot Order
937 ~~~~~~~~~~~~~~~~~
938
939 QEMU can tell the guest which devices it should boot from, and in which order.
940 This can be specified in the config via the `boot` property, for example:
941
942 ----
943 boot: order=scsi0;net0;hostpci0
944 ----
945
946 [thumbnail="screenshot/gui-qemu-edit-bootorder.png"]
947
This way, the guest would first attempt to boot from the disk `scsi0`; if that
fails, it would go on to attempt network boot from `net0`, and in case that
fails too, finally attempt to boot from a passed-through PCIe device (seen as a
disk in the case of NVMe, otherwise it tries to launch into an option ROM).
952
953 On the GUI you can use a drag-and-drop editor to specify the boot order, and use
954 the checkbox to enable or disable certain devices for booting altogether.
955
956 NOTE: If your guest uses multiple disks to boot the OS or load the bootloader,
957 all of them must be marked as 'bootable' (that is, they must have the checkbox
958 enabled or appear in the list in the config) for the guest to be able to boot.
959 This is because recent SeaBIOS and OVMF versions only initialize disks if they
960 are marked 'bootable'.
961
962 In any case, even devices not appearing in the list or having the checkmark
disabled will still be available to the guest, once its operating system has
964 booted and initialized them. The 'bootable' flag only affects the guest BIOS and
965 bootloader.
966
967
968 [[qm_startup_and_shutdown]]
969 Automatic Start and Shutdown of Virtual Machines
970 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
971
972 After creating your VMs, you probably want them to start automatically
973 when the host system boots. For this you need to select the option 'Start at
974 boot' from the 'Options' Tab of your VM in the web interface, or set it with
975 the following command:
976
977 ----
978 # qm set <vmid> -onboot 1
979 ----
980
981 .Start and Shutdown Order
982
983 [thumbnail="screenshot/gui-qemu-edit-start-order.png"]
984
In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters (a combined CLI example follows the list):
989
* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if
992 you want the VM to be the first to be started. (We use the reverse startup
993 order for shutdown, so a machine with a start order of 1 would be the last to
994 be shut down). If multiple VMs have the same order defined on a host, they will
995 additionally be ordered by 'VMID' in ascending order.
* *Startup delay*: Defines the interval between this VM's start and the start of
subsequent VMs. For example, set it to 240 if you want to wait 240 seconds before
998 starting other VMs.
999 * *Shutdown timeout*: Defines the duration in seconds {pve} should wait
1000 for the VM to be offline after issuing a shutdown command. By default this
1001 value is set to 180, which means that {pve} will issue a shutdown request and
1002 wait 180 seconds for the machine to be offline. If the machine is still online
1003 after the timeout it will be stopped forcefully.
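
All three parameters are stored in the `startup` property of the VM
configuration. A combined sketch, with the VM ID `100` and the example values
from the list above as placeholders:

----
# qm set 100 --startup order=1,up=240,down=180
----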
1004
1005 NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
1006 'boot order' options currently. Those VMs will be skipped by the startup and
1007 shutdown algorithm as the HA manager itself ensures that VMs get started and
1008 stopped.
1009
1010 Please note that machines without a Start/Shutdown order parameter will always
1011 start after those where the parameter is set. Further, this parameter can only
1012 be enforced between virtual machines running on the same host, not
1013 cluster-wide.
1014
1015 If you require a delay between the host boot and the booting of the first VM,
1016 see the section on xref:first_guest_boot_delay[Proxmox VE Node Management].
1017
1018
1019 [[qm_qemu_agent]]
1020 QEMU Guest Agent
1021 ~~~~~~~~~~~~~~~~
1022
1023 The QEMU Guest Agent is a service which runs inside the VM, providing a
1024 communication channel between the host and the guest. It is used to exchange
1025 information and allows the host to issue commands to the guest.
1026
1027 For example, the IP addresses in the VM summary panel are fetched via the guest
1028 agent.
1029
1030 Or when starting a backup, the guest is told via the guest agent to sync
1031 outstanding writes via the 'fs-freeze' and 'fs-thaw' commands.
1032
1033 For the guest agent to work properly the following steps must be taken:
1034
1035 * install the agent in the guest and make sure it is running
1036 * enable the communication via the agent in {pve}
1037
1038 Install Guest Agent
1039 ^^^^^^^^^^^^^^^^^^^
1040
1041 For most Linux distributions, the guest agent is available. The package is
1042 usually named `qemu-guest-agent`.
1043
1044 For Windows, it can be installed from the
1045 https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso[Fedora
1046 VirtIO driver ISO].
1047
1048 Enable Guest Agent Communication
1049 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1050
1051 Communication from {pve} with the guest agent can be enabled in the VM's
1052 *Options* panel. A fresh start of the VM is necessary for the changes to take
1053 effect.
1054
1055 It is possible to enable the 'Run guest-trim' option. With this enabled,
1056 {pve} will issue a trim command to the guest after the following
1057 operations that have the potential to write out zeros to the storage:
1058
1059 * moving a disk to another storage
1060 * live migrating a VM to another node with local storage
1061
1062 On a thin provisioned storage, this can help to free up unused space.
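
A minimal sketch of enabling both the agent and the 'Run guest-trim' option from
the command line, with the VM ID `100` as a placeholder:

----
# qm set 100 --agent enabled=1,fstrim_cloned_disks=1
----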
1063
1064 Troubleshooting
1065 ^^^^^^^^^^^^^^^
1066
1067 .VM does not shut down
1068
1069 Make sure the guest agent is installed and running.
1070
1071 Once the guest agent is enabled, {pve} will send power commands like
1072 'shutdown' via the guest agent. If the guest agent is not running, commands
1073 cannot get executed properly and the shutdown command will run into a timeout.
1074
1075 [[qm_spice_enhancements]]
1076 SPICE Enhancements
1077 ~~~~~~~~~~~~~~~~~~
1078
1079 SPICE Enhancements are optional features that can improve the remote viewer
1080 experience.
1081
1082 To enable them via the GUI go to the *Options* panel of the virtual machine. Run
1083 the following command to enable them via the CLI:
1084
1085 ----
1086 qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all
1087 ----
1088
1089 NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine
1090 must be set to SPICE (qxl).
1091
1092 Folder Sharing
1093 ^^^^^^^^^^^^^^
1094
1095 Share a local folder with the guest. The `spice-webdavd` daemon needs to be
1096 installed in the guest. It makes the shared folder available through a local
1097 WebDAV server located at http://localhost:9843.
1098
1099 For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded
1100 from the
1101 https://www.spice-space.org/download.html#windows-binaries[official SPICE website].
1102
1103 Most Linux distributions have a package called `spice-webdavd` that can be
1104 installed.
1105
1106 To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'.
1107 Select the folder to share and then enable the checkbox.
1108
1109 NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
1110
1111 CAUTION: Experimental! Currently this feature does not work reliably.
1112
1113 Video Streaming
1114 ^^^^^^^^^^^^^^^
1115
1116 Fast refreshing areas are encoded into a video stream. Two options exist:
1117
1118 * *all*: Any fast refreshing area will be encoded into a video stream.
1119 * *filter*: Additional filters are used to decide if video streaming should be
1120 used (currently only small window surfaces are skipped).
1121
A general recommendation on whether video streaming should be enabled and which
option to choose cannot be given. Your mileage may vary depending on the specific
1124 circumstances.
1125
1126 Troubleshooting
1127 ^^^^^^^^^^^^^^^
1128
1129 .Shared folder does not show up
1130
1131 Make sure the WebDAV service is enabled and running in the guest. On Windows it
1132 is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be
1133 different depending on the distribution.
1134
1135 If the service is running, check the WebDAV server by opening
1136 http://localhost:9843 in a browser in the guest.
1137
1138 It can help to restart the SPICE session.
1139
1140 [[qm_migration]]
1141 Migration
1142 ---------
1143
1144 [thumbnail="screenshot/gui-qemu-migrate.png"]
1145
1146 If you have a cluster, you can migrate your VM to another host with
1147
1148 ----
1149 # qm migrate <vmid> <target>
1150 ----
1151
1152 There are generally two mechanisms for this
1153
1154 * Online Migration (aka Live Migration)
1155 * Offline Migration
1156
1157 Online Migration
1158 ~~~~~~~~~~~~~~~~
1159
1160 If your VM is running and no locally bound resources are configured (such as
1161 passed-through devices), you can initiate a live migration with the `--online`
flag in the `qm migrate` command invocation. The web interface defaults to
live migration when the VM is running.
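
A minimal sketch of such an invocation, with the VM ID `100` and the target node
name `node2` as placeholders:

----
# qm migrate 100 node2 --online
----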
1164
1165 How it works
1166 ^^^^^^^^^^^^
1167
1168 Online migration first starts a new QEMU process on the target host with the
1169 'incoming' flag, which performs only basic initialization with the guest vCPUs
1170 still paused and then waits for the guest memory and device state data streams
1171 of the source Virtual Machine.
All other resources, such as disks, are either shared or were already sent
before the runtime state migration of the VM begins, so only the memory content
and device state remain to be transferred.
1175
1176 Once this connection is established, the source begins asynchronously sending
1177 the memory content to the target. If the guest memory on the source changes,
1178 those sections are marked dirty and another pass is made to send the guest
1179 memory data.
1180 This loop is repeated until the data difference between running source VM
1181 and incoming target VM is small enough to be sent in a few milliseconds,
because then the source VM can be paused completely, without a user or program
noticing the pause, so that the remaining data can be sent to the target, and
then the target VM's CPUs can be unpaused to make it the new running VM in well
under a second.
1186
1187 Requirements
1188 ^^^^^^^^^^^^
1189
1190 For Live Migration to work, there are some things required:
1191
1192 * The VM has no local resources that cannot be migrated. For example,
1193 PCI or USB devices that are passed through currently block live-migration.
1194 Local Disks, on the other hand, can be migrated by sending them to the target
1195 just fine.
1196 * The hosts are located in the same {pve} cluster.
1197 * The hosts have a working (and reliable) network connection between them.
1198 * The target host must have the same, or higher versions of the
1199 {pve} packages. Although it can sometimes work the other way around, this
1200 cannot be guaranteed.
1201 * The hosts have CPUs from the same vendor with similar capabilities. Different
1202 vendors *might* work depending on the actual models and the VM's configured CPU
1203 type, but this cannot be guaranteed - so please test before deploying such a
1204 setup in production. A quick way to compare the host CPUs is shown below.
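
For example, you can compare the CPU vendor and model of the hosts with `lscpu`
(a generic Linux tool, not part of `qm`):

----
# lscpu | grep -E 'Vendor ID|Model name'
----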
1205
1206 Offline Migration
1207 ~~~~~~~~~~~~~~~~~
1208
1209 If you have local resources, you can still migrate your VMs offline as long as
1210 all disks are on storages that are defined on both hosts.
1211 Migration then copies the disks to the target host over the network, as with
1212 online migration. Note that any hardware pass-through configuration may need to
1213 be adapted to the device location on the target host.
1214
1215 // TODO: mention hardware map IDs as better way to solve that, once available
1216
1217 [[qm_copy_and_clone]]
1218 Copies and Clones
1219 -----------------
1220
1221 [thumbnail="screenshot/gui-qemu-full-clone.png"]
1222
1223 VM installation is usually done using installation media (CD-ROM)
1224 from the operating system vendor. Depending on the OS, this can be a
1225 time-consuming task that one might want to avoid.
1226
1227 An easy way to deploy many VMs of the same type is to copy an existing
1228 VM. We use the term 'clone' for such copies, and distinguish between
1229 'linked' and 'full' clones.
1230
1231 Full Clone::
1232
1233 The result of such a copy is an independent VM. The
1234 new VM does not share any storage resources with the original.
1235 +
1236
1237 It is possible to select a *Target Storage*, so one can use this to
1238 migrate a VM to a totally different storage. You can also change the
1239 disk image *Format* if the storage driver supports several formats.
1240 +
1241
1242 NOTE: A full clone needs to read and copy all VM image data. This is
1243 usually much slower than creating a linked clone.
1244 +
1245
1246 Some storage types allow copying a specific *Snapshot*, which
1247 defaults to the 'current' VM data. This also means that the final copy
1248 never includes any additional snapshots from the original VM.
1249
1250
1251 Linked Clone::
1252
1253 Modern storage drivers support a way to generate fast linked
1254 clones. Such a clone is a writable copy whose initial contents are the
1255 same as the original data. Creating a linked clone is nearly
1256 instantaneous, and initially consumes no additional space.
1257 +
1258
1259 They are called 'linked' because the new image still refers to the
1260 original. Unmodified data blocks are read from the original image, but
1261 modifications are written to (and afterwards read from) a new
1262 location. This technique is called 'Copy-on-write'.
1263 +
1264
1265 This requires that the original volume is read-only. With {pve} one
1266 can convert any VM into a read-only <<qm_templates, Template>>. Such
1267 templates can later be used to create linked clones efficiently.
1268 +
1269
1270 NOTE: You cannot delete an original template while linked clones
1271 exist.
1272 +
1273
1274 It is not possible to change the *Target storage* for linked clones,
1275 because this is a storage-internal feature.
1276
1277
1278 The *Target node* option allows you to create the new VM on a
1279 different node. The only restriction is that the VM must be on shared
1280 storage, and that storage must also be available on the target node.
1281
1282 To avoid resource conflicts, all network interface MAC addresses get
1283 randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
1284 setting.
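
On the command line, clones are created with `qm clone`. A minimal sketch of a
full clone to a specific storage and node (VM IDs, name, storage and node are
placeholders):

----
# qm clone 999 123 --name webmail-clone --full --storage local-lvm --target node2
----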
1285
1286
1287 [[qm_templates]]
1288 Virtual Machine Templates
1289 -------------------------
1290
1291 One can convert a VM into a Template. Such templates are read-only,
1292 and you can use them to create linked clones.
1293
1294 NOTE: It is not possible to start templates, because this would modify
1295 the disk images. If you want to change the template, create a linked
1296 clone and modify that.
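
On the command line, a VM is converted with `qm template`, and clones of the
template are then created with `qm clone`, which creates a linked clone by
default if the underlying storage supports it (VM IDs and the name are
placeholders):

----
# qm template 900
# qm clone 900 901 --name from-template
----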
1297
1298 VM Generation ID
1299 ----------------
1300
1301 {pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
1302 'vmgenid' Specification
1303 https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
1304 for virtual machines.
1305 This can be used by the guest operating system to detect any event that results
1306 in a time shift, for example, restoring a backup or a snapshot rollback.
1307
1308 When creating a new VM, a 'vmgenid' will be automatically generated and saved
1309 in its configuration file.
1310
1311 To create and add a 'vmgenid' to an already existing VM, one can either pass the
1312 special value `1' to let {pve} autogenerate one, or manually set a 'UUID'
1313 footnote:[Online GUID generator http://guid.one/] as the value, for
1314 example:
1315
1316 ----
1317 # qm set VMID -vmgenid 1
1318 # qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
1319 ----
1320
1321 NOTE: The initial addition of a 'vmgenid' device to an existing VM may have the
1322 same effects on the guest as a snapshot rollback or backup restore, since the
1323 VM can interpret this as a generation change.
1324
1325 In the rare case that the 'vmgenid' mechanism is not wanted, one can pass `0' as
1326 its value on VM creation, or retroactively delete the property from the
1327 configuration with:
1328
1329 ----
1330 # qm set VMID -delete vmgenid
1331 ----
1332
1333 The most prominent use case for 'vmgenid' is newer Microsoft Windows
1334 operating systems, which use it to avoid problems in time-sensitive or
1335 replicated services (such as databases or domain controllers
1336 footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
1337 on snapshot rollback, backup restore or a whole-VM clone operation.
1338
1339 Importing Virtual Machines and disk images
1340 ------------------------------------------
1341
1342 A VM export from a foreign hypervisor usually takes the form of one or more disk
1343 images, with a configuration file describing the settings of the VM (RAM,
1344 number of cores). +
1345 The disk images can be in the vmdk format, if the disks come from
1346 VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
1347 The most popular configuration format for VM exports is the OVF standard, but in
1348 practice interoperation is limited because many settings are not implemented in
1349 the standard itself, and hypervisors export the supplementary information
1350 in non-standard extensions.
1351
1352 Besides the problem of format, importing disk images from other hypervisors
1353 may fail if the emulated hardware changes too much from one hypervisor to
1354 another. Windows VMs are particularly affected by this, as the OS is very
1355 picky about any hardware changes. This problem may be solved by installing the
1356 MergeIDE.zip utility, available from the Internet, before exporting, and by
1357 choosing a hard disk type of *IDE* before booting the imported Windows VM.
1358
1359 Finally there is the question of paravirtualized drivers, which improve the
1360 speed of the emulated system and are specific to the hypervisor.
1361 GNU/Linux and other free Unix OSes have all the necessary drivers installed by
1362 default and you can switch to the paravirtualized drivers right after importing
1363 the VM. For Windows VMs, you need to install the Windows paravirtualized
1364 drivers yourself.
1365
1366 GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
1367 that we cannot guarantee a successful import/export of Windows VMs in all
1368 cases due to the problems above.
1369
1370 Step-by-step example of a Windows OVF import
1371 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1372
1373 Microsoft provides
1374 https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
1375 to get started with Windows development. We are going to use one of these
1376 to demonstrate the OVF import feature.
1377
1378 Download the Virtual Machine zip
1379 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1380
1381 After reviewing the user agreement, choose the _Windows 10
1382 Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.
1383
1384 Extract the disk image from the zip
1385 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1386
1387 Using the `unzip` utility or any archiver of your choice, unpack the zip,
1388 and copy the ovf and vmdk files to your {pve} host via ssh/scp.
1389
1390 Import the Virtual Machine
1391 ^^^^^^^^^^^^^^^^^^^^^^^^^^
1392
1393 The following command will create a new virtual machine, using cores, memory and
1394 VM name as read from the OVF manifest, and import the disks to the +local-lvm+
1395 storage. You have to configure the network manually.
1396
1397 ----
1398 # qm importovf 999 WinDev1709Eval.ovf local-lvm
1399 ----
1400
1401 The VM is ready to be started.
1402
1403 Adding an external disk image to a Virtual Machine
1404 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1405
1406 You can also add an existing disk image to a VM, either coming from a
1407 foreign hypervisor, or one that you created yourself.
1408
1409 Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:
1410
1411 vmdebootstrap --verbose \
1412 --size 10GiB --serial-console \
1413 --grub --no-extlinux \
1414 --package openssh-server \
1415 --package avahi-daemon \
1416 --package qemu-guest-agent \
1417 --hostname vm600 --enable-dhcp \
1418 --customize=./copy_pub_ssh.sh \
1419 --sparse --image vm600.raw
1420
1421 You can now create a new target VM, importing the image to the storage `pvedir`
1422 and attaching it to the VM's SCSI controller:
1423
1424 ----
1425 # qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
1426 --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
1427 --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
1428 ----
1429
1430 The VM is ready to be started.
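
If the target VM already exists, a disk image can also be imported as an
'unused' volume first and attached in a second step. A minimal sketch, reusing
the image and the `pvedir` storage from above (the resulting volume ID is
printed by the import command):

----
# qm importdisk 600 vm600.raw pvedir
# qm set 600 --scsi1 <volume-id-reported-by-the-import>
----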
1431
1432
1433 ifndef::wiki[]
1434 include::qm-cloud-init.adoc[]
1435 endif::wiki[]
1436
1437 ifndef::wiki[]
1438 include::qm-pci-passthrough.adoc[]
1439 endif::wiki[]
1440
1441 Hookscripts
1442 -----------
1443
1444 You can add a hook script to VMs with the config property `hookscript`.
1445
1446 ----
1447 # qm set 100 --hookscript local:snippets/hookscript.pl
1448 ----
1449
1450 It will be called during various phases of the guest's lifetime.
1451 For an example and documentation see the example script under
1452 `/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
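
As a rough sketch (not the shipped example script), a hook script is called with
the VMID as the first and the phase ('pre-start', 'post-start', 'pre-stop',
'post-stop') as the second argument. A minimal shell variant, assuming it is
stored on a storage that allows 'snippets', could look like this:

----
#!/bin/sh
# Minimal hook script sketch: log every phase of the guest lifecycle.
vmid="$1"    # first argument: VMID
phase="$2"   # second argument: pre-start, post-start, pre-stop or post-stop

echo "GUEST HOOK: VM $vmid entered phase $phase" >> /var/log/pve-hook.log
exit 0
----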
1453
1454 [[qm_hibernate]]
1455 Hibernation
1456 -----------
1457
1458 You can suspend a VM to disk with the GUI option `Hibernate` or with
1459
1460 ----
1461 # qm suspend ID --todisk
1462 ----
1463
1464 This means that the current content of the memory will be saved to disk
1465 and the VM gets stopped. On the next start, the memory content will be
1466 loaded and the VM can continue where it left off.
1467
1468 [[qm_vmstatestorage]]
1469 .State storage selection
1470 If no target storage for the memory is given, the first available of the
1471 following is chosen automatically:
1472
1473 1. The storage `vmstatestorage` from the VM config.
1474 2. The first shared storage from any VM disk.
1475 3. The first non-shared storage from any VM disk.
1476 4. The storage `local` as a fallback.
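
The `vmstatestorage` property can also be set explicitly with `qm set`, for
example (VM ID and storage name are placeholders):

----
# qm set 100 --vmstatestorage local-lvm
----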
1477
1478 Managing Virtual Machines with `qm`
1479 ------------------------------------
1480
1481 qm is the tool to manage QEMU/KVM virtual machines on {pve}. You can
1482 create and destroy virtual machines, and control execution
1483 (start/stop/suspend/resume). Besides that, you can use qm to set
1484 parameters in the associated config file. It is also possible to
1485 create and delete virtual disks.
1486
1487 CLI Usage Examples
1488 ~~~~~~~~~~~~~~~~~~
1489
1490 Using an iso file uploaded on the 'local' storage, create a VM
1491 with a 4 GB IDE disk on the 'local-lvm' storage
1492
1493 ----
1494 # qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
1495 ----
1496
1497 Start the new VM
1498
1499 ----
1500 # qm start 300
1501 ----
1502
1503 Send a shutdown request, then wait until the VM is stopped.
1504
1505 ----
1506 # qm shutdown 300 && qm wait 300
1507 ----
1508
1509 Same as above, but only wait for 40 seconds.
1510
1511 ----
1512 # qm shutdown 300 && qm wait 300 -timeout 40
1513 ----
1514
1515 Destroying a VM always removes it from Access Control Lists and it always
1516 removes the firewall configuration of the VM. You have to activate
1517 '--purge' if you want to additionally remove the VM from replication jobs,
1518 backup jobs and HA resource configurations.
1519
1520 ----
1521 # qm destroy 300 --purge
1522 ----
1523
1524 Move a disk image to a different storage.
1525
1526 ----
1527 # qm move-disk 300 scsi0 other-storage
1528 ----
1529
1530 Reassign a disk image to a different VM. This will remove the disk `scsi1` from
1531 the source VM and attach it as `scsi3` to the target VM. In the background,
1532 the disk image is renamed so that the name matches the new owner.
1533
1534 ----
1535 # qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
1536 ----
1537
1538
1539 [[qm_configuration]]
1540 Configuration
1541 -------------
1542
1543 VM configuration files are stored inside the Proxmox cluster file
1544 system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
1545 Like other files stored inside `/etc/pve/`, they get automatically
1546 replicated to all other cluster nodes.
1547
1548 NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
1549 unique cluster-wide.
1550
1551 .Example VM Configuration
1552 ----
1553 boot: order=virtio0;net0
1554 cores: 1
1555 sockets: 1
1556 memory: 512
1557 name: webmail
1558 ostype: l26
1559 net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
1560 virtio0: local:vm-100-disk-1,size=32G
1561 ----
1562
1563 Those configuration files are simple text files, and you can edit them
1564 using a normal text editor (`vi`, `nano`, ...). This is sometimes
1565 useful to do small corrections, but keep in mind that you need to
1566 restart the VM to apply such changes.
1567
1568 For that reason, it is usually better to use the `qm` command to
1569 generate and modify those files, or do the whole thing using the GUI.
1570 Our toolkit is smart enough to instantaneously apply most changes to a
1571 running VM. This feature is called "hot plug", and there is no
1572 need to restart the VM in that case.
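
For example, to inspect the current configuration and change a simple property
from the command line (the VM ID is a placeholder):

----
# qm config 100
# qm set 100 --onboot 1
----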
1573
1574
1575 File Format
1576 ~~~~~~~~~~~
1577
1578 VM configuration files use a simple colon-separated key/value
1579 format. Each line has the following format:
1580
1581 -----
1582 # this is a comment
1583 OPTION: value
1584 -----
1585
1586 Blank lines in those files are ignored, and lines starting with a `#`
1587 character are treated as comments and are also ignored.
1588
1589
1590 [[qm_snapshots]]
1591 Snapshots
1592 ~~~~~~~~~
1593
1594 When you create a snapshot, `qm` stores the configuration at snapshot
1595 time into a separate snapshot section within the same configuration
1596 file. For example, after creating a snapshot called ``testsnapshot'',
1597 your configuration file will look like this:
1598
1599 .VM configuration with snapshot
1600 ----
1601 memory: 512
1602 swap: 512
1603 parent: testsnapshot
1604 ...
1605
1606 [testsnapshot]
1607 memory: 512
1608 swap: 512
1609 snaptime: 1457170803
1610 ...
1611 ----
1612
1613 There are a few snapshot related properties like `parent` and
1614 `snaptime`. The `parent` property is used to store the parent/child
1615 relationship between snapshots. `snaptime` is the snapshot creation
1616 time stamp (Unix epoch).
1617
1618 You can optionally save the memory of a running VM with the option `vmstate`.
1619 For details about how the target storage gets chosen for the VM state, see
1620 xref:qm_vmstatestorage[State storage selection] in the chapter
1621 xref:qm_hibernate[Hibernation].
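
Snapshots can also be managed from the command line, for example (VM ID and
snapshot name are placeholders):

----
# qm snapshot 100 testsnapshot --vmstate 1
# qm rollback 100 testsnapshot
# qm delsnapshot 100 testsnapshot
----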
1622
1623 [[qm_options]]
1624 Options
1625 ~~~~~~~
1626
1627 include::qm.conf.5-opts.adoc[]
1628
1629
1630 Locks
1631 -----
1632
1633 Online migrations, snapshots and backups (`vzdump`) set a lock to prevent
1634 incompatible concurrent actions on the affected VMs. Sometimes you need to
1635 remove such a lock manually (for example after a power failure).
1636
1637 ----
1638 # qm unlock <vmid>
1639 ----
1640
1641 CAUTION: Only do that if you are sure the action which set the lock is
1642 no longer running.
1643
1644
1645 ifdef::wiki[]
1646
1647 See Also
1648 ~~~~~~~~
1649
1650 * link:/wiki/Cloud-Init_Support[Cloud-Init Support]
1651
1652 endif::wiki[]
1653
1654
1655 ifdef::manvolnum[]
1656
1657 Files
1658 ------
1659
1660 `/etc/pve/qemu-server/<VMID>.conf`::
1661
1662 Configuration file for the VM '<VMID>'.
1663
1664
1665 include::pve-copyright.adoc[]
1666 endif::manvolnum[]