[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
:pve-toplevel:

NAME
----

qm - Qemu/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
:pve-toplevel:
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

Qemu (short for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files, and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can
pass an ISO image as a parameter to Qemu, and the OS running in the emulated
computer will see a real CD-ROM inserted into a CD drive.

Qemu can emulate a great variety of hardware from ARM to SPARC, but {pve} is
only concerned with 32- and 64-bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up Qemu when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux `kvm` module. In the context of {pve}, _Qemu_ and
_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the
`kvm` module.

Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.


Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, and serial ports (the complete list can be seen
in the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers, it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
Qemu can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.

Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, and so on.

It is highly recommended to use virtio devices whenever you can, as they
provide a big performance improvement. Using the virtio generic disk controller
versus an emulated IDE controller will double the sequential write throughput,
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
http://www.linux-kvm.org/page/Using_VirtIO_NIC]

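As a concrete illustration, virtio devices can be attached with the `qm` CLI. The VM ID (`100`), storage name (`local-lvm`), and bridge (`vmbr0`) below are placeholder values, not taken from this text:

```shell
# Placeholders: VM ID 100, storage "local-lvm", bridge "vmbr0".
qm set 100 --virtio0 local-lvm:32       # 32 GB disk on the paravirtualized block controller
qm set 100 --net0 virtio,bridge=vmbr0   # paravirtualized network card
```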


[[qm_virtual_machines_settings]]
Virtual Machines Settings
-------------------------

Generally speaking, {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as
changing them could cause a performance slowdown or put your data at risk.


[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-general.png"]

General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs

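On the command line, these general settings map to `qm create` options. All values below (VM ID, name, pool) are hypothetical:

```shell
# Creates VM 100 on the node where the command is run;
# name and resource pool are assumptions for illustration.
qm create 100 --name webserver01 --pool production
```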


[[qm_os_settings]]
OS Settings
~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-os.png"]

When creating a virtual machine (VM), setting the proper Operating System (OS)
allows {pve} to optimize some low level parameters. For instance, Windows OSes
expect the BIOS clock to use the local time, while Unix-based OSes expect the
BIOS clock to have the UTC time.

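From the CLI this corresponds to the `ostype` option; the VM ID is a placeholder:

```shell
qm set 100 --ostype l26     # modern Linux (2.6/3.x/4.x+ kernels)
qm set 100 --ostype win10   # Windows 10/2016; BIOS clock uses local time
```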
[[qm_system_settings]]
System Settings
~~~~~~~~~~~~~~~

On VM creation you can change some basic system components of the new VM. You
can specify which xref:qm_display[display type] you want to use.
[thumbnail="screenshot/gui-create-vm-system.png"]
Additionally, the xref:qm_hard_disk[SCSI controller] can be changed.
If you plan to install the QEMU Guest Agent, or if your selected ISO image
already ships and installs it automatically, you may want to tick the 'Qemu
Agent' box, which lets {pve} know that it can use its features to show some
more information, and complete some actions (for example, shutdown or
snapshots) more intelligently.

{pve} allows you to boot VMs with different firmware and machine types, namely
xref:qm_bios_and_uefi[SeaBIOS and OVMF]. In most cases you want to switch from
the default SeaBIOS to OVMF only if you plan to use
xref:qm_pci_passthrough[PCIe pass through]. A VM's 'Machine Type' defines the
hardware layout of the VM's virtual motherboard. You can choose between the
default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be desired if
one wants to pass through PCIe hardware.

[[qm_hard_disk]]
Hard Disk
~~~~~~~~~

[[qm_hard_disk_bus]]
Bus/Controller
^^^^^^^^^^^^^^
Qemu can emulate a number of storage controllers:

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by more recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
to this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices to this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server-grade
hardware, and can connect up to 14 storage devices. {pve} emulates an
LSI 53C895A controller by default.
+
A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim
for performance; it is automatically selected for newly created Linux VMs since
{pve} 4.3. Linux distributions have had support for this controller since 2012,
and FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO
containing the drivers during the installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
If you aim at maximum performance, you can select a SCSI controller of type
_VirtIO SCSI single_, which will allow you to select the *IO Thread* option.
When selecting _VirtIO SCSI single_, Qemu will create a new controller for
each disk, instead of adding all disks to the same controller.

* The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
is an older type of paravirtualized controller. It has been superseded by the
VirtIO SCSI controller in terms of features.

[thumbnail="screenshot/gui-create-vm-hard-disk.png"]

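As a sketch (VM ID and storage name are placeholders), the controller type and the per-disk *IO Thread* option can also be set from the CLI:

```shell
qm set 100 --scsihw virtio-scsi-pci                                      # recommended VirtIO SCSI controller
qm set 100 --scsihw virtio-scsi-single --scsi0 local-lvm:32,iothread=1   # one controller (and I/O thread) per disk
```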
[[qm_hard_disk_formats]]
Image Format
^^^^^^^^^^^^
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file-based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

* the *QEMU image format* is a copy-on-write format which allows snapshots, and
thin provisioning of the disk image.
* the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
you would get when executing the `dd` command on a block device in Linux. This
format does not support thin provisioning or snapshots by itself, requiring
cooperation from the storage layer for these tasks. It may, however, be up to
10% faster than the *QEMU image format*. footnote:[See this benchmark for details
http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
* the *VMware image format* only makes sense if you intend to import/export the
disk image to other hypervisors.

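For example (hypothetical VM ID and storage name), the format can be chosen explicitly when adding a disk on a file-based storage:

```shell
qm set 100 --scsi0 local:32,format=qcow2   # QEMU image format: snapshots, thin provisioning
qm set 100 --scsi1 local:32,format=raw     # raw image on the same file-based storage
```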
[[qm_hard_disk_cache]]
Cache Mode
^^^^^^^^^^
Setting the *Cache* mode of the hard drive affects how the host system notifies
the guest system of block write completions. The *No cache* default means that
the guest system will be notified that a write is complete when each block
reaches the physical storage write queue, ignoring the host page cache. This
provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.

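These per-disk options can also be set from the CLI; the VM ID, storage, and volume names are placeholders:

```shell
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none    # the default cache mode
qm set 100 --scsi1 local-lvm:vm-100-disk-1,backup=0      # exclude this disk from backups
qm set 100 --scsi2 local-lvm:vm-100-disk-2,replicate=0   # skip this disk in replication jobs
```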
[[qm_hard_disk_discard]]
Trim/Discard
^^^^^^^^^^^^
If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), you can activate the *Discard* option on a drive. With *Discard*
set and a _TRIM_-enabled guest OS footnote:[TRIM, UNMAP, and discard
https://en.wikipedia.org/wiki/Trim_%28computing%29], when the VM's filesystem
marks blocks as unused after deleting files, the controller will relay this
information to the storage, which will then shrink the disk image accordingly.
For the guest to be able to issue _TRIM_ commands, you must enable the *Discard*
option on the drive. Some guest operating systems may also require the
*SSD Emulation* flag to be set. Note that *Discard* on *VirtIO Block* drives is
only supported on guests using Linux Kernel 5.0 or higher.

If you would like a drive to be presented to the guest as a solid-state drive
rather than a rotational hard disk, you can set the *SSD emulation* option on
that drive. There is no requirement that the underlying storage actually be
backed by SSDs; this feature can be used with physical media of any type.
Note that *SSD emulation* is not supported on *VirtIO Block* drives.

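Both options are per-disk flags; the VM ID and volume name below are placeholders:

```shell
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
```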
[[qm_hard_disk_iothread]]
IO Thread
^^^^^^^^^
The option *IO Thread* can only be used when using a disk with the
*VirtIO* controller, or with the *SCSI* controller, when the emulated controller
type is *VirtIO SCSI single*.
With this enabled, Qemu creates one I/O thread per storage controller,
instead of a single thread for all I/O, so it can increase performance when
multiple disks are used and each disk has its own storage controller.


[[qm_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-vm-cpu.png"]

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each, is mostly irrelevant from a performance point of
view. However, some software licenses depend on the number of sockets a machine
has; in that case it makes sense to set the number of sockets to what the
license allows you.

Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multithreaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.

NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
is greater than the number of cores on the server (e.g., 4 VMs each with 4
cores on a machine with only 8 cores). In that case the host system will
balance the Qemu execution threads between your server cores, just as if you
were running a standard multithreaded application. However, {pve} will prevent
you from starting VMs with more virtual CPU cores than physically available, as
this will only bring the performance down due to the cost of context switches.

[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^

In addition to the number of virtual cores, you can configure how many resources
a VM can get in relation to the host CPU time, and also in relation to other
VMs.
With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
single process would fully use one single core it would have `100%` CPU time
usage. If a VM with four cores utilizes all its cores fully it would
theoretically use `400%`. In reality the usage may be even a bit higher, as Qemu
can have additional threads for VM peripherals besides the vCPU core ones.
This setting can be useful if a VM should have multiple vCPUs, as it runs a few
processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
which would profit from having 8 vCPUs, but at no time should all of those 8
cores run at full load - as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set *cpulimit* to
`4.0` (=400%). If all cores do the same heavy work they would all get 50% of a
real host core's CPU time. But if only 4 were doing work, they could still get
almost 100% of a real core each.

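The arithmetic of this example can be checked quickly; a sketch using `awk`:

```shell
# cpulimit 4.0 shared by 8 busy vCPUs, vs. only 4 busy vCPUs:
awk 'BEGIN{print 4.0/8}'   # 0.5 -> 50% of a physical core per vCPU
awk 'BEGIN{print 4.0/4}'   # 1 -> a full physical core per vCPU
```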
NOTE: VMs can, depending on their configuration, use additional threads, e.g.,
for networking or IO operations, but also for live migration. Thus a VM can
appear to use more CPU time than just its virtual CPUs could use. To ensure that
a VM never uses more CPU time than its assigned virtual CPUs, set the *cpulimit*
setting to the same value as the total core count.

The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
shares or CPU weight), controls how much CPU time a VM gets relative to other
running VMs. It is a relative weight which defaults to `1024`; if you increase
this for a VM, it will be prioritized by the scheduler in comparison to other
VMs with lower weight. E.g., if VM 100 has the default 1024 and VM 200 was
changed to `2048`, VM 200 would receive twice the CPU bandwidth of VM 100.

For more information see `man systemd.resource-control`, where `CPUQuota`
corresponds to `cpulimit` and `CPUShares` corresponds to our `cpuunits`
setting; see its Notes section for references and implementation details.

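Both settings can be applied from the CLI; the VM IDs are placeholders:

```shell
qm set 100 --cpulimit 4       # cap the whole VM at 400% of host CPU time
qm set 200 --cpuunits 2048    # double the default scheduling weight of 1024
```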
CPU Type
^^^^^^^^

Qemu can emulate a number of different *CPU types*, from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware-
assisted 3D rendering, random number generation, memory protection, and so on.
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host*, in which case the VM will have exactly the same CPU flags
as your host system.

This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the qemu process will stop. To
remedy this, Qemu also has its own CPU type *kvm64*, which {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced set of CPU flags,
but is guaranteed to work everywhere.

In short, if you care about live migration and moving VMs between nodes, leave
the kvm64 default. If you don't care about live migration or have a homogeneous
cluster where all nodes have the same CPU, set the CPU type to host, as in
theory this will give your guests maximum performance.

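For instance (the VM ID is a placeholder):

```shell
qm set 100 --cpu host    # maximum performance; homogeneous clusters only
qm set 100 --cpu kvm64   # safe default for live migration across differing hosts
```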
Custom CPU Types
^^^^^^^^^^^^^^^^

You can specify custom CPU types with a configurable set of features. These are
maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` by
an administrator. See `man cpu-models.conf` for format details.

Specified custom types can be selected by any user with the `Sys.Audit`
privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
or API, the name needs to be prefixed with 'custom-'.

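For example, with a hypothetical custom model named `mymodel` defined in that file:

```shell
qm set 100 --cpu custom-mymodel   # note the mandatory 'custom-' prefix
```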
Meltdown / Spectre related CPU flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are several CPU flags related to the Meltdown and Spectre vulnerabilities
footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
manually unless the selected CPU type of your VM already enables them by default.

There are two requirements that need to be fulfilled in order to use these
CPU flags:

* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
* The guest operating system must be updated to a version which mitigates the
attacks and is able to utilize the CPU feature

Otherwise you need to set the desired CPU flag of the virtual CPU, either by
editing the CPU options in the WebUI, or by setting the 'flags' property of the
'cpu' option in the VM configuration file.

For Spectre v1, v2, and v4 fixes, your CPU or system vendor also needs to provide
a so-called ``microcode update'' footnote:[You can use `intel-microcode' /
`amd-microcode' from Debian non-free if your vendor does not provide such an
update. Note that not all affected CPUs can be updated to support spec-ctrl.]
for your CPU.


To check if the {pve} host is vulnerable, execute the following command as root:

----
for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
----

A community script is also available to detect if the host is still vulnerable.
footnote:[spectre-meltdown-checker https://meltdown.ovh/]

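Flags are set via the 'flags' property of the 'cpu' option. A sketch with a placeholder VM ID (note the quoting, since `;` separates flags and `+` enables one):

```shell
qm set 100 --cpu 'kvm64,flags=+pcid;+spec-ctrl'
```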
Intel processors
^^^^^^^^^^^^^^^^

* 'pcid'
+
This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
the kernel memory from user space. Without PCID, KPTI is quite an expensive
mechanism footnote:[PCID is now a critical performance/security feature on x86
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
+
To check if the {pve} host supports PCID, execute the following command as root:
+
----
# grep ' pcid ' /proc/cpuinfo
----
+
If this does not return empty, your host's CPU has support for 'pcid'.

* 'spec-ctrl'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in Intel CPU models with the -IBRS suffix.
Must be explicitly turned on for Intel CPU models without the -IBRS suffix.
Requires an updated host CPU microcode (intel-microcode >= 20180425).

* 'ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).


AMD processors
^^^^^^^^^^^^^^

* 'ibpb'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in AMD CPU models with the -IBPB suffix.
Must be explicitly turned on for AMD CPU models without the -IBPB suffix.
Requires the host CPU microcode to support this feature before it can be used for guest CPUs.

* 'virt-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model.
Must be explicitly turned on for all AMD CPU models.
This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" CPU model,
because this is a virtual feature which does not exist in the physical CPUs.

* 'amd-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
This provides higher performance than virt-ssbd, therefore a host supporting it should always expose it to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility, as some kernels only know about virt-ssbd.

* 'amd-no-ssb'
+
Recommended to indicate that the host is not vulnerable to Spectre v4 (CVE-2018-3639).
Not included by default in any AMD CPU model.
Future CPU hardware generations will not be vulnerable to CVE-2018-3639,
and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
This is mutually exclusive with virt-ssbd and amd-ssbd.


NUMA
^^^^
You can also optionally emulate a *NUMA*
footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
in your VMs. The basics of the NUMA architecture mean that instead of having a
global memory pool available to all your cores, the memory is spread into local
banks close to each socket.
This can bring speed improvements as the memory bus is no longer a bottleneck.
If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture], we recommend activating the option, as this
will allow proper distribution of the VM resources on the host system.
This option is also required to hot-plug cores or RAM in a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of nodes of the host system.

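For a host with two NUMA nodes, this could look like the following (the VM ID and core counts are placeholders):

```shell
qm set 100 --numa 1 --sockets 2 --cores 4   # 2 virtual sockets to mirror 2 host NUMA nodes
```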
vCPU hot-plug
^^^^^^^^^^^^^

Modern operating systems introduced the capability to hot-plug and, to a
certain extent, hot-unplug CPUs in a running system. Virtualization allows us
to avoid a lot of the (physical) problems real hardware can cause in such
scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
be replicated with other, well-tested and less complicated features, see
xref:qm_cpu_resource_limits[Resource Limits].

In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with fewer than this total core count of CPUs, you may use the
*vcpus* setting, which denotes how many vCPUs should be plugged in at VM start.

Currently, this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.

You can use a udev rule as follows to automatically set new CPUs as online in
the guest:

----
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
----

Save this under /etc/udev/rules.d/ as a file ending in `.rules`.

Note: CPU hot-remove is machine dependent and requires guest cooperation.
The deletion command does not guarantee that CPU removal will actually happen;
typically it's a request forwarded to the guest using a target-dependent
mechanism, e.g., ACPI on x86/amd64.

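A sketch with a placeholder VM ID: define 4 hot-pluggable cores, boot with 2 plugged in, then plug a third at runtime:

```shell
qm set 100 --sockets 1 --cores 4 --vcpus 2
qm set 100 --vcpus 3   # hot-plug a third vCPU while the VM is running
```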

[[qm_memory]]
Memory
~~~~~~

For each VM you have the option to set a fixed amount of memory, or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

.Fixed Memory Allocation
[thumbnail="screenshot/gui-create-vm-memory.png"]

When setting memory and minimum memory to the same amount,
{pve} will simply allocate what you specify to your VM.

Even when using a fixed memory size, the ballooning device gets added to the
VM, because it delivers useful information such as how much memory the guest
really uses.
In general, you should leave *ballooning* enabled, but if you want to disable
it (e.g. for debugging purposes), simply uncheck *Ballooning Device* or set

 balloon: 0

in the configuration.

.Automatic Memory Allocation

// see autoballoon() in pvestatd.pm
When setting the minimum memory lower than memory, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified.

When the host is running low on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one a database server. To cache more
database blocks in the database server's RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs at the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB of RAM to be allocated to the VMs. The database VM will
get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP
server will get 1.6 GB.

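As a quick check of the allocation arithmetic (note that 32 * 80/100 - 16 works out to 9.6):

```shell
awk 'BEGIN{print 32*80/100 - 16}'                   # 9.6 GB of spare RAM to distribute
awk 'BEGIN{print 9.6*3000/(3000+1000+1000+1000)}'   # 4.8 GB for the database VM
awk 'BEGIN{print 9.6*1000/(3000+1000+1000+1000)}'   # 1.6 GB for each HTTP server
```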
All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.


587 | [[qm_network_device]] | |
588 | Network Device | |
589 | ~~~~~~~~~~~~~~ | |
590 | ||
591 | [thumbnail="screenshot/gui-create-vm-network.png"] | |
592 | ||
Each VM can have many _Network interface controllers_ (NICs), of four different
types:
595 | ||
* *Intel E1000* is the default, and emulates an Intel Gigabit network card.
* the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
* the *Realtek 8139* emulates an older 100 Mb/s network card, and should
only be used when emulating older operating systems (released before 2002).
* the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.
604 | ||
{pve} will generate a random *MAC address* for each NIC, so that your VM is
addressable on Ethernet networks.
607 | ||
The NIC you added to the VM can follow one of two different models:

* in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
* in the alternative *NAT mode*, each virtual NIC will only communicate with
the Qemu user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP server will serve addresses in the
private 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode,
and should only be used for testing. This mode is only available via CLI or the
API, but not via the WebUI.
620 | ||
621 | You can also skip adding a network device when creating a VM by selecting *No | |
622 | network device*. | |
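
As a sketch (with `<vmid>` standing in for a real VM ID), the two models map to
`qm` invocations like these; omitting the bridge parameter selects the NATed
user networking stack:

----
# Bridged mode: attach a VirtIO NIC to the default bridge vmbr0
qm set <vmid> -net0 virtio,bridge=vmbr0

# NAT mode: no bridge given, the NIC uses the Qemu user networking stack
qm set <vmid> -net0 e1000
----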
623 | ||
624 | .Multiqueue | |
625 | If you are using the VirtIO driver, you can optionally activate the | |
626 | *Multiqueue* option. This option allows the guest OS to process networking | |
627 | packets using multiple virtual CPUs, providing an increase in the total number | |
628 | of packets transferred. | |
629 | ||
630 | //http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html | |
631 | When using the VirtIO driver with {pve}, each NIC network queue is passed to the | |
632 | host kernel, where the queue will be processed by a kernel thread spawned by the | |
633 | vhost driver. With this option activated, it is possible to pass _multiple_ | |
634 | network queues to the host kernel for each NIC. | |
635 | ||
636 | //https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net | |
When using Multiqueue, it is recommended to set it to a value equal
to the number of total cores of your guest. You also need to set the number of
multi-purpose channels on each VirtIO NIC in the VM with the ethtool
command:

`ethtool -L ens1 combined X`

where X is the number of vCPUs of the VM.
645 | ||
You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.
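
Put together, for a hypothetical VM with 4 vCPUs the host-side and guest-side
steps look like this (`<vmid>` and the interface name `ens1` are placeholders):

----
# On the host: give the VirtIO NIC as many queues as the guest has vCPUs
qm set <vmid> -net0 virtio,bridge=vmbr0,queues=4

# Inside the guest: match the number of combined channels on the NIC
ethtool -L ens1 combined 4
----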
651 | ||
652 | [[qm_display]] | |
653 | Display | |
654 | ~~~~~~~ | |
655 | ||
656 | QEMU can virtualize a few types of VGA hardware. Some examples are: | |
657 | ||
658 | * *std*, the default, emulates a card with Bochs VBE extensions. | |
* *cirrus*, this was once the default; it emulates very old display hardware
with all its problems. This display type should only be used if really
necessary footnote:[https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/
qemu: using cirrus considered harmful], e.g., if using Windows XP or earlier.
* *vmware* is a VMware SVGA-II compatible adapter.
* *qxl* is the QXL paravirtualized graphics card. Selecting this also
enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
VM.
667 | ||
You can edit the amount of memory given to the virtual GPU by setting
the 'memory' option. This can enable higher resolutions inside the VM,
especially with SPICE/QXL.
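
As a hedged example (`<vmid>` and the memory value are placeholders), the
display type and its memory can be set together on the CLI:

----
# Use the QXL display with 32 MiB of video memory,
# e.g. for higher resolutions over SPICE
qm set <vmid> -vga qxl,memory=32
----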
671 | ||
As the memory is reserved by the display device, selecting Multi-Monitor mode
for SPICE (e.g., `qxl2` for dual monitors) has some implications:
674 | ||
675 | * Windows needs a device for each monitor, so if your 'ostype' is some | |
676 | version of Windows, {pve} gives the VM an extra device per monitor. | |
677 | Each device gets the specified amount of memory. | |
678 | ||
* Linux VMs can always enable more virtual monitors, but selecting
a Multi-Monitor mode multiplies the memory given to the device by
the number of monitors.
682 | ||
683 | Selecting `serialX` as display 'type' disables the VGA output, and redirects | |
684 | the Web Console to the selected serial port. A configured display 'memory' | |
685 | setting will be ignored in that case. | |
686 | ||
687 | [[qm_usb_passthrough]] | |
688 | USB Passthrough | |
689 | ~~~~~~~~~~~~~~~ | |
690 | ||
691 | There are two different types of USB passthrough devices: | |
692 | ||
693 | * Host USB passthrough | |
694 | * SPICE USB passthrough | |
695 | ||
696 | Host USB passthrough works by giving a VM a USB device of the host. | |
697 | This can either be done via the vendor- and product-id, or | |
698 | via the host bus and port. | |
699 | ||
The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two devices of the same model
have the same id.
704 | ||
The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).
709 | ||
710 | If a device is present in a VM configuration when the VM starts up, | |
711 | but the device is not present in the host, the VM can boot without problems. | |
712 | As soon as the device/port is available in the host, it gets passed through. | |
713 | ||
WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.
717 | ||
The second type of passthrough is SPICE USB passthrough. This is useful
if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client is
running, directly to the VM (for example an input device or hardware dongle).
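
The three variants can be sketched with `qm` as follows (`<vmid>` is a
placeholder; the ids and port path are the example values from above):

----
# Host passthrough by vendor/product id
qm set <vmid> -usb0 host=0123:abcd

# Host passthrough by bus/port path
qm set <vmid> -usb1 host=1-2.3.4

# A SPICE USB port, for passthrough from the SPICE client side
qm set <vmid> -usb2 spice
----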
722 | ||
723 | ||
724 | [[qm_bios_and_uefi]] | |
725 | BIOS and UEFI | |
726 | ~~~~~~~~~~~~~ | |
727 | ||
In order to properly emulate a computer, QEMU needs to use a firmware,
which, on common PCs, is often known as BIOS or (U)EFI. It is executed as one
of the first steps when booting a VM, and is responsible for doing basic
hardware initialization and for providing an interface to the firmware and
hardware for the operating system. By default QEMU uses *SeaBIOS* for this,
which is an open-source x86 BIOS implementation. SeaBIOS is a good choice for
most standard setups.
735 | ||
736 | There are, however, some scenarios in which a BIOS is not a good firmware | |
737 | to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this. | |
738 | http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html] | |
739 | In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/] | |
740 | ||
741 | If you want to use OVMF, there are several things to consider: | |
742 | ||
743 | In order to save things like the *boot order*, there needs to be an EFI Disk. | |
744 | This disk will be included in backups and snapshots, and there can only be one. | |
745 | ||
746 | You can create such a disk with the following command: | |
747 | ||
748 | qm set <vmid> -efidisk0 <storage>:1,format=<format> | |
749 | ||
750 | Where *<storage>* is the storage where you want to have the disk, and | |
751 | *<format>* is a format which the storage supports. Alternatively, you can | |
752 | create such a disk through the web interface with 'Add' -> 'EFI Disk' in the | |
753 | hardware section of a VM. | |
754 | ||
When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
with a press of the ESC button during boot), or you have to choose
SPICE as the display type.
759 | ||
760 | [[qm_ivshmem]] | |
761 | Inter-VM shared memory | |
762 | ~~~~~~~~~~~~~~~~~~~~~~ | |
763 | ||
764 | You can add an Inter-VM shared memory device (`ivshmem`), which allows one to | |
765 | share memory between the host and a guest, or also between multiple guests. | |
766 | ||
767 | To add such a device, you can use `qm`: | |
768 | ||
769 | qm set <vmid> -ivshmem size=32,name=foo | |
770 | ||
771 | Where the size is in MiB. The file will be located under | |
772 | `/dev/shm/pve-shm-$name` (the default name is the vmid). | |
773 | ||
NOTE: Currently the device will get deleted as soon as any VM using it gets
shut down or stopped. Open connections will still persist, but new connections
to the exact same device cannot be made anymore.
777 | ||
778 | A use case for such a device is the Looking Glass | |
779 | footnote:[Looking Glass: https://looking-glass.hostfission.com/] project, | |
780 | which enables high performance, low-latency display mirroring between | |
781 | host and guest. | |
782 | ||
783 | [[qm_audio_device]] | |
784 | Audio Device | |
785 | ~~~~~~~~~~~~ | |
786 | ||
787 | To add an audio device run the following command: | |
788 | ||
789 | ---- | |
790 | qm set <vmid> -audio0 device=<device> | |
791 | ---- | |
792 | ||
793 | Supported audio devices are: | |
794 | ||
795 | * `ich9-intel-hda`: Intel HD Audio Controller, emulates ICH9 | |
796 | * `intel-hda`: Intel HD Audio Controller, emulates ICH6 | |
797 | * `AC97`: Audio Codec '97, useful for older operating systems like Windows XP | |
798 | ||
799 | NOTE: The audio device works only in combination with SPICE. Remote protocols | |
800 | like Microsoft's RDP have options to play sound. To use the physical audio | |
801 | device of the host use device passthrough (see | |
802 | xref:qm_pci_passthrough[PCI Passthrough] and | |
803 | xref:qm_usb_passthrough[USB Passthrough]). | |
804 | ||
805 | [[qm_virtio_rng]] | |
806 | VirtIO RNG | |
807 | ~~~~~~~~~~ | |
808 | ||
An RNG (Random Number Generator) is a device providing entropy ('randomness') to
a system. A virtual hardware RNG can be used to provide such entropy from the
host system to a guest VM. This helps to avoid entropy starvation problems in
the guest (a situation where not enough entropy is available and the system may
slow down or run into problems), especially during the guest's boot process.
814 | ||
815 | To add a VirtIO-based emulated RNG, run the following command: | |
816 | ||
817 | ---- | |
818 | qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y] | |
819 | ---- | |
820 | ||
821 | `source` specifies where entropy is read from on the host and has to be one of | |
822 | the following: | |
823 | ||
824 | * `/dev/urandom`: Non-blocking kernel entropy pool (preferred) | |
825 | * `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy | |
826 | starvation on the host system) | |
827 | * `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple | |
828 | are available, the one selected in | |
829 | `/sys/devices/virtual/misc/hw_random/rng_current` will be used) | |
830 | ||
A limit can be specified via the `max_bytes` and `period` parameters; they are
read as `max_bytes` per `period` in milliseconds. However, this does not
represent a linear relationship: 1024B/1000ms means that up to 1 KiB of data
becomes available on a 1 second timer, not that 1 KiB is streamed to the guest
over the course of one second. Reducing the `period` can thus be used to inject
entropy into the guest at a faster rate.
837 | ||
838 | By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is | |
839 | recommended to always use a limiter to avoid guests using too many host | |
840 | resources. If desired, a value of '0' for `max_bytes` can be used to disable | |
841 | all limits. | |
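
The average upper bound works out to `max_bytes * 1000 / period` bytes per
second. As a quick check with hypothetical values of 256 bytes every 100 ms:

----
#!/bin/sh
# Average ceiling for max_bytes=256, period=100 (ms), expressed in KiB/s
awk 'BEGIN { printf "%.1f KiB/s\n", (256 * 1000 / 100) / 1024 }'
----

which prints `2.5 KiB/s`, i.e. 2.5 times the default 1 KiB/s limit.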
842 | ||
843 | [[qm_startup_and_shutdown]] | |
844 | Automatic Start and Shutdown of Virtual Machines | |
845 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
846 | ||
847 | After creating your VMs, you probably want them to start automatically | |
848 | when the host system boots. For this you need to select the option 'Start at | |
849 | boot' from the 'Options' Tab of your VM in the web interface, or set it with | |
850 | the following command: | |
851 | ||
852 | qm set <vmid> -onboot 1 | |
853 | ||
854 | .Start and Shutdown Order | |
855 | ||
856 | [thumbnail="screenshot/gui-qemu-edit-start-order.png"] | |
857 | ||
In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters:
862 | ||
863 | * *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if | |
864 | you want the VM to be the first to be started. (We use the reverse startup | |
865 | order for shutdown, so a machine with a start order of 1 would be the last to | |
866 | be shut down). If multiple VMs have the same order defined on a host, they will | |
867 | additionally be ordered by 'VMID' in ascending order. | |
* *Startup delay*: Defines the interval between this VM start and subsequent
VMs starts. E.g. set it to 240 if you want to wait 240 seconds before starting
other VMs.
871 | * *Shutdown timeout*: Defines the duration in seconds {pve} should wait | |
872 | for the VM to be offline after issuing a shutdown command. | |
873 | By default this value is set to 180, which means that {pve} will issue a | |
874 | shutdown request and wait 180 seconds for the machine to be offline. If | |
875 | the machine is still online after the timeout it will be stopped forcefully. | |
876 | ||
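All three parameters map to the single `startup` option; a sketch with
placeholder values:

----
# Start first (order=1), wait 240s before the next VM starts,
# allow 180s for shutdown before forcing a stop
qm set <vmid> -startup order=1,up=240,down=180
----
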
877 | NOTE: VMs managed by the HA stack do not follow the 'start on boot' and | |
878 | 'boot order' options currently. Those VMs will be skipped by the startup and | |
879 | shutdown algorithm as the HA manager itself ensures that VMs get started and | |
880 | stopped. | |
881 | ||
882 | Please note that machines without a Start/Shutdown order parameter will always | |
883 | start after those where the parameter is set. Further, this parameter can only | |
884 | be enforced between virtual machines running on the same host, not | |
885 | cluster-wide. | |
886 | ||
887 | [[qm_spice_enhancements]] | |
888 | SPICE Enhancements | |
889 | ~~~~~~~~~~~~~~~~~~ | |
890 | ||
891 | SPICE Enhancements are optional features that can improve the remote viewer | |
892 | experience. | |
893 | ||
894 | To enable them via the GUI go to the *Options* panel of the virtual machine. Run | |
895 | the following command to enable them via the CLI: | |
896 | ||
897 | ---- | |
898 | qm set <vmid> -spice_enhancements foldersharing=1,videostreaming=all | |
899 | ---- | |
900 | ||
901 | NOTE: To use these features the <<qm_display,*Display*>> of the virtual machine | |
902 | must be set to SPICE (qxl). | |
903 | ||
904 | Folder Sharing | |
905 | ^^^^^^^^^^^^^^ | |
906 | ||
907 | Share a local folder with the guest. The `spice-webdavd` daemon needs to be | |
908 | installed in the guest. It makes the shared folder available through a local | |
909 | WebDAV server located at http://localhost:9843. | |
910 | ||
911 | For Windows guests the installer for the 'Spice WebDAV daemon' can be downloaded | |
912 | from the | |
913 | https://www.spice-space.org/download.html#windows-binaries[official SPICE website]. | |
914 | ||
915 | Most Linux distributions have a package called `spice-webdavd` that can be | |
916 | installed. | |
917 | ||
918 | To share a folder in Virt-Viewer (Remote Viewer) go to 'File -> Preferences'. | |
919 | Select the folder to share and then enable the checkbox. | |
920 | ||
921 | NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer. | |
922 | ||
923 | CAUTION: Experimental! Currently this feature does not work reliably. | |
924 | ||
925 | Video Streaming | |
926 | ^^^^^^^^^^^^^^^ | |
927 | ||
Fast-refreshing areas are encoded into a video stream. Two options exist:
929 | ||
930 | * *all*: Any fast refreshing area will be encoded into a video stream. | |
931 | * *filter*: Additional filters are used to decide if video streaming should be | |
932 | used (currently only small window surfaces are skipped). | |
933 | ||
A general recommendation on whether video streaming should be enabled, and
which option to choose, cannot be given. Your mileage may vary depending on the
specific circumstances.
937 | ||
938 | Troubleshooting | |
939 | ^^^^^^^^^^^^^^^ | |
940 | ||
941 | .Shared folder does not show up | |
942 | ||
943 | Make sure the WebDAV service is enabled and running in the guest. On Windows it | |
944 | is called 'Spice webdav proxy'. In Linux the name is 'spice-webdavd' but can be | |
945 | different depending on the distribution. | |
946 | ||
947 | If the service is running, check the WebDAV server by opening | |
948 | http://localhost:9843 in a browser in the guest. | |
949 | ||
950 | It can help to restart the SPICE session. | |
951 | ||
952 | [[qm_migration]] | |
953 | Migration | |
954 | --------- | |
955 | ||
956 | [thumbnail="screenshot/gui-qemu-migrate.png"] | |
957 | ||
958 | If you have a cluster, you can migrate your VM to another host with | |
959 | ||
960 | qm migrate <vmid> <target> | |
961 | ||
There are generally two mechanisms for this:
963 | ||
964 | * Online Migration (aka Live Migration) | |
965 | * Offline Migration | |
966 | ||
967 | Online Migration | |
968 | ~~~~~~~~~~~~~~~~ | |
969 | ||
When your VM is running and it has no local resources defined (such as disks
on local storage, passed-through devices, etc.) you can initiate a live
migration with the `-online` flag.
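
For instance (with placeholder IDs), a live migration is requested like this:

----
# Live-migrate a running VM to another cluster node
qm migrate <vmid> <target> --online
----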
973 | ||
974 | How it works | |
975 | ^^^^^^^^^^^^ | |
976 | ||
This starts a QEMU process on the target host with the 'incoming' flag, which
means that the process starts and waits for the memory data and device states
from the source Virtual Machine (since all other resources, e.g. disks,
are shared, the memory content and device state are the only things left
to transmit).
982 | ||
Once this connection is established, the source begins to send the memory
content asynchronously to the target. If the memory on the source changes,
those sections are marked dirty and there will be another pass of sending data.
This repeats until the amount of data left to send is so small that the VM can
be paused on the source, the remaining data sent to the target, and the VM
started on the target in under a second.
989 | ||
990 | Requirements | |
991 | ^^^^^^^^^^^^ | |
992 | ||
993 | For Live Migration to work, there are some things required: | |
994 | ||
995 | * The VM has no local resources (e.g. passed through devices, local disks, etc.) | |
996 | * The hosts are in the same {pve} cluster. | |
997 | * The hosts have a working (and reliable) network connection. | |
998 | * The target host must have the same or higher versions of the | |
999 | {pve} packages. (It *might* work the other way, but this is never guaranteed) | |
1000 | ||
1001 | Offline Migration | |
1002 | ~~~~~~~~~~~~~~~~~ | |
1003 | ||
If you have local resources, you can still offline migrate your VMs,
as long as all disks are on storages which are defined on both hosts.
The migration will then copy the disks over the network to the target host.
1007 | ||
1008 | [[qm_copy_and_clone]] | |
1009 | Copies and Clones | |
1010 | ----------------- | |
1011 | ||
1012 | [thumbnail="screenshot/gui-qemu-full-clone.png"] | |
1013 | ||
VM installation is usually done using an installation medium (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time consuming task one might want to avoid.
1017 | ||
1018 | An easy way to deploy many VMs of the same type is to copy an existing | |
1019 | VM. We use the term 'clone' for such copies, and distinguish between | |
1020 | 'linked' and 'full' clones. | |
1021 | ||
1022 | Full Clone:: | |
1023 | ||
The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
1026 | + | |
1027 | ||
1028 | It is possible to select a *Target Storage*, so one can use this to | |
1029 | migrate a VM to a totally different storage. You can also change the | |
1030 | disk image *Format* if the storage driver supports several formats. | |
1031 | + | |
1032 | ||
1033 | NOTE: A full clone needs to read and copy all VM image data. This is | |
1034 | usually much slower than creating a linked clone. | |
1035 | + | |
1036 | ||
Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.
1040 | ||
1041 | ||
1042 | Linked Clone:: | |
1043 | ||
1044 | Modern storage drivers support a way to generate fast linked | |
1045 | clones. Such a clone is a writable copy whose initial contents are the | |
1046 | same as the original data. Creating a linked clone is nearly | |
1047 | instantaneous, and initially consumes no additional space. | |
1048 | + | |
1049 | ||
They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
location. This technique is called 'Copy-on-write'.
1054 | + | |
1055 | ||
This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
1059 | + | |
1060 | ||
1061 | NOTE: You cannot delete an original template while linked clones | |
1062 | exist. | |
1063 | + | |
1064 | ||
1065 | It is not possible to change the *Target storage* for linked clones, | |
1066 | because this is a storage internal feature. | |
1067 | ||
1068 | ||
The *Target node* option allows you to create the new VM on a
different node. The only restriction is that the VM is on shared
storage, and that this storage is also available on the target node.
1072 | ||
1073 | To avoid resource conflicts, all network interface MAC addresses get | |
1074 | randomized, and we generate a new 'UUID' for the VM BIOS (smbios1) | |
1075 | setting. | |
1076 | ||
1077 | ||
1078 | [[qm_templates]] | |
1079 | Virtual Machine Templates | |
1080 | ------------------------- | |
1081 | ||
1082 | One can convert a VM into a Template. Such templates are read-only, | |
1083 | and you can use them to create linked clones. | |
1084 | ||
1085 | NOTE: It is not possible to start templates, because this would modify | |
1086 | the disk images. If you want to change the template, create a linked | |
1087 | clone and modify that. | |
1088 | ||
1089 | VM Generation ID | |
1090 | ---------------- | |
1091 | ||
1092 | {pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official | |
1093 | 'vmgenid' Specification | |
1094 | https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier] | |
1095 | for virtual machines. | |
This can be used by the guest operating system to detect any event resulting
in a time shift, for example, restoring a backup or a snapshot rollback.
1098 | ||
1099 | When creating new VMs, a 'vmgenid' will be automatically generated and saved | |
1100 | in its configuration file. | |
1101 | ||
To create and add a 'vmgenid' to an already existing VM, one can pass the
special value `1' to let {pve} autogenerate one, or manually set the 'UUID'
footnote:[Online GUID generator http://guid.one/] by using it as the value,
e.g.:
1106 | ||
1107 | ---- | |
1108 | qm set VMID -vmgenid 1 | |
1109 | qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000 | |
1110 | ---- | |
1111 | ||
NOTE: The initial addition of a 'vmgenid' device to an existing VM may result
in the same effects as a snapshot rollback, backup restore, etc., would have,
as the VM can interpret this as a generation change.
1115 | ||
In the rare case that the 'vmgenid' mechanism is not wanted, one can pass `0'
for its value on VM creation, or retroactively delete the property from the
configuration with:
1119 | ||
1120 | ---- | |
1121 | qm set VMID -delete vmgenid | |
1122 | ---- | |
1123 | ||
The most prominent use case for 'vmgenid' are newer Microsoft Windows
operating systems, which use it to avoid problems in time-sensitive or
replicated services (e.g., databases, domain controllers
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole VM clone operation.
1129 | ||
1130 | Importing Virtual Machines and disk images | |
1131 | ------------------------------------------ | |
1132 | ||
A VM export from a foreign hypervisor usually takes the form of one or more disk
images, with a configuration file describing the settings of the VM (RAM,
number of cores). +
1136 | The disk images can be in the vmdk format, if the disks come from | |
1137 | VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor. | |
1138 | The most popular configuration format for VM exports is the OVF standard, but in | |
1139 | practice interoperation is limited because many settings are not implemented in | |
1140 | the standard itself, and hypervisors export the supplementary information | |
1141 | in non-standard extensions. | |
1142 | ||
Besides the problem of format, importing disk images from other hypervisors
may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
installing the MergeIDE.zip utility available from the Internet before
exporting, and choosing a hard disk type of *IDE* before booting the imported
Windows VM.
1149 | ||
1150 | Finally there is the question of paravirtualized drivers, which improve the | |
1151 | speed of the emulated system and are specific to the hypervisor. | |
1152 | GNU/Linux and other free Unix OSes have all the necessary drivers installed by | |
1153 | default and you can switch to the paravirtualized drivers right after importing | |
1154 | the VM. For Windows VMs, you need to install the Windows paravirtualized | |
1155 | drivers by yourself. | |
1156 | ||
GNU/Linux and other free Unix systems can usually be imported without hassle.
Note that we cannot guarantee a successful import/export of Windows VMs in all
cases due to the problems above.
1160 | ||
1161 | Step-by-step example of a Windows OVF import | |
1162 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
1163 | ||
1164 | Microsoft provides | |
1165 | https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads] | |
to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.
1168 | ||
1169 | Download the Virtual Machine zip | |
1170 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
1171 | ||
After reviewing the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.
1174 | ||
1175 | Extract the disk image from the zip | |
1176 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
1177 | ||
1178 | Using the `unzip` utility or any archiver of your choice, unpack the zip, | |
1179 | and copy via ssh/scp the ovf and vmdk files to your {pve} host. | |
1180 | ||
1181 | Import the Virtual Machine | |
1182 | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
1183 | ||
1184 | This will create a new virtual machine, using cores, memory and | |
1185 | VM name as read from the OVF manifest, and import the disks to the +local-lvm+ | |
1186 | storage. You have to configure the network manually. | |
1187 | ||
1188 | qm importovf 999 WinDev1709Eval.ovf local-lvm | |
1189 | ||
1190 | The VM is ready to be started. | |
1191 | ||
1192 | Adding an external disk image to a Virtual Machine | |
1193 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
1194 | ||
1195 | You can also add an existing disk image to a VM, either coming from a | |
1196 | foreign hypervisor, or one that you created yourself. | |
1197 | ||
1198 | Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool: | |
1199 | ||
1200 | vmdebootstrap --verbose \ | |
1201 | --size 10GiB --serial-console \ | |
1202 | --grub --no-extlinux \ | |
1203 | --package openssh-server \ | |
1204 | --package avahi-daemon \ | |
1205 | --package qemu-guest-agent \ | |
1206 | --hostname vm600 --enable-dhcp \ | |
1207 | --customize=./copy_pub_ssh.sh \ | |
1208 | --sparse --image vm600.raw | |
1209 | ||
1210 | You can now create a new target VM for this image. | |
1211 | ||
1212 | qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \ | |
1213 | --bootdisk scsi0 --scsihw virtio-scsi-pci --ostype l26 | |
1214 | ||
1215 | Add the disk image as +unused0+ to the VM, using the storage +pvedir+: | |
1216 | ||
1217 | qm importdisk 600 vm600.raw pvedir | |
1218 | ||
1219 | Finally attach the unused disk to the SCSI controller of the VM: | |
1220 | ||
1221 | qm set 600 --scsi0 pvedir:600/vm-600-disk-1.raw | |
1222 | ||
1223 | The VM is ready to be started. | |
1224 | ||
1225 | ||
1226 | ifndef::wiki[] | |
1227 | include::qm-cloud-init.adoc[] | |
1228 | endif::wiki[] | |
1229 | ||
1230 | ifndef::wiki[] | |
1231 | include::qm-pci-passthrough.adoc[] | |
1232 | endif::wiki[] | |
1233 | ||
1234 | Hookscripts | |
1235 | ----------- | |
1236 | ||
1237 | You can add a hook script to VMs with the config property `hookscript`. | |
1238 | ||
1239 | qm set 100 -hookscript local:snippets/hookscript.pl | |
1240 | ||
It will be called during various phases of the guest's lifetime. For an
example and documentation, see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
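A hookscript can be any executable; the sketch below uses plain shell (the
file name and snippets path in the comment are assumptions, not required
values). {pve} passes the VM ID as the first argument and the phase
(`pre-start`, `post-start`, `pre-stop`, `post-stop`) as the second:

```shell
#!/bin/sh
# Minimal hookscript sketch (assumption: stored in a storage's snippets/
# directory, e.g. /var/lib/vz/snippets/hookscript.sh, and made executable).
# It is called as: <script> <vmid> <phase>
hook() {
    vmid="$1"
    phase="$2"
    case "$phase" in
        pre-start)  echo "VM $vmid is about to start" ;;
        post-start) echo "VM $vmid started" ;;
        pre-stop)   echo "VM $vmid will be stopped" ;;
        post-stop)  echo "VM $vmid stopped" ;;
        *)          echo "unknown phase: $phase" >&2; return 1 ;;
    esac
}

if [ $# -ge 2 ]; then hook "$@"; fi
```

A non-zero exit status in the `pre-start` phase aborts the VM start, so the
script should only fail deliberately.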
1244 | ||
1245 | [[qm_hibernate]] | |
1246 | Hibernation | |
1247 | ----------- | |
1248 | ||
1249 | You can suspend a VM to disk with the GUI option `Hibernate` or with | |
1250 | ||
1251 | qm suspend ID --todisk | |
1252 | ||
This means that the current content of the memory will be saved to disk
and the VM gets stopped. On the next start, the memory content will be
loaded and the VM can continue where it left off.
1256 | ||
1257 | [[qm_vmstatestorage]] | |
1258 | .State storage selection | |
If no target storage for the memory is given, the first available of the
following will be chosen automatically:
1261 | ||
1262 | 1. The storage `vmstatestorage` from the VM config. | |
1263 | 2. The first shared storage from any VM disk. | |
1264 | 3. The first non-shared storage from any VM disk. | |
1265 | 4. The storage `local` as a fallback. | |
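The fallback order above amounts to picking the first non-empty candidate.
The helper below is a hypothetical illustration only (the argument names
stand for steps 1-3 of the list; it is not actual `qm` code):

```shell
# Return the first non-empty candidate, falling back to 'local' (step 4).
# Arguments stand for: vmstatestorage, first shared storage of a VM disk,
# first non-shared storage of a VM disk. Any of them may be empty.
pick_state_storage() {
    for candidate in "$@" local; do
        if [ -n "$candidate" ]; then
            echo "$candidate"
            return 0
        fi
    done
}

# e.g. no vmstatestorage configured, first shared storage is 'cephfs':
# pick_state_storage "" "cephfs" "local-lvm"
```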
1266 | ||
1267 | Managing Virtual Machines with `qm` | |
1268 | ------------------------------------ | |
1269 | ||
`qm` is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control their execution
(start/stop/suspend/resume). Besides that, you can use `qm` to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.
1275 | ||
1276 | CLI Usage Examples | |
1277 | ~~~~~~~~~~~~~~~~~~ | |
1278 | ||
Using an ISO file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage:
1281 | ||
1282 | qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso | |
1283 | ||
1284 | Start the new VM | |
1285 | ||
1286 | qm start 300 | |
1287 | ||
1288 | Send a shutdown request, then wait until the VM is stopped. | |
1289 | ||
1290 | qm shutdown 300 && qm wait 300 | |
1291 | ||
1292 | Same as above, but only wait for 40 seconds. | |
1293 | ||
1294 | qm shutdown 300 && qm wait 300 -timeout 40 | |
1295 | ||
1296 | ||
1297 | [[qm_configuration]] | |
1298 | Configuration | |
1299 | ------------- | |
1300 | ||
1301 | VM configuration files are stored inside the Proxmox cluster file | |
1302 | system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`. | |
1303 | Like other files stored inside `/etc/pve/`, they get automatically | |
1304 | replicated to all other cluster nodes. | |
1305 | ||
NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster-wide.
1308 | ||
1309 | .Example VM Configuration | |
1310 | ---- | |
1311 | cores: 1 | |
1312 | sockets: 1 | |
1313 | memory: 512 | |
1314 | name: webmail | |
1315 | ostype: l26 | |
1316 | bootdisk: virtio0 | |
1317 | net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0 | |
1318 | virtio0: local:vm-100-disk-1,size=32G | |
1319 | ---- | |
1320 | ||
Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful for making small corrections, but keep in mind that you need to
restart the VM to apply such changes.

For that reason, it is usually better to use the `qm` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running VMs. This feature is called "hot plug", and there is no
need to restart the VM in that case.
1331 | ||
1332 | ||
1333 | File Format | |
1334 | ~~~~~~~~~~~ | |
1335 | ||
VM configuration files use a simple colon-separated key/value
format. Each line has the following format:
1338 | ||
1339 | ----- | |
1340 | # this is a comment | |
1341 | OPTION: value | |
1342 | ----- | |
1343 | ||
1344 | Blank lines in those files are ignored, and lines starting with a `#` | |
1345 | character are treated as comments and are also ignored. | |
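As an illustration, such a file can be read with standard shell tools (a
sketch only; `qm` itself does the real parsing, and the function name is
made up for this example):

```shell
# List all OPTION/value pairs from a config file, skipping blank
# lines and '#' comment lines.
read_vm_config() {
    grep -v '^[[:space:]]*$' "$1" | grep -v '^#' |
    while IFS=': ' read -r option value; do
        echo "option=$option value=$value"
    done
}
```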
1346 | ||
1347 | ||
1348 | [[qm_snapshots]] | |
1349 | Snapshots | |
1350 | ~~~~~~~~~ | |
1351 | ||
1352 | When you create a snapshot, `qm` stores the configuration at snapshot | |
1353 | time into a separate snapshot section within the same configuration | |
1354 | file. For example, after creating a snapshot called ``testsnapshot'', | |
1355 | your configuration file will look like this: | |
1356 | ||
1357 | .VM configuration with snapshot | |
1358 | ---- | |
1359 | memory: 512 | |
1360 | swap: 512 | |
parent: testsnapshot
1362 | ... | |
1363 | ||
[testsnapshot]
1365 | memory: 512 | |
1366 | swap: 512 | |
1367 | snaptime: 1457170803 | |
1368 | ... | |
1369 | ---- | |
1370 | ||
There are a few snapshot-related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).
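Since `snaptime` is a plain Unix timestamp, you can convert it with standard
tools. For the value from the example above (GNU `date` syntax, as shipped
with {pve}; BSD `date` uses `-r` instead of `-d @`):

```shell
# Render the snaptime from the example config as a UTC date.
date -u -d @1457170803 '+%Y-%m-%d %H:%M:%S'
# -> 2016-03-05 09:40:03
```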
1375 | ||
1376 | You can optionally save the memory of a running VM with the option `vmstate`. | |
1377 | For details about how the target storage gets chosen for the VM state, see | |
1378 | xref:qm_vmstatestorage[State storage selection] in the chapter | |
1379 | xref:qm_hibernate[Hibernation]. | |
1380 | ||
1381 | [[qm_options]] | |
1382 | Options | |
1383 | ~~~~~~~ | |
1384 | ||
1385 | include::qm.conf.5-opts.adoc[] | |
1386 | ||
1387 | ||
1388 | Locks | |
1389 | ----- | |
1390 | ||
1391 | Online migrations, snapshots and backups (`vzdump`) set a lock to | |
1392 | prevent incompatible concurrent actions on the affected VMs. Sometimes | |
1393 | you need to remove such a lock manually (e.g., after a power failure). | |
1394 | ||
1395 | qm unlock <vmid> | |
1396 | ||
1397 | CAUTION: Only do that if you are sure the action which set the lock is | |
1398 | no longer running. | |
1399 | ||
1400 | ||
1401 | ifdef::wiki[] | |
1402 | ||
1403 | See Also | |
1404 | ~~~~~~~~ | |
1405 | ||
1406 | * link:/wiki/Cloud-Init_Support[Cloud-Init Support] | |
1407 | ||
1408 | endif::wiki[] | |
1409 | ||
1410 | ||
1411 | ifdef::manvolnum[] | |
1412 | ||
1413 | Files | |
1414 | ------ | |
1415 | ||
1416 | `/etc/pve/qemu-server/<VMID>.conf`:: | |
1417 | ||
1418 | Configuration file for the VM '<VMID>'. | |
1419 | ||
1420 | ||
1421 | include::pve-copyright.adoc[] | |
1422 | endif::manvolnum[] |