[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
:pve-toplevel:

NAME
----

qm - Qemu/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
:pve-toplevel:
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can
pass an ISO image as a parameter to Qemu, and the OS running in the emulated
computer will see a real CD-ROM inserted into a CD drive.

Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up Qemu when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux kvm module. In the context of {pve}, _Qemu_ and
_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the
kvm module.
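
To quickly check whether your host CPU provides these virtualization extensions
and whether the kvm modules are loaded, you can run the following (a sketch,
assuming an x86 host):

----
# a non-zero count means the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
grep -cE '(vmx|svm)' /proc/cpuinfo

# check that the kvm kernel modules are loaded
lsmod | grep kvm
----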

Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.


Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, and serial ports (the complete list can be seen
in the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
Qemu can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.

Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, and so on.

It is highly recommended to use the virtio devices whenever you can, as they
provide a big performance improvement. Using the virtio generic disk controller
versus an emulated IDE controller will double the sequential write throughput,
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
http://www.linux-kvm.org/page/Using_VirtIO_NIC]


[[qm_virtual_machines_settings]]
Virtual Machines Settings
-------------------------

Generally speaking, {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as a
change could incur a performance slowdown or put your data at risk.


[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="gui-create-vm-general.png"]

General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs


[[qm_os_settings]]
OS Settings
~~~~~~~~~~~

[thumbnail="gui-create-vm-os.png"]

When creating a VM, setting the proper Operating System (OS) allows {pve} to
optimize some low level parameters. For instance, Windows OSes expect the BIOS
clock to use local time, while Unix based OSes expect the BIOS clock to use
UTC.
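
From the command line, this corresponds to the `ostype` option; a minimal
sketch for a recent Linux guest (which maps to the `l26` type):

 qm set <vmid> --ostype l26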


[[qm_hard_disk]]
Hard Disk
~~~~~~~~~

Qemu can emulate a number of storage controllers:

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even though this controller has been superseded by more recent
designs, each and every OS you can think of has support for it, making it a
great choice if you want to run an OS released before 2003. You can connect up
to 4 devices to this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices to this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates an
LSI 53C895A controller by default.
+
A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim
for performance, and it is automatically selected for newly created Linux VMs
since {pve} 4.3. Linux distributions have support for this controller since
2012, and FreeBSD since 2014. For Windows OSes, you need to provide an extra
ISO containing the drivers during the installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
If you aim at maximum performance, you can select a SCSI controller of type
_VirtIO SCSI single_, which will allow you to select the *IO Thread* option.
When selecting _VirtIO SCSI single_, Qemu will create a new controller for
each disk, instead of adding all disks to the same controller.

* the *VirtIO Block* controller, often just called VirtIO or virtio-blk,
is an older type of paravirtualized controller. It has been superseded by the
VirtIO SCSI controller in terms of features.
[thumbnail="gui-create-vm-hard-disk.png"]
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

* the *QEMU image format* is a copy-on-write format which allows snapshots and
thin provisioning of the disk image.
* the *raw disk image* is a bit-for-bit image of a hard disk, similar to what
you would get when executing the `dd` command on a block device in Linux. This
format does not support thin provisioning or snapshots by itself, requiring
cooperation from the storage layer for these tasks. It may, however, be up to
10% faster than the *QEMU image format*. footnote:[See this benchmark for details
http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
* the *VMware image format* only makes sense if you intend to import/export the
disk image to other hypervisors.
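
As an illustration, the following sketch allocates a new 32 GB disk in the QEMU
image format on a hypothetical file based storage named `local` (adapt the
storage name and size to your setup):

 qm set <vmid> --scsi0 local:32,format=qcow2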

Setting the *Cache* mode of the hard drive will impact how the host system
notifies the guest system of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires replication to be skipped for this disk image.

If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), and your VM has a *SCSI* controller, you can activate the *Discard*
option on the hard disks connected to that controller. With *Discard* enabled,
when the filesystem of a VM marks blocks as unused after removing files, the
emulated SCSI controller will relay this information to the storage, which will
then shrink the disk image accordingly.
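
To enable *Discard* on an existing disk from the command line, re-specify the
volume together with the added flag (a sketch, assuming a disk `scsi0` on the
`local-lvm` storage; keep any other drive options you had set):

 qm set <vmid> --scsi0 local-lvm:vm-<vmid>-disk-1,discard=on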

.IO Thread
The option *IO Thread* can only be used when using a disk with the
*VirtIO* controller, or with the *SCSI* controller, when the emulated controller
type is *VirtIO SCSI single*.
With this enabled, Qemu creates one I/O thread per storage controller,
instead of a single thread for all I/O, so it increases performance when
multiple disks are used and each disk has its own storage controller.
Note that backups do not currently work with *IO Thread* enabled.
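
A command line sketch of this setup, assuming an existing disk on `local-lvm`:

----
# switch the emulated controller type to VirtIO SCSI single
qm set <vmid> --scsihw virtio-scsi-single

# re-specify the disk with the IO Thread option enabled
qm set <vmid> --scsi0 local-lvm:vm-<vmid>-disk-1,iothread=1
----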


[[qm_cpu]]
CPU
~~~

[thumbnail="gui-create-vm-cpu.png"]

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each, is mostly irrelevant from a performance point of
view. However, some software licenses depend on the number of sockets a machine
has; in that case it makes sense to set the number of sockets to what the
license allows you.

Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multithreaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.

NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
is greater than the number of cores on the server (e.g., 4 VMs with 4 cores
each on a machine with only 8 cores). In that case the host system will
balance the Qemu execution threads between your server cores, just as if you
were running a standard multithreaded application. However, {pve} will prevent
you from assigning more virtual CPU cores than physically available, as this
will only bring the performance down due to the cost of context switches.

[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^

In addition to the number of virtual cores, you can configure how many resources
a VM can get in relation to the host CPU time and also in relation to other
VMs.
With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
single process would fully use one single core it would have `100%` CPU Time
usage. If a VM with four cores utilizes all its cores fully it would
theoretically use `400%`. In reality the usage may be even a bit higher as Qemu
can have additional threads for VM peripherals besides the vCPU core ones.
This setting can be useful if a VM should have multiple vCPUs, as it runs a few
processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
which would profit from having 8 vCPUs, but at no time should all of those 8
cores run at full load - as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set the *cpulimit* limit to
`4.0` (=400%). If all cores do the same heavy work they would all get 50% of a
real host core's CPU time. But if only 4 were doing work, they could still get
almost 100% of a real core each.
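
Continuing that example, the configuration would look like this (a sketch):

 qm set <vmid> --sockets 1 --cores 8 --cpulimit 4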

NOTE: VMs can, depending on their configuration, use additional threads, e.g.,
for networking or IO operations but also live migration. Thus a VM can show up
to use more CPU time than just its virtual CPUs could use. To ensure that a VM
never uses more CPU time than its assigned virtual CPUs, set the *cpulimit*
setting to the same value as the total core count.

The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
shares or CPU weight), controls how much CPU time a VM gets compared to other
running VMs. It is a relative weight which defaults to `1024`; if you increase
this for a VM it will be prioritized by the scheduler in comparison to other
VMs with lower weight. E.g., if VM 100 has the default 1024 and VM 200 was
changed to `2048`, the latter VM 200 would receive twice the CPU bandwidth of
the first VM 100.
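
For instance, to give a VM twice the weight of VMs left at the default `1024`
(a sketch, reusing VM 200 from the example above):

 qm set 200 --cpuunits 2048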

For more information see `man systemd.resource-control`; there `CPUQuota`
corresponds to `cpulimit` and `CPUShares` corresponds to our `cpuunits`
setting. Visit its Notes section for references and implementation details.

CPU Type
^^^^^^^^

Qemu can emulate a number of different *CPU types*, from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, and so on.
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host*, in which case the VM will have exactly the same CPU flags
as your host system.

This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the qemu process will stop. To
remedy this, Qemu also has its own CPU type *kvm64*, which {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flag set,
but is guaranteed to work everywhere.

In short, if you care about live migration and moving VMs between nodes, leave
the kvm64 default. If you don't care about live migration or have a homogeneous
cluster where all nodes have the same CPU, set the CPU type to host, as in
theory this will give your guests maximum performance.
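
For example, on such a homogeneous cluster you could set (a sketch):

 qm set <vmid> --cpu host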

Meltdown / Spectre related CPU flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are two CPU flags related to the Meltdown and Spectre vulnerabilities
footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
manually unless the selected CPU type of your VM already enables them by default.

The first, called 'pcid', helps to reduce the performance impact of the Meltdown
mitigation called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
the kernel memory from the user space. Without PCID, KPTI is quite an expensive
mechanism footnote:[PCID is now a critical performance/security feature on x86
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].

The second CPU flag is called 'spec-ctrl', which allows an operating system to
selectively disable or restrict speculative execution in order to limit the
ability of attackers to exploit the Spectre vulnerability.

There are two requirements that need to be fulfilled in order to use these two
CPU flags:

* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
* The guest operating system must be updated to a version which mitigates the
attacks and is able to utilize the CPU feature

In order to use 'spec-ctrl', your CPU or system vendor also needs to provide a
so-called ``microcode update'' footnote:[You can use `intel-microcode' /
`amd-microcode' from Debian non-free if your vendor does not provide such an
update. Note that not all affected CPUs can be updated to support spec-ctrl.]
for your CPU.

To check if the {pve} host supports PCID, execute the following command as root:

----
# grep ' pcid ' /proc/cpuinfo
----

If this does not return empty, your host's CPU has support for 'pcid'.

To check if the {pve} host supports spec-ctrl, execute the following command as root:

----
# grep ' spec_ctrl ' /proc/cpuinfo
----

If this does not return empty, your host's CPU has support for 'spec-ctrl'.

If you use `host' or another CPU type which enables the desired flags by
default, and you updated your guest OS to make use of the associated CPU
features, you're already set.

Otherwise you need to set the desired CPU flag of the virtual CPU, either by
editing the CPU options in the WebUI, or by setting the 'flags' property of the
'cpu' option in the VM configuration file.
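
A command line sketch setting both flags on the default kvm64 type (the
semicolon needs to be quoted when used in a shell):

----
qm set <vmid> --cpu 'kvm64,flags=+pcid;+spec-ctrl'
----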

NUMA
^^^^
You can also optionally emulate a *NUMA*
footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
in your VMs. The basics of the NUMA architecture mean that instead of having a
global memory pool available to all your cores, the memory is spread into local
banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
will allow proper distribution of the VM resources on the host system.
This option is also required to hot-plug cores or RAM in a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of sockets of the host system.
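
For example, on a host with two sockets you could configure (a sketch):

 qm set <vmid> --numa 1 --sockets 2 --cores 4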

vCPU hot-plug
^^^^^^^^^^^^^

Modern operating systems introduced the capability to hot-plug and, to a
certain extent, hot-unplug CPUs in a running system. Virtualization allows us
to avoid a lot of the (physical) problems real hardware can cause in such
scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
be replicated with other, well tested and less complicated, features, see
xref:qm_cpu_resource_limits[Resource Limits].

In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with less than this total core count of CPUs you may use the
*vcpus* setting, which denotes how many vCPUs should be plugged in at VM start.

Currently this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.
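
For example, to define 4 hot-pluggable cores but plug in only 2 at VM start
(a sketch):

 qm set <vmid> --sockets 1 --cores 4 --vcpus 2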

You can use a udev rule as follows to automatically set new CPUs as online in
the guest:

----
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
----

Save this under /etc/udev/rules.d/ as a file ending in `.rules`.

NOTE: CPU hot-remove is machine dependent and requires guest cooperation.
The deletion command does not guarantee CPU removal to actually happen;
typically it's a request forwarded to the guest using a target dependent
mechanism, e.g., ACPI on x86/amd64.


[[qm_memory]]
Memory
~~~~~~

For each VM you have the option to set a fixed size memory, or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

.Fixed Memory Allocation
[thumbnail="gui-create-vm-memory-fixed.png"]

When choosing a *fixed size memory*, {pve} will simply allocate what you
specify to your VM.

Even when using a fixed memory size, the ballooning device gets added to the
VM, because it delivers useful information such as how much memory the guest
really uses.
In general, you should leave *ballooning* enabled, but if you want to disable
it (e.g. for debugging purposes), simply uncheck
*Ballooning* or set

 balloon: 0

in the configuration.

.Automatic Memory Allocation
[thumbnail="gui-create-vm-memory-dynamic.png", float="left"]

// see autoballoon() in pvestatd.pm
When choosing to *automatically allocate memory*, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified.

When the host is running short on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one is a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs at the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will
get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8GB extra RAM and each HTTP
server will get 1.6GB.
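
To configure the database VM from this example on the command line (a sketch;
*balloon* sets the minimum and *memory* the maximum amount, both in MiB):

 qm set <vmid> --memory 8192 --balloon 4096 --shares 3000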

All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.


[[qm_network_device]]
Network Device
~~~~~~~~~~~~~~

[thumbnail="gui-create-vm-network.png"]

Each VM can have many _Network interface controllers_ (NIC), of four different
types:

* *Intel E1000* is the default, and emulates an Intel Gigabit network card.
* the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
* the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
only be used when emulating older operating systems (released before 2002).
* the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

The NIC you added to the VM can follow one of two different models:

* in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
* in the alternative *NAT mode*, each virtual NIC will only communicate with
the Qemu user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP server will serve addresses in the
private 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode,
and should only be used for testing.

You can also skip adding a network device when creating a VM by selecting *No
network device*.

.Multiqueue
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set the number of
multi-purpose channels on each VirtIO NIC in the VM with the ethtool command:

`ethtool -L ens1 combined X`

where X is the number of vCPUs of the VM.
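
On the {pve} side, the number of queues is part of the network device
definition; a sketch for a guest with 4 vCPUs:

 qm set <vmid> --net0 virtio,bridge=vmbr0,queues=4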

You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.


[[qm_cloud_init]]
Cloud-Init Support
~~~~~~~~~~~~~~~~~~

http://cloudinit.readthedocs.io[Cloud-Init] is the de facto
multi-distribution package that handles early initialization of a
virtual machine instance. Using Cloud-Init, one can configure network
devices and ssh keys on the hypervisor side. When the VM starts for the
first time, the Cloud-Init software inside the VM applies those
settings.

Many Linux distributions provide ready-to-use Cloud-Init images,
mostly designed for 'OpenStack'. Those images also work with
{pve}. While it may be convenient to use such ready-to-use images, we
usually recommend preparing those images yourself. That way you know
exactly what is installed, and you can easily customize the image for
your needs.

Once you have created such an image, it is best practice to convert it
into a VM template. It is really fast to create linked clones of VM
templates, so this is a very fast way to roll out new VM
instances. You just need to configure the network (and maybe ssh keys)
before you start the new VM.

We recommend the use of SSH key-based authentication to log in to VMs
provisioned by Cloud-Init. It is also possible to set a password, but
{pve} needs to store an encrypted version of that password inside the
Cloud-Init data. So this is not as safe as using SSH key-based
authentication.

{pve} generates an ISO image to pass the Cloud-Init data to the VM. So
all Cloud-Init VMs need to have an assigned CDROM drive for that
purpose. Also, many Cloud-Init images assume a serial console is
present, so it is best to add a serial console and use that as the
display for those VMs.


Prepare Cloud-Init Templates
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The first step is to prepare your VM. You can basically use any VM,
and simply install the Cloud-Init packages inside the VM you want to
prepare. On Debian/Ubuntu based systems this is as simple as:

----
apt-get install cloud-init
----

Many distributions provide ready-to-use Cloud-Init images (provided
as `.qcow2` files), so as an alternative you can simply download and
import such an image. For the following example, we will use the cloud
images provided by Ubuntu at https://cloud-images.ubuntu.com.

----
# download the image
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img

# create a new VM
qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0

# import the downloaded disk to local-lvm storage
qm importdisk 9000 bionic-server-cloudimg-amd64.img local-lvm

# finally attach the new disk to the VM as scsi drive
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-1
----

NOTE: Ubuntu Cloud-Init images require the `virtio-scsi-pci`
controller type for SCSI drives.


The next step is to configure a CDROM drive, used to pass the
Cloud-Init data to the VM.

----
qm set 9000 --ide2 local-lvm:cloudinit
----

We want to boot directly from the Cloud-Init image, so we set the
`bootdisk` parameter to `scsi0` and restrict the BIOS to boot from disk
only. This simply speeds up booting, because the VM BIOS skips testing
for a bootable CDROM.

----
qm set 9000 --boot c --bootdisk scsi0
----

We also want to configure a serial console and use that as the display.
Many Cloud-Init images rely on that, because it is a requirement for
OpenStack images.

----
qm set 9000 --serial0 socket --vga serial0
----

Finally, it is usually a good idea to transform such a VM into a
template. You can create linked clones from templates, so deployment
from VM templates is much faster than creating a full clone (copy).

----
qm template 9000
----


Deploy Cloud-Init Templates
^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can easily deploy such a template by cloning:

----
qm clone 9000 123 --name ubuntu2
----

Then configure the SSH public key used for authentication, and the IP setup:

----
qm set 123 --sshkey ~/.ssh/id_rsa.pub
qm set 123 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1
----

You can configure all Cloud-Init options using a single command. We have
split the above example into separate commands only to reduce the line
length. Also make sure you adapt the IP setup for your environment.


Cloud-Init specific Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^


include::qm-cloud-init-opts.adoc[]


[[qm_usb_passthrough]]
USB Passthrough
~~~~~~~~~~~~~~~

There are two different types of USB passthrough devices:

* Host USB passthrough
* SPICE USB passthrough

Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.

The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the ID of the vendor, and *abcd* is the ID
of the product, meaning two pieces of the same USB device
have the same ID.

The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).
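
Both variants can be configured on the command line (a sketch, reusing the
example IDs from above):

----
# pass through by vendor/product ID
qm set <vmid> --usb0 host=0123:abcd

# pass through whatever is plugged into this physical port
qm set <vmid> --usb0 host=1-2.3.4
----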

If a device is present in a VM configuration when the VM starts up,
but the device is not present on the host, the VM can boot without problems.
As soon as the device/port is available on the host, it gets passed through.

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. This is useful
if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client
is, directly to the VM (for example an input device or hardware dongle).


[[qm_bios_and_uefi]]
BIOS and UEFI
~~~~~~~~~~~~~

In order to properly emulate a computer, QEMU needs to use a firmware.
By default QEMU uses *SeaBIOS* for this, which is an open-source, x86 BIOS
implementation. SeaBIOS is a good choice for most standard setups.

There are, however, some scenarios in which a BIOS is not a good firmware
to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]

If you want to use OVMF, there are several things to consider:

In order to save things like the *boot order*, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

 qm set <vmid> -efidisk0 <storage>:1,format=<format>

Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
hardware section of a VM.

When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
by pressing the ESC button during boot), or you have to choose
SPICE as the display type.

[[qm_startup_and_shutdown]]
Automatic Start and Shutdown of Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After creating your VMs, you probably want them to start automatically
when the host system boots. For this you need to select the option 'Start at
boot' from the 'Options' tab of your VM in the web interface, or set it with
the following command:

 qm set <vmid> -onboot 1

.Start and Shutdown Order

[thumbnail="gui-qemu-edit-start-order.png"]

In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters (see the example after this list):

* *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
you want the VM to be the first to be started. (We use the reverse startup
order for shutdown, so a machine with a start order of 1 would be the last to
be shut down). If multiple VMs have the same order defined on a host, they will
additionally be ordered by 'VMID' in ascending order.
* *Startup delay*: Defines the interval between this VM's start and the start
of subsequent VMs. E.g., set it to 240 if you want to wait 240 seconds before
starting other VMs.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the VM to be offline after issuing a shutdown command.
By default this value is set to 180, which means that {pve} will issue a
shutdown request and wait 180 seconds for the machine to be offline. If
the machine is still online after the timeout it will be stopped forcefully.
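
All three parameters are stored in the `startup` option; for example
(a sketch):

 qm set <vmid> --startup order=1,up=240,down=180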

NOTE: VMs managed by the HA stack currently do not follow the 'start on boot'
and 'boot order' options. Those VMs will be skipped by the startup and
shutdown algorithm as the HA manager itself ensures that VMs get started and
stopped.

Please note that machines without a Start/Shutdown order parameter will always
start after those where the parameter is set. Further, this parameter can only
be enforced between virtual machines running on the same host, not
cluster-wide.


[[qm_migration]]
Migration
---------

[thumbnail="gui-qemu-migrate.png"]

If you have a cluster, you can migrate your VM to another host with

 qm migrate <vmid> <target>

There are generally two mechanisms for this:

* Online Migration (aka Live Migration)
* Offline Migration

Online Migration
~~~~~~~~~~~~~~~~

When your VM is running and it has no local resources defined (such as disks
on local storage, passed through devices, etc.) you can initiate a live
migration with the `-online` flag.
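
For example (a sketch):

 qm migrate <vmid> <target> -online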

How it works
^^^^^^^^^^^^

This starts a Qemu process on the target host with the 'incoming' flag, which
means that the process starts and waits for the memory data and device states
from the source Virtual Machine (since all other resources, e.g. disks,
are shared, the memory content and device state are the only things left
to transmit).

Once this connection is established, the source begins to send the memory
content asynchronously to the target. If the memory on the source changes,
those sections are marked dirty and there will be another pass of sending data.
This happens until the amount of data to send is so small that the source can
pause the VM, send the remaining data to the target, and start the VM there
in under a second.

Requirements
^^^^^^^^^^^^

For Live Migration to work, there are some things required:

* The VM has no local resources (e.g. passed through devices, local disks, etc.)
* The hosts are in the same {pve} cluster.
* The hosts have a working (and reliable) network connection.
* The target host must have the same or higher versions of the
{pve} packages. (It *might* work the other way, but this is never guaranteed)

Offline Migration
~~~~~~~~~~~~~~~~~

If you have local resources, you can still offline migrate your VMs,
as long as all disks are on storages which are defined on both hosts.
The migration will then copy the disks over the network to the target host.

[[qm_copy_and_clone]]
Copies and Clones
-----------------

[thumbnail="gui-qemu-full-clone.png"]

VM installation is usually done using an installation medium (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time consuming task one might want to avoid.

An easy way to deploy many VMs of the same type is to copy an existing
VM. We use the term 'clone' for such copies, and distinguish between
'linked' and 'full' clones.

Full Clone::

The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
+

It is possible to select a *Target Storage*, so one can use this to
migrate a VM to a totally different storage. You can also change the
disk image *Format* if the storage driver supports several formats.
+

NOTE: A full clone needs to read and copy all VM image data. This is
usually much slower than creating a linked clone.
+

Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.


Linked Clone::

Modern storage drivers support a way to generate fast linked
clones. Such a clone is a writable copy whose initial contents are the
same as the original data. Creating a linked clone is nearly
instantaneous, and initially consumes no additional space.
+

They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written to (and afterwards read from) a new
location. This technique is called 'Copy-on-write'.
+

This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
+

NOTE: You cannot delete an original template while linked clones
exist.
+

It is not possible to change the *Target storage* for linked clones,
because this is a storage internal feature.


The *Target node* option allows you to create the new VM on a
different node. The only restriction is that the VM is on shared
storage, and that storage is also available on the target node.

To avoid resource conflicts, all network interface MAC addresses get
randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
setting.


[[qm_templates]]
Virtual Machine Templates
-------------------------

One can convert a VM into a Template. Such templates are read-only,
and you can use them to create linked clones.

NOTE: It is not possible to start templates, because this would modify
the disk images. If you want to change the template, create a linked
clone and modify that.

Importing Virtual Machines and disk images
------------------------------------------

A VM export from a foreign hypervisor usually takes the form of one or more
disk images, with a configuration file describing the settings of the VM (RAM,
number of cores). +
The disk images can be in the vmdk format, if the disks come from
VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
The most popular configuration format for VM exports is the OVF standard, but in
practice interoperation is limited because many settings are not implemented in
the standard itself, and hypervisors export the supplementary information
in non-standard extensions.

Besides the problem of format, importing disk images from other hypervisors
may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
installing the MergeIDE.zip utility available from the Internet before exporting
and choosing a hard disk type of *IDE* before booting the imported Windows VM.

Finally there is the question of paravirtualized drivers, which improve the
speed of the emulated system and are specific to the hypervisor.
GNU/Linux and other free Unix OSes have all the necessary drivers installed by
default and you can switch to the paravirtualized drivers right after importing
the VM. For Windows VMs, you need to install the Windows paravirtualized
drivers by yourself.

GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
that we cannot guarantee a successful import/export of Windows VMs in all
cases due to the problems above.

Step-by-step example of a Windows OVF import
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.

Download the Virtual Machine zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After being informed about the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.

Extract the disk image from the zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy via ssh/scp the ovf and vmdk files to your {pve} host.

Import the Virtual Machine
^^^^^^^^^^^^^^^^^^^^^^^^^^

This will create a new virtual machine, using cores, memory and
VM name as read from the OVF manifest, and import the disks to the +local-lvm+
storage. You have to configure the network manually.

 qm importovf 999 WinDev1709Eval.ovf local-lvm

The VM is ready to be started.

Adding an external disk image to a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.

Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:

 vmdebootstrap --verbose \
   --size 10GiB --serial-console \
   --grub --no-extlinux \
   --package openssh-server \
   --package avahi-daemon \
   --package qemu-guest-agent \
   --hostname vm600 --enable-dhcp \
   --customize=./copy_pub_ssh.sh \
   --sparse --image vm600.raw

You can now create a new target VM for this image.

 qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
   --bootdisk scsi0 --scsihw virtio-scsi-pci --ostype l26

Add the disk image as +unused0+ to the VM, using the storage +pvedir+:

 qm importdisk 600 vm600.raw pvedir

Finally attach the unused disk to the SCSI controller of the VM:

 qm set 600 --scsi0 pvedir:600/vm-600-disk-1.raw

The VM is ready to be started.

Managing Virtual Machines with `qm`
------------------------------------

qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Using an iso file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage

 qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso

Start the new VM

 qm start 300

Send a shutdown request, then wait until the VM is stopped.

 qm shutdown 300 && qm wait 300

Same as above, but only wait for 40 seconds.

 qm shutdown 300 && qm wait 300 -timeout 40


[[qm_configuration]]
Configuration
-------------

VM configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
Like other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster wide.

.Example VM Configuration
----
cores: 1
sockets: 1
memory: 512
name: webmail
ostype: l26
bootdisk: virtio0
net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
virtio0: local:vm-100-disk-1,size=32G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful to do small corrections, but keep in mind that you need to
restart the VM to apply such changes.

For that reason, it is usually better to use the `qm` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
a running VM. This feature is called "hot plug", and there is no
need to restart the VM in that case.


File Format
~~~~~~~~~~~

VM configuration files use a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.


[[qm_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `qm` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.VM configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).
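
Snapshots are created, rolled back to, and removed with `qm` subcommands
(a brief sketch):

----
qm snapshot 100 testsnapshot        # take a snapshot of VM 100
qm rollback 100 testsnapshot        # revert the VM to the snapshot
qm delsnapshot 100 testsnapshot     # remove the snapshot
----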


[[qm_options]]
Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations, snapshots and backups (`vzdump`) set a lock to
prevent incompatible concurrent actions on the affected VMs. Sometimes
you need to remove such a lock manually (e.g., after a power failure).

 qm unlock <vmid>

CAUTION: Only do that if you are sure the action which set the lock is
no longer running.


ifdef::manvolnum[]

Files
------

`/etc/pve/qemu-server/<VMID>.conf`::

Configuration file for the VM '<VMID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]