[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
include::attributes.txt[]
:pve-toplevel:

NAME
----

qm - Qemu/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
include::attributes.txt[]
endif::manvolnum[]
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance you can
pass an ISO image as a parameter to Qemu, and the OS running in the emulated
computer will see a real CD-ROM inserted into a CD drive.

Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up Qemu when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux kvm module. In the context of {pve} _Qemu_ and
_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the
kvm module.

Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.


Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
Qemu can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.

Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc.

It is highly recommended to use the virtio devices whenever you can, as they
provide a big performance improvement. Using the virtio generic disk controller
versus an emulated IDE controller will double the sequential write throughput,
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
http://www.linux-kvm.org/page/Using_VirtIO_NIC]
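
For example, a VirtIO disk and a VirtIO network card could be added to an
existing VM from the command line as sketched below (the VM ID `100`, the
storage `local-lvm` and the bridge `vmbr0` are placeholders, adapt them to your
setup):

 qm set 100 -virtio0 local-lvm:32
 qm set 100 -net0 virtio,bridge=vmbr0
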


[[qm_virtual_machines_settings]]
Virtual Machines Settings
-------------------------

Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as it
could lead to a performance slowdown or put your data at risk.


[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~

General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs


[[qm_os_settings]]
OS Settings
~~~~~~~~~~~

When creating a VM, setting the proper Operating System (OS) allows {pve} to
optimize some low level parameters. For instance a Windows OS expects the BIOS
clock to use the local time, while a Unix based OS expects the BIOS clock to
have the UTC time.
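
On the command line, the chosen OS family is stored in the `ostype` option. As
a sketch (the VM ID `100` is a placeholder), a recent Linux guest could be
declared like this:

 qm set 100 -ostype l26
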


[[qm_hard_disk]]
Hard Disk
~~~~~~~~~

Qemu can emulate a number of storage controllers:

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by more recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices on this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default an
LSI 53C895A controller. +
A SCSI controller of type _Virtio_ is the recommended setting if you aim for
performance and is automatically selected for newly created Linux VMs since
{pve} 4.3. Linux distributions have support for this controller since 2012, and
FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO
containing the drivers during the installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.

* The *Virtio* controller, also called virtio-blk to distinguish it from
the Virtio SCSI controller, is an older type of paravirtualized controller
which has been superseded in features by the Virtio SCSI controller.

On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

* the *QEMU image format* is a copy-on-write format which allows snapshots and
thin provisioning of the disk image.
* the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
you would get when executing the `dd` command on a block device in Linux. This
format does not support thin provisioning or snapshotting by itself, requiring
cooperation from the storage layer for these tasks. It is however 10% faster
than the *QEMU image format*. footnote:[See this benchmark for details
http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
* the *VMware image format* only makes sense if you intend to import/export the
disk image to other hypervisors.

Setting the *Cache* mode of the hard drive will impact how the host system
notifies the guest system of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), and your VM has a *SCSI* controller, you can activate the *Discard*
option on the hard disks connected to that controller. With *Discard* enabled,
when the filesystem of a VM marks blocks as unused after removing files, the
emulated SCSI controller will relay this information to the storage, which will
then shrink the disk image accordingly.
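
As a sketch, a new 32 GB disk on the (assumed) storage `local-lvm`, attached to
the VirtIO SCSI controller with *Discard* enabled, could be added like this:

 qm set 100 -scsihw virtio-scsi-pci -scsi0 local-lvm:32,discard=on
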

.IO Thread
The option *IO Thread* can only be enabled when using a disk with the *VirtIO*
controller, or with the *SCSI* controller when the emulated controller type is
*VirtIO SCSI*.
With this enabled, Qemu uses one thread per disk, instead of one thread for all,
so it should increase performance when using multiple disks.
Note that backups do not currently work with *IO Thread* enabled.
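
For instance, a VirtIO block disk with its own I/O thread could be added as
follows (storage name and size are placeholders):

 qm set 100 -virtio1 local-lvm:32,iothread=1
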


[[qm_cpu]]
CPU
~~~

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each, is mostly irrelevant from a performance point of
view. However some software is licensed depending on the number of sockets you
have in your machine; in that case it makes sense to set the number of sockets
to what the license allows you, and increase the number of cores. +
Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multithreaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.
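
A minimal sketch on the command line (the VM ID `100` is a placeholder):

 qm set 100 -sockets 1 -cores 2
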

NOTE: It is perfectly safe to set the _overall_ number of total cores in all
your VMs to be greater than the number of cores you have on your server (e.g.
4 VMs with 4 total cores each, running on an 8 core machine, is OK). In that
case the host system will balance the Qemu execution threads between your
server cores, just like if you were running a standard multithreaded
application. However {pve} will prevent you from allocating more vCPUs on a
_single_ machine than are physically available, as this will only bring the
performance down due to the cost of context switches.

Qemu can emulate a number of different *CPU types*, from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3D rendering, random number generation, memory protection, etc.
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host*, in which case the VM will have exactly the same CPU flags
as your host system. +
This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the qemu process will stop. To
remedy this Qemu also has its own CPU type *kvm64*, which {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flags set,
but is guaranteed to work everywhere. +
In short, if you care about live migration and moving VMs between nodes, leave
the kvm64 default. If you don't care about live migration, set the CPU type to
host, as in theory this will give your guests maximum performance.
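
As a sketch, the CPU type of a VM could be switched to *host* like this (the VM
ID `100` is a placeholder):

 qm set 100 -cpu host
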

You can also optionally emulate a *NUMA* architecture in your VMs. The basics of
the NUMA architecture mean that instead of having a global memory pool available
to all your cores, the memory is spread into local banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
will allow proper distribution of the VM resources on the host system. This
option is also required in {pve} to allow hotplugging of cores and RAM to a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of sockets of the host system.
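
For example, on a host with two physical sockets, a NUMA-enabled VM could be
sketched as follows (VM ID and core count are placeholders):

 qm set 100 -numa 1 -sockets 2 -cores 4
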


[[qm_memory]]
Memory
~~~~~~

For each VM you have the option to set a fixed size memory or ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

When choosing a *fixed size memory* {pve} will simply allocate what you
specify to your VM.

// see autoballoon() in pvestatd.pm
When choosing to *automatically allocate memory*, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified. +
When the host is running short on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one is a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will
get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP
server will get 9.6 * 1000 / 6000 = 1.6 GB.
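
On the command line, a sketch of such a database VM (all values are
illustrative) could look like this: a 16 GB maximum, a 4 GB guaranteed minimum
and a raised Shares value:

 qm set 100 -memory 16384 -balloon 4096 -shares 3000
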

All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.


[[qm_network_device]]
Network Device
~~~~~~~~~~~~~~

Each VM can have many _Network interface controllers_ (NIC), of four different
types:

* *Intel E1000* is the default, and emulates an Intel Gigabit network card.
* the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
* the *Realtek 8139* emulates an older 100 MBit/s network card, and should
only be used when emulating older operating systems (released before 2002).
* the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

The NIC you added to the VM can follow one of two different models:

* in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
* in the alternative *NAT mode*, each virtual NIC will only communicate with
the Qemu user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP server will serve addresses in the
private 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode,
and should only be used for testing.

You can also skip adding a network device when creating a VM by selecting *No
network device*.
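
As a sketch, a bridged VirtIO NIC with an explicitly chosen MAC address could
be configured like this (the VM ID and the MAC address are placeholders):

 qm set 100 -net0 virtio=02:00:00:AA:BB:CC,bridge=vmbr0
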

.Multiqueue
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.
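
The number of queues is part of the NIC definition; as a sketch, a VirtIO NIC
with four queues (matching an assumed 4 vCPU guest) could look like this:

 qm set 100 -net0 virtio,bridge=vmbr0,queues=4
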

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set the number of
multi-purpose channels on each VirtIO NIC inside the VM with the ethtool
command:

`ethtool -L eth0 combined X`

where X is the number of vcpus of the VM.

You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.


USB Passthrough
~~~~~~~~~~~~~~~

There are two different types of USB passthrough devices:

* Host USB passthrough
* SPICE USB passthrough

Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.

The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same USB device
have the same id.

The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).
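
On the command line this maps to the `usb` options; a sketch with placeholder
IDs, once by vendor/product-id and once by bus/port:

 qm set 100 -usb0 host=0123:abcd
 qm set 100 -usb1 host=1-2.3.4
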

If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. This is useful
if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client is,
directly to the VM (for example an input device or hardware dongle).
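
Such a SPICE USB port could be added as sketched here (assuming the `spice`
value is accepted for the `usb` option in your {pve} version):

 qm set 100 -usb0 spice
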


[[qm_bios_and_uefi]]
BIOS and UEFI
~~~~~~~~~~~~~

In order to properly emulate a computer, QEMU needs to use a firmware.
By default QEMU uses *SeaBIOS* for this, which is an open-source, x86 BIOS
implementation. SeaBIOS is a good choice for most standard setups.

There are, however, some scenarios in which a BIOS is not a good firmware
to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]

If you want to use OVMF, there are several things to consider:

In order to save things like the *boot order*, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

 qm set <vmid> -efidisk0 <storage>:1,format=<format>

Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
hardware section of a VM.
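
The firmware itself is selected with the `bios` option; a sketch:

 qm set <vmid> -bios ovmf
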

When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
by pressing the ESC button during boot), or you have to choose
SPICE as the display type.


Managing Virtual Machines with `qm`
------------------------------------

qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a new VM with a 4 GB IDE disk.

 qm create 300 -ide0 4 -net0 e1000 -cdrom proxmox-mailgateway_2.1.iso

Start the new VM.

 qm start 300

Send a shutdown request, then wait until the VM is stopped.

 qm shutdown 300 && qm wait 300

Same as above, but only wait for 40 seconds.

 qm shutdown 300 && qm wait 300 -timeout 40

Configuration
-------------

All configuration files consist of lines in the form

 PARAMETER: value

Configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
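
A minimal example of what such a file might look like (all values are
illustrative only):

 bootdisk: scsi0
 cores: 2
 memory: 2048
 name: examplevm
 net0: virtio=02:00:00:AA:BB:CC,bridge=vmbr0
 ostype: l26
 scsi0: local-lvm:vm-100-disk-1,size=32G
 scsihw: virtio-scsi-pci
 sockets: 1
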

[[qm_options]]
Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations and backups (`vzdump`) set a lock to prevent incompatible
concurrent actions on the affected VMs. Sometimes you need to remove such a
lock manually (e.g., after a power failure).

 qm unlock <vmid>


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]