ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

:pve-toplevel:

NAME
----

qm - Qemu/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
include::attributes.txt[]
endif::manvolnum[]

ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files and network cards, which are then passed to an
emulated computer that sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can
pass an ISO image as a parameter to Qemu, and the OS running in the emulated
computer will see a real CD-ROM inserted into a CD drive.

Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up Qemu when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux kvm module. In the context of {pve}, _Qemu_ and
_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the kvm
module.

Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.


Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, and serial ports (the complete list can be seen
in the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers, it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
Qemu can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.

Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc.

It is highly recommended to use the virtio devices whenever you can, as they
provide a big performance improvement. Using the virtio generic disk controller
versus an emulated IDE controller will double the sequential write throughput,
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
http://www.linux-kvm.org/page/Using_VirtIO_NIC]


Virtual Machines settings
-------------------------
Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as they
could incur a performance slowdown, or put your data at risk.


General Settings
~~~~~~~~~~~~~~~~
General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs

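For instance, a VM with a given ID, name and resource pool could be created
from the command line as in this sketch (the pool must already exist; names
are illustrative):

 qm create <vmid> -name webserver -pool production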

OS Settings
~~~~~~~~~~~
When creating a VM, setting the proper Operating System (OS) allows {pve} to
optimize some low level parameters. For instance a Windows OS expects the BIOS
clock to use the local time, while a Unix based OS expects the BIOS clock to
have the UTC time.
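
For example, assuming the guest runs a recent Linux kernel, the OS type could
also be set from the command line like this (`l26` denotes a Linux guest with
a 2.6 or newer kernel; see the options reference below for all `ostype` values):

 qm set <vmid> -ostype l26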


Hard Disk
~~~~~~~~~
Qemu can emulate a number of storage controllers:

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by more recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices on this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default an
LSI 53C895A controller. +
A SCSI controller of type _Virtio_ is the recommended setting if you aim for
performance and is automatically selected for newly created Linux VMs since
{pve} 4.3. Linux distributions have had support for this controller since 2012,
and FreeBSD since 2014. For Windows OSes, you need to provide an extra iso
containing the drivers during the installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.

* The *Virtio* controller, also called virtio-blk to distinguish it from
the Virtio SCSI controller, is an older type of paravirtualized controller
which has been superseded in features by the Virtio SCSI controller.

On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

* the *QEMU image format* is a copy on write format which allows snapshots, and
thin provisioning of the disk image.
* the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
you would get when executing the `dd` command on a block device in Linux. This
format does not support thin provisioning or snapshotting by itself, requiring
cooperation from the storage layer for these tasks. It is however 10% faster
than the *QEMU image format*. footnote:[See this benchmark for details
http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
* the *VMware image format* only makes sense if you intend to import/export the
disk image to other hypervisors.
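
For instance, a new 32GB disk in the *QEMU image format* could be attached to
a VM from the command line as in this sketch (assuming a file based storage
named `local`; adjust the storage name and size to your setup):

 qm set <vmid> -virtio0 local:32,format=qcow2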

Setting the *Cache* mode of the hard drive will impact how the host system will
notify the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), and your VM has a *SCSI* controller you can activate the *Discard*
option on the hard disks connected to that controller. With *Discard* enabled,
when the filesystem of a VM marks blocks as unused after removing files, the
emulated SCSI controller will relay this information to the storage, which will
then shrink the disk image accordingly.
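
The *Cache*, *No backup* and *Discard* options are all flags on the disk
definition itself. As a hedged example, with a hypothetical existing volume
name, the following keeps the *No cache* default explicit, excludes the disk
from backups and enables *Discard*:

 qm set <vmid> -scsi0 local-lvm:vm-<vmid>-disk-1,cache=none,backup=0,discard=on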

.IO Thread
The option *IO Thread* can only be enabled when using a disk with the *VirtIO* controller,
or with the *SCSI* controller, when the emulated controller type is *VirtIO SCSI*.
With this enabled, Qemu uses one thread per disk, instead of one thread for all,
so it should increase performance when using multiple disks.
Note that backups do not currently work with *IO Thread* enabled.
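
*IO Thread* is likewise a per-disk flag, as in this sketch (again with a
hypothetical volume name):

 qm set <vmid> -virtio0 local-lvm:vm-<vmid>-disk-1,iothread=1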

CPU
~~~
A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each is mostly irrelevant from a performance point of view.
However some software is licensed depending on the number of sockets you have in
your machine, in that case it makes sense to set the number of sockets to
what the license allows you, and increase the number of cores. +
Increasing the number of virtual cpus (cores and sockets) will usually provide a
performance improvement though that is heavily dependent on the use of the VM.
Multithreaded applications will of course benefit from a large number of
virtual cpus, as for each virtual cpu you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.
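
For example, a VM with 4 total cores presented as a single socket can be
configured like this:

 qm set <vmid> -sockets 1 -cores 4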

NOTE: It is perfectly safe to set the _overall_ number of total cores in all
your VMs to be greater than the number of cores you have on your server (i.e.
4 VMs with 4 total cores each running on an 8 core machine is OK). In that case
the host system will balance the Qemu execution threads between your server
cores just like if you were running a standard multithreaded application.
However {pve} will prevent you from allocating on a _single_ machine more vcpus
than physically available, as this will only bring the performance down due to
the cost of context switches.

Qemu can emulate a number of different *CPU types*, from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3D rendering, random number generation, memory protection, etc.
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host*, in which case the VM will have exactly the same CPU flags
as your host system. +
This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the qemu process will stop. To
remedy this Qemu also has its own CPU type *kvm64*, that {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flags set,
but is guaranteed to work everywhere. +
In short, if you care about live migration and moving VMs between nodes, leave
the kvm64 default. If you don't care about live migration, set the CPU type to
host, as in theory this will give your guests maximum performance.
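
For example, to trade live migration for maximum performance:

 qm set <vmid> -cpu host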

You can also optionally emulate a *NUMA* architecture in your VMs. The basics of
the NUMA architecture mean that instead of having a global memory pool available
to all your cores, the memory is spread into local banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
will allow proper distribution of the VM resources on the host system. This
option is also required in {pve} to allow hotplugging of cores and RAM to a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of sockets of the host system.
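
On a host with two sockets, this could look like the following sketch:

 qm set <vmid> -numa 1 -sockets 2 -cores 4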

Memory
~~~~~~
For each VM you have the option to set a fixed size memory or ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

When choosing a *fixed size memory*, {pve} will simply allocate what you
specify to your VM.

// see autoballoon() in pvestatd.pm
When choosing to *automatically allocate memory*, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified. +
When the host is running short on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs at the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will
get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP
server will get 1.6 GB.
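
A sketch of the database VM from this example, with a 2GB minimum, an 8GB
maximum and the raised Shares coefficient (sizes are illustrative and given
in megabytes):

 qm set <vmid> -memory 8192 -balloon 2048 -shares 3000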

All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.

Network Device
~~~~~~~~~~~~~~
Each VM can have many _Network interface controllers_ (NIC), of four different
types:

* *Intel E1000* is the default, and emulates an Intel Gigabit network card.
* the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
* the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
only be used when emulating older operating systems (released before 2002).
* the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

The NIC you added to the VM can follow one of two different models:

* in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
* in the alternative *NAT mode*, each virtual NIC will only communicate with
the Qemu user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP server will serve addresses in the
private 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode,
and should only be used for testing.
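
For example, a bridged VirtIO NIC on the default bridge can be added like this:

 qm set <vmid> -net0 virtio,bridge=vmbr0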

You can also skip adding a network device when creating a VM by selecting *No
network device*.

.Multiqueue
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.
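
On the command line, this corresponds to the `queues` flag of the network
device, as in this sketch for a guest with four virtual CPUs:

 qm set <vmid> -net0 virtio,bridge=vmbr0,queues=4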

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set in
the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
command:

`ethtool -L eth0 combined X`

where X is the number of vcpus of the VM.

You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.

USB Passthrough
~~~~~~~~~~~~~~~
There are two different types of USB passthrough devices:

* Host USB passthrough
* SPICE USB passthrough

Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.

The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same USB device
have the same id.

The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).

If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. This is useful
if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client is,
directly to the VM (for example an input device or hardware dongle).
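
As a sketch, the variants described above all map to a `usbX` option (ids and
port are placeholders):

 qm set <vmid> -usb0 host=0123:abcd    # by vendor/product-id
 qm set <vmid> -usb0 host=1-2.3.4      # by bus/port
 qm set <vmid> -usb0 spice             # SPICE USB passthrough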

BIOS and UEFI
~~~~~~~~~~~~~

In order to properly emulate a computer, QEMU needs to use a firmware.
By default QEMU uses *SeaBIOS* for this, which is an open-source, x86 BIOS
implementation. SeaBIOS is a good choice for most standard setups.

There are, however, some scenarios in which a BIOS is not a good firmware
to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
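
For example, to switch a VM from the SeaBIOS default to OVMF:

 qm set <vmid> -bios ovmf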

If you want to use OVMF, there are several things to consider:

In order to save things like the *boot order*, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

 qm set <vmid> -efidisk0 <storage>:1,format=<format>

Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
hardware section of a VM.

When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
with a press of the ESC button during boot), or you have to choose
SPICE as the display type.


Managing Virtual Machines with `qm`
------------------------------------

qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a new VM with a 4 GB IDE disk.

 qm create 300 -ide0 4 -net0 e1000 -cdrom proxmox-mailgateway_2.1.iso

Start the new VM.

 qm start 300

Send a shutdown request, then wait until the VM is stopped.

 qm shutdown 300 && qm wait 300

Same as above, but only wait for 40 seconds.

 qm shutdown 300 && qm wait 300 -timeout 40

Configuration
-------------

All configuration files consist of lines in the form

 PARAMETER: value

Configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
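
A hypothetical configuration might look like this (all values are
illustrative only):

 bootdisk: virtio0
 cores: 2
 memory: 2048
 name: webserver
 net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
 ostype: l26
 sockets: 1
 virtio0: local:100/vm-100-disk-1.qcow2,size=32G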

Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations and backups (`vzdump`) set a lock to prevent incompatible
concurrent actions on the affected VMs. Sometimes you need to remove such a
lock manually (e.g., after a power failure).

 qm unlock <vmid>


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]