ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

qm - Qemu/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
include::attributes.txt[]
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files and network cards, which are then passed to an
emulated computer which sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance you can pass
an ISO image as a parameter to Qemu, and the OS running in the emulated computer
will see a real CDROM inserted into a CD drive.

Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up Qemu when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux kvm module. In the context of {pve} _Qemu_ and
_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the kvm
module.
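
You can check on the host whether the kvm modules were loaded, for
instance with:

 lsmod | grep kvm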

Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.

Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
the `kvm(1)` man page) all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
Qemu can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.

Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc.

It is highly recommended to use the virtio devices whenever you can, as they
provide a big performance improvement. Using the virtio generic disk controller
versus an emulated IDE controller will double the sequential write throughput,
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
http://www.linux-kvm.org/page/Using_VirtIO_NIC]

Virtual Machine Settings
------------------------
Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as they
could incur a performance slowdown, or put your data at risk.

General Settings
~~~~~~~~~~~~~~~~
General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs

OS Settings
~~~~~~~~~~~
When creating a VM, setting the proper Operating System (OS) allows {pve} to
optimize some low level parameters. For instance a Windows OS expects the BIOS
clock to use the local time, while a Unix based OS expects the BIOS clock to have
the UTC time.
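
For instance, to set the OS type of an existing VM on the command line
(here assuming VM 300 runs a Linux distribution with a 2.6 or newer
kernel):

 qm set 300 -ostype l26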

Hard Disk
~~~~~~~~~
Qemu can emulate a number of storage controllers:

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by more recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices on this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server
grade hardware, and can connect up to 14 storage devices. {pve} emulates by
default an LSI 53C895A controller.

* the *Virtio* controller is a generic paravirtualized controller, and is the
recommended setting if you aim for performance. To use this controller, the OS
needs to have special drivers, which may or may not be included in your
installation ISO. Linux distributions have support for the Virtio controller
since 2010, and FreeBSD since 2014. For Windows OSes, you need to provide an
extra ISO containing the Virtio drivers during the installation.
// see: https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
You can connect up to 16 devices on this controller.

On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

* the *QEMU image format* is a copy-on-write format which allows snapshots, and
thin provisioning of the disk image.
* the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
you would get when executing the `dd` command on a block device in Linux. This
format does not support thin provisioning or snapshotting by itself, requiring
cooperation from the storage layer for these tasks. It is however 10% faster
than the *QEMU image format*. footnote:[See this benchmark for details
http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
* the *VMware image format* only makes sense if you intend to import/export the
disk image to other hypervisors.
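
As a command line sketch, the following would add a new 32GB disk to
VM 300 on the Virtio controller, allocated on the default `local`
storage (a file based storage, so the QEMU image format can be
requested explicitly):

 qm set 300 -virtio0 local:32,format=qcow2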

Setting the *Cache* mode of the hard drive will impact how the host system
notifies the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), and your VM has a *SCSI* controller, you can activate the *Discard*
option on the hard disks connected to that controller. With *Discard* enabled,
when the filesystem of a VM marks blocks as unused after removing files, the
emulated SCSI controller will relay this information to the storage, which will
then shrink the disk image accordingly.
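
For example, to add a 32GB disk with *Discard* enabled on the SCSI
controller of VM 300:

 qm set 300 -scsi0 local:32,discard=on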

The option *IO Thread* can only be enabled when using a disk with the *VirtIO* controller,
or with the *SCSI* controller, when the emulated controller type is *VirtIO SCSI*.
With this enabled, Qemu uses one thread per disk, instead of one thread for all,
so it should increase performance when using multiple disks.
Note that backups do not currently work with *IO Thread* enabled.
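
On the command line, *IO Thread* is likewise set per disk, for
instance:

 qm set 300 -virtio1 local:32,iothread=1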

CPU
~~~
A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each is mostly irrelevant from a performance point of view.
However some software is licensed depending on the number of sockets you have in
your machine, in that case it makes sense to set the number of sockets to
what the license allows you, and increase the number of cores. +
Increasing the number of virtual cpus (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multithreaded applications will of course benefit from a large number of
virtual cpus, as for each virtual cpu you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.
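
For example, to give VM 300 two cores on a single socket:

 qm set 300 -sockets 1 -cores 2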

NOTE: It is perfectly safe to set the _overall_ number of total cores in all
your VMs to be greater than the number of cores you have on your server (e.g.
4 VMs with 4 total cores each, running on an 8 core machine, is OK). In that case
the host system will balance the Qemu execution threads between your server
cores just like if you were running a standard multithreaded application.
However {pve} will prevent you from allocating to a _single_ machine more vcpus than
physically available, as this will only bring the performance down due to the
cost of context switches.

Qemu can emulate a number of different *CPU types* from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, etc.
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host* in which case the VM will have exactly the same CPU flags
as your host system. +
This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the qemu process will stop. To
remedy this Qemu has also its own CPU type *kvm64*, that {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flags set,
but is guaranteed to work everywhere. +
In short, if you care about live migration and moving VMs between nodes, leave
the kvm64 default. If you don’t care about live migration, set the CPU type to
host, as in theory this will give your guests maximum performance.
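
The CPU type can also be changed on the command line, for example:

 qm set 300 -cpu host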

You can also optionally emulate a *NUMA* architecture in your VMs. The basics of
the NUMA architecture mean that instead of having a global memory pool available
to all your cores, the memory is spread into local banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
will allow proper distribution of the VM resources on the host system. This
option is also required in {pve} to allow hotplugging of cores and RAM to a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of sockets of the host system.
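
For instance, on a host with two sockets you could enable NUMA
emulation for VM 300 with:

 qm set 300 -numa 1 -sockets 2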

Memory
~~~~~~
For each VM you have the option to set a fixed size memory or ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

When choosing a *fixed size memory* {pve} will simply allocate what you
specify to your VM.

// see autoballoon() in pvestatd.pm
When choosing to *automatically allocate memory*, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified. +
When the host is running short on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one is a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB,
leaving 32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The
database VM will get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8GB
extra RAM and each HTTP server will get 1.6GB.
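
Translating this example to the command line, the database VM (say,
VMID 300; the exact sizes are illustrative) could be given a memory
range and a higher *Shares* value like this:

 qm set 300 -memory 8192 -balloon 4096 -shares 3000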

All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.

Network Device
~~~~~~~~~~~~~~
Each VM can have many _Network interface controllers_ (NICs), of four different
types:

* *Intel E1000* is the default, and emulates an Intel Gigabit network card.
* the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
* the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
only be used when emulating older operating systems (released before 2002).
* the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

If you are using the VirtIO driver, you can optionally activate the
*Multiqueues* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueues, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set the
number of multi-purpose channels on each VirtIO NIC in the VM with the
ethtool command:

`ethtool -L eth0 combined X`

where X is the number of vcpus of the VM.
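
On the {pve} side, the number of queues is set as part of the network
device definition, for example:

 qm set 300 -net0 virtio,bridge=vmbr0,queues=4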

You should note that setting the Multiqueues parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
process a great number of incoming connections, such as when the VM is running
as a router, reverse proxy or a busy HTTP server doing long polling.

The NIC you added to the VM can follow one of two different models:

* in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
* in the alternative *NAT mode*, each virtual NIC will only communicate with
the Qemu user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP server will serve addresses in the
private 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode,
and should only be used for testing.

You can also skip adding a network device when creating a VM by selecting *No
network device*.

USB Passthrough
~~~~~~~~~~~~~~~
There are two different types of USB passthrough devices:

* Host USB passthrough
* SPICE USB passthrough

Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.

The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two pieces of the same usb device
have the same id.

The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
usb controllers).

If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.
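
On the command line, both variants use the *host* sub-option of a USB
device entry, for instance (reusing the example ids above):

 qm set 300 -usb0 host=0123:abcd
 qm set 300 -usb1 host=1-2.3.4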

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. This is useful
if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client is,
directly to the VM (for example an input device or hardware dongle).

Managing Virtual Machines with 'qm'
------------------------------------

qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a new VM with a 4 GB IDE disk.

 qm create 300 -ide0 4 -net0 e1000 -cdrom proxmox-mailgateway_2.1.iso

Start the new VM

 qm start 300

Send a shutdown request, then wait until the VM is stopped.

 qm shutdown 300 && qm wait 300

Same as above, but only wait for 40 seconds.

 qm shutdown 300 && qm wait 300 -timeout 40

Configuration
-------------

All configuration files consist of lines in the form

 PARAMETER: value

Configuration files are stored inside the Proxmox cluster file
system, and can be accessed at '/etc/pve/qemu-server/<VMID>.conf'.
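
A minimal, hypothetical example of such a file could look like this
(the name, MAC address and disk volume are placeholders):

 bootdisk: virtio0
 cores: 2
 memory: 2048
 name: webserver
 net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
 ostype: l26
 virtio0: local:300/vm-300-disk-1.qcow2,size=32G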

Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations and backups ('vzdump') set a lock to prevent incompatible
concurrent actions on the affected VMs. Sometimes you need to remove such a
lock manually (e.g., after a power failure).

 qm unlock <vmid>


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]