[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
:pve-toplevel:

NAME
----

qm - Qemu/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
:pve-toplevel:
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files, and network cards, which are then passed to an
emulated computer which sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance you can pass
an ISO image as a parameter to Qemu, and the OS running in the emulated computer
will see a real CD-ROM inserted into a CD drive.

Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since this represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up Qemu when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux kvm module. In the context of {pve} _Qemu_ and
_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the kvm
module.

Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.


Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, and serial ports (the complete list can be seen in
the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
Qemu can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.

Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc.

It is highly recommended to use the virtio devices whenever you can, as they
provide a big performance improvement. Using the virtio generic disk controller
versus an emulated IDE controller will double the sequential write throughput,
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
http://www.linux-kvm.org/page/Using_VirtIO_NIC]


[[qm_virtual_machines_settings]]
Virtual Machine Settings
------------------------

Generally speaking {pve} tries to choose sane defaults for virtual machines
(VMs). Make sure you understand the meaning of the settings you change, as a
wrong setting could incur a performance slowdown, or put your data at risk.


[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-general.png"]

General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs


[[qm_os_settings]]
OS Settings
~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-os.png"]

When creating a VM, setting the proper Operating System (OS) allows {pve} to
optimize some low level parameters. For instance Windows OSes expect the BIOS
clock to use the local time, while Unix-based OSes expect the BIOS clock to have
the UTC time.


[[qm_hard_disk]]
Hard Disk
~~~~~~~~~

Qemu can emulate a number of storage controllers:

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices on this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default a
LSI 53C895A controller.
+
A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim for
performance, and is automatically selected for newly created Linux VMs since
{pve} 4.3. Linux distributions have support for this controller since 2012, and
FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO
containing the drivers during the installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
If you aim at maximum performance, you can select a SCSI controller of type
_VirtIO SCSI single_, which will allow you to select the *IO Thread* option.
When selecting _VirtIO SCSI single_, Qemu will create a new controller for
each disk, instead of adding all disks to the same controller.

* the *VirtIO Block* controller, often just called VirtIO or virtio-blk,
is an older type of paravirtualized controller. It has been superseded, in
terms of features, by the VirtIO SCSI controller.

[thumbnail="screenshot/gui-create-vm-hard-disk.png"]
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file-based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

 * the *QEMU image format* is a copy-on-write format which allows snapshots, and
 thin provisioning of the disk image.
 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
 you would get when executing the `dd` command on a block device in Linux. This
 format does not support thin provisioning or snapshots by itself, requiring
 cooperation from the storage layer for these tasks. It may, however, be up to
 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
 http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
 * the *VMware image format* only makes sense if you intend to import/export the
 disk image to other hypervisors.

Setting the *Cache* mode of the hard drive will impact how the host system will
notify the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.

If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), and your VM has a *SCSI* controller, you can activate the *Discard*
option on the hard disks connected to that controller. With *Discard* enabled,
when the filesystem of a VM marks blocks as unused after removing files, the
emulated SCSI controller will relay this information to the storage, which will
then shrink the disk image accordingly.

If you would like a drive to be presented to the guest as a solid-state drive
rather than a rotational hard disk, you can set the *SSD emulation* option on
that drive. There is no requirement that the underlying storage actually be
backed by SSDs; this feature can be used with physical media of any type.

.IO Thread
The option *IO Thread* can only be used when using a disk with the
*VirtIO* controller, or with the *SCSI* controller, when the emulated controller
type is *VirtIO SCSI single*.
With this enabled, Qemu creates one I/O thread per storage controller,
instead of a single thread for all I/O, so it increases performance when
multiple disks are used and each disk has its own storage controller.
Note that backups do not currently work with *IO Thread* enabled.

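All of the disk options above can also be set on the command line. A minimal
sketch, assuming a VM with ID 100 and a storage named `local-lvm` (both
placeholders); this selects the _VirtIO SCSI single_ controller and attaches a
new 32 GB disk with *Discard*, *SSD emulation* and *IO Thread* enabled:

 qm set 100 -scsihw virtio-scsi-single
 qm set 100 -scsi0 local-lvm:32,discard=on,ssd=1,iothread=1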

[[qm_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-vm-cpu.png"]

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each is mostly irrelevant from a performance point of view.
However, some software licenses depend on the number of sockets a machine has;
in that case it makes sense to set the number of sockets to what the license
allows you.

Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multithreaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.

NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
is greater than the number of cores on the server (e.g., 4 VMs with 4 cores
each on a machine with only 8 cores). In that case the host system will
balance the Qemu execution threads between your server cores, just like if you
were running a standard multithreaded application. However, {pve} will prevent
you from assigning more virtual CPU cores than physically available, as this will
only bring the performance down due to the cost of context switches.

[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^

In addition to the number of virtual cores, you can configure how much of the
host's CPU time a VM can get, both in absolute terms and in relation to other
VMs.
With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
single process would fully use one single core it would have `100%` CPU Time
usage. If a VM with four cores utilizes all its cores fully it would
theoretically use `400%`. In reality the usage may be even a bit higher as Qemu
can have additional threads for VM peripherals besides the vCPU core ones.
This setting can be useful if a VM should have multiple vCPUs, as it runs a few
processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a VM
which would profit from having 8 vCPUs, but at no time should all of those 8
cores run at full load - as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set the *cpulimit* limit to
`4.0` (=400%). If all cores do the same heavy work they would all get 50% of a
real host core's CPU time. But, if only 4 were doing work they could still get
almost 100% of a real core each.

NOTE: VMs can, depending on their configuration, use additional threads, e.g.,
for networking or IO operations but also live migration. Thus a VM can show up
to use more CPU time than just its virtual CPUs could use. To ensure that a VM
never uses more CPU time than virtual CPUs assigned, set the *cpulimit* setting
to the same value as the total core count.

The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
shares or CPU weight), controls how much CPU time a VM gets in regards to other
VMs running. It is a relative weight which defaults to `1024`; if you increase
this for a VM it will be prioritized by the scheduler in comparison to other
VMs with lower weight. E.g., if VM 100 has set the default 1024 and VM 200 was
changed to `2048`, the latter VM 200 would receive twice the CPU bandwidth of
the first VM 100.

For more information see `man systemd.resource-control`, where `CPUQuota`
corresponds to `cpulimit` and `CPUShares` corresponds to our `cpuunits`
setting; visit its Notes section for references and implementation details.

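Both settings can be applied on the command line; a sketch for a hypothetical
VM with ID 101, capping it at 400% host CPU time and doubling its weight:

 qm set 101 -cpulimit 4 -cpuunits 2048
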
CPU Type
^^^^^^^^

Qemu can emulate a number of different *CPU types*, from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, etc.
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host*, in which case the VM will have exactly the same CPU flags
as your host system.

This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the qemu process will stop. To
remedy this Qemu has also its own CPU type *kvm64*, that {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flag set,
but is guaranteed to work everywhere.

In short, if you care about live migration and moving VMs between nodes, leave
the kvm64 default. If you don't care about live migration or have a homogeneous
cluster where all nodes have the same CPU, set the CPU type to host, as in
theory this will give your guests maximum performance.

Meltdown / Spectre related CPU flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are several CPU flags related to the Meltdown and Spectre vulnerabilities
footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
manually unless the selected CPU type of your VM already enables them by default.

There are two requirements that need to be fulfilled in order to use these
CPU flags:

* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
* The guest operating system must be updated to a version which mitigates the
  attacks and is able to utilize the CPU feature

Otherwise you need to set the desired CPU flag of the virtual CPU, either by
editing the CPU options in the WebUI, or by setting the 'flags' property of the
'cpu' option in the VM configuration file.

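As a sketch, the resulting line in `/etc/pve/qemu-server/<VMID>.conf` could
look like this (the chosen flags are purely illustrative, see the lists below):

 cpu: kvm64,flags=+pcid;+spec-ctrl
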
For Spectre v1, v2, and v4 fixes, your CPU or system vendor also needs to provide a
so-called ``microcode update'' footnote:[You can use `intel-microcode' /
`amd-microcode' from Debian non-free if your vendor does not provide such an
update. Note that not all affected CPUs can be updated to support spec-ctrl.]
for your CPU.


To check if the {pve} host is vulnerable, execute the following command as root:

----
for f in /sys/devices/system/cpu/vulnerabilities/*; do echo "${f##*/} -" $(cat "$f"); done
----

A community script is also available to detect if the host is still vulnerable.
footnote:[spectre-meltdown-checker https://meltdown.ovh/]

Intel processors
^^^^^^^^^^^^^^^^

* 'pcid'
+
This reduces the performance impact of the Meltdown (CVE-2017-5754) mitigation
called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
the Kernel memory from the user space. Without PCID, KPTI is quite an expensive
mechanism footnote:[PCID is now a critical performance/security feature on x86
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
+
To check if the {pve} host supports PCID, execute the following command as root:
+
----
# grep ' pcid ' /proc/cpuinfo
----
+
If this returns non-empty output, your host's CPU has support for 'pcid'.

* 'spec-ctrl'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in Intel CPU models with -IBRS suffix.
Must be explicitly turned on for Intel CPU models without -IBRS suffix.
Requires an updated host CPU microcode (intel-microcode >= 20180425).
+
* 'ssbd'
+
Required to enable the Spectre V4 (CVE-2018-3639) fix. Not included by default in any Intel CPU model.
Must be explicitly turned on for all Intel CPU models.
Requires an updated host CPU microcode (intel-microcode >= 20180703).


AMD processors
^^^^^^^^^^^^^^

* 'ibpb'
+
Required to enable the Spectre v1 (CVE-2017-5753) and Spectre v2 (CVE-2017-5715) fix,
in cases where retpolines are not sufficient.
Included by default in AMD CPU models with -IBPB suffix.
Must be explicitly turned on for AMD CPU models without -IBPB suffix.
Requires the host CPU microcode to support this feature before it can be used for guest CPUs.

* 'virt-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model.
Must be explicitly turned on for all AMD CPU models.
This should be provided to guests, even if amd-ssbd is also provided, for maximum guest compatibility.
Note that this must be explicitly enabled when using the "host" cpu model,
because this is a virtual feature which does not exist in the physical CPUs.

* 'amd-ssbd'
+
Required to enable the Spectre v4 (CVE-2018-3639) fix.
Not included by default in any AMD CPU model. Must be explicitly turned on for all AMD CPU models.
This provides higher performance than virt-ssbd, therefore a host supporting this should always expose this to guests if possible.
virt-ssbd should nonetheless also be exposed for maximum guest compatibility as some kernels only know about virt-ssbd.

* 'amd-no-ssb'
+
Recommended to indicate the host is not vulnerable to Spectre V4 (CVE-2018-3639).
Not included by default in any AMD CPU model.
Future hardware generations of CPU will not be vulnerable to CVE-2018-3639,
and thus the guest should be told not to enable its mitigations, by exposing amd-no-ssb.
This is mutually exclusive with virt-ssbd and amd-ssbd.


NUMA
^^^^
You can also optionally emulate a *NUMA*
footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
in your VMs. The basics of the NUMA architecture mean that instead of having a
global memory pool available to all your cores, the memory is spread into local
banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
will allow proper distribution of the VM resources on the host system.
This option is also required to hot-plug cores or RAM in a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of sockets of the host system.

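A hypothetical sketch for a VM with ID 101 on a host with two sockets:

 qm set 101 -numa 1 -sockets 2 -cores 4
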
vCPU hot-plug
^^^^^^^^^^^^^

Modern operating systems introduced the capability to hot-plug and, to a
certain extent, hot-unplug CPUs in a running system. Virtualization allows us
to avoid a lot of the (physical) problems real hardware can cause in such
scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
be replicated with other, well tested and less complicated, features, see
xref:qm_cpu_resource_limits[Resource Limits].

In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with less than this total core count of CPUs you may use the
*vcpus* setting, which denotes how many vCPUs should be plugged in at VM start.

Currently this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.

You can use a udev rule as follows to automatically set new CPUs as online in
the guest:

----
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
----

Save this under /etc/udev/rules.d/ as a file ending in `.rules`.

Note: CPU hot-remove is machine dependent and requires guest cooperation.
The deletion command does not guarantee CPU removal to actually happen,
typically it's a request forwarded to the guest using a target dependent mechanism,
e.g., ACPI on x86/amd64.


[[qm_memory]]
Memory
~~~~~~

For each VM you have the option to set a fixed amount of memory, or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

.Fixed Memory Allocation
[thumbnail="screenshot/gui-create-vm-memory.png"]

When setting memory and minimum memory to the same amount
{pve} will simply allocate what you specify to your VM.

Even when using a fixed memory size, the ballooning device gets added to the
VM, because it delivers useful information such as how much memory the guest
really uses.
In general, you should leave *ballooning* enabled, but if you want to disable
it (e.g. for debugging purposes), simply uncheck
*Ballooning Device* or set

 balloon: 0

in the configuration.

.Automatic Memory Allocation

// see autoballoon() in pvestatd.pm
When setting the minimum memory lower than memory, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified.

When the host is running low on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one is a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
* 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will get 9.6 *
3000 / (3000 + 1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP server will
get 1.6 GB.

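A sketch of the corresponding settings for the database VM from this example
(the VM ID and sizes are placeholders; *Minimum memory* maps to `balloon`):

 qm set 102 -memory 8192 -balloon 4096 -shares 3000
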
All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.


[[qm_network_device]]
Network Device
~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-vm-network.png"]

Each VM can have many _Network interface controllers_ (NICs), of four different
types:

 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
 * the *Realtek 8139* emulates an older 100 MBit/s network card, and should
only be used when emulating older operating systems (released before 2002).
 * the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

The NIC you added to the VM can follow one of two different models:

 * in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
 * in the alternative *NAT mode*, each virtual NIC will only communicate with
the Qemu user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP will serve addresses in the private
10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
should only be used for testing. This mode is only available via CLI or the API,
but not via the WebUI.

You can also skip adding a network device when creating a VM by selecting *No
network device*.

.Multiqueue
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set in
the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
command:

`ethtool -L ens1 combined X`

where X is the number of vCPUs of the VM.

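The Multiqueue value itself is part of the NIC definition; a sketch for a
hypothetical VM 101 with 4 vCPUs (note that redefining `net0` without an
explicit MAC address will generate a new random one):

 qm set 101 -net0 virtio,bridge=vmbr0,queues=4
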
af9c6de1 592You should note that setting the Multiqueue parameter to a value greater
1ff7835b
EK
593than one will increase the CPU load on the host and guest systems as the
594traffic increases. We recommend to set this option only when the VM has to
595process a great number of incoming connections, such as when the VM is running
596as a router, reverse proxy or a busy HTTP server doing long polling.
597
80c0adcb 598
dbb44ef0 599[[qm_usb_passthrough]]
685cc8e0
DC
600USB Passthrough
601~~~~~~~~~~~~~~~
80c0adcb 602
685cc8e0
DC
603There are two different types of USB passthrough devices:
604
470d4313 605* Host USB passthrough
685cc8e0
DC
606* SPICE USB passthrough
607
608Host USB passthrough works by giving a VM a USB device of the host.
609This can either be done via the vendor- and product-id, or
610via the host bus and port.
611
612The vendor/product-id looks like this: *0123:abcd*,
613where *0123* is the id of the vendor, and *abcd* is the id
614of the product, meaning two pieces of the same usb device
615have the same id.
616
617The bus/port looks like this: *1-2.3.4*, where *1* is the bus
618and *2.3.4* is the port path. This represents the physical
619ports of your host (depending of the internal order of the
620usb controllers).
621
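As a sketch, both addressing variants for a hypothetical VM 101 look like this:

 qm set 101 -usb0 host=0123:abcd
 qm set 101 -usb1 host=1-2.3.4
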
If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. This is useful
if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client is,
directly to the VM (for example an input device or hardware dongle).


[[qm_bios_and_uefi]]
BIOS and UEFI
~~~~~~~~~~~~~

In order to properly emulate a computer, QEMU needs to use a firmware.
By default QEMU uses *SeaBIOS* for this, which is an open-source, x86 BIOS
implementation. SeaBIOS is a good choice for most standard setups.

There are, however, some scenarios in which a BIOS is not a good firmware
to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]

If you want to use OVMF, there are several things to consider:

In order to save things like the *boot order*, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be one.

You can create such a disk with the following command:

 qm set <vmid> -efidisk0 <storage>:1,format=<format>

Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
hardware section of a VM.

When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
with a press of the ESC button during boot), or you have to choose
SPICE as the display type.

[[qm_startup_and_shutdown]]
Automatic Start and Shutdown of Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After creating your VMs, you probably want them to start automatically
when the host system boots. For this you need to select the option 'Start at
boot' from the 'Options' Tab of your VM in the web interface, or set it with
the following command:

 qm set <vmid> -onboot 1

.Start and Shutdown Order

[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following parameters
(see the configuration sketch after this list):

* *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
you want the VM to be the first to be started. (We use the reverse startup
order for shutdown, so a machine with a start order of 1 would be the last to
be shut down). If multiple VMs have the same order defined on a host, they will
additionally be ordered by 'VMID' in ascending order.
* *Startup delay*: Defines the interval between this VM start and subsequent
VM starts. E.g. set it to 240 if you want to wait 240 seconds before starting
other VMs.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the VM to be offline after issuing a shutdown command.
By default this value is set to 180, which means that {pve} will issue a
shutdown request and wait 180 seconds for the machine to be offline. If
the machine is still online after the timeout it will be stopped forcefully.

NOTE: VMs managed by the HA stack do not currently follow the 'start on boot' and
'boot order' options. Those VMs will be skipped by the startup and
shutdown algorithm as the HA manager itself ensures that VMs get started and
stopped.

Please note that machines without a Start/Shutdown order parameter will always
start after those where the parameter is set. Further, this parameter can only
be enforced between virtual machines running on the same host, not
cluster-wide.


[[qm_migration]]
Migration
---------

[thumbnail="screenshot/gui-qemu-migrate.png"]

If you have a cluster, you can migrate your VM to another host with

 qm migrate <vmid> <target>

There are generally two mechanisms for this:

* Online Migration (aka Live Migration)
* Offline Migration

Online Migration
~~~~~~~~~~~~~~~~

When your VM is running and it has no local resources defined (such as disks
on local storage, passed through devices, etc.) you can initiate a live
migration with the -online flag.

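For example, assuming a VM 100 and a cluster node named `node2`:

 qm migrate 100 node2 -online
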
How it works
^^^^^^^^^^^^

This starts a Qemu process on the target host with the 'incoming' flag, which
means that the process starts and waits for the memory data and device states
from the source Virtual Machine (since all other resources, e.g. disks,
are shared, the memory content and device state are the only things left
to transmit).

Once this connection is established, the source begins to send the memory
content asynchronously to the target. If the memory on the source changes,
those sections are marked dirty and there will be another pass of sending data.
This happens until the amount of data to send is so small that it can
pause the VM on the source, send the remaining data to the target and start
the VM on the target in under a second.

Requirements
^^^^^^^^^^^^

For Live Migration to work, there are some things required:

* The VM has no local resources (e.g. passed through devices, local disks, etc.)
* The hosts are in the same {pve} cluster.
* The hosts have a working (and reliable) network connection.
* The target host must have the same or higher versions of the
 {pve} packages. (It *might* work the other way, but this is never guaranteed)

Offline Migration
~~~~~~~~~~~~~~~~~

If you have local resources, you can still offline migrate your VMs,
as long as all disks are on storages which are defined on both hosts.
Then the migration will copy the disks over the network to the target host.

[[qm_copy_and_clone]]
Copies and Clones
-----------------

[thumbnail="screenshot/gui-qemu-full-clone.png"]

VM installation is usually done using an installation media (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time consuming task one might want to avoid.

An easy way to deploy many VMs of the same type is to copy an existing
VM. We use the term 'clone' for such copies, and distinguish between
'linked' and 'full' clones.

Full Clone::

The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
+

It is possible to select a *Target Storage*, so one can use this to
migrate a VM to a totally different storage. You can also change the
disk image *Format* if the storage driver supports several formats.
+

NOTE: A full clone needs to read and copy all VM image data. This is
usually much slower than creating a linked clone.
+

Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.


Linked Clone::

Modern storage drivers support a way to generate fast linked
clones. Such a clone is a writable copy whose initial contents are the
same as the original data. Creating a linked clone is nearly
instantaneous, and initially consumes no additional space.
+

They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
location. This technique is called 'Copy-on-write'.
+

This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
+

NOTE: You cannot delete the original template while linked clones
exist.
+

It is not possible to change the *Target storage* for linked clones,
because this is a storage internal feature.


The *Target node* option allows you to create the new VM on a
different node. The only restriction is that the VM is on shared
storage, and that storage is also available on the target node.

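On the command line, cloning might look like the following sketch (the VM IDs
and name are placeholders; when cloning from a template, omit `-full 1` to get
a linked clone):

 qm clone 999 130 -name cloned-vm -full 1
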
To avoid resource conflicts, all network interface MAC addresses get
randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
setting.


[[qm_templates]]
Virtual Machine Templates
-------------------------

One can convert a VM into a Template. Such templates are read-only,
and you can use them to create linked clones.

NOTE: It is not possible to start templates, because this would modify
the disk images. If you want to change the template, create a linked
clone and modify that.

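The conversion itself can be done in the GUI or on the command line (the VM ID
is a placeholder):

 qm template 999
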
VM Generation ID
----------------

{pve} supports Virtual Machine Generation ID ('vmgenid') footnote:[Official
'vmgenid' Specification
https://docs.microsoft.com/en-us/windows/desktop/hyperv_v2/virtual-machine-generation-identifier]
for virtual machines.
This can be used by the guest operating system to detect any event resulting
in a time shift, for example, restoring a backup or a snapshot rollback.

When creating new VMs, a 'vmgenid' will be automatically generated and saved
in its configuration file.

To create and add a 'vmgenid' to an already existing VM one can pass the
special value `1' to let {pve} autogenerate one, or manually set the 'UUID'
footnote:[Online GUID generator http://guid.one/] by using it as value,
e.g.:

----
 qm set VMID -vmgenid 1
 qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
----

NOTE: The initial addition of a 'vmgenid' device to an existing VM may result
in the same effects as a snapshot rollback or backup restore has,
as the VM can interpret this as a generation change.

In the rare case the 'vmgenid' mechanism is not wanted, one can pass `0' for
its value on VM creation, or retroactively delete the property in the
configuration with:

----
 qm set VMID -delete vmgenid
----

The most prominent use case for 'vmgenid' are newer Microsoft Windows
operating systems, which use it to avoid problems in time sensitive or
replicated services (e.g., databases, domain controller
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole VM clone operation.

Importing Virtual Machines and disk images
------------------------------------------

A VM export from a foreign hypervisor usually takes the form of one or more disk
 images, with a configuration file describing the settings of the VM (RAM,
 number of cores). +
The disk images can be in the vmdk format, if the disks come from
VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
The most popular configuration format for VM exports is the OVF standard, but in
practice interoperation is limited because many settings are not implemented in
the standard itself, and hypervisors export the supplementary information
in non-standard extensions.

Besides the problem of format, importing disk images from other hypervisors
may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
installing the MergeIDE.zip utility available from the Internet before exporting
and choosing a hard disk type of *IDE* before booting the imported Windows VM.

Finally there is the question of paravirtualized drivers, which improve the
speed of the emulated system and are specific to the hypervisor.
GNU/Linux and other free Unix OSes have all the necessary drivers installed by
default and you can switch to the paravirtualized drivers right after importing
the VM. For Windows VMs, you need to install the Windows paravirtualized
drivers by yourself.

GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
that we cannot guarantee a successful import/export of Windows VMs in all
cases due to the problems above.

Step-by-step example of a Windows OVF import
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
 to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.

Download the Virtual Machine zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After informing yourself about the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.

Extract the disk image from the zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy via ssh/scp the ovf and vmdk files to your {pve} host.

Import the Virtual Machine
^^^^^^^^^^^^^^^^^^^^^^^^^^

This will create a new virtual machine, using cores, memory and
VM name as read from the OVF manifest, and import the disks to the +local-lvm+
 storage. You have to configure the network manually.

 qm importovf 999 WinDev1709Eval.ovf local-lvm

The VM is ready to be started.

Adding an external disk image to a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.

Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:

 vmdebootstrap --verbose \
  --size 10GiB --serial-console \
  --grub --no-extlinux \
  --package openssh-server \
  --package avahi-daemon \
  --package qemu-guest-agent \
  --hostname vm600 --enable-dhcp \
  --customize=./copy_pub_ssh.sh \
  --sparse --image vm600.raw

You can now create a new target VM for this image.

 qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
   --bootdisk scsi0 --scsihw virtio-scsi-pci --ostype l26

Add the disk image as +unused0+ to the VM, using the storage +pvedir+:

 qm importdisk 600 vm600.raw pvedir

Finally attach the unused disk to the SCSI controller of the VM:

 qm set 600 --scsi0 pvedir:600/vm-600-disk-1.raw

The VM is ready to be started.


ifndef::wiki[]
include::qm-cloud-init.adoc[]
endif::wiki[]



Managing Virtual Machines with `qm`
------------------------------------

qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Using an ISO file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage

 qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso

Start the new VM

 qm start 300

Send a shutdown request, then wait until the VM is stopped.

 qm shutdown 300 && qm wait 300

Same as above, but only wait for 40 seconds.

 qm shutdown 300 && qm wait 300 -timeout 40


[[qm_configuration]]
Configuration
-------------

VM configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
Like other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster wide.

.Example VM Configuration
----
cores: 1
sockets: 1
memory: 512
name: webmail
ostype: l26
bootdisk: virtio0
net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
virtio0: local:vm-100-disk-1,size=32G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful to do small corrections, but keep in mind that you need to
restart the VM to apply such changes.

For that reason, it is usually better to use the `qm` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
a running VM. This feature is called "hot plug", and there is no
need to restart the VM in that case.


File Format
~~~~~~~~~~~

VM configuration files use a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.


[[qm_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `qm` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.VM configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).


[[qm_options]]
Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations, snapshots and backups (`vzdump`) set a lock to
prevent incompatible concurrent actions on the affected VMs. Sometimes
you need to remove such a lock manually (e.g., after a power failure).

 qm unlock <vmid>

CAUTION: Only do that if you are sure the action which set the lock is
no longer running.


ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Cloud-Init_Support[Cloud-Init Support]

endif::wiki[]


ifdef::manvolnum[]

Files
------

`/etc/pve/qemu-server/<VMID>.conf`::

Configuration file for the VM '<VMID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]