[[chapter_virtual_machines]]
ifdef::manvolnum[]
qm(1)
=====
:pve-toplevel:

NAME
----

qm - Qemu/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
:pve-toplevel:
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files, and network cards, which are then passed to an
emulated computer which sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can
pass an ISO image as a parameter to Qemu, and the OS running in the emulated
computer will see a real CD-ROM inserted into a CD drive.

Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also
one of the fastest due to the availability of processor extensions which
greatly speed up Qemu when the emulated architecture is the same as the host
architecture.

NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux kvm module. In the context of {pve}, _Qemu_ and
_KVM_ can be used interchangeably, as Qemu in {pve} will always try to load the
kvm module.

Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.


Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, and serial ports (the complete list can be seen
in the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
Qemu can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.
Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc.

It is highly recommended to use the virtio devices whenever you can, as they
provide a big performance improvement. Using the virtio generic disk controller
versus an emulated IDE controller will double the sequential write throughput,
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
http://www.linux-kvm.org/page/Using_VirtIO_NIC]


[[qm_virtual_machines_settings]]
Virtual Machines Settings
-------------------------

Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as it
could incur a performance slowdown, or put your data at risk.


[[qm_general_settings]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="gui-create-vm-general.png"]

General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs



[[qm_os_settings]]
OS Settings
~~~~~~~~~~~

[thumbnail="gui-create-vm-os.png"]

When creating a VM, setting the proper Operating System (OS) allows {pve} to
optimize some low level parameters. For instance, Windows OSes expect the BIOS
clock to use the local time, while Unix based OSes expect the BIOS clock to
have the UTC time.


[[qm_hard_disk]]
Hard Disk
~~~~~~~~~

Qemu can emulate a number of storage controllers:

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
to this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices to this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default a
LSI 53C895A controller.
+
A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim
for performance, and is automatically selected for newly created Linux VMs
since {pve} 4.3. Linux distributions have support for this controller since
2012, and FreeBSD since 2014. For Windows OSes, you need to provide an extra
ISO containing the drivers during the installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
If you aim at maximum performance, you can select a SCSI controller of type
_VirtIO SCSI single_, which will allow you to select the *IO Thread* option.
When selecting _VirtIO SCSI single_, Qemu will create a new controller for
each disk, instead of adding all disks to the same controller.

* the *VirtIO Block* controller, often just called VirtIO or virtio-blk,
is an older type of paravirtualized controller. It has been superseded by the
VirtIO SCSI controller in terms of features.
[thumbnail="gui-create-vm-hard-disk.png"]
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

* the *QEMU image format* is a copy on write format which allows snapshots, and
thin provisioning of the disk image.
* the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
you would get when executing the `dd` command on a block device in Linux. This
format does not support thin provisioning or snapshots by itself, requiring
cooperation from the storage layer for these tasks. It may, however, be up to
10% faster than the *QEMU image format*. footnote:[See this benchmark for details
http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
* the *VMware image format* only makes sense if you intend to import/export the
disk image to other hypervisors.

Setting the *Cache* mode of the hard drive will impact how the host system will
notify the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires you to skip replication for this disk image.

If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), and your VM has a *SCSI* controller, you can activate the *Discard*
option on the hard disks connected to that controller. With *Discard* enabled,
when the filesystem of a VM marks blocks as unused after removing files, the
emulated SCSI controller will relay this information to the storage, which will
then shrink the disk image accordingly.

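These per-disk options can also be set from the command line with `qm set`. A minimal sketch, assuming a VM with ID `100` and a disk already attached as `scsi0` on a storage named `local-lvm` (both placeholders for this example):

```shell
# Enable Discard on an existing SCSI disk, and exclude it from
# backups and replication (VM ID, storage and volume name are
# placeholders).
qm set 100 --scsi0 local-lvm:vm-100-disk-1,discard=on,backup=0,replicate=0
```

The same options appear as *Discard*, *No backup* and *Skip replication* checkboxes in the WebUI disk dialog.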
.IO Thread
The option *IO Thread* can only be used when using a disk with the
*VirtIO* controller, or with the *SCSI* controller, when the emulated controller
type is *VirtIO SCSI single*.
With this enabled, Qemu creates one I/O thread per storage controller,
instead of a single thread for all I/O, so it increases performance when
multiple disks are used and each disk has its own storage controller.
Note that backups do not currently work with *IO Thread* enabled.


[[qm_cpu]]
CPU
~~~

[thumbnail="gui-create-vm-cpu.png"]

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each is mostly irrelevant from a performance point of
view. However some software licenses depend on the number of sockets a machine
has; in that case it makes sense to set the number of sockets to what the
license allows you.

Increasing the number of virtual CPUs (cores and sockets) will usually provide
a performance improvement, though that is heavily dependent on the use of the
VM. Multithreaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.

NOTE: It is perfectly safe if the _overall_ number of cores of all your VMs
is greater than the number of cores on the server (e.g., 4 VMs with 4
cores each on a machine with only 8 cores). In that case the host system will
balance the Qemu execution threads between your server cores, just like if you
were running a standard multithreaded application. However, {pve} will prevent
you from assigning more virtual CPU cores than physically available, as this
will only bring the performance down due to the cost of context switches.

[[qm_cpu_resource_limits]]
Resource Limits
^^^^^^^^^^^^^^^

In addition to the number of virtual cores, you can configure how many
resources a VM can get in relation to the host CPU time and also in relation
to other VMs.
With the *cpulimit* (``Host CPU Time'') option you can limit how much CPU time
the whole VM can use on the host. It is a floating point value representing CPU
time in percent, so `1.0` is equal to `100%`, `2.5` to `250%` and so on. If a
single process would fully use one single core it would have `100%` CPU Time
usage. If a VM with four cores utilizes all its cores fully it would
theoretically use `400%`. In reality the usage may be even a bit higher, as
Qemu can have additional threads for VM peripherals besides the vCPU core ones.
This setting can be useful if a VM should have multiple vCPUs, as it runs a few
processes in parallel, but the VM as a whole should not be able to run all
vCPUs at 100% at the same time. Using a specific example: let's say we have a
VM which would profit from having 8 vCPUs, but at no time should all of those
8 cores run at full load, as this would make the server so overloaded that
other VMs and CTs would get too little CPU. So, we set the *cpulimit* to `4.0`
(=400%). If all cores do the same heavy work they would each get 50% of a real
host core's CPU time. But, if only 4 were doing work they could still get
almost 100% of a real core each.

NOTE: VMs can, depending on their configuration, use additional threads, e.g.
for networking or IO operations but also live migration. Thus a VM can show up
to use more CPU time than just its virtual CPUs could use. To ensure that a VM
never uses more CPU time than the number of virtual CPUs assigned, set the
*cpulimit* setting to the same value as the total core count.

The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
shares or CPU weight), controls how much CPU time a VM gets in regards to other
running VMs. It is a relative weight which defaults to `1024`; if you increase
this for a VM it will be prioritized by the scheduler in comparison to other
VMs with lower weight. E.g., if VM 100 has set the default `1024` and VM 200
was changed to `2048`, the latter VM 200 would receive twice the CPU bandwidth
of the first VM 100.

For more information see `man systemd.resource-control`, where `CPUQuota`
corresponds to `cpulimit` and `CPUShares` corresponds to our `cpuunits`
setting; visit its Notes section for references and implementation details.
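Both limits can also be applied from the command line; a sketch, reusing the VM IDs from the examples above as placeholders:

```shell
# Cap the whole VM at four host cores' worth of CPU time,
# regardless of how many vCPUs it has configured.
qm set 100 --cpulimit 4

# Give VM 200 twice the scheduling weight of a default (1024) VM.
qm set 200 --cpuunits 2048
```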

CPU Type
^^^^^^^^

Qemu can emulate a number of different *CPU types* from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, etc.
Usually you should select for your VM a processor type which closely matches
the CPU of the host system, as it means that the host CPU features (also called
_CPU flags_) will be available in your VMs. If you want an exact match, you can
set the CPU type to *host*, in which case the VM will have exactly the same CPU
flags as your host system.

This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU
type. If the CPU flags passed to the guest are missing, the qemu process will
stop. To remedy this Qemu has also its own CPU type *kvm64*, that {pve} uses by
default. kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flag
set, but is guaranteed to work everywhere.

In short, if you care about live migration and moving VMs between nodes, leave
the kvm64 default. If you don't care about live migration or have a homogeneous
cluster where all nodes have the same CPU, set the CPU type to host, as in
theory this will give your guests maximum performance.

Meltdown / Spectre related CPU flags
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are two CPU flags related to the Meltdown and Spectre vulnerabilities
footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
manually unless the selected CPU type of your VM already enables them by default.

The first, called 'pcid', helps to reduce the performance impact of the Meltdown
mitigation called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
the kernel memory from the user space. Without PCID, KPTI is quite an expensive
mechanism footnote:[PCID is now a critical performance/security feature on x86
https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].

The second CPU flag is called 'spec-ctrl', which allows an operating system to
selectively disable or restrict speculative execution in order to limit the
ability of attackers to exploit the Spectre vulnerability.

There are two requirements that need to be fulfilled in order to use these two
CPU flags:

* The host CPU(s) must support the feature and propagate it to the guest's virtual CPU(s)
* The guest operating system must be updated to a version which mitigates the
attacks and is able to utilize the CPU feature

In order to use 'spec-ctrl', your CPU or system vendor also needs to provide a
so-called ``microcode update'' footnote:[You can use `intel-microcode' /
`amd-microcode' from Debian non-free if your vendor does not provide such an
update. Note that not all affected CPUs can be updated to support spec-ctrl.]
for your CPU.

To check if the {pve} host supports PCID, execute the following command as root:

----
# grep ' pcid ' /proc/cpuinfo
----

If this command produces output, your host's CPU has support for 'pcid'.

To check if the {pve} host supports spec-ctrl, execute the following command as root:

----
# grep ' spec_ctrl ' /proc/cpuinfo
----

If this command produces output, your host's CPU has support for 'spec-ctrl'.

If you use `host' or another CPU type which enables the desired flags by
default, and you updated your guest OS to make use of the associated CPU
features, you're already set.

Otherwise you need to set the desired CPU flag of the virtual CPU, either by
editing the CPU options in the WebUI, or by setting the 'flags' property of the
'cpu' option in the VM configuration file.

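On the command line this could look as follows (a sketch; VM ID `100` is a placeholder, and the semicolon-separated flag list has to be quoted for the shell):

```shell
# Enable both mitigation-related flags on top of the default kvm64 type.
qm set 100 --cpu 'cputype=kvm64,flags=+pcid;+spec-ctrl'
```

The resulting line in the VM configuration file would then read `cpu: kvm64,flags=+pcid;+spec-ctrl`.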
NUMA
^^^^
You can also optionally emulate a *NUMA*
footnote:[https://en.wikipedia.org/wiki/Non-uniform_memory_access] architecture
in your VMs. The basics of the NUMA architecture mean that instead of having a
global memory pool available to all your cores, the memory is spread into local
banks close to each socket.
This can bring speed improvements as the memory bus is not a bottleneck
anymore. If your system has a NUMA architecture footnote:[if the command
`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
will allow proper distribution of the VM resources on the host system.
This option is also required to hot-plug cores or RAM in a VM.

If the NUMA option is used, it is recommended to set the number of sockets to
the number of sockets of the host system.

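A sketch of checking the host and enabling the option for a VM on a two-socket host (the VM ID and core counts are placeholders):

```shell
# More than one node in the output means the host is a NUMA system.
numactl --hardware | grep available

# Enable NUMA emulation and match the host's two sockets.
qm set 100 --numa 1 --sockets 2 --cores 4
```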
vCPU hot-plug
^^^^^^^^^^^^^

Modern operating systems introduced the capability to hot-plug and, to a
certain extent, hot-unplug CPUs in a running system. Virtualization allows us
to avoid a lot of the (physical) problems real hardware can cause in such
scenarios.
Still, this is a rather new and complicated feature, so its use should be
restricted to cases where it's absolutely needed. Most of the functionality can
be replicated with other, well tested and less complicated, features, see
xref:qm_cpu_resource_limits[Resource Limits].

In {pve} the maximal number of plugged CPUs is always `cores * sockets`.
To start a VM with less than this total core count of CPUs you may use the
*vcpus* setting; it denotes how many vCPUs should be plugged in at VM start.

Currently this feature is only supported on Linux; a kernel newer than 3.10
is needed, and a kernel newer than 4.7 is recommended.

You can use a udev rule as follows to automatically set new CPUs as online in
the guest:

----
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
----

Save this under /etc/udev/rules.d/ as a file ending in `.rules`.

NOTE: CPU hot-remove is machine dependent and requires guest cooperation.
The deletion command does not guarantee CPU removal to actually happen;
typically it's a request forwarded to the guest using a target dependent
mechanism, e.g., ACPI on x86/amd64.

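For instance, to prepare a VM that can grow to 4 vCPUs but boots with only 2 plugged in (a sketch; VM ID `100` is a placeholder):

```shell
# Maximum is cores * sockets = 4; start with only 2 vCPUs hot-plugged.
qm set 100 --sockets 1 --cores 4 --vcpus 2
```

Further vCPUs up to the maximum can then be plugged in later while the VM is running.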

[[qm_memory]]
Memory
~~~~~~

For each VM you have the option to set a fixed size memory or ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.

.Fixed Memory Allocation
[thumbnail="gui-create-vm-memory.png"]

When setting memory and minimum memory to the same amount,
{pve} will simply allocate what you specify to your VM.

Even when using a fixed memory size, the ballooning device gets added to the
VM, because it delivers useful information such as how much memory the guest
really uses.
In general, you should leave *ballooning* enabled, but if you want to disable
it (e.g. for debugging purposes), simply uncheck *Ballooning Device* or set

 balloon: 0

in the configuration.

.Automatic Memory Allocation

// see autoballoon() in pvestatd.pm
When setting the minimum memory lower than memory, {pve} will make sure that the
minimum amount you specified is always available to the VM, and if RAM usage on
the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified.

When the host is running short on RAM, the VM will then release some memory
back to the host, swapping running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
done via a special `balloon` kernel driver running inside the guest, which will
grab or release memory pages from the host.
footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]

When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one being a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs at the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB of RAM to be allocated to the VMs. The database VM
will get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8GB of extra RAM, and
each HTTP server will get 1.6GB.

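The arithmetic above can be checked quickly with `awk` (the host sizes and share values are just the example's):

```shell
# Spare RAM the autoballooner distributes: 80% of 32GB minus the 16GB in use.
awk 'BEGIN { printf "spare: %.1fGB\n", 32 * 80/100 - 16 }'
# -> spare: 9.6GB

# Database VM: shares=3000 of a 6000 total, so 3000/6000 of the spare RAM.
awk 'BEGIN { printf "db extra: %.1fGB\n", (32 * 80/100 - 16) * 3000/6000 }'
# -> db extra: 4.8GB

# Each HTTP server VM: shares=1000 of 6000.
awk 'BEGIN { printf "http extra: %.1fGB\n", (32 * 80/100 - 16) * 1000/6000 }'
# -> http extra: 1.6GB
```

The *Shares* value itself would be set via the GUI or, on the command line, with something like `qm set 104 --shares 3000` (the VM ID here is hypothetical).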
All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and
can incur a slowdown of the guest, so we don't recommend using it on critical
systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/

When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.


[[qm_network_device]]
Network Device
~~~~~~~~~~~~~~

[thumbnail="gui-create-vm-network.png"]

Each VM can have many _Network interface controllers_ (NIC), of four different
types:

* *Intel E1000* is the default, and emulates an Intel Gigabit network card.
* the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
* the *Realtek 8139* emulates an older 100 MB/s network card, and should
only be used when emulating older operating systems (released before 2002).
* the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.

{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.

The NIC you added to the VM can follow one of two different models:

* in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
* in the alternative *NAT mode*, each virtual NIC will only communicate with
the Qemu user networking stack, where a built-in router and DHCP server can
provide network access. This built-in DHCP will serve addresses in the private
10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
should only be used for testing. This mode is only available via CLI or the
API, but not via the WebUI.

You can also skip adding a network device when creating a VM by selecting *No
network device*.

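A sketch of adding a bridged VirtIO NIC from the command line (the VM ID is a placeholder; `vmbr0` is the default bridge name mentioned above):

```shell
# Attach a paravirtualized NIC to the default bridge vmbr0.
qm set 100 --net0 virtio,bridge=vmbr0
```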
.Multiqueue
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total
number of packets transferred.

//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to
the host kernel, where the queue will be processed by a kernel thread spawned
by the vhost driver. With this option activated, it is possible to pass
_multiple_ network queues to the host kernel for each NIC.

//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
When using Multiqueue, it is recommended to set it to a value equal
to the total number of cores of your guest. You also need to set the number
of multi-purpose channels on each VirtIO NIC in the VM with the ethtool
command:

`ethtool -L ens1 combined X`

where X is the number of vCPUs of the VM.

Note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
process a great number of incoming connections, such as when the VM is
running as a router, reverse proxy, or a busy HTTP server doing long polling.

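Putting the host and guest sides together, a hypothetical setup for a
4-vCPU VM might look like the following sketch. The VM ID (101), bridge
(vmbr0) and guest interface name (ens1) are placeholders, and the `qm` and
`ethtool` invocations are shown as comments since they require a {pve} host
and a running guest:

```shell
# Hypothetical multiqueue setup; VM ID 101, bridge vmbr0 and the guest
# interface name ens1 are placeholders, not values from this document.
VCPUS=4

# On the host, give the VirtIO NIC one queue per vCPU:
#   qm set 101 -net0 virtio,bridge=vmbr0,queues=${VCPUS}

# Inside the guest, set the matching number of combined channels:
#   ethtool -L ens1 combined ${VCPUS}

echo "queues=${VCPUS}"
```
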

[[qm_usb_passthrough]]
USB Passthrough
~~~~~~~~~~~~~~~

There are two different types of USB passthrough devices:

* Host USB passthrough
* SPICE USB passthrough

Host USB passthrough works by giving a VM a USB device of the host.
This can either be done via the vendor- and product-id, or
via the host bus and port.

The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product. This means that two units of the same USB device model
have the same id.

The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
USB controllers).

If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.

WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.

The second type of passthrough is SPICE USB passthrough. This is useful
if you use a SPICE client which supports it. If you add a SPICE USB port
to your VM, you can pass through a USB device from where your SPICE client
is running directly to the VM (for example an input device or hardware
dongle).

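As a sketch, attaching each kind of device with `qm set` might look like
this. The VM ID (804) and the device/port IDs are placeholders, and the
`qm` invocations are shown as comments since they require a {pve} host:

```shell
# Hypothetical USB passthrough examples for VM 804 (placeholder ID).
VMID=804

# By vendor/product id (matches any unit of that device model):
#   qm set ${VMID} -usb0 host=0123:abcd

# By bus/port path (matches whatever is plugged into that physical port):
#   qm set ${VMID} -usb1 host=1-2.3.4

# A SPICE USB port, passed through from the SPICE client:
#   qm set ${VMID} -usb2 spice

echo "vmid=${VMID}"
```
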

[[qm_bios_and_uefi]]
BIOS and UEFI
~~~~~~~~~~~~~

In order to properly emulate a computer, QEMU needs to use a firmware.
By default QEMU uses *SeaBIOS* for this, which is an open-source, x86 BIOS
implementation. SeaBIOS is a good choice for most standard setups.

There are, however, some scenarios in which a BIOS is not a good firmware
to boot from, for example if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]

If you want to use OVMF, there are several things to consider:

In order to save things like the *boot order*, there needs to be an EFI Disk.
This disk will be included in backups and snapshots, and there can only be
one.

You can create such a disk with the following command:

 qm set <vmid> -efidisk0 <storage>:1,format=<format>

Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
hardware section of a VM.

When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
by pressing the ESC button during boot), or you have to choose
SPICE as the display type.

[[qm_startup_and_shutdown]]
Automatic Start and Shutdown of Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After creating your VMs, you probably want them to start automatically
when the host system boots. For this you need to select the option 'Start at
boot' from the 'Options' Tab of your VM in the web interface, or set it with
the following command:

 qm set <vmid> -onboot 1

.Start and Shutdown Order

[thumbnail="gui-qemu-edit-start-order.png"]

In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set
it to 1 if you want the VM to be the first to be started. (We use the reverse
startup order for shutdown, so a machine with a start order of 1 would be the
last to be shut down.) If multiple VMs have the same order defined on a host,
they will additionally be ordered by 'VMID' in ascending order.
* *Startup delay*: Defines the interval between this VM start and subsequent
VM starts. For example, set it to 240 if you want to wait 240 seconds before
starting other VMs.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the VM to be offline after issuing a shutdown command.
By default this value is set to 180, which means that {pve} will issue a
shutdown request and wait 180 seconds for the machine to be offline. If
the machine is still online after the timeout it will be stopped forcefully.

NOTE: VMs managed by the HA stack do not currently follow the 'start on boot'
and 'boot order' options. Those VMs will be skipped by the startup and
shutdown algorithm, as the HA manager itself ensures that VMs get started and
stopped.

Please note that machines without a Start/Shutdown order parameter will
always start after those where the parameter is set. Further, this parameter
can only be enforced between virtual machines running on the same host, not
cluster-wide.
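All three parameters can be combined in a single `startup` property. A
hypothetical sketch for a firewall VM that should start first, with a 30
second delay before the next VM and a 60 second shutdown timeout (the VM ID
101 is a placeholder; the `qm` line is a comment since it requires a {pve}
host):

```shell
# Hypothetical: VM 101 (placeholder) starts first, the next VM waits 30
# seconds before starting, and shutdown gets a 60 second timeout:
STARTUP="order=1,up=30,down=60"
#   qm set 101 -startup ${STARTUP}
echo "startup=${STARTUP}"
```
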


[[qm_migration]]
Migration
---------

[thumbnail="gui-qemu-migrate.png"]

If you have a cluster, you can migrate your VM to another host with

 qm migrate <vmid> <target>

There are generally two mechanisms for this:

* Online Migration (aka Live Migration)
* Offline Migration

Online Migration
~~~~~~~~~~~~~~~~

When your VM is running and it has no local resources defined (such as disks
on local storage, passed through devices, etc.) you can initiate a live
migration with the -online flag.

How it works
^^^^^^^^^^^^

This starts a Qemu process on the target host with the 'incoming' flag, which
means that the process starts and waits for the memory data and device states
from the source virtual machine (since all other resources, e.g. disks,
are shared, the memory content and device state are the only things left
to transmit).

Once this connection is established, the source begins to send the memory
content asynchronously to the target. If the memory on the source changes,
those sections are marked dirty and there will be another pass of sending
data. This repeats until the amount of data remaining is so small that the
migration can pause the VM on the source, send the remaining data to the
target, and start the VM on the target in under a second.

Requirements
^^^^^^^^^^^^

For Live Migration to work, there are some things required:

* The VM has no local resources (e.g. passed through devices, local disks,
etc.)
* The hosts are in the same {pve} cluster.
* The hosts have a working (and reliable) network connection.
* The target host must have the same or higher versions of the
{pve} packages. (It *might* work the other way, but this is never
guaranteed.)

Offline Migration
~~~~~~~~~~~~~~~~~

If you have local resources, you can still offline migrate your VMs,
as long as all disks are on storages which are defined on both hosts.
The migration will then copy the disks over the network to the target host.

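A hypothetical invocation for each variant (the VM ID 100 and the node name
node2 are placeholders; the `qm` lines are comments since they require a
{pve} cluster):

```shell
# Hypothetical: migrate VM 100 (placeholder) to node "node2" (placeholder).
VMID=100
TARGET=node2

# Live migration of a running VM without local resources:
#   qm migrate ${VMID} ${TARGET} --online

# Offline migration (the VM is stopped, or local disks are copied over):
#   qm migrate ${VMID} ${TARGET}

echo "migrate ${VMID} -> ${TARGET}"
```
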

[[qm_copy_and_clone]]
Copies and Clones
-----------------

[thumbnail="gui-qemu-full-clone.png"]

VM installation is usually done using an installation media (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time consuming task one might want to avoid.

An easy way to deploy many VMs of the same type is to copy an existing
VM. We use the term 'clone' for such copies, and distinguish between
'linked' and 'full' clones.

Full Clone::

The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
+

It is possible to select a *Target Storage*, so one can use this to
migrate a VM to a totally different storage. You can also change the
disk image *Format* if the storage driver supports several formats.
+

NOTE: A full clone needs to read and copy all VM image data. This is
usually much slower than creating a linked clone.
+

Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.


Linked Clone::

Modern storage drivers support a way to generate fast linked
clones. Such a clone is a writable copy whose initial contents are the
same as the original data. Creating a linked clone is nearly
instantaneous, and initially consumes no additional space.
+

They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
location. This technique is called 'Copy-on-write'.
+

This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
+

NOTE: You cannot delete the original template while linked clones
exist.
+

It is not possible to change the *Target storage* for linked clones,
because this is a storage internal feature.


The *Target node* option allows you to create the new VM on a
different node. The only restriction is that the VM is on shared
storage, and that storage is also available on the target node.

To avoid resource conflicts, all network interface MAC addresses get
randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
setting.


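On the command line, cloning is done with `qm clone`. A hypothetical full
clone of VM 100 into a new VM 200 on another storage might look like this
(the IDs, name, and storage are placeholders; the `qm` line is a comment
since it requires a {pve} host):

```shell
# Hypothetical: full clone of VM 100 into new VM 200 (placeholder IDs),
# placing the copy on the "local-lvm" storage and giving it a new name:
SRC=100
DST=200
#   qm clone ${SRC} ${DST} --full --name webmail-copy --storage local-lvm

# Without --full, a linked clone is created (only possible from a template).
echo "clone ${SRC} -> ${DST}"
```
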
[[qm_templates]]
Virtual Machine Templates
-------------------------

One can convert a VM into a Template. Such templates are read-only,
and you can use them to create linked clones.

NOTE: It is not possible to start templates, because this would modify
the disk images. If you want to change the template, create a linked
clone and modify that.

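Conversion can be done in the GUI or, as a sketch, on the command line (the
VM IDs and name are placeholders; the `qm` lines are comments since they
require a {pve} host):

```shell
# Hypothetical: convert VM 100 (placeholder) into a read-only template,
# then create a linked clone from it as VM 201:
VMID=100
#   qm template ${VMID}
#   qm clone ${VMID} 201 --name webdev
echo "template=${VMID}"
```
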
Importing Virtual Machines and disk images
------------------------------------------

A VM export from a foreign hypervisor usually takes the form of one or more
disk images, with a configuration file describing the settings of the VM
(RAM, number of cores). +
The disk images can be in the vmdk format, if the disks come from
VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
The most popular configuration format for VM exports is the OVF standard,
but in practice interoperation is limited because many settings are not
implemented in the standard itself, and hypervisors export the supplementary
information in non-standard extensions.

Besides the problem of format, importing disk images from other hypervisors
may fail if the emulated hardware changes too much from one hypervisor to
another. Windows VMs are particularly affected by this, as the OS is very
picky about any changes of hardware. This problem may be solved by
installing the MergeIDE.zip utility available from the Internet before
exporting, and choosing a hard disk type of *IDE* before booting the
imported Windows VM.

Finally there is the question of paravirtualized drivers, which improve the
speed of the emulated system and are specific to the hypervisor.
GNU/Linux and other free Unix OSes have all the necessary drivers installed
by default, and you can switch to the paravirtualized drivers right after
importing the VM. For Windows VMs, you need to install the Windows
paravirtualized drivers yourself.

GNU/Linux and other free Unix OSes can usually be imported without hassle.
Note that we cannot guarantee a successful import/export of Windows VMs in
all cases due to the problems above.

Step-by-step example of a Windows OVF import
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.

Download the Virtual Machine zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After reading the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the
zip.

Extract the disk image from the zip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy the ovf and vmdk files to your {pve} host via ssh/scp.

Import the Virtual Machine
^^^^^^^^^^^^^^^^^^^^^^^^^^

This will create a new virtual machine, using cores, memory and
VM name as read from the OVF manifest, and import the disks to the
+local-lvm+ storage. You have to configure the network manually.

 qm importovf 999 WinDev1709Eval.ovf local-lvm

The VM is ready to be started.

Adding an external disk image to a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.

Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:

 vmdebootstrap --verbose \
  --size 10GiB --serial-console \
  --grub --no-extlinux \
  --package openssh-server \
  --package avahi-daemon \
  --package qemu-guest-agent \
  --hostname vm600 --enable-dhcp \
  --customize=./copy_pub_ssh.sh \
  --sparse --image vm600.raw

You can now create a new target VM for this image.

 qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
   --bootdisk scsi0 --scsihw virtio-scsi-pci --ostype l26

Add the disk image as +unused0+ to the VM, using the storage +pvedir+:

 qm importdisk 600 vm600.raw pvedir

Finally attach the unused disk to the SCSI controller of the VM:

 qm set 600 --scsi0 pvedir:600/vm-600-disk-1.raw

The VM is ready to be started.


ifndef::wiki[]
include::qm-cloud-init.adoc[]
endif::wiki[]


Managing Virtual Machines with `qm`
-----------------------------------

qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Using an iso file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage

 qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso

Start the new VM

 qm start 300

Send a shutdown request, then wait until the VM is stopped.

 qm shutdown 300 && qm wait 300

Same as above, but only wait for 40 seconds.

 qm shutdown 300 && qm wait 300 -timeout 40


[[qm_configuration]]
Configuration
-------------

VM configuration files are stored inside the Proxmox cluster file
system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
Like other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
unique cluster wide.

.Example VM Configuration
----
cores: 1
sockets: 1
memory: 512
name: webmail
ostype: l26
bootdisk: virtio0
net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
virtio0: local:vm-100-disk-1,size=32G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful to do small corrections, but keep in mind that you need to
restart the VM to apply such changes.

For that reason, it is usually better to use the `qm` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to a
running VM. This feature is called "hot plug", and there is no
need to restart the VM in that case.


File Format
~~~~~~~~~~~

VM configuration files use a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.

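The format is simple enough to inspect with standard tools. A minimal
sketch that extracts one option from such a file, honoring the blank-line
and comment rules (the file content below is a stand-in, not a real VM
config; the real path would be `/etc/pve/qemu-server/<VMID>.conf`):

```shell
# Create a stand-in config file with a blank line and a comment:
CONF=$(mktemp)
printf 'memory: 512\n\n# this is a comment\ncores: 1\n' > "$CONF"

# Extract the value of the "memory" option, skipping blanks and comments:
VAL=$(awk -F': ' '/^[^#]/ && $1 == "memory" { print $2 }' "$CONF")
echo "$VAL"

rm -f "$CONF"
```
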

[[qm_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `qm` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.VM configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).

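Snapshots are normally managed through `qm` rather than by editing the
configuration file. Hypothetical invocations (the VM ID 100 and snapshot
name are placeholders; the `qm` lines are comments since they require a
{pve} host):

```shell
# Hypothetical snapshot lifecycle for VM 100 (placeholder ID).
SNAP=testsnapshot

# Create the snapshot:
#   qm snapshot 100 ${SNAP}

# Roll the VM back to it later:
#   qm rollback 100 ${SNAP}

# Delete it when no longer needed:
#   qm delsnapshot 100 ${SNAP}

echo "snapshot=${SNAP}"
```
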

[[qm_options]]
Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations, snapshots and backups (`vzdump`) set a lock to
prevent incompatible concurrent actions on the affected VMs. Sometimes
you need to remove such a lock manually (e.g., after a power failure).

 qm unlock <vmid>

CAUTION: Only do that if you are sure the action which set the lock is
no longer running.


ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Cloud-Init_Support[Cloud-Init Support]

endif::wiki[]


ifdef::manvolnum[]

Files
-----

`/etc/pve/qemu-server/<VMID>.conf`::

Configuration file for the VM '<VMID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]