1[[chapter_virtual_machines]]
2ifdef::manvolnum[]
3qm(1)
4=====
5:pve-toplevel:
6
7NAME
8----
9
10qm - Qemu/KVM Virtual Machine Manager
11
12
13SYNOPSIS
14--------
15
16include::qm.1-synopsis.adoc[]
17
18DESCRIPTION
19-----------
20endif::manvolnum[]
21ifndef::manvolnum[]
22Qemu/KVM Virtual Machines
23=========================
24:pve-toplevel:
25endif::manvolnum[]
26
27// deprecates
28// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
29// http://pve.proxmox.com/wiki/KVM
30// http://pve.proxmox.com/wiki/Qemu_Server
31
32Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
33physical computer. From the perspective of the host system where Qemu is
34running, Qemu is a user program which has access to a number of local resources
35like partitions, files, network cards which are then passed to an
36emulated computer which sees them as if they were real devices.
37
A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance, you can
pass an ISO image as a parameter to Qemu, and the OS running in the emulated
computer will see a real CD-ROM inserted into a CD drive.
42
Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
45overwhelming majority of server hardware. The emulation of PC clones is also one
46of the fastest due to the availability of processor extensions which greatly
47speed up Qemu when the emulated architecture is the same as the host
48architecture.
49
50NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
51It means that Qemu is running with the support of the virtualization processor
52extensions, via the Linux kvm module. In the context of {pve} _Qemu_ and
53_KVM_ can be used interchangeably as Qemu in {pve} will always try to load the kvm
54module.
55
56Qemu inside {pve} runs as a root process, since this is required to access block
57and PCI devices.
58
59
60Emulated devices and paravirtualized devices
61--------------------------------------------
62
The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.
70
71This however has a performance cost, as running in software what was meant to
72run in hardware involves a lot of extra work for the host CPU. To mitigate this,
73Qemu can present to the guest operating system _paravirtualized devices_, where
74the guest OS recognizes it is running inside Qemu and cooperates with the
75hypervisor.
76
Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, and so on.
81
82It is highly recommended to use the virtio devices whenever you can, as they
83provide a big performance improvement. Using the virtio generic disk controller
84versus an emulated IDE controller will double the sequential write throughput,
85as measured with `bonnie++(8)`. Using the virtio network interface can deliver
86up to three times the throughput of an emulated Intel E1000 network card, as
87measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
88http://www.linux-kvm.org/page/Using_VirtIO_NIC]
89
90
91[[qm_virtual_machines_settings]]
92Virtual Machines Settings
93-------------------------
94
Generally speaking {pve} tries to choose sane defaults for virtual machines
(VM). Make sure you understand the meaning of the settings you change, as
doing so could degrade performance or put your data at risk.
98
99
100[[qm_general_settings]]
101General Settings
102~~~~~~~~~~~~~~~~
103
104[thumbnail="gui-create-vm-general.png"]
105
106General settings of a VM include
107
* the *Node*: the physical server on which the VM will run
109* the *VM ID*: a unique number in this {pve} installation used to identify your VM
110* *Name*: a free form text string you can use to describe the VM
111* *Resource Pool*: a logical group of VMs
112
113
114[[qm_os_settings]]
115OS Settings
116~~~~~~~~~~~
117
118[thumbnail="gui-create-vm-os.png"]
119
When creating a VM, setting the proper Operating System (OS) allows {pve} to
optimize some low level parameters. For instance, Windows OSes expect the BIOS
clock to use the local time, while Unix based OSes expect the BIOS clock to be
set to UTC.
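
Behind the scenes, the OS choice is stored in the `ostype` option of the VM
configuration and can also be set from the command line; a brief sketch,
assuming an existing VM with ID 100 running a modern Linux guest:

 qm set 100 -ostype l26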
124
125
126[[qm_hard_disk]]
127Hard Disk
128~~~~~~~~~
129
130Qemu can emulate a number of storage controllers:
131
* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by more recent designs,
134each and every OS you can think of has support for it, making it a great choice
135if you want to run an OS released before 2003. You can connect up to 4 devices
136on this controller.
137
138* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
139design, allowing higher throughput and a greater number of devices to be
140connected. You can connect up to 6 devices on this controller.
141
* the *SCSI* controller, designed in 1985, is commonly found on server grade
hardware, and can connect up to 14 storage devices. {pve} emulates by default an
LSI 53C895A controller.
145+
146A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim for
147performance and is automatically selected for newly created Linux VMs since
148{pve} 4.3. Linux distributions have support for this controller since 2012, and
149FreeBSD since 2014. For Windows OSes, you need to provide an extra iso
150containing the drivers during the installation.
151// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
152If you aim at maximum performance, you can select a SCSI controller of type
153_VirtIO SCSI single_ which will allow you to select the *IO Thread* option.
154When selecting _VirtIO SCSI single_ Qemu will create a new controller for
155each disk, instead of adding all disks to the same controller.
156
* The *VirtIO Block* controller, often just called VirtIO or virtio-blk,
is an older type of paravirtualized controller. It has been superseded by the
VirtIO SCSI controller in terms of features.
160
[thumbnail="gui-create-vm-hard-disk.png"]
On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.
168
 * the *QEMU image format* is a copy on write format which allows snapshots and
 thin provisioning of the disk image.
 * the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
 you would get when executing the `dd` command on a block device in Linux. This
 format does not support thin provisioning or snapshots by itself, requiring
 cooperation from the storage layer for these tasks. It may, however, be up to
 10% faster than the *QEMU image format*. footnote:[See this benchmark for details
 http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
177 * the *VMware image format* only makes sense if you intend to import/export the
178 disk image to other hypervisors.
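
The format can also be requested explicitly when adding a disk from the command
line; a rough sketch, assuming VM 100 and a file based storage named `local`
(both names are just examples):

 qm set 100 -scsi1 local:32,format=qcow2

This should create a new 32GB disk image in the *QEMU image format* on that
storage.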
179
180Setting the *Cache* mode of the hard drive will impact how the host system will
181notify the guest systems of block write completions. The *No cache* default
182means that the guest system will be notified that a write is complete when each
183block reaches the physical storage write queue, ignoring the host page cache.
184This provides a good balance between safety and speed.
185
186If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
187you can set the *No backup* option on that disk.
188
If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.
194
195If your storage supports _thin provisioning_ (see the storage chapter in the
196{pve} guide), and your VM has a *SCSI* controller you can activate the *Discard*
197option on the hard disks connected to that controller. With *Discard* enabled,
198when the filesystem of a VM marks blocks as unused after removing files, the
199emulated SCSI controller will relay this information to the storage, which will
200then shrink the disk image accordingly.
201
202.IO Thread
203The option *IO Thread* can only be used when using a disk with the
204*VirtIO* controller, or with the *SCSI* controller, when the emulated controller
205 type is *VirtIO SCSI single*.
206With this enabled, Qemu creates one I/O thread per storage controller,
207instead of a single thread for all I/O, so it increases performance when
208multiple disks are used and each disk has its own storage controller.
209Note that backups do not currently work with *IO Thread* enabled.
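
As a rough command line sketch (the VM ID, storage name and disk size are
placeholders), one could switch the controller type and add a disk with
*IO Thread* enabled like this:

 qm set 100 -scsihw virtio-scsi-single
 qm set 100 -scsi1 local-lvm:32,iothread=1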
210
211
212[[qm_cpu]]
213CPU
214~~~
215
216[thumbnail="gui-create-vm-cpu.png"]
217
A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
processing units. Whether you have a single CPU socket with 4 cores, or two CPU
sockets with two cores each, is mostly irrelevant from a performance point of
view. However, some software licenses depend on the number of sockets a machine
has; in that case it makes sense to set the number of sockets to what the
license allows you.
225
Increasing the number of virtual CPUs (cores and sockets) will usually provide a
performance improvement, though that is heavily dependent on the use of the VM.
Multithreaded applications will of course benefit from a large number of
virtual CPUs, as for each virtual CPU you add, Qemu will create a new thread of
execution on the host system. If you're not sure about the workload of your VM,
it is usually a safe bet to set the number of *Total cores* to 2.
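
For reference, sockets and cores can also be set from the command line; a brief
sketch, with the VM ID as a placeholder:

 qm set 100 -sockets 1 -cores 2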
232
NOTE: It is perfectly safe to set the _overall_ number of total cores in all
your VMs to be greater than the number of cores you have on your server (e.g.
4 VMs with 4 Total cores each running on an 8 core machine is OK). In that case
the host system will balance the Qemu execution threads between your server
cores, just as if you were running a standard multithreaded application.
However, {pve} will prevent you from allocating more vCPUs to a _single_ VM than
are physically available, as this would only bring the performance down due to
the cost of context switches.
241
Qemu can emulate a number of different *CPU types* from 486 to the latest Xeon
processors. Each new processor generation adds new features, like hardware
assisted 3d rendering, random number generation, memory protection, etc ...
Usually you should select for your VM a processor type which closely matches the
CPU of the host system, as it means that the host CPU features (also called _CPU
flags_) will be available in your VMs. If you want an exact match, you can set
the CPU type to *host* in which case the VM will have exactly the same CPU flags
as your host system.
250
This has a downside though. If you want to do a live migration of VMs between
different hosts, your VM might end up on a new system with a different CPU type.
If the CPU flags passed to the guest are missing, the qemu process will stop. To
remedy this Qemu also has its own CPU type *kvm64*, which {pve} uses by default.
kvm64 is a Pentium 4 look-alike CPU type, which has a reduced CPU flags set,
but is guaranteed to work everywhere.
257
258In short, if you care about live migration and moving VMs between nodes, leave
259the kvm64 default. If you don’t care about live migration, set the CPU type to
260host, as in theory this will give your guests maximum performance.
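
A short sketch of both settings from the command line (the VM ID is a
placeholder):

 qm set 100 -cpu kvm64
 qm set 100 -cpu host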
261
262You can also optionally emulate a *NUMA* architecture in your VMs. The basics of
263the NUMA architecture mean that instead of having a global memory pool available
264to all your cores, the memory is spread into local banks close to each socket.
265This can bring speed improvements as the memory bus is not a bottleneck
266anymore. If your system has a NUMA architecture footnote:[if the command
267`numactl --hardware | grep available` returns more than one node, then your host
system has a NUMA architecture] we recommend activating the option, as this
269will allow proper distribution of the VM resources on the host system. This
270option is also required in {pve} to allow hotplugging of cores and RAM to a VM.
271
272If the NUMA option is used, it is recommended to set the number of sockets to
273the number of sockets of the host system.
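
For example, NUMA could be enabled together with two sockets roughly like this
(the VM ID and socket count are placeholders):

 qm set 100 -numa 1 -sockets 2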
274
275
276[[qm_memory]]
277Memory
278~~~~~~
279
For each VM you have the option to set a fixed size memory or to ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.
283
284.Fixed Memory Allocation
285[thumbnail="gui-create-vm-memory-fixed.png"]
286
287When choosing a *fixed size memory* {pve} will simply allocate what you
288specify to your VM.
289
290Even when using a fixed memory size, the ballooning device gets added to the
291VM, because it delivers useful information such as how much memory the guest
292really uses.
293In general, you should leave *ballooning* enabled, but if you want to disable
294it (e.g. for debugging purposes), simply uncheck
295*Ballooning* or set
296
297 balloon: 0
298
299in the configuration.
300
301.Automatic Memory Allocation
302[thumbnail="gui-create-vm-memory-dynamic.png", float="left"]
303
304// see autoballoon() in pvestatd.pm
305When choosing to *automatically allocate memory*, {pve} will make sure that the
306minimum amount you specified is always available to the VM, and if RAM usage on
307the host is below 80%, will dynamically add memory to the guest up to the
308maximum memory specified.
309
When the host is running low on RAM, the VM will then release some memory
back to the host, swapping out running processes if needed and starting the oom
killer as a last resort. The passing around of memory between host and guest is
313done via a special `balloon` kernel driver running inside the guest, which will
314grab or release memory pages from the host.
315footnote:[A good explanation of the inner workings of the balloon driver can be found here https://rwmj.wordpress.com/2010/07/17/virtio-balloon/]
316
317When multiple VMs use the autoallocate facility, it is possible to set a
318*Shares* coefficient which indicates the relative amount of the free host memory
319that each VM should take. Suppose for instance you have four VMs, three of them
320running a HTTP server and the last one is a database server. To cache more
321database blocks in the database server RAM, you would like to prioritize the
322database VM when spare RAM is available. For this you assign a Shares property
323of 3000 to the database VM, leaving the other VMs to the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
roughly 32 * 80/100 - 16 = 9GB of RAM to be allocated to the VMs. The database
VM will get 9 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.5 GB of extra RAM and
each HTTP server will get 9 * 1000 / 6000 = 1.5 GB.
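
As a sketch, such a setup could be configured from the command line like this
(the VM ID and sizes are placeholders; *memory* is the maximum and *balloon*
the minimum amount of RAM):

 qm set 100 -memory 4096 -balloon 1024 -shares 3000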
328
329All Linux distributions released after 2010 have the balloon kernel driver
330included. For Windows OSes, the balloon driver needs to be added manually and can
331incur a slowdown of the guest, so we don't recommend using it on critical
332systems.
333// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
334
335When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
336of RAM available to the host.
337
338
339[[qm_network_device]]
340Network Device
341~~~~~~~~~~~~~~
342
343[thumbnail="gui-create-vm-network.png"]
344
345Each VM can have many _Network interface controllers_ (NIC), of four different
346types:
347
 * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
 * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
 * the *Realtek 8139* emulates an older 100 MBit/s network card, and should
only be used when emulating older operating systems (released before 2002).
 * the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.
356
357{pve} will generate for each NIC a random *MAC address*, so that your VM is
358addressable on Ethernet networks.
359
360The NIC you added to the VM can follow one of two different models:
361
 * in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
366 * in the alternative *NAT mode*, each virtual NIC will only communicate with
367the Qemu user networking stack, where a built-in router and DHCP server can
368provide network access. This built-in DHCP will serve addresses in the private
36910.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
370should only be used for testing.
371
372You can also skip adding a network device when creating a VM by selecting *No
373network device*.
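
For reference, a bridged VirtIO NIC can also be added from the command line; a
brief sketch (the VM ID and bridge name are examples):

 qm set 100 -net0 virtio,bridge=vmbr0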
374
375.Multiqueue
376If you are using the VirtIO driver, you can optionally activate the
377*Multiqueue* option. This option allows the guest OS to process networking
378packets using multiple virtual CPUs, providing an increase in the total number
379of packets transferred.
380
381//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.
386
387//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
388When using Multiqueue, it is recommended to set it to a value equal
389to the number of Total Cores of your guest. You also need to set in
390the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
391command:
392
393`ethtool -L ens1 combined X`
394
where X is the number of vCPUs of the VM.
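
On the {pve} side, the number of queues is set as part of the NIC definition; a
hedged sketch for a guest with 4 vCPUs:

 qm set 100 -net0 virtio,bridge=vmbr0,queues=4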
396
397You should note that setting the Multiqueue parameter to a value greater
398than one will increase the CPU load on the host and guest systems as the
traffic increases. We recommend setting this option only when the VM has to
400process a great number of incoming connections, such as when the VM is running
401as a router, reverse proxy or a busy HTTP server doing long polling.
402
403
404[[qm_usb_passthrough]]
405USB Passthrough
406~~~~~~~~~~~~~~~
407
408There are two different types of USB passthrough devices:
409
410* Host USB passthrough
411* SPICE USB passthrough
412
413Host USB passthrough works by giving a VM a USB device of the host.
414This can either be done via the vendor- and product-id, or
415via the host bus and port.
416
The vendor/product-id looks like this: *0123:abcd*,
where *0123* is the id of the vendor, and *abcd* is the id
of the product, meaning two devices of the same model
have the same id.
421
The bus/port looks like this: *1-2.3.4*, where *1* is the bus
and *2.3.4* is the port path. This represents the physical
ports of your host (depending on the internal order of the
usb controllers).
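
Both forms map to the `usbN` options of the VM configuration; a rough sketch
(the ids and the port path are placeholders):

 qm set 100 -usb0 host=0123:abcd
 qm set 100 -usb1 host=1-2.3.4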
426
427If a device is present in a VM configuration when the VM starts up,
428but the device is not present in the host, the VM can boot without problems.
429As soon as the device/port is available in the host, it gets passed through.
430
WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
on the host the VM is currently residing on.
434
435The second type of passthrough is SPICE USB passthrough. This is useful
436if you use a SPICE client which supports it. If you add a SPICE USB port
437to your VM, you can passthrough a USB device from where your SPICE client is,
438directly to the VM (for example an input device or hardware dongle).
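
A SPICE USB port is added in a similar way, using `spice` instead of a host
device (sketch):

 qm set 100 -usb2 spice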
439
440
441[[qm_bios_and_uefi]]
442BIOS and UEFI
443~~~~~~~~~~~~~
444
445In order to properly emulate a computer, QEMU needs to use a firmware.
446By default QEMU uses *SeaBIOS* for this, which is an open-source, x86 BIOS
447implementation. SeaBIOS is a good choice for most standard setups.
448
449There are, however, some scenarios in which a BIOS is not a good firmware
450to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
451http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
452In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
453
454If you want to use OVMF, there are several things to consider:
455
456In order to save things like the *boot order*, there needs to be an EFI Disk.
457This disk will be included in backups and snapshots, and there can only be one.
458
459You can create such a disk with the following command:
460
461 qm set <vmid> -efidisk0 <storage>:1,format=<format>
462
463Where *<storage>* is the storage where you want to have the disk, and
464*<format>* is a format which the storage supports. Alternatively, you can
465create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
466hardware section of a VM.
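
The firmware itself is selected with the `bios` option; a brief sketch of
switching a VM to OVMF:

 qm set <vmid> -bios ovmf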
467
When using OVMF with a virtual display (without VGA passthrough),
you need to set the client resolution in the OVMF menu (which you can reach
with a press of the ESC button during boot), or you have to choose
SPICE as the display type.
472
473[[qm_startup_and_shutdown]]
474Automatic Start and Shutdown of Virtual Machines
475~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
476
477After creating your VMs, you probably want them to start automatically
478when the host system boots. For this you need to select the option 'Start at
479boot' from the 'Options' Tab of your VM in the web interface, or set it with
480the following command:
481
482 qm set <vmid> -onboot 1
483
484.Start and Shutdown Order
485
486[thumbnail="gui-qemu-edit-start-order.png"]
487
In some cases you want to be able to fine tune the boot order of your
VMs, for instance if one of your VMs is providing firewalling or DHCP
to other guest systems. For this you can use the following
parameters:
492
* *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
you want the VM to be the first to be started. (We use the reverse startup
order for shutdown, so a machine with a start order of 1 would be the last to
be shut down.)
* *Startup delay*: Defines the interval between this VM start and subsequent
VM starts. E.g. set it to 240 if you want to wait 240 seconds before starting
other VMs.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the VM to be offline after issuing a shutdown command.
By default this value is set to 60, which means that {pve} will issue a
shutdown request, wait 60s for the machine to be offline, and if after 60s
the machine is still online it will notify you that the shutdown action failed.
505
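These three parameters map to the `startup` option of the VM configuration; a
hedged command line sketch combining them (the values are only examples):

 qm set <vmid> -startup order=1,up=240,down=60
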
506NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
507'boot order' options currently. Those VMs will be skipped by the startup and
508shutdown algorithm as the HA manager itself ensures that VMs get started and
509stopped.
510
511Please note that machines without a Start/Shutdown order parameter will always
512start after those where the parameter is set, and this parameter only
513makes sense between the machines running locally on a host, and not
514cluster-wide.
515
516
517[[qm_migration]]
518Migration
519---------
520
521[thumbnail="gui-qemu-migrate.png"]
522
523If you have a cluster, you can migrate your VM to another host with
524
525 qm migrate <vmid> <target>
526
There are generally two mechanisms for this:
528
529* Online Migration (aka Live Migration)
530* Offline Migration
531
532Online Migration
533~~~~~~~~~~~~~~~~
534
535When your VM is running and it has no local resources defined (such as disks
536on local storage, passed through devices, etc.) you can initiate a live
537migration with the -online flag.
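
For example:

 qm migrate <vmid> <target> -online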
538
539How it works
540^^^^^^^^^^^^
541
542This starts a Qemu Process on the target host with the 'incoming' flag, which
543means that the process starts and waits for the memory data and device states
544from the source Virtual Machine (since all other resources, e.g. disks,
545are shared, the memory content and device state are the only things left
546to transmit).
547
548Once this connection is established, the source begins to send the memory
549content asynchronously to the target. If the memory on the source changes,
550those sections are marked dirty and there will be another pass of sending data.
551This happens until the amount of data to send is so small that it can
552pause the VM on the source, send the remaining data to the target and start
553the VM on the target in under a second.
554
555Requirements
556^^^^^^^^^^^^
557
558For Live Migration to work, there are some things required:
559
560* The VM has no local resources (e.g. passed through devices, local disks, etc.)
561* The hosts are in the same {pve} cluster.
562* The hosts have a working (and reliable) network connection.
563* The target host must have the same or higher versions of the
564 {pve} packages. (It *might* work the other way, but this is never guaranteed)
565
566Offline Migration
567~~~~~~~~~~~~~~~~~
568
If you have local resources, you can still offline migrate your VMs,
as long as all disks are on storages which are defined on both hosts.
Then the migration will copy the disks over the network to the target host.
572
573[[qm_copy_and_clone]]
574Copies and Clones
575-----------------
576
577[thumbnail="gui-qemu-full-clone.png"]
578
VM installation is usually done using an installation medium (CD-ROM)
from the operating system vendor. Depending on the OS, this can be a
time consuming task one might want to avoid.
582
583An easy way to deploy many VMs of the same type is to copy an existing
584VM. We use the term 'clone' for such copies, and distinguish between
585'linked' and 'full' clones.
586
587Full Clone::
588
The result of such a copy is an independent VM. The
new VM does not share any storage resources with the original.
591+
592
593It is possible to select a *Target Storage*, so one can use this to
594migrate a VM to a totally different storage. You can also change the
595disk image *Format* if the storage driver supports several formats.
596+
597
NOTE: A full clone needs to read and copy all VM image data. This is
usually much slower than creating a linked clone.
600+
601
Some storage types allow copying a specific *Snapshot*, which
defaults to the 'current' VM data. This also means that the final copy
never includes any additional snapshots from the original VM.
605
606
607Linked Clone::
608
Modern storage drivers support a way to generate fast linked
clones. Such a clone is a writable copy whose initial contents are the
same as the original data. Creating a linked clone is nearly
instantaneous, and initially consumes no additional space.
613+
614
They are called 'linked' because the new image still refers to the
original. Unmodified data blocks are read from the original image, but
modifications are written (and afterwards read) from a new
location. This technique is called 'Copy-on-write'.
619+
620
This requires that the original volume is read-only. With {pve} one
can convert any VM into a read-only <<qm_templates, Template>>. Such
templates can later be used to create linked clones efficiently.
624+
625
NOTE: You cannot delete the original template while linked clones
exist.
628+
629
630It is not possible to change the *Target storage* for linked clones,
631because this is a storage internal feature.
632
633
634The *Target node* option allows you to create the new VM on a
635different node. The only restriction is that the VM is on shared
636storage, and that storage is also available on the target node.
637
To avoid resource conflicts, all network interface MAC addresses get
randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
setting.
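
A rough command line sketch of a full clone (the IDs, name and target storage
are placeholders):

 qm clone 999 1000 -full -name cloned-vm -storage local-lvm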
641
642
643[[qm_templates]]
644Virtual Machine Templates
645-------------------------
646
647One can convert a VM into a Template. Such templates are read-only,
648and you can use them to create linked clones.
649
650NOTE: It is not possible to start templates, because this would modify
651the disk images. If you want to change the template, create a linked
652clone and modify that.
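
The conversion can be done from the command line as well; a brief sketch (the
VM ID is a placeholder):

 qm template 999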
653
654Importing Virtual Machines and disk images
655------------------------------------------
656
A VM export from a foreign hypervisor usually takes the form of one or more disk
images, with a configuration file describing the settings of the VM (RAM,
number of cores). +
660The disk images can be in the vmdk format, if the disks come from
661VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
662The most popular configuration format for VM exports is the OVF standard, but in
663practice interoperation is limited because many settings are not implemented in
664the standard itself, and hypervisors export the supplementary information
665in non-standard extensions.
666
667Besides the problem of format, importing disk images from other hypervisors
668may fail if the emulated hardware changes too much from one hypervisor to
669another. Windows VMs are particularly concerned by this, as the OS is very
670picky about any changes of hardware. This problem may be solved by
671installing the MergeIDE.zip utility available from the Internet before exporting
672and choosing a hard disk type of *IDE* before booting the imported Windows VM.
673
674Finally there is the question of paravirtualized drivers, which improve the
675speed of the emulated system and are specific to the hypervisor.
676GNU/Linux and other free Unix OSes have all the necessary drivers installed by
677default and you can switch to the paravirtualized drivers right after importing
678the VM. For Windows VMs, you need to install the Windows paravirtualized
679drivers by yourself.
680
GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
that we cannot guarantee a successful import/export of Windows VMs in all
cases due to the problems above.
684
685Step-by-step example of a Windows OVF import
686~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
687
Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
to get started with Windows development. We are going to use one of these
to demonstrate the OVF import feature.
692
693Download the Virtual Machine zip
694^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
695
696After getting informed about the user agreement, choose the _Windows 10
697Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.
698
699Extract the disk image from the zip
700^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
701
702Using the `unzip` utility or any archiver of your choice, unpack the zip,
703and copy via ssh/scp the ovf and vmdk files to your {pve} host.
704
705Import the Virtual Machine
706^^^^^^^^^^^^^^^^^^^^^^^^^^
707
708This will create a new virtual machine, using cores, memory and
709VM name as read from the OVF manifest, and import the disks to the +local-lvm+
710 storage. You have to configure the network manually.
711
712 qm importovf 999 WinDev1709Eval.ovf local-lvm
713
714The VM is ready to be started.
715
716Adding an external disk image to a Virtual Machine
717~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
718
719You can also add an existing disk image to a VM, either coming from a
720foreign hypervisor, or one that you created yourself.
721
722Suppose you created a Debian/Ubuntu disk image with the 'vmdebootstrap' tool:
723
724 vmdebootstrap --verbose \
725 --size 10G --serial-console \
726 --grub --no-extlinux \
727 --package openssh-server \
728 --package avahi-daemon \
729 --package qemu-guest-agent \
730 --hostname vm600 --enable-dhcp \
731 --customize=./copy_pub_ssh.sh \
732 --sparse --image vm600.raw
733
734You can now create a new target VM for this image.
735
736 qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
737 --bootdisk scsi0 --scsihw virtio-scsi-pci --ostype l26
738
739Add the disk image as +unused0+ to the VM, using the storage +pvedir+:
740
741 qm importdisk 600 vm600.raw pvedir
742
743Finally attach the unused disk to the SCSI controller of the VM:
744
745 qm set 600 --scsi0 pvedir:600/vm-600-disk-1.raw
746
747The VM is ready to be started.
748
749Managing Virtual Machines with `qm`
750------------------------------------
751
qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
753create and destroy virtual machines, and control execution
754(start/stop/suspend/resume). Besides that, you can use qm to set
755parameters in the associated config file. It is also possible to
756create and delete virtual disks.
757
758CLI Usage Examples
759~~~~~~~~~~~~~~~~~~
760
761Using an iso file uploaded on the 'local' storage, create a VM
762with a 4 GB IDE disk on the 'local-lvm' storage
763
764 qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
765
766Start the new VM
767
768 qm start 300
769
770Send a shutdown request, then wait until the VM is stopped.
771
772 qm shutdown 300 && qm wait 300
773
774Same as above, but only wait for 40 seconds.
775
776 qm shutdown 300 && qm wait 300 -timeout 40
777
778
779[[qm_configuration]]
780Configuration
781-------------
782
783VM configuration files are stored inside the Proxmox cluster file
784system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
785Like other files stored inside `/etc/pve/`, they get automatically
786replicated to all other cluster nodes.
787
788NOTE: VMIDs < 100 are reserved for internal purposes, and VMIDs need to be
789unique cluster wide.
790
791.Example VM Configuration
792----
793cores: 1
794sockets: 1
795memory: 512
796name: webmail
797ostype: l26
798bootdisk: virtio0
799net0: e1000=EE:D2:28:5F:B6:3E,bridge=vmbr0
800virtio0: local:vm-100-disk-1,size=32G
801----
802
803Those configuration files are simple text files, and you can edit them
804using a normal text editor (`vi`, `nano`, ...). This is sometimes
805useful to do small corrections, but keep in mind that you need to
806restart the VM to apply such changes.
807
808For that reason, it is usually better to use the `qm` command to
809generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running VMs. This feature is called "hot plug", and there is no
need to restart the VM in that case.
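
For instance, a short sketch of inspecting and changing a value with `qm`
instead of editing the file directly (the VM ID and value are placeholders):

 qm config 100
 qm set 100 -memory 1024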
813
814
815File Format
816~~~~~~~~~~~
817
818VM configuration files use a simple colon separated key/value
819format. Each line has the following format:
820
821-----
822# this is a comment
823OPTION: value
824-----
825
826Blank lines in those files are ignored, and lines starting with a `#`
827character are treated as comments and are also ignored.
828
829
830[[qm_snapshots]]
831Snapshots
832~~~~~~~~~
833
834When you create a snapshot, `qm` stores the configuration at snapshot
835time into a separate snapshot section within the same configuration
836file. For example, after creating a snapshot called ``testsnapshot'',
837your configuration file will look like this:
838
839.VM configuration with snapshot
840----
841memory: 512
842swap: 512
parent: testsnapshot
...

[testsnapshot]
847memory: 512
848swap: 512
849snaptime: 1457170803
850...
851----
852
853There are a few snapshot related properties like `parent` and
854`snaptime`. The `parent` property is used to store the parent/child
855relationship between snapshots. `snaptime` is the snapshot creation
856time stamp (Unix epoch).
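
Snapshots themselves are managed with `qm`; a brief sketch (the VM ID and
snapshot name are placeholders):

 qm snapshot 100 testsnapshot
 qm listsnapshot 100
 qm rollback 100 testsnapshot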
857
858
859[[qm_options]]
860Options
861~~~~~~~
862
863include::qm.conf.5-opts.adoc[]
864
865
866Locks
867-----
868
869Online migrations, snapshots and backups (`vzdump`) set a lock to
870prevent incompatible concurrent actions on the affected VMs. Sometimes
871you need to remove such a lock manually (e.g., after a power failure).
872
873 qm unlock <vmid>
874
875CAUTION: Only do that if you are sure the action which set the lock is
876no longer running.
877
878
879ifdef::manvolnum[]
880
881Files
882------
883
884`/etc/pve/qemu-server/<VMID>.conf`::
885
886Configuration file for the VM '<VMID>'.
887
888
889include::pve-copyright.adoc[]
890endif::manvolnum[]