X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=qm.adoc;h=d66cc683b78275cb80a10df8e9873c9e72436de2;hp=0b8aa925a4d8a4962deb665f7ad144a78364b829;hb=189d3661134d004814e14e95973afa514590326c;hpb=f69cfd23cbd6329f1e7b9ecccb6b51300e2ea7e1

diff --git a/qm.adoc b/qm.adoc
index 0b8aa92..d66cc68 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1,7 +1,7 @@
-include::attributes.txt[]
ifdef::manvolnum[]
PVE({manvolnum})
================
+include::attributes.txt[]

NAME
----
@@ -21,57 +21,205 @@ endif::manvolnum[]
ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
+include::attributes.txt[]
endif::manvolnum[]

-
-qm is a script to manage virtual machines with Qemu/Kvm. You can
+// deprecates
+// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
+// http://pve.proxmox.com/wiki/KVM
+// http://pve.proxmox.com/wiki/Qemu_Server
+
+Qemu (short for Quick Emulator) is an open-source hypervisor that emulates a
+physical computer. From the perspective of the host system where Qemu is
+running, Qemu is a user program which has access to a number of local resources
+like partitions, files and network cards, which are then passed to an
+emulated computer that sees them as if they were real devices.
+
+A guest operating system running in the emulated computer accesses these
+devices, and runs as if it were running on real hardware. For instance, you can
+pass an ISO image as a parameter to Qemu, and the OS running in the emulated
+computer will see a real CD-ROM inserted into a CD drive.
+
+Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
+only concerned with 32- and 64-bit PC clone emulation, since it represents the
+overwhelming majority of server hardware. The emulation of PC clones is also one
+of the fastest due to the availability of processor extensions which greatly
+speed up Qemu when the emulated architecture is the same as the host
+architecture.
+
+Qemu inside {pve} runs as a root process, since this is required to access block
+and PCI devices.
+
+Emulated devices and paravirtualized devices
+--------------------------------------------
+
+The PC hardware emulated by Qemu includes a mainboard, network controllers,
+SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
+the `kvm(1)` man page), all of them emulated in software. All these devices
+are the exact software equivalent of existing hardware devices, and if the OS
+running in the guest has the proper drivers, it will use the devices as if it
+were running on real hardware. This allows Qemu to run _unmodified_ operating
+systems.
+
+This, however, has a performance cost, as running in software what was meant to
+run in hardware involves a lot of extra work for the host CPU. To mitigate this,
+Qemu can present to the guest operating system _paravirtualized devices_, where
+the guest OS recognizes it is running inside Qemu and cooperates with the
+hypervisor.
+
+Qemu relies on the virtio virtualization standard, and is thus able to present
+paravirtualized virtio devices, which include a paravirtualized generic disk
+controller, a paravirtualized network card, a paravirtualized serial port,
+a paravirtualized SCSI controller, etc.
+
+It is highly recommended to use the virtio devices whenever you can, as they
+provide a big performance improvement. Using the virtio generic disk controller
+versus an emulated IDE controller will double the sequential write throughput,
+as measured with `bonnie++(8)`. Using the virtio network interface can deliver
+up to three times the throughput of an emulated Intel E1000 network card, as
+measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
+http://www.linux-kvm.org/page/Using_VirtIO_NIC]
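+
+As a quick illustration, a guest backed by a paravirtualized virtio disk and a
+virtio network card could be created with the command below. This is only a
+sketch: the storage name `local-lvm` and the VM ID 300 are placeholders for
+your own setup.
+
+ qm create 300 -virtio0 local-lvm:32 -net0 virtio,bridge=vmbr0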
+
+Virtual Machine Settings
+------------------------
+Generally speaking, {pve} tries to choose sane defaults for virtual machines
+(VM). Make sure you understand the meaning of the settings you change, as they
+could incur a performance slowdown, or put your data at risk.
+
+General Settings
+~~~~~~~~~~~~~~~~
+General settings of a VM include:
+
+* the *Node*: the physical server on which the VM will run
+* the *VM ID*: a unique number in this {pve} installation used to identify your VM
+* *Name*: a free-form text string you can use to describe the VM
+* *Resource Pool*: a logical group of VMs
+
+OS Settings
+~~~~~~~~~~~
+When creating a VM, setting the proper Operating System (OS) allows {pve} to
+optimize some low-level parameters. For instance, Windows OSes expect the BIOS
+clock to use the local time, while Unix-based OSes expect the BIOS clock to have
+the UTC time.
+
+Hard Disk
+~~~~~~~~~
+Qemu can emulate a number of storage controllers:
+
+* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
+controller. Even if this controller has been superseded by more recent designs,
+each and every OS you can think of has support for it, making it a great choice
+if you want to run an OS released before 2003. You can connect up to 4 devices
+on this controller.
+
+* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
+design, allowing higher throughput and a greater number of devices to be
+connected. You can connect up to 6 devices on this controller.
+
+* the *SCSI* controller, designed in 1985, is commonly found on server
+grade hardware, and can connect up to 14 storage devices. {pve} emulates by
+default an LSI 53C895A controller.
+
+* the *Virtio* controller is a generic paravirtualized controller, and is the
+recommended setting if you aim for performance. To use this controller, the OS
+needs to have special drivers, which may or may not be included in your
+installation ISO. Linux distributions have had support for the Virtio controller
+since 2010, and FreeBSD since 2014. For Windows OSes, you need to provide an
+extra ISO containing the Virtio drivers during the installation.
+// see: https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
+You can connect up to 16 devices on this controller.
+
+On each controller you can attach a number of emulated hard disks, which are
+backed by a file or a block device residing in the configured storage. The
+choice of a storage type will determine the format of the hard disk image.
+Storages which present block devices (LVM, ZFS, Ceph) will require the
+*raw disk image format*, whereas file-based storages (Ext4, NFS, GlusterFS) will
+let you choose either the *raw disk image format* or the *QEMU image format*.
+
+* the *QEMU image format* is a copy-on-write format which allows snapshots and
+thin provisioning of the disk image.
+* the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
+you would get when executing the `dd` command on a block device in Linux. This
+format does not support thin provisioning or snapshotting by itself, requiring
+cooperation from the storage layer for these tasks. It is, however, 10% faster
+than the *QEMU image format*. footnote:[See this benchmark for details
+http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
+* the *VMware image format* only makes sense if you intend to import/export the
+disk image to other hypervisors.
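+
+To make this concrete, here is a hypothetical sketch of adding two more disks
+to VM 300 (`local` and `local-lvm` are placeholders for a file-based and a
+block-device-based storage respectively):
+
+ qm set 300 -scsi1 local:10,format=qcow2
+ qm set 300 -scsi2 local-lvm:10
+
+The first 10 GB disk is created as a QEMU (qcow2) image on the file-based
+storage, while the second one can only be a raw volume on the block device
+storage.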
+
+Setting the *Cache* mode of the hard drive will impact how the host system
+notifies the guest systems of block write completions. The *No cache* default
+means that the guest system will be notified that a write is complete when each
+block reaches the physical storage write queue, ignoring the host page cache.
+This provides a good balance between safety and speed.
+
+If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
+you can set the *No backup* option on that disk.
+
+If your storage supports _thin provisioning_ (see the storage chapter in the
+{pve} guide), and your VM has a *SCSI* controller, you can activate the
+*Discard* option on the hard disks connected to that controller. With *Discard*
+enabled, when the filesystem of a VM marks blocks as unused after removing
+files, the emulated SCSI controller will relay this information to the storage,
+which will then shrink the disk image accordingly.
+
+The option *IO Thread* can only be enabled when using a disk with the *Virtio*
+controller, or with the *SCSI* controller when the emulated controller type is
+*VIRTIO*. With this enabled, Qemu uses one thread per disk, instead of one
+thread for all, so it should increase performance when using multiple disks.
+Note that backups do not currently work with *IO Thread* enabled.
+
+Managing Virtual Machines with 'qm'
+-----------------------------------
+
+qm is the tool to manage Qemu/Kvm virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

-Configuration
--------------
+CLI Usage Examples
+~~~~~~~~~~~~~~~~~~

-All configuration files consists of lines in the form
+Create a new VM with a 4 GB IDE disk.

- PARAMETER: value
+ qm create 300 -ide0 4 -net0 e1000 -cdrom proxmox-mailgateway_2.1.iso

-See 'man vm.conf' for a complete list of options.
+Start the new VM.

-Configuration files are stored inside the Proxmox configuration file
-system, and can be access at '/etc/pve/qemu-server/.conf'.
+ qm start 300

-The default for option `keyboard` is read from
-'/etc/pve/datacenter.conf'.
+Send a shutdown request, then wait until the VM is stopped.

-Locks
------
+ qm shutdown 300 && qm wait 300

-Online migration and backups ('vzdump') set a lock to prevent
-unintentional action on such VMs. Sometimes you need remove such lock
-manually (power failure).
+Same as above, but only wait for 40 seconds.

- qm unlock
+ qm shutdown 300 && qm wait 300 -timeout 40

-Examples
---------
+Configuration
+-------------

-Create a new VM with 4 GB IDE disk.
+All configuration files consist of lines in the form

- qm create 300 -ide0 4 -net0 e1000 -cdrom proxmox-mailgateway_2.1.iso
+ PARAMETER: value
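+
+A freshly created VM's configuration file could therefore look roughly like the
+following sketch (the values are made up, and the exact keys present depend on
+the options you set):
+
+ bootdisk: virtio0
+ cores: 2
+ memory: 2048
+ name: examplevm
+ net0: virtio=12:34:56:78:9A:BC,bridge=vmbr0
+ ostype: l26
+ virtio0: local-lvm:vm-300-disk-1,size=32G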

-Start the new VM
+Configuration files are stored inside the Proxmox cluster file
+system, and can be accessed at '/etc/pve/qemu-server/<VMID>.conf'.

- qm start 300
+Options
+~~~~~~~

-Send a shutdown request, then wait until the VM is stopped.
+include::qm.conf.5-opts.adoc[]

- qm shutdown 300 && qm wait 300
-Same as above, but only wait for 40 seconds.
+Locks
+-----

- qm shutdown 300 && qm wait 300 -timeout 40
+Online migrations and backups ('vzdump') set a lock to prevent incompatible
+concurrent actions on the affected VMs. Sometimes you need to remove such a
+lock manually (e.g., after a power failure).
+
+ qm unlock

ifdef::manvolnum[]