X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=qm.adoc;h=ed2ab8d5d4231b5689a785bb4f26f20dddc16ad7;hp=0964976471330007cc018e9261c4171c28d17184;hb=f00afaef559beefc56c8eed970c5b8ef2a550ada;hpb=c80725fe3579a1dc7e0f3b8a50eb0b22ac3810ad

diff --git a/qm.adoc b/qm.adoc
index 0964976..ed2ab8d 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -101,7 +101,7 @@ could incur a performance slowdown, or putting your data at risk.
General Settings
~~~~~~~~~~~~~~~~

-[thumbnail="qm-general-settings.png"]
+[thumbnail="gui-create-vm-general.png"]

General settings of a VM include

@@ -115,7 +115,7 @@ General settings of a VM include
OS Settings
~~~~~~~~~~~

-[thumbnail="qm-os-settings.png"]
+[thumbnail="gui-create-vm-os.png"]

When creating a VM, setting the proper Operating System (OS) allows {pve} to
optimize some low-level parameters. For instance, Windows OS expects the BIOS
@@ -143,18 +143,22 @@ connected. You can connect up to 6 devices on this controller.
hardware, and can connect up to 14 storage devices. {pve} emulates by default an
LSI 53C895A controller.
+
-A SCSI controller of type _Virtio_ is the recommended setting if you aim for
+A SCSI controller of type _VirtIO SCSI_ is the recommended setting if you aim for
performance and is automatically selected for newly created Linux VMs since
{pve} 4.3. Linux distributions have support for this controller since 2012, and
FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO containing
the drivers during the installation.
// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
+If you aim for maximum performance, you can select a SCSI controller of type
+_VirtIO SCSI single_, which will allow you to select the *IO Thread* option.
+When selecting _VirtIO SCSI single_, Qemu will create a new controller for
+each disk, instead of adding all disks to the same controller.

* The *Virtio* controller, also called virtio-blk to distinguish from
-the Virtio SCSI controller, is an older type of paravirtualized controller
+the VirtIO SCSI controller, is an older type of paravirtualized controller
which has been superseded in features by the Virtio SCSI Controller.

-[thumbnail="qm-hard-disk.png"]
+[thumbnail="gui-create-vm-hard-disk.png"]

On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of a
storage type will determine the format of the hard disk image. Storages which
@@ -190,10 +194,12 @@ emulated SCSI controller will relay this information to the storage, which will
then shrink the disk image accordingly.

.IO Thread
-The option *IO Thread* can only be enabled when using a disk with the *VirtIO* controller,
-or with the *SCSI* controller, when the emulated controller type is *VirtIO SCSI*.
-With this enabled, Qemu uses one thread per disk, instead of one thread for all,
-so it should increase performance when using multiple disks.
+The option *IO Thread* can only be used with a disk attached to the
+*VirtIO* controller, or to the *SCSI* controller when the emulated controller
+type is *VirtIO SCSI single*.
+With this enabled, Qemu creates one I/O thread per storage controller,
+instead of a single thread for all I/O, so it increases performance when
+multiple disks are used and each disk has its own storage controller.
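+
+For example, something like the following should switch a VM to the
+_VirtIO SCSI single_ controller type and add a new disk with *IO Thread*
+enabled (the storage name `local-lvm`, the disk slot `scsi1` and the 32 GB
+size are only placeholders, adapt them to your setup):
+
+ qm set <vmid> -scsihw virtio-scsi-single
+ qm set <vmid> -scsi1 local-lvm:32,iothread=1
+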
Note that backups do not currently work with *IO Thread* enabled.

@@ -201,7 +207,7 @@ Note that backups do not currently work with *IO Thread* enabled.
CPU
~~~

-[thumbnail="qm-cpu-settings.png"]
+[thumbnail="gui-create-vm-cpu.png"]

A *CPU socket* is a physical slot on a PC motherboard where you can plug a CPU.
This CPU can then contain one or many *cores*, which are independent
@@ -270,13 +276,24 @@ For each VM you have the option to set a fixed size memory or asking
host.

.Fixed Memory Allocation
-[thumbnail="qm-memory-fixed.png"]
+[thumbnail="gui-create-vm-memory-fixed.png"]

When choosing a *fixed size memory*, {pve} will simply allocate what you
specify to your VM.

+Even when using a fixed memory size, the ballooning device gets added to the
+VM, because it delivers useful information such as how much memory the guest
+really uses.
+In general, you should leave *ballooning* enabled, but if you want to disable
+it (e.g. for debugging purposes), simply uncheck
+*Ballooning* or set
+
+ balloon: 0
+
+in the configuration.
+
.Automatic Memory Allocation
-[thumbnail="qm-memory-auto.png", float="left"]
+[thumbnail="gui-create-vm-memory-dynamic.png", float="left"]

// see autoballoon() in pvestatd.pm
When choosing to *automatically allocate memory*, {pve} will make sure that the
@@ -317,6 +334,8 @@ of RAM available to the host.
Network Device
~~~~~~~~~~~~~~

+[thumbnail="gui-create-vm-network.png"]
+
Each VM can have many _Network interface controllers_ (NIC) of four different
types:

@@ -402,7 +421,7 @@ If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
As soon as the device/port is available in the host, it gets passed through.

-WARNING: Using this kind of USB passthrough, means that you cannot move
+WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available on the host
the VM is currently residing on.

@@ -455,10 +474,14 @@ the following command:

 qm set <vmid> -onboot 1

-In some case you want to be able to fine tune the boot order of your VMs, for
-instance if one of your VM is providing firewalling or DHCP to other guest
-systems.
-For this you can use the following parameters:
+.Start and Shutdown Order
+
+[thumbnail="gui-qemu-edit-start-order.png"]
+
+In some cases you want to be able to fine tune the boot order of your
+VMs, for instance if one of your VMs is providing firewalling or DHCP
+to other guest systems. For this you can use the following
+parameters:

* *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
you want the VM to be the first to be started. (We use the reverse startup
@@ -478,6 +501,145 @@ start after those where the parameter is set, and this
parameter only makes sense between the machines running locally on a host, and
not cluster-wide.

+
+[[qm_migration]]
+Migration
+---------
+
+[thumbnail="gui-qemu-migrate.png"]
+
+If you have a cluster, you can migrate your VM to another host with
+
+ qm migrate <vmid> <target>
+
+There are generally two mechanisms for this:
+
+* Online Migration (aka Live Migration)
+* Offline Migration
+
+Online Migration
+~~~~~~~~~~~~~~~~
+
+When your VM is running and it has no local resources defined (such as disks
+on local storage, passed through devices, etc.), you can initiate a live
+migration with the -online flag.
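+
+For example, migrating a VM with ID 100 to a node named `targetnode` could
+look like this (the ID and node name are just placeholders):
+
+ qm migrate 100 targetnode -online
+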
+How it works
+^^^^^^^^^^^^
+
+This starts a Qemu process on the target host with the 'incoming' flag, which
+means that the process starts and waits for the memory data and device states
+from the source Virtual Machine (since all other resources, e.g. disks,
+are shared, the memory content and device state are the only things left
+to transmit).
+
+Once this connection is established, the source begins to send the memory
+content asynchronously to the target. If the memory on the source changes,
+those sections are marked dirty and there will be another pass of sending data.
+This happens until the amount of data to send is so small that Qemu can
+pause the VM on the source, send the remaining data to the target, and start
+the VM on the target in under a second.
+
+Requirements
+^^^^^^^^^^^^
+
+For Live Migration to work, a few things are required:
+
+* The VM has no local resources (e.g. passed through devices, local disks, etc.)
+* The hosts are in the same {pve} cluster.
+* The hosts have a working (and reliable) network connection.
+* The target host must have the same or higher versions of the
+  {pve} packages. (It *might* work the other way, but this is never guaranteed.)
+
+Offline Migration
+~~~~~~~~~~~~~~~~~
+
+If you have local resources, you can still offline migrate your VMs,
+as long as all disks are on storages which are defined on both hosts.
+Then the migration will copy the disks over the network to the target host.
+
+[[qm_copy_and_clone]]
+Copies and Clones
+-----------------
+
+[thumbnail="gui-qemu-full-clone.png"]
+
+VM installation is usually done using an installation medium (CD-ROM)
+from the operating system vendor. Depending on the OS, this can be a
+time-consuming task one might want to avoid.
+
+An easy way to deploy many VMs of the same type is to copy an existing
+VM. We use the term 'clone' for such copies, and distinguish between
+'linked' and 'full' clones.
+
+Full Clone::
+
+The result of such a copy is an independent VM. The
+new VM does not share any storage resources with the original.
++
+
+It is possible to select a *Target Storage*, so one can use this to
+migrate a VM to a totally different storage. You can also change the
+disk image *Format* if the storage driver supports several formats.
++
+
+NOTE: A full clone needs to read and copy all VM image data. This is
+usually much slower than creating a linked clone.
++
+
+Some storage types allow copying a specific *Snapshot*, which
+defaults to the 'current' VM data. This also means that the final copy
+never includes any additional snapshots from the original VM.
+
+
+Linked Clone::
+
+Modern storage drivers support a way to generate fast linked
+clones. Such a clone is a writable copy whose initial contents are the
+same as the original data. Creating a linked clone is nearly
+instantaneous, and initially consumes no additional space.
++
+
+They are called 'linked' because the new image still refers to the
+original. Unmodified data blocks are read from the original image, but
+modifications are written (and afterwards read) from a new
+location. This technique is called 'Copy-on-write'.
++
+
+This requires that the original volume is read-only. With {pve} one
+can convert any VM into a read-only <<qm_templates, Template>>. Such
+templates can later be used to create linked clones efficiently.
++
+
+NOTE: You cannot delete the original template while linked clones
+exist.
++
+
+It is not possible to change the *Target storage* for linked clones,
+because this is a storage-internal feature.
+
+
+The *Target node* option allows you to create the new VM on a
+different node. The only restriction is that the VM has to be on shared
+storage, and that storage must also be available on the target node.
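+
+As a sketch, a full clone and a linked clone could be created on the command
+line like this (the VM IDs, names and the target node are only placeholders;
+a linked clone requires the source VM to be converted into a template first,
+see below):
+
+ # full clone of VM 100 to a new VM 123, on another node
+ qm clone 100 123 -name db-copy -full -target node2
+
+ # convert VM 100 into a template, then create a linked clone from it
+ qm template 100
+ qm clone 100 124 -name db-linked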
+
+To avoid resource conflicts, all network interface MAC addresses get
+randomized, and we generate a new 'UUID' for the VM BIOS (smbios1)
+setting.
+
+
+[[qm_templates]]
+Virtual Machine Templates
+-------------------------
+
+One can convert a VM into a Template. Such templates are read-only,
+and you can use them to create linked clones.
+
+NOTE: It is not possible to start templates, because this would modify
+the disk images. If you want to change the template, create a linked
+clone and modify that.
+
+
Managing Virtual Machines with `qm`
------------------------------------