X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=qm.adoc;h=67e5da9065147a5de5b27c234e9f9e1560f02ea6;hp=bcd80aadbe12049cc88236f45443b8d53b1c49fe;hb=d067c2ad940989c15c5756aa50e9eb2e4f063e1a;hpb=2b6e4b66e33deac65dc7f1d5e4f683be09388e4b

diff --git a/qm.adoc b/qm.adoc
index bcd80aa..67e5da9 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -122,18 +122,19 @@ on this controller.
 design, allowing higher throughput and a greater number of devices to be
 connected. You can connect up to 6 devices on this controller.
 
-* the *SCSI* controller, designed in 1985, is commonly found on server
-grade hardware, and can connect up to 14 storage devices. {pve} emulates by
-default a LSI 53C895A controller.
-
-* The *Virtio* controller is a generic paravirtualized controller, and is the
-recommended setting if you aim for performance. To use this controller, the OS
-need to have special drivers which may be included in your installation ISO or
-not. Linux distributions have support for the Virtio controller since 2010, and
+* the *SCSI* controller, designed in 1985, is commonly found on server grade
+hardware, and can connect up to 14 storage devices. {pve} emulates by default an
+LSI 53C895A controller.
+
+A SCSI controller of type _VirtIO_ is the recommended setting if you aim for
+performance, and is automatically selected for newly created Linux VMs since
+{pve} 4.3. Linux distributions have support for this controller since 2012, and
 FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO
-containing the Virtio drivers during the installation.
-// see: https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
-You can connect up to 16 devices on this controller.
+containing the drivers during the installation.
+// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
+
+* The *VirtIO* controller, also called virtio-blk to distinguish it from
+the VirtIO SCSI controller, is an older type of paravirtualized controller
+which has been superseded in features by the VirtIO SCSI controller.
 
 On each controller you attach a number of emulated hard disks, which are backed
 by a file or a block device residing in the configured storage. The choice of
@@ -169,6 +170,7 @@ when the filesystem of a VM marks blocks as unused after removing files, the
 emulated SCSI controller will relay this information to the storage, which
 will then shrink the disk image accordingly.
 
+.IO Thread
 The option *IO Thread* can only be enabled when using a disk with the *VirtIO* controller,
 or with the *SCSI* controller, when the emulated controller type is *VirtIO SCSI*.
 With this enabled, Qemu uses one thread per disk, instead of one thread for all,
@@ -273,6 +275,66 @@ systems.
 When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
 of RAM available to the host.
 
+Network Device
+~~~~~~~~~~~~~~
+Each VM can have several _Network interface controllers_ (NIC), of four
+different types:
+
+ * *Intel E1000* is the default, and emulates an Intel Gigabit network card.
+ * the *VirtIO* paravirtualized NIC should be used if you aim for maximum
+performance. Like all VirtIO devices, the guest OS should have the proper driver
+installed.
+ * the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
+only be used when emulating older operating systems (released before 2002).
+ * the *vmxnet3* is another paravirtualized device, which should only be used
+when importing a VM from another hypervisor.
+
+{pve} will generate a random *MAC address* for each NIC, so that your VM is
+addressable on Ethernet networks.
+
+The NIC you added to the VM can follow one of two different models:
+
+ * in the default *Bridged mode* each virtual NIC is backed on the host by a
+_tap device_ (a software loopback device simulating an Ethernet NIC). This
+tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
+have direct access to the Ethernet LAN on which the host is located.
+ * in the alternative *NAT mode*, each virtual NIC will only communicate with
+the Qemu user networking stack, where a built-in router and DHCP server can
+provide network access. This built-in DHCP server will serve addresses in the
+private 10.0.2.0/24 range. The NAT mode is much slower than the bridged mode,
+and should only be used for testing.
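+
+For example, a VirtIO NIC attached to the default bridge could be added to an
+existing VM from the command line with something along these lines, where the
+VM id `100` and the bridge `vmbr0` are only example values for your own setup:
+
+ qm set 100 -net0 virtio,bridge=vmbr0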
+
+You can also skip adding a network device when creating a VM by selecting *No
+network device*.
+
+.Multiqueue
+If you are using the VirtIO driver, you can optionally activate the
+*Multiqueue* option. This option allows the guest OS to process networking
+packets using multiple virtual CPUs, providing an increase in the total number
+of packets transferred.
+
+//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
+When using the VirtIO driver with {pve}, each NIC network queue is passed to the
+host kernel, where the queue will be processed by a kernel thread spawned by the
+vhost driver. With this option activated, it is possible to pass _multiple_
+network queues to the host kernel for each NIC.
+
+//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net
+When using Multiqueue, it is recommended to set it to a value equal
+to the number of Total Cores of your guest. You also need to set the number of
+multi-purpose channels on each VirtIO NIC inside the VM with the ethtool
+command:
+
+`ethtool -L eth0 combined X`
+
+where X is the number of vCPUs of the VM.
+
+You should note that setting the Multiqueue parameter to a value greater
+than one will increase the CPU load on the host and guest systems as the
+traffic increases. We recommend setting this option only when the VM has to
+process a great number of incoming connections, such as when the VM is running
+as a router, reverse proxy or a busy HTTP server doing long polling.
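+
+For example, for a guest configured with 4 vCPUs, you could request four
+queues on the {pve} side with something like the command below (the VM id
+`100` and bridge `vmbr0` are again example values), and then run the ethtool
+command above inside the guest with X set to 4:
+
+ qm set 100 -net0 virtio,bridge=vmbr0,queues=4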
+
 USB Passthrough
 ~~~~~~~~~~~~~~~
 There are two different types of USB passthrough devices:
@@ -307,6 +369,38 @@ if you use a SPICE client which supports it. If you add a SPICE USB port to
 your VM, you can passthrough a USB device from where your SPICE client is,
 directly to the VM (for example an input device or hardware dongle).
 
+BIOS and UEFI
+~~~~~~~~~~~~~
+
+In order to properly emulate a computer, QEMU needs to use a firmware.
+By default QEMU uses *SeaBIOS* for this, which is an open-source, x86 BIOS
+implementation. SeaBIOS is a good choice for most standard setups.
+
+There are, however, some scenarios in which a BIOS is not a good firmware
+to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
+http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
+In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
+
+If you want to use OVMF, there are several things to consider:
+
+In order to save things like the *boot order*, there needs to be an EFI Disk.
+This disk will be included in backups and snapshots, and there can only be one.
+
+You can create such a disk with the following command:
+
+ qm set <vmid> -efidisk0 <storage>:1,format=<format>
+
+Where *<storage>* is the storage where you want to have the disk, and
+*<format>* is a format which the storage supports. Alternatively, you can
+create such a disk through the web interface with 'Add' -> 'EFI Disk' in the
+hardware section of a VM.
+
+When using OVMF with a virtual display (without VGA passthrough),
+you need to set the client resolution in the OVMF menu (which you can reach
+by pressing the ESC key during boot), or you have to choose
+SPICE as the display type.
+
+
 Managing Virtual Machines with 'qm'
 ------------------------------------