an iso image as a parameter to Qemu, and the OS running in the emulated computer
will see a real CDROM inserted in a CD drive.
Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux kvm module. In the context of {pve} _Qemu_ and
_KVM_ can be used interchangeably as Qemu in {pve} will always try to load the kvm
module.
Qemu inside {pve} runs as a root process, since this is required to access block
* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by more recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.
If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.
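The same setting can be applied from the command line by adding the `backup=0` flag to the disk entry. A minimal sketch, assuming a hypothetical VM 100 with its first SCSI disk on a storage named 'local-lvm':

 qm set 100 -scsi0 local-lvm:vm-100-disk-1,backup=0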
If you want the {pve} storage replication mechanism to skip a disk when starting
a replication job, you can set the *Skip replication* option on that disk.
As of {pve} 5.0, replication requires the disk images to be on a storage of type
`zfspool`, so adding a disk image to other storages when the VM has replication
configured requires skipping replication for this disk image.

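On the command line, this corresponds to the `replicate=0` flag on the disk entry. A sketch, assuming a hypothetical VM 100 with a second disk on a non-`zfspool` storage named 'local':

 qm set 100 -scsi1 local:vm-100-disk-2,replicate=0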
If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), and your VM has a *SCSI* controller you can activate the *Discard*
option on the hard disks connected to that controller. With *Discard* enabled,
when the guest OS marks blocks as unused after deleting files, the storage can
then shrink the disk image accordingly.
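As a sketch, *Discard* corresponds to the `discard` flag on the disk entry; assuming a hypothetical VM 100 with a SCSI disk on a storage named 'local-lvm':

 qm set 100 -scsi0 local-lvm:vm-100-disk-1,discard=on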
.IO Thread
The option *IO Thread* can only be used when using a disk with the
*VirtIO* controller, or with the *SCSI* controller, when the emulated controller
type is *VirtIO SCSI single*.
With this enabled, Qemu creates one I/O thread per storage controller,
instead of a single thread for all I/O, so it increases performance when
multiple disks are used and each disk has its own storage controller.
Note that backups do not currently work with *IO Thread* enabled.
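As a sketch, assuming a hypothetical VM 100, the controller type and the per-disk `iothread` flag can be set together from the command line:

 qm set 100 -scsihw virtio-scsi-single -scsi0 local-lvm:vm-100-disk-1,iothread=1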
For each VM you have the option to set a fixed size memory or ask
{pve} to dynamically allocate memory based on the current RAM usage of the
host.
.Fixed Memory Allocation
[thumbnail="gui-create-vm-memory-fixed.png"]
All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
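For example, assuming a hypothetical VM 100, the first command pins a fixed 1024 MB with the balloon device disabled, while the second lets {pve} vary the allocation between 1024 and 2048 MB:

 qm set 100 -memory 1024 -balloon 0
 qm set 100 -memory 2048 -balloon 1024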
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
performance. Like all VirtIO devices, the guest OS should have the proper driver
installed.
* the *Realtek 8139* emulates an older 100 Mbit/s network card, and should
only be used when emulating older operating systems (released before 2002)
* the *vmxnet3* is another paravirtualized device, which should only be used
when importing a VM from another hypervisor.
When using Multiqueue, it is recommended to set it to a value equal
to the number of Total Cores of your guest. You also need to set in
the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
command:
`ethtool -L eth0 combined X`
---------------------------------------------------
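As a sketch, for a hypothetical VM 100 whose guest has 4 cores and whose first NIC is a VirtIO device on bridge vmbr0, Multiqueue can be set with:

 qm set 100 -net0 virtio,bridge=vmbr0,queues=4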
A VM export from a foreign hypervisor usually takes the form of one or more disk
images, with a configuration file describing the settings of the VM (RAM,
number of cores). +
The disk images can be in the vmdk format, if the disks come from
VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor.
The most popular configuration format for VM exports is the OVF standard, but in
practice interoperation is limited because many settings are not implemented in
the standard itself, and hypervisors export the supplementary information
in non-standard extensions.
Besides the problem of format, importing disk images from other hypervisors
installing the MergeIDE.zip utility available from the Internet before exporting
and choosing a hard disk type of *IDE* before booting the imported Windows VM.
Finally there is the question of paravirtualized drivers, which improve the
speed of the emulated system and are specific to the hypervisor.
GNU/Linux and other free Unix OSes have all the necessary drivers installed by
default and you can switch to the paravirtualized drivers right after importing
the VM. For Windows VMs, you need to install the Windows paravirtualized
drivers by yourself.
GNU/Linux and other free Unix OSes can usually be imported without hassle. Note
Step-by-step example of a Windows disk image import
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Microsoft provides
https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/[Virtual Machines exports]
in different formats for browser testing. We are going to use one of these to
demonstrate a VMDK import.
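The overall shape of such an import, sketched with hypothetical names (a freshly created VM 999, a downloaded vmdk file, and a target storage 'local-lvm'):

 qm importdisk 999 exported-disk.vmdk local-lvm

This adds the image as an unused disk to VM 999, from where it can be attached to a controller.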
CLI Usage Examples
~~~~~~~~~~~~~~~~~~
Using an iso file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage

 qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
Start the new VM

 qm start 300