an iso image as a parameter to Qemu, and the OS running in the emulated computer
will see a real CDROM inserted in a CD drive.
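For example, when running plain Qemu outside of {pve}, an iso image can be
attached like this (a minimal sketch; the image and disk names are
placeholders):

 # boot the emulated PC from the iso, with one hard disk attached
 qemu-system-x86_64 -m 1024 -boot d -cdrom installer.iso disk.qcow2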
-Qemu can emulates a great variety of hardware from ARM to Sparc, but {pve} is
+Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up Qemu when the emulated architecture is the same as the host
architecture.
NOTE: You may sometimes encounter the term _KVM_ (Kernel-based Virtual Machine).
It means that Qemu is running with the support of the virtualization processor
extensions, via the Linux kvm module. In the context of {pve} _Qemu_ and
-_KVM_ can be use interchangeably as Qemu in {pve} will always try to load the kvm
+_KVM_ can be used interchangeably as Qemu in {pve} will always try to load the kvm
module.
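To verify that the kvm module is in fact loaded on a host, you can check for it
with standard Linux commands (nothing {pve} specific):

 # kvm_intel or kvm_amd should be listed, depending on your CPU
 lsmod | grep kvm
 # the device node Qemu uses to reach the virtualization extensions
 ls -l /dev/kvm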
Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.
To avoid the overhead of fully emulated hardware, Qemu can present the guest
operating system with _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.
-Qemu relies on the virtio virtualization standard, and is thus able to presente
+Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which includes a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc ...
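A minimal sketch of configuring such paravirtualized devices with `qm set`
(VMID 100, the storage name `local-lvm` and the disk size are assumptions):

 # add a 32GB paravirtualized disk and a paravirtualized network card
 qm set 100 --virtio0 local-lvm:32
 qm set 100 --net0 virtio,bridge=vmbr0
 # use the paravirtualized SCSI controller for scsi disks
 qm set 100 --scsihw virtio-scsi-pci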
* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by more recent designs,
-each and every OS you can think has support for it, making it a great choice
+each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.
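Attaching a disk to the IDE controller follows the same `qm set` pattern, with
the four slots named `ide0` to `ide3` (VMID, storage and size are assumptions):

 # create an 8GB disk on the first IDE slot
 qm set 100 --ide0 local-lvm:8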
If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.
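On the command line this is the `backup` property of the disk; a sketch,
assuming VMID 100 and an existing volume `vm-100-disk-1`:

 # exclude this disk from backups
 qm set 100 --scsi1 local-lvm:vm-100-disk-1,backup=0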
+If you want the {pve} storage replication mechanism to skip a disk when starting
+a replication job, you can set the *Skip replication* option on that disk.
+As of {pve} 5.0, replication requires the disk images to be on a storage of type
+`zfspool`, so adding a disk image to other storages when the VM has replication
+configured requires skipping replication for this disk image.
+
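On the command line this corresponds to the `replicate` property of the disk; a
sketch, assuming VMID 100 and a disk image on a non-ZFS storage:

 # exclude this disk image from replication jobs
 qm set 100 --scsi2 local-lvm:vm-100-disk-2,replicate=0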
If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), and your VM has a *SCSI* controller you can activate the *Discard*
option on the hard disks connected to that controller. With *Discard* enabled,
the guest OS can notify the storage when blocks are no longer in use (a
TRIM/discard operation), and the thinly provisioned storage can then reclaim
that space.
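On the command line, *Discard* is the `discard` property of the disk (VMID and
volume name are assumptions):

 # relay TRIM commands from the guest to the storage
 qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on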
If you are not sure about the workload of your VM, it is usually a safe bet to
set the number of *Total cores* to 2.
NOTE: It is perfectly safe to set the _overall_ number of total cores in all
-your VMs to be greater than the number of of cores you have on your server (ie.
+your VMs to be greater than the number of cores you have on your server (i.e.
4 VMs each with 4 Total cores running on an 8 core machine is OK). In that case
the host system will balance the Qemu execution threads between your server
cores, just as if you were running a standard multithreaded application.
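The core count maps to the `cores` and `sockets` options; for instance (VMID is
an assumption):

 # 1 socket with 2 cores, i.e. 2 Total cores
 qm set 100 --sockets 1 --cores 2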
When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
-that each VM shoud take. Suppose for instance you have four VMs, three of them
+that each VM should take. Suppose for instance you have four VMs, three of them
running an HTTP server and the last one a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs to the Shares default setting
-of 1000. The host server has 32GB of RAM, and is curring using 16GB, leaving 32
+of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32
* 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will get
9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8 GB extra RAM and each HTTP server
will get 9.6 * 1000 / 6000 = 1.6 GB.
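A sketch of the corresponding configuration, assuming the database server is
VMID 104 and uses automatically allocated memory:

 # maximum 8GB, minimum 4GB, and a higher claim on spare host RAM
 qm set 104 --memory 8192 --balloon 4096 --shares 3000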
For Windows OSes, the balloon driver needs to be added manually and can incur a
slowdown of the guest, so we don't recommend using it on critical systems.
// see https://forum.proxmox.com/threads/solved-hyper-threading-vs-no-hyper-threading-fixed-vs-variable-memory.20265/
-When allocating RAMs to your VMs, a good rule of thumb is always to leave 1GB
+When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
of RAM available to the host.
{pve} will generate for each NIC a random *MAC address*, so that your VM is
addressable on Ethernet networks.
-The NIC you added to the VM can follow one of two differents models:
+The NIC you added to the VM can follow one of two different models:
* in the default *Bridged mode* each virtual NIC is backed on the host by a
_tap device_ (a software loopback device simulating an Ethernet NIC). This
tap device is added to a bridge, by default vmbr0 in {pve}. In this mode, VMs
have direct access to the Ethernet LAN on which the host is located.
* in the alternative *NAT mode*, each virtual NIC will only communicate with
-the Qemu user networking stack, where a builting router and DHCP server can
-provide network access. This built-in DHCP will serve adresses in the private
+the Qemu user networking stack, where a built-in router and DHCP server can
+provide network access. This built-in DHCP will serve addresses in the private
10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
should only be used for testing.
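A sketch of how the two modes are selected on the command line (VMID 100 and
the virtio model are assumptions; to our understanding, omitting the bridge
selects the user networking stack):

 # bridged mode, attached to the default bridge vmbr0
 qm set 100 --net0 virtio,bridge=vmbr0
 # NAT mode, using the Qemu user networking stack
 qm set 100 --net0 virtio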
If you are using the VirtIO driver, you can optionally activate the
*Multiqueue* option. This option allows the guest OS to process networking
packets using multiple virtual CPUs, providing an increase in the total number
-of packets transfered.
+of packets transferred.
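Multiqueue is enabled through the `queues` property of the NIC; a sketch (VMID
and the queue count of 4 are assumptions):

 # enable 4 network queues on the first NIC
 qm set 100 --net0 virtio,bridge=vmbr0,queues=4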
//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
host kernel. You then also need to set, inside the VM, the number of
multi-purpose channels on each VirtIO NIC with the ethtool
command:
-`ethtool -L eth0 combined X`
+`ethtool -L ens1 combined X`
where X is the number of vcpus of the VM.
There are two different types of USB passthrough devices:
-* Host USB passtrough
+* Host USB passthrough
* SPICE USB passthrough
Host USB passthrough works by giving a VM a USB device of the host.
If a device is present in a VM configuration when the VM starts up,
but the device is not present in the host, the VM can boot without problems.
-As soon as the device/port ist available in the host, it gets passed through.
+As soon as the device/port is available in the host, it gets passed through.
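A sketch of both variants with `qm set` (VMID and the vendor:product id are
placeholder values):

 # pass through a specific host device, identified by vendor:product id
 qm set 100 --usb0 host=046d:c52b
 # forward a USB device from the SPICE client instead
 qm set 100 --usb1 spice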
WARNING: Using this kind of USB passthrough means that you cannot move
a VM online to another host, since the hardware is only available
There are, however, some scenarios in which a BIOS is not a good firmware
to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
-In such cases, you should rather use *OVMF*, which is an open-source UEFI implemenation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
+In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
If you want to use OVMF, there are several things to consider:
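One of them is where the EFI variables are stored: the firmware is selected
with the `bios` option, and OVMF needs a small extra disk to persist its
variable store; a sketch (VMID and storage are assumptions):

 # switch the VM firmware from SeaBIOS to OVMF
 qm set 100 --bios ovmf
 # add a disk for the EFI variable store
 qm set 100 --efidisk0 local-lvm:1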
Import the disk image to the +local-lvm+ storage:
- qm importdisk 999 MSEdge "MSEdge - Win10_preview.vmdk" local-lvm
+ qm importdisk 999 "MSEdge - Win10_preview.vmdk" local-lvm
The disk will be marked as *Unused* in the VM 999 configuration.
After that you can go to the GUI, open the VM's *Hardware* panel, and *Edit*
the unused disk to attach it to the VM.
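Alternatively, the unused disk can be attached from the command line; a sketch,
assuming the import created the volume `vm-999-disk-0`:

 # attach the imported disk to the first SATA slot
 qm set 999 --sata0 local-lvm:vm-999-disk-0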