the host is below 80%, will dynamically add memory to the guest up to the
maximum memory specified.
-When the host is becoming short on RAM, the VM will then release some memory
+When the host is running low on RAM, the VM will then release some memory
back to the host, swapping out running processes if needed and starting the OOM
killer as a last resort. Memory is passed between host and guest via a special
`balloon` kernel driver running inside the guest, which will grab or release
memory pages from the host.
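
As a minimal sketch, assuming a VM with the arbitrary example ID 100, automatic
allocation between the two bounds is enabled by setting the `balloon` target
(the minimum) below `memory` (the maximum):

----
# let VM 100 float between 2048MB and 4096MB of RAM
# (memory = maximum, balloon = minimum target; the VM ID is an example)
qm set 100 --memory 4096 --balloon 2048
----
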
When multiple VMs use the autoallocate facility, it is possible to set a
*Shares* coefficient which indicates the relative amount of the free host memory
that each VM should take. Suppose for instance you have four VMs, three of them
-running a HTTP server and the last one is a database server. To cache more
+running an HTTP server and the last one is a database server. To cache more
database blocks in the database server RAM, you would like to prioritize the
database VM when spare RAM is available. For this you assign a Shares property
of 3000 to the database VM, leaving the other VMs at the Shares default setting
of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving
32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will get
9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8GB extra RAM and each HTTP server will
-get 1/5 GB.
+get 1.6GB.
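
Translated to the command line, this could look as follows, where the VM IDs
(101-103 for the HTTP servers, 104 for the database) are assumptions made for
this example:

----
# give the database VM three times the default weight
qm set 104 --shares 3000
# the HTTP servers keep the default of 1000 (set explicitly here for clarity)
qm set 101 --shares 1000
qm set 102 --shares 1000
qm set 103 --shares 1000
----
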
All Linux distributions released after 2010 have the balloon kernel driver
included. For Windows OSes, the balloon driver needs to be added manually and can
incur a slowdown of the guest, so we don't recommend using it on critical
systems.
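
In a Linux guest, one quick way to confirm that the driver is present is to
look for the `virtio_balloon` module:

----
# inside a Linux guest: check that the balloon driver is loaded
lsmod | grep virtio_balloon
----
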
//http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html
When using the VirtIO driver with {pve}, each NIC network queue is passed to the
-host kernel, where the queue will be processed by a kernel thread spawn by the
+host kernel, where the queue will be processed by a kernel thread spawned by the
vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.
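
As a sketch, enabling four queues on the first NIC and activating them inside
a Linux guest could look like this; the VM ID, bridge and guest interface name
are assumptions. Note that `qm set --net0` replaces the whole net0 definition,
so existing options such as the MAC address should be repeated:

----
# on the host: give net0 four queues (repeat any existing net0 options)
qm set 100 --net0 virtio,bridge=vmbr0,queues=4
# inside the guest: activate the queues on the interface
ethtool -L eth0 combined 4
----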