to the same value as the total core count.
The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
-shares or CPU weight), controls how much CPU time a VM gets in regards to other
-VMs running. It is a relative weight which defaults to `1024`, if you increase
-this for a VM it will be prioritized by the scheduler in comparison to other
-VMs with lower weight. E.g., if VM 100 has set the default 1024 and VM 200 was
-changed to `2048`, the latter VM 200 would receive twice the CPU bandwidth than
-the first VM 100.
+shares or CPU weight), controls how much CPU time a VM gets compared to other
+running VMs. It is a relative weight which defaults to `100` (or `1024` if the
+host uses legacy cgroup v1). If you increase this for a VM, it will be
+prioritized by the scheduler in comparison to other VMs with lower weight. For
+example, if VM 100 is set to the default `100` and VM 200 is changed to `200`,
+the latter would receive twice the CPU bandwidth of VM 100.
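+
+For example, to give VM 200 from the example above twice the default weight,
+you could set:
+
+----
+# qm set 200 --cpuunits 200
+----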
For more information see `man systemd.resource-control`, here `CPUQuota`
-corresponds to `cpulimit` and `CPUShares` corresponds to our `cpuunits`
+corresponds to `cpulimit` and `CPUWeight` corresponds to our `cpuunits`
setting, visit its Notes section for references and implementation details.
CPU Type
* *qxl*, is the QXL paravirtualized graphics card. Selecting this also
enables https://www.spice-space.org/[SPICE] (a remote viewer protocol) for the
VM.
+* *virtio-gl*, often named VirGL, is a virtual 3D GPU for use inside VMs that
+  can offload workloads to the host GPU without requiring special (expensive)
+  models and drivers, and without binding the host GPU completely, so it can
+  be shared between multiple guests and/or the host (see the example below).
++
+NOTE: VirGL support needs some extra libraries that aren't installed by
+default, as they are relatively big and not available as open source for all
+GPU models/vendors. For most setups you'll just need to run:
+`apt install libgl1 libegl1`
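+
+For example, assuming the display type is configured through the VM's `vga`
+option, like the other display types, you could switch an existing VM to
+VirGL with:
+
+----
+# qm set <vmid> --vga virtio-gl
+----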
You can edit the amount of memory given to the virtual GPU by setting
the 'memory' option. This can enable higher resolutions inside the VM,
You can create such a disk with the following command:
- qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
+----
+# qm set <vmid> -efidisk0 <storage>:1,format=<format>,efitype=4m,pre-enrolled-keys=1
+----
Where *<storage>* is the storage where you want to have the disk, and
*<format>* is a format which the storage supports. Alternatively, you can
efidisk, in that it cannot be changed (only removed) once created. You can add
one via the following command:
- qm set <vmid> -tpmstate0 <storage>:1,version=<version>
+----
+# qm set <vmid> -tpmstate0 <storage>:1,version=<version>
+----
Where *<storage>* is the storage you want to put the state on, and *<version>*
is either 'v1.2' or 'v2.0'. You can also add one via the web interface, by
To add such a device, you can use `qm`:
- qm set <vmid> -ivshmem size=32,name=foo
+----
+# qm set <vmid> -ivshmem size=32,name=foo
+----
Where the size is in MiB. The file will be located under
`/dev/shm/pve-shm-$name` (the default name is the vmid).
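+
+For example, with `name=foo` as above, the shared memory region can be
+inspected on the host with:
+
+----
+# ls -lh /dev/shm/pve-shm-foo
+----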
boot' from the 'Options' Tab of your VM in the web interface, or set it with
the following command:
- qm set <vmid> -onboot 1
+----
+# qm set <vmid> -onboot 1
+----
.Start and Shutdown Order
If you have a cluster, you can migrate your VM to another host with
- qm migrate <vmid> <target>
+----
+# qm migrate <vmid> <target>
+----
There are generally two mechanisms for this
* The hosts have a working (and reliable) network connection.
* The target host must have the same or higher versions of the
{pve} packages. (It *might* work the other way, but this is never guaranteed)
+* The hosts have CPUs from the same vendor. (It *might* work otherwise, but this
+ is never guaranteed)
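+
+If these requirements are met, you can live-migrate a running VM, for example
+to a hypothetical target node `node2`, by passing the `--online` flag:
+
+----
+# qm migrate <vmid> node2 --online
+----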
Offline Migration
~~~~~~~~~~~~~~~~~
e.g.:
----
- qm set VMID -vmgenid 1
- qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
+# qm set VMID -vmgenid 1
+# qm set VMID -vmgenid 00000000-0000-0000-0000-000000000000
----
NOTE: The initial addition of a 'vmgenid' device to an existing VM may result
configuration with:
----
- qm set VMID -delete vmgenid
+# qm set VMID -delete vmgenid
----
The most prominent use case for 'vmgenid' are newer Microsoft Windows
VM name as read from the OVF manifest, and import the disks to the +local-lvm+
storage. You have to configure the network manually.
- qm importovf 999 WinDev1709Eval.ovf local-lvm
+----
+# qm importovf 999 WinDev1709Eval.ovf local-lvm
+----
The VM is ready to be started.
--customize=./copy_pub_ssh.sh \
--sparse --image vm600.raw
-You can now create a new target VM for this image.
-
- qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
- --bootdisk scsi0 --scsihw virtio-scsi-pci --ostype l26
-
-Add the disk image as +unused0+ to the VM, using the storage +pvedir+:
-
- qm importdisk 600 vm600.raw pvedir
+You can now create a new target VM, importing the image to the storage `pvedir`
+and attaching it to the VM's SCSI controller:
-Finally attach the unused disk to the SCSI controller of the VM:
-
- qm set 600 --scsi0 pvedir:600/vm-600-disk-1.raw
+----
+# qm create 600 --net0 virtio,bridge=vmbr0 --name vm600 --serial0 socket \
+ --boot order=scsi0 --scsihw virtio-scsi-pci --ostype l26 \
+ --scsi0 pvedir:0,import-from=/path/to/dir/vm600.raw
+----
The VM is ready to be started.
You can add a hook script to VMs with the config property `hookscript`.
- qm set 100 --hookscript local:snippets/hookscript.pl
+----
+# qm set 100 --hookscript local:snippets/hookscript.pl
+----
It will be called during various phases of the guest's lifetime.
For an example and documentation see the example script under
You can suspend a VM to disk with the GUI option `Hibernate` or with
- qm suspend ID --todisk
+----
+# qm suspend ID --todisk
+----
That means that the current content of the memory will be saved onto disk
and the VM gets stopped. On the next start, the memory content will be
Using an iso file uploaded on the 'local' storage, create a VM
with a 4 GB IDE disk on the 'local-lvm' storage
- qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
+----
+# qm create 300 -ide0 local-lvm:4 -net0 e1000 -cdrom local:iso/proxmox-mailgateway_2.1.iso
+----
Start the new VM
- qm start 300
+----
+# qm start 300
+----
Send a shutdown request, then wait until the VM is stopped.
- qm shutdown 300 && qm wait 300
+----
+# qm shutdown 300 && qm wait 300
+----
Same as above, but only wait for 40 seconds.
- qm shutdown 300 && qm wait 300 -timeout 40
+----
+# qm shutdown 300 && qm wait 300 -timeout 40
+----
Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
'--purge' if you want to additionally remove the VM from replication jobs,
backup jobs and HA resource configurations.
- qm destroy 300 --purge
+----
+# qm destroy 300 --purge
+----
+
+Move a disk image to a different storage.
+
+----
+# qm move-disk 300 scsi0 other-storage
+----
+Reassign a disk image to a different VM. This removes the disk `scsi1` from
+the source VM and attaches it as `scsi3` to the target VM. In the background,
+the disk image is renamed so that the name matches the new owner.
+
+----
+# qm move-disk 300 scsi1 --target-vmid 400 --target-disk scsi3
+----
[[qm_configuration]]
prevent incompatible concurrent actions on the affected VMs. Sometimes
you need to remove such a lock manually (e.g., after a power failure).
- qm unlock <vmid>
+----
+# qm unlock <vmid>
+----
CAUTION: Only do that if you are sure the action which set the lock is
no longer running.