X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=qm.adoc;h=b84de9ea9f8751ed69afd57aa7e9958dde167da9;hp=12e5921aff6e0be7eab47c1cc68d7268be189b9d;hb=HEAD;hpb=d17b6bd3d53816a562c70ea113c8c7fb644072f7

diff --git a/qm.adoc b/qm.adoc
index 12e5921..42c26db 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -152,6 +152,7 @@ https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be desired if
you want to pass through PCIe hardware.
+Additionally, you can select a xref:qm_pci_viommu[vIOMMU] implementation.

Machine Version
+++++++++++++++
@@ -347,16 +348,19 @@ fully it would theoretically use `400%`. In reality the usage may be even a bit
higher as QEMU can have additional threads for VM peripherals besides the vCPU
core ones.

-This setting can be useful if a VM should have multiple vCPUs, as it runs a few
-processes in parallel, but the VM as a whole should not be able to run all
-vCPUs at 100% at the same time.
+This setting can be useful when a VM should have multiple vCPUs because it is
+running some processes in parallel, but the VM as a whole should not be able to
+run all vCPUs at 100% at the same time.

-Using a specific example: lets say we have a VM which would profit from having
-8 vCPUs, but at no time all of those 8 cores should run at full load - as this
-would make the server so overloaded that other VMs and CTs would get too less
-CPU. So, we set the *cpulimit* limit to `4.0` (=400%). If we now fully utilize
-all 8 vCPUs, they will receive maximum 50% CPU time of the physical cores. But
-with only 4 vCPUs fully utilized, they could still get up to 100% CPU time.
+For example, suppose you have a virtual machine that would benefit from having 8
+virtual CPUs, but you don't want the VM to be able to max out all 8 cores
+running at full load - because that would overload the server and leave other
+virtual machines and containers with too little CPU time. To solve this, you
+could set *cpulimit* to `4.0` (=400%). This means that if the VM fully utilizes
+all 8 virtual CPUs by running 8 processes simultaneously, each vCPU will receive
+a maximum of 50% CPU time from the physical cores. However, if the VM workload
+only fully utilizes 4 virtual CPUs, it could still receive up to 100% CPU time
+from a physical core, for a total of 400%.
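+
+For example, to apply the limit from above to an existing VM (shown here with a
+placeholder VM ID of `101`), you could use the `qm` command-line tool:
+
+----
+# qm set 101 --cpulimit 4
+----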

NOTE: VMs can, depending on their configuration, use additional threads, such
as for networking or IO operations but also live migration. Thus a VM can show
@@ -364,32 +368,40 @@ up to use more CPU time than just its virtual CPUs could use. To ensure that a
VM never uses more CPU time than vCPUs assigned, set the *cpulimit* to the same
value as the total core count.

-The second CPU resource limiting setting, *cpuunits* (nowadays often called CPU
-shares or CPU weight), controls how much CPU time a VM gets compared to other
-running VMs. It is a relative weight which defaults to `100` (or `1024` if the
-host uses legacy cgroup v1). If you increase this for a VM it will be
-prioritized by the scheduler in comparison to other VMs with lower weight. For
-example, if VM 100 has set the default `100` and VM 200 was changed to `200`,
-the latter VM 200 would receive twice the CPU bandwidth than the first VM 100.
+*cpuunits*
+
+With the *cpuunits* option, nowadays often called CPU shares or CPU weight, you
+can control how much CPU time a VM gets compared to other running VMs. It is a
+relative weight which defaults to `100` (or `1024` if the host uses legacy
+cgroup v1). If you increase this for a VM, it will be prioritized by the
+scheduler in comparison to other VMs with lower weight.
+
+For example, if VM 100 is set to the default `100` and VM 200 was changed to
+`200`, the latter VM 200 would receive twice the CPU bandwidth of the first
+VM 100.

For more information see `man systemd.resource-control`, here `CPUQuota`
-corresponds to `cpulimit` and `CPUWeight` corresponds to our `cpuunits`
-setting, visit its Notes section for references and implementation details.
+corresponds to `cpulimit` and `CPUWeight` to our `cpuunits` setting. Visit its
+Notes section for references and implementation details.
+
+*affinity*

-The third CPU resource limiting setting, *affinity*, controls what host cores
-the virtual machine will be permitted to execute on. E.g., if an affinity value
-of `0-3,8-11` is provided, the virtual machine will be restricted to using the
-host cores `0,1,2,3,8,9,10,` and `11`. Valid *affinity* values are written in
-cpuset `List Format`. List Format is a comma-separated list of CPU numbers and
-ranges of numbers, in ASCII decimal.
+With the *affinity* option, you can specify the physical CPU cores that are used
+to run the VM's vCPUs. Peripheral VM processes, such as those for I/O, are not
+affected by this setting. Note that the *CPU affinity is not a security
+feature*.

-NOTE: CPU *affinity* uses the `taskset` command to restrict virtual machines to
-a given set of cores. This restriction will not take effect for some types of
-processes that may be created for IO. *CPU affinity is not a security feature.*
+Forcing a CPU *affinity* can make sense in certain cases but comes with
+increased complexity and maintenance effort, for example, if you later want to
+add more VMs or migrate VMs to nodes with fewer CPU cores. It can also easily
+reduce overall system performance if some CPUs are fully utilized while others
+are almost idle.

-For more information regarding *affinity* see `man cpuset`. Here the
-`List Format` corresponds to valid *affinity* values. Visit its `Formats`
-section for more examples.
+The *affinity* option is applied through the `taskset` CLI tool. It accepts the
+host CPU numbers (see `lscpu`) in the `List Format` from `man cpuset`. This
+ASCII decimal list can contain both single numbers and number ranges. For
+example, the *affinity* `0-1,8-11` (expanded `0, 1, 8, 9, 10, 11`) would allow
+the VM to run on only these six specific host cores.
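+
+For example, to pin a VM (using a placeholder VM ID of `101`) to exactly these
+host cores, the option could be set with `qm`:
+
+----
+# qm set 101 --affinity 0-1,8-11
+----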

CPU Type
^^^^^^^^
@@ -759,14 +771,25 @@ vhost driver. With this option activated, it is possible to pass _multiple_
network queues to the host kernel for each NIC.
//https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques.html#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net

-When using Multiqueue, it is recommended to set it to a value equal
-to the number of Total Cores of your guest. You also need to set in
-the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
-command:
+When using Multiqueue, it is recommended to set it to a value equal to the
+number of vCPUs of your guest. Remember that the number of vCPUs is the number
+of sockets times the number of cores configured for the VM. You also need to set
+the number of multi-purpose channels on each VirtIO NIC in the VM with this
+ethtool command:

`ethtool -L ens1 combined X`

-where X is the number of the number of vcpus of the VM.
+where X is the number of vCPUs of the VM.
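+
+On the {pve} side, Multiqueue corresponds to the `queues` property of the VM's
+VirtIO network device. As a rough sketch (the VM ID, bridge and queue count are
+placeholders, and `qm set --net0` redefines the whole net0 line, so include any
+other properties, such as the MAC address, that you want to keep):
+
+----
+# qm set 101 --net0 virtio,bridge=vmbr0,queues=4
+----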
+
+To configure a Windows guest for Multiqueue, install the
+https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers[Redhat VirtIO Ethernet
+Adapter drivers], then adapt the NIC's configuration as follows. Open the
+device manager, right-click the NIC under "Network adapters", and select
+"Properties". Then open the "Advanced" tab and select "Receive Side Scaling"
+from the list on the left. Make sure it is set to "Enabled". Next, navigate to
+"Maximum number of RSS Queues" in the list and set it to the number of vCPUs of
+your VM. Once you have verified that the settings are correct, click "OK" to
+confirm them.

You should note that setting the Multiqueue parameter to a value greater
than one will increase the CPU load on the host and guest systems as the
@@ -1495,8 +1518,64 @@ replicate services (such as databases or domain controller
footnote:[https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture])
on snapshot rollback, backup restore or a whole VM clone operation.

-Importing Virtual Machines and disk images
-------------------------------------------
+[[qm_import_virtual_machines]]
+Importing Virtual Machines
+--------------------------
+
+Importing existing virtual machines from foreign hypervisors or other {pve}
+clusters can be achieved through various methods; the most common ones are:
+
+* Using the native import wizard, which utilizes the 'import' content type, such
+  as provided by the ESXi special storage.
+* Performing a backup on the source and then restoring on the target. This
+  method works best when migrating from another {pve} instance.
+* Using the OVF-specific import command of the `qm` command-line tool.
+
+If you import VMs to {pve} from other hypervisors, it's recommended to
+familiarize yourself with the
+https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Concepts[concepts of {pve}].
+
+Import Wizard
+~~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-import-wizard-general.png"]
+
+{pve} provides an integrated VM importer using the storage plugin system for
+native integration into the API and web-based user interface. You can use this
+to import the VM as a whole, with most of its config mapped to {pve}'s config
+model, and with reduced downtime.
+
+NOTE: The import wizard was added during the {pve} 8.2 development cycle and is
+in tech preview state. While it is already promising and working stably, it is
+still under active development, focusing on adding other import sources, such
+as OVF/OVA files, in the future.
+
+To use the import wizard, you first have to set up a new storage for an import
+source. You can do so in the web interface under _Datacenter -> Storage -> Add_.
+
+Then you can select the new storage in the resource tree and use the 'Virtual
+Guests' content tab to see all available guests that can be imported.
+
+[thumbnail="screenshot/gui-import-wizard-advanced.png"]
+
+Select one and use the 'Import' button (or double-click) to open the import
+wizard. You can modify a subset of the available options here and then start the
+import. Please note that you can do more advanced modifications after the import
+has finished.
+
+TIP: The import wizard is currently (2024-03) available for ESXi and has been
+tested with ESXi versions 6.5 through 8.0. Note that guests using vSAN storage
+cannot be imported directly; their disks must first be moved to another
+storage. While it is possible to use a vCenter as the import source, performance
+is dramatically degraded (5 to 10 times slower).
+
+For a step-by-step guide and tips on how to adapt the virtual guest to the new
+hypervisor, see our
+https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Migration[migrate to {pve}
+wiki article].
+
+Import OVF/OVA Through CLI
+~~~~~~~~~~~~~~~~~~~~~~~~~~

A VM export from a foreign hypervisor takes usually the form of one or more
disk images, with a configuration file describing the settings of the VM (RAM,
@@ -1527,7 +1606,7 @@ that we cannot guarantee a successful import/export of Windows VMs in all cases
due to the problems above.

Step-by-step example of a Windows OVF import
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Microsoft provides
https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtual Machines downloads]
@@ -1535,19 +1614,19 @@ https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/[Virtua
to demonstrate the OVF import feature.

Download the Virtual Machine zip
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++

After getting informed about the user agreement, choose the _Windows 10
Enterprise (Evaluation - Build)_ for the VMware platform, and download the zip.

Extract the disk image from the zip
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++++++++

Using the `unzip` utility or any archiver of your choice, unpack the zip,
and copy via ssh/scp the ovf and vmdk files to your {pve} host.

Import the Virtual Machine
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++

This will create a new virtual machine, using cores, memory and
VM name as read from the OVF manifest, and import the disks to the +local-lvm+
@@ -1560,7 +1639,7 @@ VM name as read from the OVF manifest, and import the disks to the +local-lvm+

The VM is ready to be started.

Adding an external disk image to a Virtual Machine
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++++++++++++++++++++++++

You can also add an existing disk image to a VM, either coming from a
foreign hypervisor, or one that you created yourself.
@@ -1760,6 +1839,13 @@ Same as above, but only wait for 40 seconds.

----
# qm shutdown 300 && qm wait 300 -timeout 40
----

+If the VM does not shut down, force-stop it and overrule any running shutdown
+tasks. As stopping a VM may incur data loss, use this with caution.
+
+----
+# qm stop 300 -overrule-shutdown 1
+----
+
Destroying a VM always removes it from Access Control Lists and it always
removes the firewall configuration of the VM. You have to activate
'--purge', if you want to additionally remove the VM from replication jobs,