X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=qm-pci-passthrough.adoc;h=df6cf214c54f04fbfb870098ba7d77c85d63b6d3;hb=336ed7a0f0f4de297d3dcd6c840e3dedf7a56042;hp=c1bca7c575e0d7addb0d75c7a4b2d72a586c7eac;hpb=0aebb0d9fc9c47015423f5c6b3dff98c110c3596;p=pve-docs.git diff --git a/qm-pci-passthrough.adoc b/qm-pci-passthrough.adoc index c1bca7c..df6cf21 100644 --- a/qm-pci-passthrough.adoc +++ b/qm-pci-passthrough.adoc @@ -42,25 +42,35 @@ Configuration Once you ensured that your hardware supports passthrough, you will need to do some configuration to enable PCI(e) passthrough. - .IOMMU -The IOMMU has to be activated on the -xref:sysboot_edit_kernel_cmdline[kernel commandline]. +First, you have to enable IOMMU support in your BIOS/UEFI. Usually the +corresponding setting is called `IOMMU` or `VT-d`, but you should find the exact +option name in the manual of your motherboard. -The command line parameters are: +For Intel CPUs, you may also need to enable the IOMMU on the +xref:sysboot_edit_kernel_cmdline[kernel command line] for older (pre-5.15) +kernels by adding: -* for Intel CPUs: -+ ---- intel_iommu=on ---- -* for AMD CPUs: -+ + +For AMD CPUs it should be enabled automatically. + +.IOMMU Passthrough Mode + +If your hardware supports IOMMU passthrough mode, enabling this mode might +increase performance. +This is because VMs then bypass the (default) DMA translation normally +performed by the hypervisor and instead pass DMA requests directly to the +hardware IOMMU. To enable this option, add: + ---- - amd_iommu=on + iommu=pt ---- +to the xref:sysboot_edit_kernel_cmdline[kernel command line]. .Kernel Modules @@ -149,7 +159,7 @@ PCI(e) card, for example a GPU or a network card. Host Configuration ^^^^^^^^^^^^^^^^^^ -In this case, the host cannot use the card. There are two methods to achieve +In this case, the host must not use the card. 
There are two methods to achieve this: * pass the device IDs to the options of the 'vfio-pci' modules by adding @@ -162,7 +172,7 @@ to a .conf file in */etc/modprobe.d/* where `1234:5678` and `4321:8765` are the vendor and device IDs obtained by: + ---- -# lcpci -nn +# lspci -nn ---- * blacklist the driver completely on the host, ensuring that it is free to bind @@ -178,6 +188,23 @@ For both methods you need to xref:qm_pci_passthrough_update_initramfs[update the `initramfs`] again and reboot after that. +.Verify Configuration + +To check if your changes were successful, you can use + +---- +# lspci -nnk +---- + +and check your device entry. If it says + +---- +Kernel driver in use: vfio-pci +---- + +or the 'in use' line is missing entirely, the device is ready to be used for +passthrough. + [[qm_pci_passthrough_vm_config]] VM Configuration ^^^^^^^^^^^^^^^^ @@ -189,7 +216,9 @@ configuration, for example by executing: ---- If your device has multiple functions (e.g., ``00:02.0`' and ``00:02.1`' ), -you can pass them through all together with the shortened syntax ``00:02`' +you can pass them through all together with the shortened syntax ``00:02`'. +This is equivalent to checking the ``All Functions`' checkbox in the +web-interface. There are some options which may be necessary, depending on the device and guest OS: @@ -215,6 +244,24 @@ An example of PCIe passthrough with a GPU set to primary: # qm set VMID -hostpci0 02:00,pcie=on,x-vga=on ---- +.PCI ID overrides + +You can override the PCI vendor ID, device ID, and subsystem IDs that will be +seen by the guest. This is useful if your device is a variant with an ID that +your guest's drivers don't recognize, but you want to force those drivers to be +loaded anyway (e.g. if you know your device shares the same chipset as a +supported variant). + +The available options are `vendor-id`, `device-id`, `sub-vendor-id`, and +`sub-device-id`. You can set any or all of these to override your device's +default IDs. 
+ +For example: + +---- +# qm set VMID -hostpci0 02:00,device-id=0x10f6,sub-vendor-id=0x0000 +---- + Other considerations ^^^^^^^^^^^^^^^^^^^^ @@ -292,7 +339,7 @@ Mediated Devices (vGPU, GVT-g) Mediated devices are another method to reuse features and performance from physical hardware for virtualized hardware. These are most commonly found in -virtualized GPU setups such as Intels GVT-g and Nvidias vGPUs used in their +virtualized GPU setups such as Intel's GVT-g and NVIDIA's vGPUs used in their GRID technology. With this, a physical card is able to create virtual cards, similar to SR-IOV. @@ -307,7 +354,7 @@ In general your card's driver must support that feature, otherwise it will not work. So please refer to your vendor for compatible drivers and how to configure them. -Intels drivers for GVT-g are integrated in the Kernel and should work +Intel's drivers for GVT-g are integrated in the kernel and should work with 5th, 6th and 7th generation Intel Core Processors, as well as E3 v4, E3 v5 and E3 v6 Xeon Processors.
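
As a small illustration of the ID-handling steps this diff adds (the sample `lspci` output line below is made up for the example; on a real host you would use the actual output of `lspci -nn`), the `vendor:device` pair needed for the `vfio-pci` options line can be pulled out like this:

```shell
# Hypothetical sample line, as `lspci -nn` might print it for a discrete GPU.
# On a real system, pipe the actual command output instead of this variable.
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [10de:1c03] (rev a1)'

# The vendor:device pair is the bracketed "xxxx:xxxx" token. The PCI class
# code ("[0300]") contains no colon, so it does not match this pattern.
ids=$(printf '%s\n' "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')

echo "$ids"   # prints 10de:1c03
```

The resulting pair is what would go into a line such as `options vfio-pci ids=10de:1c03` in a `.conf` file under */etc/modprobe.d/*, as described in the hunk above.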