PCI(e) passthrough is a mechanism to give a virtual machine control over
a PCI device from the host. This can have some advantages over using
virtualized hardware, for example lower latency, higher performance, or more
features (e.g., offloading).

But, if you pass through a device to a virtual machine, you cannot use that
device anymore on the host or in any other VM.

Note that, while PCI passthrough is available for i440fx and q35 machines, PCIe
passthrough is only available on q35 machines. This does not mean that
PCIe capable devices that are passed through as PCI devices will only run at
PCI speeds. Passing through devices as PCIe just sets a flag for the guest to
tell it that the device is a PCIe device instead of a "really fast legacy PCI
device". Some guest applications benefit from this.

General Requirements
~~~~~~~~~~~~~~~~~~~~

Since passthrough is performed on real hardware, it needs to fulfill some
requirements. A brief overview of these requirements is given below; for more
information on specific devices, see
https://pve.proxmox.com/wiki/PCI_Passthrough[PCI Passthrough Examples].

Hardware
^^^^^^^^

Your hardware needs to support `IOMMU` (*I*/*O* **M**emory **M**anagement
**U**nit) interrupt remapping; this includes the CPU and the motherboard.

Generally, Intel systems with VT-d and AMD systems with AMD-Vi support this.
But it is not guaranteed that everything will work out of the box, due
to bad hardware implementations and missing or low-quality drivers.

Further, server-grade hardware often has better support than consumer-grade
hardware, but even then, many modern systems can support this.

Please refer to your hardware vendor to check if they support this feature
under Linux for your specific setup.

Determining PCI Card Address
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The easiest way is to use the GUI to add a device of type "Host PCI" in the VM's
hardware tab. Alternatively, you can use the command line.

You can locate your card using:
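
----
# lspci
----
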
Configuration
^^^^^^^^^^^^^

Once you ensured that your hardware supports passthrough, you will need to do
some configuration to enable PCI(e) passthrough.

.IOMMU

First, you will have to enable IOMMU support in your BIOS/UEFI. Usually the
corresponding setting is called `IOMMU` or `VT-d`, but you should find the exact
option name in the manual of your motherboard.

For Intel CPUs, you also need to enable the IOMMU on the
xref:sysboot_edit_kernel_cmdline[kernel command line] by adding:
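
----
 intel_iommu=on
----
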
For AMD CPUs it should be enabled automatically.

.IOMMU Passthrough Mode

If your hardware supports IOMMU passthrough mode, enabling this mode might
increase performance.
This is because VMs then bypass the (default) DMA translation normally
performed by the hypervisor and instead pass DMA requests directly to the
hardware IOMMU. To enable passthrough mode, add:
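
----
 iommu=pt
----
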
to the xref:sysboot_edit_kernel_cmdline[kernel command line].

.Kernel Modules

//TODO: remove `vfio_virqfd` stuff with eol of pve 7
You have to make sure the following modules are loaded. This can be achieved by
adding them to `/etc/modules`. In kernels newer than 6.2 ({pve} 8 and onward),
the 'vfio_virqfd' module is part of the 'vfio' module, therefore loading
'vfio_virqfd' in {pve} 8 and newer is not necessary.

----
 vfio
 vfio_iommu_type1
 vfio_pci
 vfio_virqfd #not needed if on kernel 6.2 or newer
----

[[qm_pci_passthrough_update_initramfs]]
After changing anything module-related, you need to refresh your
`initramfs`. On {pve} this can be done by executing:

----
# update-initramfs -u -k all
----

To check if the modules are being loaded, the output of
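
----
# lsmod | grep vfio
----
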
should include the four modules from above.

.Finish Configuration

Finally, reboot to bring the changes into effect and check that IOMMU is indeed
enabled. The output of

----
# dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
----

should display that `IOMMU`, `Directed I/O` or `Interrupt Remapping` is
enabled; depending on hardware and kernel, the exact message can vary.

For notes on how to troubleshoot or verify if IOMMU is working as intended, please
see the https://pve.proxmox.com/wiki/PCI_Passthrough#Verifying_IOMMU_parameters[Verifying IOMMU Parameters]
section in our wiki.

It is also important that the device(s) you want to pass through
are in a *separate* `IOMMU` group. This can be checked with a call to the {pve}
API:

----
# pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""
----

It is okay if the device is in an `IOMMU` group together with its functions
(e.g. a GPU with the HDMI Audio device) or with its root port or PCI(e) bridge.

.PCI(e) slots
[NOTE]
====
Some platforms handle their physical PCI(e) slots differently. So, sometimes
it can help to put the card in another PCI(e) slot, if you do not get the
desired `IOMMU` group separation.
====

.Unsafe interrupts
[NOTE]
====
For some platforms, it may be necessary to allow unsafe interrupts.
To do so, add the following line to a file ending with `.conf' in
*/etc/modprobe.d/*:

----
 options vfio_iommu_type1 allow_unsafe_interrupts=1
----

Please be aware that this option can make your system unstable.
====

GPU Passthrough Notes
^^^^^^^^^^^^^^^^^^^^^

It is not possible to display the frame buffer of the GPU via NoVNC or SPICE on
the {pve} web interface.

When passing through a whole GPU or a vGPU and graphic output is wanted, one
has to either physically connect a monitor to the card, or configure remote
desktop software (for example, VNC or RDP) inside the guest.

If you want to use the GPU as a hardware accelerator, for example, for
programs using OpenCL or CUDA, this is not required.

Host Device Passthrough
~~~~~~~~~~~~~~~~~~~~~~~

The most used variant of PCI(e) passthrough is to pass through a whole
PCI(e) card, for example a GPU or a network card.

Host Configuration
^^^^^^^^^^^^^^^^^^

{pve} tries to automatically make the PCI(e) device unavailable for the host.
However, if this doesn't work, there are two things that can be done:

* pass the device IDs to the options of the 'vfio-pci' module by adding

----
 options vfio-pci ids=1234:5678,4321:8765
----

to a .conf file in */etc/modprobe.d/*, where `1234:5678` and `4321:8765` are
the vendor and device IDs obtained by:
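
----
# lspci -nn
----
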
* blacklist the driver on the host completely, ensuring that it is free to bind
for passthrough, with
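
----
 blacklist DRIVERNAME
----
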
in a .conf file in */etc/modprobe.d/*.

To find the driver name, execute `lspci -k`. For example:

----
# lspci -k | grep -A 3 "VGA"
----

will output something similar to:

----
01:00.0 VGA compatible controller: NVIDIA Corporation GP108 [GeForce GT 1030] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] GP108 [GeForce GT 1030]
        Kernel driver in use: <some-module>
        Kernel modules: <some-module>
----

Now we can blacklist the drivers by writing them into a .conf file:

----
echo "blacklist <some-module>" >> /etc/modprobe.d/blacklist.conf
----

For both methods you need to
xref:qm_pci_passthrough_update_initramfs[update the `initramfs`] again and
reboot after that.

Should this not work, you might need to set a soft dependency to load the GPU
modules before loading 'vfio-pci'. This can be done with the 'softdep' flag; see
also the man page of 'modprobe.d' for more information.

For example, if you are using drivers named <some-module>:

----
# echo "softdep <some-module> pre: vfio-pci" >> /etc/modprobe.d/<some-module>.conf
----

.Verify Configuration

To check if your changes were successful, you can use
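
----
# lspci -nnk
----
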
and check your device entry. If it says

----
Kernel driver in use: vfio-pci
----

or the 'in use' line is missing entirely, the device is ready to be used for
passthrough.

[[qm_pci_passthrough_vm_config]]
VM Configuration
^^^^^^^^^^^^^^^^

When passing through a GPU, the best compatibility is reached when using
'q35' as machine type, 'OVMF' ('UEFI' for VMs) instead of SeaBIOS, and PCIe
instead of PCI. Note that if you want to use 'OVMF' for GPU passthrough, the
GPU needs to have a UEFI-capable ROM, otherwise use SeaBIOS instead. To check if
the ROM is UEFI capable, see the
https://pve.proxmox.com/wiki/PCI_Passthrough#How_to_know_if_a_graphics_card_is_UEFI_.28OVMF.29_compatible[PCI Passthrough Examples]
wiki.

Furthermore, when using OVMF, it may be possible to disable VGA arbitration,
reducing the amount of legacy code that needs to run during boot. To disable
VGA arbitration:

----
 echo "options vfio-pci ids=<vendor-id>,<device-id> disable_vga=1" > /etc/modprobe.d/vfio.conf
----

replacing the <vendor-id> and <device-id> with the ones obtained from:
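
----
# lspci -nn
----
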
PCI devices can be added in the web interface in the hardware section of the VM.
Alternatively, you can use the command line; set the *hostpciX* option in the VM
configuration, for example by executing:

----
# qm set VMID -hostpci0 00:02.0
----

or by adding a line to the VM configuration file:
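
----
 hostpci0: 00:02.0
----
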
If your device has multiple functions (e.g., ``00:02.0`' and ``00:02.1`'),
you can pass them all through together with the shortened syntax ``00:02`'.
This is equivalent to checking the ``All Functions`' checkbox in the GUI.
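
For example, the following (a minimal sketch reusing the VMID placeholder from
above) passes through all functions of the device at ``00:02`' in one go:

----
# qm set VMID -hostpci0 00:02
----
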
There are some options which may be necessary, depending on the device
and guest OS:

* *x-vga=on|off* marks the PCI(e) device as the primary GPU of the VM.
With this enabled the *vga* configuration option will be ignored.

* *pcie=on|off* tells {pve} to use a PCIe or PCI port. Some guest/device
combinations require PCIe rather than PCI. PCIe is only available for 'q35'
machine types.

* *rombar=on|off* makes the firmware ROM visible for the guest. Default is on.
Some PCI(e) devices need this disabled.

* *romfile=<path>* is an optional path to a ROM file for the device to use.
This is a relative path under */usr/share/kvm/*.

.Example

An example of PCIe passthrough with a GPU set to primary:

----
# qm set VMID -hostpci0 02:00,pcie=on,x-vga=on
----

.PCI ID overrides

You can override the PCI vendor ID, device ID, and subsystem IDs that will be
seen by the guest. This is useful if your device is a variant with an ID that
your guest's drivers don't recognize, but you want to force those drivers to be
loaded anyway (e.g. if you know your device shares the same chipset as a
supported variant).

The available options are `vendor-id`, `device-id`, `sub-vendor-id`, and
`sub-device-id`. You can set any or all of these to override your device's
default IDs.

For example:

----
# qm set VMID -hostpci0 02:00,device-id=0x10f6,sub-vendor-id=0x0000
----

SR-IOV
~~~~~~

Another variant for passing through PCI(e) devices is to use the hardware
virtualization features of your devices, if available.

.Enabling SR-IOV
[NOTE]
====
To use SR-IOV, platform support is especially important. It may be necessary
to enable this feature in the BIOS/UEFI first, or to use a specific PCI(e) port
for it to work. If in doubt, consult the manual of the platform or contact its
vendor.
====

'SR-IOV' (**S**ingle-**R**oot **I**nput/**O**utput **V**irtualization) enables
a single device to provide multiple 'VF' (**V**irtual **F**unctions) to the
system. Each of those VFs can be used in a different VM, with full hardware
features and also better performance and lower latency than software
virtualized devices.

Currently, the most common use case for this is NICs (**N**etwork
**I**nterface **C**ards) with SR-IOV support, which can provide multiple VFs per
physical port. This allows features such as checksum offloading to be used
inside a VM, reducing the (host) CPU overhead.

Host Configuration
^^^^^^^^^^^^^^^^^^

Generally, there are two methods for enabling virtual functions on a device:

* Sometimes there is an option for the driver module, e.g., for some
Intel drivers:
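
----
 max_vfs=4
----
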
which could be put into a file with a '.conf' ending under */etc/modprobe.d/*.
(Do not forget to update your initramfs after that.)

Please refer to your driver module documentation for the exact
parameters and options.

* The second, more generic, approach is using `sysfs`.
If the device and driver support this, you can change the number of VFs on
the fly. For example, to set up 4 VFs on device 0000:01:00.0, execute:

----
# echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs
----

To make this change persistent, you can use the `sysfsutils` Debian package.
After installation, configure it via */etc/sysfs.conf* or a `FILE.conf' in
*/etc/sysfs.d/*.
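
For example, a minimal sketch of such an entry, assuming the same device and VF
count as above (paths in `sysfsutils` configuration files are given relative to
*/sys*):

----
bus/pci/devices/0000:01:00.0/sriov_numvfs = 4
----
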
VM Configuration
^^^^^^^^^^^^^^^^

After creating VFs, you should see them as separate PCI(e) devices when
outputting them with `lspci`. Get their ID and pass them through like a
xref:qm_pci_passthrough_vm_config[normal PCI(e) device].
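
For example, assuming `lspci` shows a newly created VF at the hypothetical
address `01:00.4`, it could be passed through like this:

----
# qm set VMID -hostpci0 01:00.4
----
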
Mediated Devices (vGPU, GVT-g)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Mediated devices are another method to reuse features and performance from
physical hardware for virtualized hardware. These are most commonly found in
virtualized GPU setups, such as Intel's GVT-g and NVIDIA's vGPUs used in their
GRID technology.

With this, a physical card is able to create virtual cards, similar to SR-IOV.
The difference is that mediated devices do not appear as PCI(e) devices in the
host, and as such are only suited for use in virtual machines.

Host Configuration
^^^^^^^^^^^^^^^^^^

In general, your card's driver must support that feature, otherwise it will
not work. So please refer to your vendor for compatible drivers and how to
configure them.

Intel's drivers for GVT-g are integrated in the kernel and should work
with 5th, 6th and 7th generation Intel Core processors, as well as E3 v4, E3
v5 and E3 v6 Xeon processors.

To enable it for Intel Graphics, you have to make sure to load the module
'kvmgt' (for example via `/etc/modules`) and to enable it on the
xref:sysboot_edit_kernel_cmdline[kernel command line] by adding the following parameter:
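
----
 i915.enable_gvt=1
----
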
After that, remember to
xref:qm_pci_passthrough_update_initramfs[update the `initramfs`]
and reboot your host.

VM Configuration
^^^^^^^^^^^^^^^^

To use a mediated device, simply specify the `mdev` property on a `hostpciX`
VM configuration option.

You can get the supported devices via 'sysfs'. For example, to list the
supported types for the device '0000:00:02.0', you would simply execute:

----
# ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
----

Each entry is a directory which contains the following important files:

* 'available_instances' contains the number of still available instances of
this type; each 'mdev' in use by a VM reduces this.
* 'description' contains a short description of the capabilities of the type.
* 'create' is the endpoint to create such a device. {pve} does this
automatically for you, if a 'hostpciX' option with `mdev` is configured.
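
These files can simply be read with standard tools; for example, to check how
many instances of a type such as 'i915-GVTg_V5_4' (the type used in the example
below) are still available:

----
# cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/available_instances
----
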
Example configuration with an `Intel GVT-g vGPU` (`Intel Skylake 6700k`):

----
# qm set VMID -hostpci0 00:02.0,mdev=i915-GVTg_V5_4
----

With this set, {pve} automatically creates such a device on VM start, and
cleans it up again when the VM stops.

Use in Clusters
~~~~~~~~~~~~~~~

It is also possible to map devices on a cluster level, so that they can be
properly used with HA, hardware changes are detected, and non-root users
can configure them. See xref:resource_mapping[Resource Mapping]
for details on that.
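
For example, assuming a cluster-wide mapping named `my-gpu` has already been
created (a sketch; the mapping name is hypothetical), a VM can then reference
the mapping instead of a raw PCI address:

----
# qm set VMID -hostpci0 mapping=my-gpu
----
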
See Also
~~~~~~~~

* link:/wiki/Pci_passthrough[PCI Passthrough Examples]