[[qm_pci_passthrough]]
PCI(e) Passthrough
------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

PCI(e) passthrough is a mechanism to give a virtual machine control over
a PCI device from the host. This can have some advantages over using
virtualized hardware, for example lower latency, higher performance, or more
features (e.g., offloading).

But, if you pass through a device to a virtual machine, you cannot use that
device anymore on the host or in any other VM.

General Requirements
~~~~~~~~~~~~~~~~~~~~

Since passthrough is a feature which also needs hardware support, there are
some requirements to check and preparations to be done to make it work.


Hardware
^^^^^^^^
Your hardware needs to support `IOMMU` (*I*/*O* **M**emory **M**anagement
**U**nit) interrupt remapping; this includes both the CPU and the mainboard.

Generally, Intel systems with VT-d and AMD systems with AMD-Vi support this.
But it is not guaranteed that everything will work out of the box, due
to bad hardware implementations and missing or low quality drivers.

Further, server grade hardware often has better support than consumer grade
hardware, but even then, many modern systems can support this.

Please refer to your hardware vendor to check if they support this feature
under Linux for your specific setup.


Configuration
^^^^^^^^^^^^^

Once you have ensured that your hardware supports passthrough, you will need
to do some configuration to enable PCI(e) passthrough.


.IOMMU

The IOMMU has to be activated on the
xref:sysboot_edit_kernel_cmdline[kernel commandline].

The command line parameters are:

* for Intel CPUs:
+
----
 intel_iommu=on
----
* for AMD CPUs:
+
----
 amd_iommu=on
----
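Where exactly to add these parameters depends on the bootloader (see the
xref above). As a minimal sketch, assuming a host that boots via GRUB, the
parameter goes inside the quotes of `GRUB_CMDLINE_LINUX_DEFAULT`; the snippet
below demonstrates this on a scratch copy, while on a real host you would edit
*/etc/default/grub* directly and then run `update-grub`:

----
# shown on a scratch copy for illustration; on a real host, edit
# /etc/default/grub directly and then run update-grub afterwards
grub_file=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$grub_file"
# append intel_iommu=on inside the existing double quotes
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 intel_iommu=on"/' "$grub_file"
cat "$grub_file"
----

Hosts using systemd-boot (e.g. ZFS on root) keep their command line elsewhere,
so this sed does not apply there.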


.Kernel Modules

You have to make sure the following modules are loaded. This can be achieved by
adding them to `/etc/modules`:

----
 vfio
 vfio_iommu_type1
 vfio_pci
 vfio_virqfd
----
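Adding the modules can also be scripted; the following minimal sketch appends
each module to the file only if it is not already listed (demonstrated on a
scratch file here, so it is safe to try, while on the host the target would be
`/etc/modules` itself):

----
# demonstrated on a scratch file; on the host, point modules_file at /etc/modules
modules_file=$(mktemp)
echo 'vfio' > "$modules_file"    # pretend one entry already exists
for mod in vfio vfio_iommu_type1 vfio_pci vfio_virqfd; do
    # -x matches whole lines only, -F disables regex, so partial matches do not count
    grep -qxF "$mod" "$modules_file" || echo "$mod" >> "$modules_file"
done
cat "$modules_file"
----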

[[qm_pci_passthrough_update_initramfs]]
After changing anything module related, you need to refresh your
`initramfs`. On {pve} this can be done by executing:

----
# update-initramfs -u -k all
----
84
39d84f28 85.Finish Configuration
49f20f1b
TL
86
87Finally reboot to bring the changes into effect and check that it is indeed
88enabled.
6e4c46c4
DC
89
90----
5e235b99 91# dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
6e4c46c4
DC
92----
93
49f20f1b
TL
94should display that `IOMMU`, `Directed I/O` or `Interrupt Remapping` is
95enabled, depending on hardware and kernel the exact message can vary.

It is also important that the device(s) you want to pass through
are in a *separate* `IOMMU` group. This can be checked with:

----
# find /sys/kernel/iommu_groups/ -type l
----

It is okay if the device is in an `IOMMU` group together with its functions
(e.g. a GPU with the HDMI Audio device) or with its root port or PCI(e) bridge.
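The raw symlink list can be hard to read; a small sketch like the following
reformats such paths into a `group: device` listing (shown here on canned
sample paths standing in for real `find` output; the `fmt_groups` helper name
is made up for this example):

----
# hypothetical helper: turn iommu_groups symlink paths into "group N: device"
fmt_groups() {
    sed -n 's|.*/iommu_groups/\([0-9][0-9]*\)/devices/\(.*\)|group \1: \2|p'
}
# canned sample paths; on the host, pipe the find command above into fmt_groups
printf '%s\n' \
    /sys/kernel/iommu_groups/1/devices/0000:00:02.0 \
    /sys/kernel/iommu_groups/13/devices/0000:01:00.0 \
    /sys/kernel/iommu_groups/13/devices/0000:01:00.1 | fmt_groups
----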
106
107.PCI(e) slots
108[NOTE]
109====
49f20f1b
TL
110Some platforms handle their physical PCI(e) slots differently. So, sometimes
111it can help to put the card in a another PCI(e) slot, if you do not get the
112desired `IOMMU` group separation.
6e4c46c4
DC
113====
114
115.Unsafe interrupts
116[NOTE]
117====
118For some platforms, it may be necessary to allow unsafe interrupts.
49f20f1b
TL
119For this add the following line in a file ending with `.conf' file in
120*/etc/modprobe.d/*:
6e4c46c4 121
49f20f1b 122----
6e4c46c4 123 options vfio_iommu_type1 allow_unsafe_interrupts=1
49f20f1b 124----
6e4c46c4
DC
125
126Please be aware that this option can make your system unstable.
127====
128
GPU Passthrough Notes
^^^^^^^^^^^^^^^^^^^^^

It is not possible to display the frame buffer of the GPU via NoVNC or SPICE on
the {pve} web interface.

When passing through a whole GPU or a vGPU and graphic output is wanted, one
has to either physically connect a monitor to the card, or configure a remote
desktop software (for example, VNC or RDP) inside the guest.

If you want to use the GPU as a hardware accelerator, for example, for
programs using OpenCL or CUDA, this is not required.

Host Device Passthrough
~~~~~~~~~~~~~~~~~~~~~~~

The most used variant of PCI(e) passthrough is to pass through a whole
PCI(e) card, for example a GPU or a network card.


Host Configuration
^^^^^^^^^^^^^^^^^^

In this case, the host must not use the card. There are two methods to achieve
this:

* pass the device IDs to the options of the 'vfio-pci' module by adding
+
----
 options vfio-pci ids=1234:5678,4321:8765
----
+
to a .conf file in */etc/modprobe.d/* where `1234:5678` and `4321:8765` are
the vendor and device IDs obtained by:
+
----
# lspci -nn
----

* blacklist the driver completely on the host, ensuring that it is free to bind
for passthrough, with
+
----
 blacklist DRIVERNAME
----
+
in a .conf file in */etc/modprobe.d/*.

For both methods you need to
xref:qm_pci_passthrough_update_initramfs[update the `initramfs`] again and
reboot after that.
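To pull the bracketed vendor:device pair out of the `lspci -nn` output, a
helper along these lines can be used (a sketch demonstrated on a canned
`lspci -nn` line; the `pci_ids` helper name is made up for this example):

----
# hypothetical helper: print the vendor:device ID pair from `lspci -nn` lines
pci_ids() {
    sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p'
}
# canned sample line; on the host: lspci -nn | pci_ids
echo '01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [10de:1b81] (rev a1)' | pci_ids
----

The class code in the first bracket pair (`[0300]`) does not match the
`xxxx:xxxx` pattern, so only the vendor:device pair is printed.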

[[qm_pci_passthrough_vm_config]]
VM Configuration
^^^^^^^^^^^^^^^^

To pass through the device you need to set the *hostpciX* option in the VM
configuration, for example by executing:

----
# qm set VMID -hostpci0 00:02.0
----

If your device has multiple functions (e.g., `00:02.0` and `00:02.1`),
you can pass them all through together with the shortened syntax `00:02`.

There are some options which may be necessary, depending on the device
and guest OS:

* *x-vga=on|off* marks the PCI(e) device as the primary GPU of the VM.
With this enabled the *vga* configuration option will be ignored.

* *pcie=on|off* tells {pve} to use a PCIe or PCI port. Some guest/device
combinations require PCIe rather than PCI. PCIe is only available for 'q35'
machine types.

* *rombar=on|off* makes the firmware ROM visible for the guest. Default is on.
Some PCI(e) devices need this disabled.

* *romfile=<path>* is an optional path to a ROM file for the device to use.
This is a relative path under */usr/share/kvm/*.

.Example

An example of PCIe passthrough with a GPU set to primary:

----
# qm set VMID -hostpci0 02:00,pcie=on,x-vga=on
----


Other considerations
^^^^^^^^^^^^^^^^^^^^

When passing through a GPU, the best compatibility is reached when using
'q35' as machine type, 'OVMF' ('EFI' for VMs) instead of SeaBIOS and PCIe
instead of PCI. Note that if you want to use 'OVMF' for GPU passthrough, the
GPU needs to have an EFI capable ROM, otherwise use SeaBIOS instead.

SR-IOV
~~~~~~

Another variant for passing through PCI(e) devices is to use the hardware
virtualization features of your devices, if available.

'SR-IOV' (**S**ingle-**R**oot **I**nput/**O**utput **V**irtualization) enables
a single device to provide multiple 'VF' (**V**irtual **F**unctions) to the
system. Each of those VFs can be used in a different VM, with full hardware
features and also better performance and lower latency than software
virtualized devices.

Currently, the most common use case for this are NICs (**N**etwork
**I**nterface **C**ards) with SR-IOV support, which can provide multiple VFs
per physical port. This allows features such as checksum offloading to be used
inside a VM, reducing the (host) CPU overhead.


Host Configuration
^^^^^^^^^^^^^^^^^^

Generally, there are two methods for enabling virtual functions on a device.

* sometimes there is an option for the driver module, e.g. for some
Intel drivers
+
----
 max_vfs=4
----
+
which could be put in a file with a '.conf' ending under */etc/modprobe.d/*.
(Do not forget to update your initramfs after that.)
+
Please refer to your driver module documentation for the exact
parameters and options.

* The second, more generic, approach is using `sysfs`.
If a device and its driver support this, you can change the number of VFs on
the fly. For example, to set up 4 VFs on device 0000:01:00.0 execute:
+
----
# echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs
----
+
To make this change persistent you can use the `sysfsutils` Debian package.
After installation configure it via */etc/sysfs.conf* or a `FILE.conf` in
*/etc/sysfs.d/*.
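For the example above, the persistent entry could then look like this (a
sketch of the `sysfsutils` syntax: an attribute path relative to */sys*,
followed by the value to write):

----
bus/pci/devices/0000:01:00.0/sriov_numvfs = 4
----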

VM Configuration
^^^^^^^^^^^^^^^^

After creating VFs, you should see them as separate PCI(e) devices when
outputting them with `lspci`. Get their ID and pass them through like a
xref:qm_pci_passthrough_vm_config[normal PCI(e) device].
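On many SR-IOV capable Intel NICs, for instance, the VFs show up with
"Virtual Function" in their `lspci` description. The exact naming varies by
vendor, so treat the following as a sketch on canned output:

----
# canned lspci output for illustration; on the host: lspci | grep -i 'virtual function'
printf '%s\n' \
    '01:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+' \
    '01:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function' \
    '01:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function' \
    | grep -ci 'virtual function'
----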

Other considerations
^^^^^^^^^^^^^^^^^^^^

For this feature, platform support is especially important. It may be necessary
to enable this feature in the BIOS/EFI first, or to use a specific PCI(e) port
for it to work. If in doubt, consult the manual of the platform or contact its
vendor.

Mediated Devices (vGPU, GVT-g)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Mediated devices are another method to reuse features and performance from
physical hardware for virtualized hardware. These are most commonly found in
virtualized GPU setups, such as Intel's GVT-g and Nvidia's vGPUs used in their
GRID technology.

With this, a physical card is able to create virtual cards, similar to SR-IOV.
The difference is that mediated devices do not appear as PCI(e) devices in the
host, and are thus only suited for use in virtual machines.


Host Configuration
^^^^^^^^^^^^^^^^^^

In general your card's driver must support that feature, otherwise it will
not work. So please refer to your vendor for compatible drivers and how to
configure them.

Intel's drivers for GVT-g are integrated in the kernel and should work
with 5th, 6th and 7th generation Intel Core Processors, as well as E3 v4, E3
v5 and E3 v6 Xeon Processors.

To enable it for Intel Graphics, you have to make sure to load the module
'kvmgt' (for example via `/etc/modules`) and to add the following parameter
to the xref:sysboot_edit_kernel_cmdline[kernel commandline]:

----
 i915.enable_gvt=1
----

After that remember to
xref:qm_pci_passthrough_update_initramfs[update the `initramfs`],
and reboot your host.

VM Configuration
^^^^^^^^^^^^^^^^

To use a mediated device, simply specify the `mdev` property on a `hostpciX`
VM configuration option.

You can get the supported devices via 'sysfs'. For example, to list the
supported types for the device '0000:00:02.0' you would simply execute:

----
# ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
----

Each entry is a directory which contains the following important files:

* 'available_instances' contains the number of still available instances of
this type; each 'mdev' used in a VM reduces this.
* 'description' contains a short description about the capabilities of the type
* 'create' is the endpoint to create such a device, {pve} does this
automatically for you, if a 'hostpciX' option with `mdev` is configured.
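Putting those files together, the remaining capacity per type can be listed
with a small loop like the following (a sketch, demonstrated on a scratch
directory tree that mimics the `mdev_supported_types` layout; the type names
are examples):

----
# scratch tree mimicking .../mdev_supported_types for illustration only
types_dir=$(mktemp -d)
mkdir -p "$types_dir/i915-GVTg_V5_4" "$types_dir/i915-GVTg_V5_8"
echo 2 > "$types_dir/i915-GVTg_V5_4/available_instances"
echo 4 > "$types_dir/i915-GVTg_V5_8/available_instances"
# on a host, point types_dir at the real mdev_supported_types directory instead
for t in "$types_dir"/*; do
    printf '%s: %s available\n' "$(basename "$t")" "$(cat "$t/available_instances")"
done
----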

Example configuration with an `Intel GVT-g vGPU` (`Intel Skylake 6700k`):

----
# qm set VMID -hostpci0 00:02.0,mdev=i915-GVTg_V5_4
----

With this set, {pve} automatically creates such a device on VM start, and
cleans it up again when the VM stops.

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Pci_passthrough[PCI Passthrough Examples]

endif::wiki[]