[[qm_pci_passthrough]]
PCI(e) Passthrough
------------------

PCI(e) passthrough is a mechanism to give a virtual machine control over
a PCI device from the host. This can have some advantages over using
virtualized hardware, for example lower latency, higher performance, or more
features (e.g., offloading).

But, if you pass through a device to a virtual machine, you cannot use that
device anymore on the host or in any other VM.

General Requirements
~~~~~~~~~~~~~~~~~~~~

Since passthrough is a feature which also needs hardware support, there are
some requirements to check and preparations to be done to make it work.


Hardware
^^^^^^^^
Your hardware needs to support `IOMMU` (*I*/*O* **M**emory **M**anagement
**U**nit) interrupt remapping; this includes the CPU and the mainboard.

Generally, Intel systems with VT-d and AMD systems with AMD-Vi support this.
But it is not guaranteed that everything will work out of the box, due
to bad hardware implementations and missing or low-quality drivers.

Further, server-grade hardware often has better support than consumer-grade
hardware, but even then, many modern systems can support this.

Please refer to your hardware vendor to check if they support this feature
under Linux for your specific setup.


Configuration
^^^^^^^^^^^^^

Once you have ensured that your hardware supports passthrough, you will need
to do some configuration to enable PCI(e) passthrough.


.IOMMU

The IOMMU has to be activated on the kernel command line. The easiest way is
to enable it through GRUB. Edit `/etc/default/grub` and add the following to
the 'GRUB_CMDLINE_LINUX_DEFAULT' variable (a complete example follows the
list):

* for Intel CPUs:
+
----
intel_iommu=on
----
* for AMD CPUs:
+
----
amd_iommu=on
----
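
For example, on an Intel system the complete variable could then look like
this (a sketch; keep any options that were already present in your file):

----
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
----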

[[qm_pci_passthrough_update_grub]]
To bring this change into effect, make sure you run:

----
# update-grub
----

.Kernel Modules

You have to make sure the following modules are loaded. This can be achieved by
adding them to `/etc/modules`:

----
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
----

[[qm_pci_passthrough_update_initramfs]]
After changing anything module-related, you need to refresh your
`initramfs`. On {pve} this can be done by executing:

----
# update-initramfs -u -k all
----

.Finish Configuration

Finally, reboot to bring the changes into effect and check that it is indeed
enabled.

----
# dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
----

should display that `IOMMU`, `Directed I/O` or `Interrupt Remapping` is
enabled. Depending on hardware and kernel, the exact message can vary.
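
For example, on an Intel system a line like the following would indicate
success (an illustrative sample; the exact wording varies between platforms
and kernel versions):

----
DMAR: IOMMU enabled
----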

It is also important that the device(s) you want to pass through
are in a *separate* `IOMMU` group. This can be checked with:

----
# find /sys/kernel/iommu_groups/ -type l
----

It is okay if the device is in an `IOMMU` group together with its functions
(e.g. a GPU with the HDMI Audio device) or with its root port or PCI(e) bridge.
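
The command prints one symlink per device, grouped by `IOMMU` group number. A
hypothetical excerpt, where a GPU and its audio function share group 1:

----
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
----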

.PCI(e) slots
[NOTE]
====
Some platforms handle their physical PCI(e) slots differently. So, sometimes
it can help to put the card in another PCI(e) slot, if you do not get the
desired `IOMMU` group separation.
====

.Unsafe interrupts
[NOTE]
====
For some platforms, it may be necessary to allow unsafe interrupts.
To do so, add the following line to a file ending with `.conf` in
*/etc/modprobe.d/*:

----
options vfio_iommu_type1 allow_unsafe_interrupts=1
----

Please be aware that this option can make your system unstable.
====

GPU Passthrough Notes
^^^^^^^^^^^^^^^^^^^^^

It is not possible to display the frame buffer of the GPU via NoVNC or SPICE on
the {pve} web interface.

When passing through a whole GPU or a vGPU and graphic output is wanted, one
has to either physically connect a monitor to the card, or configure remote
desktop software (for example, VNC or RDP) inside the guest.

If you want to use the GPU as a hardware accelerator, for example, for
programs using OpenCL or CUDA, this is not required.

Host Device Passthrough
~~~~~~~~~~~~~~~~~~~~~~~

The most common variant of PCI(e) passthrough is to pass through a whole
PCI(e) card, for example a GPU or a network card.


Host Configuration
^^^^^^^^^^^^^^^^^^

In this case, the host cannot use the card. There are two methods to achieve
this:

* pass the device IDs to the options of the 'vfio-pci' module by adding
+
----
options vfio-pci ids=1234:5678,4321:8765
----
+
to a .conf file in */etc/modprobe.d/* where `1234:5678` and `4321:8765` are
the vendor and device IDs obtained by the following command (see the sample
output after this list):
+
----
# lspci -nn
----

* blacklist the driver completely on the host, ensuring that it is free to bind
for passthrough, with
+
----
blacklist DRIVERNAME
----
+
in a .conf file in */etc/modprobe.d/*.
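
A hypothetical line of `lspci -nn` output could look like this; the last
bracketed pair is the vendor and device ID to use with 'vfio-pci':

----
01:00.0 VGA compatible controller [0300]: Some Vendor Some Device [1234:5678]
----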

For both methods you need to
xref:qm_pci_passthrough_update_initramfs[update the `initramfs`] again and
reboot after that.

[[qm_pci_passthrough_vm_config]]
VM Configuration
^^^^^^^^^^^^^^^^
To pass through the device, you need to set the *hostpciX* option in the VM
configuration, for example by executing:

----
# qm set VMID -hostpci0 00:02.0
----

If your device has multiple functions (e.g., `00:02.0` and `00:02.1`),
you can pass them through all together with the shortened syntax `00:02`.
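
Following the command above, the shortened form would be:

----
# qm set VMID -hostpci0 00:02
----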

There are some options which may be necessary, depending on the device
and guest OS:

* *x-vga=on|off* marks the PCI(e) device as the primary GPU of the VM.
With this enabled the *vga* configuration option will be ignored.

* *pcie=on|off* tells {pve} to use a PCIe or PCI port. Some guest/device
combinations require PCIe rather than PCI. PCIe is only available for 'q35'
machine types.

* *rombar=on|off* makes the firmware ROM visible for the guest. Default is on.
Some PCI(e) devices need this disabled.

* *romfile=<path>* is an optional path to a ROM file for the device to use.
This is a relative path under */usr/share/kvm/*.

.Example

An example of PCIe passthrough with a GPU set to primary:

----
# qm set VMID -hostpci0 02:00,pcie=on,x-vga=on
----


Other considerations
^^^^^^^^^^^^^^^^^^^^

When passing through a GPU, the best compatibility is reached when using
'q35' as machine type, 'OVMF' ('EFI' for VMs) instead of SeaBIOS, and PCIe
instead of PCI. Note that if you want to use 'OVMF' for GPU passthrough, the
GPU needs to have an EFI-capable ROM; otherwise, use SeaBIOS instead.
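
A minimal sketch of setting these options on an existing VM (this assumes an
EFI disk for 'OVMF' has already been added to the VM):

----
# qm set VMID -machine q35 -bios ovmf
----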

SR-IOV
~~~~~~

Another variant for passing through PCI(e) devices is to use the hardware
virtualization features of your devices, if available.

'SR-IOV' (**S**ingle-**R**oot **I**nput/**O**utput **V**irtualization) enables
a single device to provide multiple 'VF' (**V**irtual **F**unctions) to the
system. Each of those 'VFs' can be used in a different VM, with full hardware
features and also better performance and lower latency than software
virtualized devices.

Currently, the most common use case for this are NICs (**N**etwork
**I**nterface **C**ards) with SR-IOV support, which can provide multiple VFs
per physical port. This allows features such as checksum offloading, etc. to
be used inside a VM, reducing the (host) CPU overhead.


Host Configuration
^^^^^^^^^^^^^^^^^^

Generally, there are two methods for enabling virtual functions on a device.

* sometimes there is an option for the driver module, e.g., for some
Intel drivers
+
----
max_vfs=4
----
+
which could be put in a file with a '.conf' ending under */etc/modprobe.d/*.
(Do not forget to update your initramfs after that.)
+
Please refer to your driver module documentation for the exact
parameters and options.

* The second, more generic, approach is using the `sysfs`.
If a device and driver support this, you can change the number of VFs on
the fly. For example, to set up 4 VFs on device 0000:01:00.0 execute:
+
----
# echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs
----
+
To make this change persistent you can use the `sysfsutils` Debian package.
After installation, configure it via */etc/sysfs.conf* or a `FILE.conf` in
*/etc/sysfs.d/* (a sketch of such a file follows this list).
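
For example, a hypothetical */etc/sysfs.d/sriov.conf* that makes the 4 VFs
from above persistent (the file name and device address are assumptions;
adapt them to your setup):

----
bus/pci/devices/0000:01:00.0/sriov_numvfs = 4
----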

VM Configuration
^^^^^^^^^^^^^^^^

After creating VFs, you should see them as separate PCI(e) devices when
listing them with `lspci`. Get their IDs and pass them through like a
xref:qm_pci_passthrough_vm_config[normal PCI(e) device].
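
A quick way to spot the new VFs (an illustrative command; the exact device
names depend on your NIC and driver):

----
# lspci -nn | grep "Virtual Function"
----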

Other considerations
^^^^^^^^^^^^^^^^^^^^

For this feature, platform support is especially important. It may be necessary
to enable this feature in the BIOS/EFI first, or to use a specific PCI(e) port
for it to work. If in doubt, consult the manual of the platform or contact its
vendor.

Mediated Devices (vGPU, GVT-g)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Mediated devices are another method to reuse features and performance from
physical hardware for virtualized hardware. These are most commonly found in
virtualized GPU setups, such as Intel's GVT-g and NVIDIA's vGPUs used in their
GRID technology.

With this, a physical card is able to create virtual cards, similar to SR-IOV.
The difference is that mediated devices do not appear as PCI(e) devices in the
host, and are as such only suited for use in virtual machines.


Host Configuration
^^^^^^^^^^^^^^^^^^

In general, your card's driver must support this feature, otherwise it will
not work. So please refer to your vendor for compatible drivers and how to
configure them.

Intel's drivers for GVT-g are integrated in the kernel and should work
with 5th, 6th and 7th generation Intel Core processors. Additionally, the E3
v4, E3 v5 and E3 v6 Xeon processors are supported.

To enable it for Intel Graphics, you have to make sure to load the module
'kvmgt' (for example via `/etc/modules`) and to enable it on the kernel
command line. For this you can edit `/etc/default/grub` and add the following
to the 'GRUB_CMDLINE_LINUX_DEFAULT' variable:

----
i915.enable_gvt=1
----
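
For the module part, it is usually enough to append a single line to
`/etc/modules` (a sketch; keep any modules the file already lists, such as
the `vfio` ones from above):

----
kvmgt
----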

After that remember to
xref:qm_pci_passthrough_update_initramfs[update the `initramfs`],
xref:qm_pci_passthrough_update_grub[update grub] and
reboot your host.

VM Configuration
^^^^^^^^^^^^^^^^

To use a mediated device, simply specify the `mdev` property in a `hostpciX`
VM configuration option.

You can get the supported devices via the 'sysfs'. For example, to list the
supported types for the device '0000:00:02.0' you would simply execute:

----
# ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
----

Each entry is a directory which contains the following important files:

* 'available_instances' contains the number of still available instances of
this type; each 'mdev' used in a VM reduces this.
* 'description' contains a short description of the capabilities of the type.
* 'create' is the endpoint to create such a device; {pve} does this
automatically for you, if a 'hostpciX' option with `mdev` is configured.
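
For example, to check how many instances of a type are still available
(using the type 'i915-GVTg_V5_4' from the example below):

----
# cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/available_instances
----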

Example configuration with an `Intel GVT-g vGPU` (`Intel Skylake 6700k`):

----
# qm set VMID -hostpci0 00:02.0,mdev=i915-GVTg_V5_4
----

With this set, {pve} automatically creates such a device on VM start, and
cleans it up again when the VM stops.