Using Open vSwitch with DPDK
============================

Open vSwitch can use the Intel(R) DPDK library to operate entirely in
userspace. This file explains how to install and use Open vSwitch in
such a mode.

The DPDK support of Open vSwitch is considered experimental.
It has not been thoroughly tested.

This version of Open vSwitch should be built manually with `configure`
and `make`.

OVS needs a system with 1GB hugepages support.

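A quick, optional way to check that the CPU can back 1GB pages is to look
for the `pdpe1gb` CPU flag (an illustrative check, not a step from this
guide):

```
grep -m1 -o pdpe1gb /proc/cpuinfo
```
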
Building and Installing:
------------------------

Required: DPDK 2.2
Optional (if building with vhost-cuse): `fuse`, `fuse-devel` (`libfuse-dev`
on Debian/Ubuntu)

1. Configure build & install DPDK:
   1. Set `$DPDK_DIR`

      ```
      export DPDK_DIR=/usr/src/dpdk-2.2
      cd $DPDK_DIR
      ```

   2. Update `config/common_linuxapp` so that DPDK generates a single
      library file. (This modification is also required for the IVSHMEM
      build.)

      `CONFIG_RTE_BUILD_COMBINE_LIBS=y`

      Then run `make install` to build and install the library.
      For a default install without IVSHMEM:

      `make install T=x86_64-native-linuxapp-gcc DESTDIR=install`

      To include IVSHMEM (shared memory):

      `make install T=x86_64-ivshmem-linuxapp-gcc DESTDIR=install`

      For further details refer to http://dpdk.org/

2. Configure & build the Linux kernel:

   Refer to intel-dpdk-getting-started-guide.pdf to understand the DPDK
   kernel requirements.

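   As a rough sanity check (the authoritative list of options is in the DPDK
   getting started guide), you can confirm that a few commonly required
   kernel options are enabled, for example:

   ```
   # Option availability and names may vary between kernel versions.
   grep -E 'CONFIG_HUGETLBFS|CONFIG_UIO|CONFIG_PROC_PAGE_MONITOR' \
       /boot/config-$(uname -r)
   ```
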
3. Configure & build OVS:

   * Non-IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`

   * IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`

   ```
   cd $OVS_DIR
   ./boot.sh
   ./configure --with-dpdk=$DPDK_BUILD [CFLAGS="-g -O2 -Wno-cast-align"]
   make
   ```

   Note: 'clang' users may specify the '-Wno-cast-align' flag to suppress
   DPDK cast-align warnings.

   For better performance one can enable aggressive compiler optimizations
   and use special instructions (popcnt, crc32) that may not be available on
   all machines. Instead of typing `make`, type:

   `make CFLAGS='-O3 -march=native'`

   Refer to [INSTALL.userspace.md] for general requirements of building
   userspace OVS.

Using the DPDK with ovs-vswitchd:
---------------------------------

1. Setup system boot
   Add the following options to the kernel bootline:

   `default_hugepagesz=1GB hugepagesz=1G hugepages=1`

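   After rebooting, the reservation can be verified (an optional check, not
   part of the original steps):

   ```
   grep Huge /proc/meminfo   # HugePages_Total should match the bootline value
   cat /proc/cmdline         # should contain the options added above
   ```
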
2. Setup DPDK devices:

   DPDK devices can be set up using either the VFIO (for DPDK 1.7+) or UIO
   modules. UIO requires inserting an out-of-tree driver, igb_uio.ko, that is
   available in DPDK. Setup for both methods is described below.

   * UIO:
     1. Insert uio.ko: `modprobe uio`
     2. Insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
     3. Bind network device to igb_uio:
        `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`

   * VFIO:

     VFIO needs to be supported in the kernel and the BIOS. More information
     can be found in the [DPDK Linux GSG].

     1. Insert vfio-pci.ko: `modprobe vfio-pci`
     2. Set correct permissions on the vfio device: `sudo /usr/bin/chmod a+x /dev/vfio`
        and: `sudo /usr/bin/chmod 0666 /dev/vfio/*`
     3. Bind network device to vfio-pci:
        `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=vfio-pci eth1`

3. Mount the hugetlbfs filesystem

   `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`

   Refer to http://www.dpdk.org/doc/quick-start for verifying the DPDK setup.

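   To make the mount persistent across reboots, an entry along these lines
   can be added to /etc/fstab (illustrative; adjust to your setup):

   ```
   none /dev/hugepages hugetlbfs pagesize=1G 0 0
   ```
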
4. Follow the instructions in [INSTALL.md] to install only the
   userspace daemons and utilities (via 'make install').
   1. First time only: create (or clear) the database:

      ```
      mkdir -p /usr/local/etc/openvswitch
      mkdir -p /usr/local/var/run/openvswitch
      rm /usr/local/etc/openvswitch/conf.db
      ovsdb-tool create /usr/local/etc/openvswitch/conf.db  \
          /usr/local/share/openvswitch/vswitch.ovsschema
      ```

   2. Start ovsdb-server:

      ```
      ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
          --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
          --private-key=db:Open_vSwitch,SSL,private_key \
          --certificate=db:Open_vSwitch,SSL,certificate \
          --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
      ```

   3. First time after database creation, initialize:

      ```
      ovs-vsctl --no-wait init
      ```

5. Start vswitchd:

   DPDK configuration arguments can be passed to vswitchd via the `--dpdk`
   argument. This needs to be the first argument passed to the vswitchd
   process. The DPDK `-c` argument is ignored by ovs-dpdk, but it is a
   required parameter for DPDK initialization.

   ```
   export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
   ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach
   ```

   If more than one GB of hugepages has been allocated (as for IVSHMEM), set
   the amount and use NUMA node 0 memory:

   ```
   ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
      -- unix:$DB_SOCK --pidfile --detach
   ```

6. Add bridge & ports

   To use ovs-vswitchd with DPDK, create a bridge with datapath_type
   "netdev" in the configuration database. For example:

   `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`

   Now you can add dpdk devices. OVS expects DPDK device names to start with
   "dpdk" and end with a portid. vswitchd should print (in the log file) the
   number of dpdk devices found.

   ```
   ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
   ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
   ```

   Once the first DPDK port is added to vswitchd, it creates a polling thread
   that polls the DPDK devices in a continuous loop, so CPU utilization for
   that thread is always 100%.

   Note: creating bonds of DPDK interfaces is slightly different from creating
   bonds of system interfaces. For DPDK, the interface type must be explicitly
   set, for example:

   ```
   ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 \
       -- set Interface dpdk0 type=dpdk \
       -- set Interface dpdk1 type=dpdk
   ```

7. Add test flows

   Test flow script across NICs (assuming ovs in /usr/src/ovs):
   Execute script:

   ```
   #! /bin/sh
   # Move to command directory
   cd /usr/src/ovs/utilities/

   # Clear current flows
   ./ovs-ofctl del-flows br0

   # Add flows between port 1 (dpdk0) and port 2 (dpdk1)
   ./ovs-ofctl add-flow br0 in_port=1,action=output:2
   ./ovs-ofctl add-flow br0 in_port=2,action=output:1
   ```

8. QoS usage example

   Assuming you have a vhost-user port transmitting traffic consisting of
   packets of size 64 bytes, the following command would limit the egress
   transmission rate of the port to ~1,000,000 packets per second:

   `ovs-vsctl set port vhost-user0 qos=@newqos -- --id=@newqos create qos
   type=egress-policer other-config:cir=46000000 other-config:cbs=2048`

   (The cir value of 46,000,000 bytes/sec corresponds here to ~1,000,000
   packets per second at 46 bytes counted per packet, i.e. presumably a
   64-byte frame less the 4-byte CRC and the 14-byte Ethernet header.)

   To examine the QoS configuration of the port:

   `ovs-appctl -t ovs-vswitchd qos/show vhost-user0`

   To clear the QoS configuration from the port and ovsdb, use the following:

   `ovs-vsctl destroy QoS vhost-user0 -- clear Port vhost-user0 qos`

   For more details regarding egress-policer parameters please refer to
   vswitch.xml.

Performance Tuning:
-------------------

1. PMD affinitization

   A poll mode driver (pmd) thread handles the I/O of all DPDK
   interfaces assigned to it. A pmd thread will busy loop through
   the assigned port/rxq's polling for packets, switch the packets
   and send to a tx port if required. Typically, it is found that
   a pmd thread is CPU bound, meaning that the greater the CPU
   occupancy the pmd thread can get, the better the performance. To
   that end, it is good practice to ensure that a pmd thread has as
   many cycles on a core available to it as possible. This can be
   achieved by affinitizing the pmd thread with a core that has no
   other workload. See section 7 below for a description of how to
   isolate cores for this purpose also.

   The following command can be used to specify the affinity of the
   pmd thread(s):

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`

   By setting a bit in the mask, a pmd thread is created and pinned
   to the corresponding CPU core, e.g. to run a pmd thread on core 1:

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=2`

   For more information, please refer to the Open_vSwitch TABLE section in

   `man ovs-vswitchd.conf.db`

   Note that a pmd thread on a NUMA node is only created if there is
   at least one DPDK interface from that NUMA node added to OVS.

2. Multiple poll mode driver threads

   With pmd multi-threading support, OVS creates one pmd thread
   for each NUMA node by default. However, it can be seen that in cases
   where there are multiple ports/rxq's producing traffic, performance
   can be improved by creating multiple pmd threads running on separate
   cores. These pmd threads can then share the workload by each being
   responsible for different ports/rxq's. Assignment of ports/rxq's to
   pmd threads is done automatically.

   The following command can be used to specify the affinity of the
   pmd threads:

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`

   A set bit in the mask means a pmd thread is created and pinned
   to the corresponding CPU core, e.g. to run pmd threads on cores 1 and 2:

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=6`

   For more information, please refer to the Open_vSwitch TABLE section in

   `man ovs-vswitchd.conf.db`

   For example, when using dpdk and dpdkvhostuser ports in a bi-directional
   VM loopback as shown below, spreading the workload over 2 or 4 pmd
   threads shows significant improvements as there will be more total CPU
   occupancy available.

   NIC port0 <-> OVS <-> VM <-> OVS <-> NIC port 1

   The following command can be used to confirm that the port/rxq assignment
   to pmd threads is as required:

   `ovs-appctl dpif-netdev/pmd-rxq-show`

   This can also be checked with:

   ```
   top -H
   taskset -p <pid_of_pmd>
   ```

   To understand where most of the pmd thread time is spent and whether the
   caches are being utilized, these commands can be used:

   ```
   # Clear previous stats
   ovs-appctl dpif-netdev/pmd-stats-clear

   # Check current stats
   ovs-appctl dpif-netdev/pmd-stats-show
   ```

3. DPDK port Rx Queues

   `ovs-vsctl set Interface <DPDK interface> options:n_rxq=<integer>`

   The command above sets the number of rx queues for the DPDK interface.
   The rx queues are assigned to pmd threads on the same NUMA node in a
   round-robin fashion. For more information, please refer to the
   Open_vSwitch TABLE section in

   `man ovs-vswitchd.conf.db`

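   For example, to give dpdk0 four rx queues and then confirm how they were
   distributed across pmd threads (the interface name is illustrative):

   ```
   ovs-vsctl set Interface dpdk0 options:n_rxq=4
   ovs-appctl dpif-netdev/pmd-rxq-show
   ```
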
4. Exact Match Cache

   Each pmd thread contains one EMC. After initial flow setup in the
   datapath, the EMC contains a single table and provides the lowest level
   (fastest) switching for DPDK ports. If there is a miss in the EMC, then
   the next level where switching will occur is the datapath classifier.
   Missing in the EMC and looking up in the datapath classifier incurs a
   significant performance penalty. If lookup misses occur in the EMC
   because it is too small to handle the number of flows, its size can
   be increased. The EMC size can be modified by editing the define
   EM_FLOW_HASH_SHIFT in lib/dpif-netdev.c (a sketch of this follows below).

   As mentioned above, an EMC is per pmd thread. So an alternative way of
   increasing the aggregate number of possible flow entries in the EMC and
   avoiding datapath classifier lookups is to have multiple pmd threads
   running. This can be done as described in section 2.

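   As an illustrative sketch only (run from the OVS source directory; the
   value 14 and the exact default in your tree are assumptions, so check
   lib/dpif-netdev.c first), the EMC can be enlarged by bumping the define
   mentioned above and rebuilding:

   ```
   # Each +1 on the shift doubles the number of EMC entries per pmd thread.
   sed -i 's/#define EM_FLOW_HASH_SHIFT .*/#define EM_FLOW_HASH_SHIFT 14/' \
       lib/dpif-netdev.c
   make && make install
   ```
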
5. Compiler options

   The default compiler optimization level is '-O2'. Changing this to
   more aggressive compiler optimizations such as '-O3' or
   '-Ofast -march=native' with gcc can produce performance gains.

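   For example (simply mirroring the build step from the "Building and
   Installing" section above with a different CFLAGS value):

   ```
   ./configure --with-dpdk=$DPDK_BUILD
   make CFLAGS='-Ofast -march=native'
   ```
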
6. Simultaneous Multithreading (SMT)

   With SMT enabled, one physical core appears as two logical cores,
   which can improve performance.

   SMT can be utilized to add additional pmd threads without consuming
   additional physical cores. Additional pmd threads may be added in the
   same manner as described in section 2. If trying to minimize the use
   of physical cores for pmd threads, care must be taken to set the
   correct bits in the pmd-cpu-mask to ensure that the pmd threads are
   pinned to SMT siblings.

   For example, when using 2x 10-core processors in a dual socket system
   with HT enabled, /proc/cpuinfo will report 40 logical cores. To use
   two logical cores which share the same physical core for pmd threads,
   the following command can be used to identify a pair of logical cores:

   `cat /sys/devices/system/cpu/cpuN/topology/thread_siblings_list`

   where N is the logical core number. In this example, it would show that
   cores 1 and 21 share the same physical core. The pmd-cpu-mask to enable
   two pmd threads running on these two logical cores (one physical core)
   is:

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=100002`

   Note that SMT is enabled by the Hyper-Threading section in the
   BIOS, and as such will apply to the whole system. So the impact of
   enabling/disabling it for the whole system should be considered,
   e.g. if workloads on the system can scale across multiple cores,
   SMT may be very beneficial. However, if they do not and perform best
   on a single physical core, SMT may not be beneficial.

7. The isolcpus kernel boot parameter

   isolcpus can be used on the kernel bootline to isolate cores from the
   kernel scheduler and hence dedicate them to OVS or other packet
   forwarding related workloads. For example, a Linux kernel boot-line
   could be:

   `GRUB_CMDLINE_LINUX_DEFAULT="quiet hugepagesz=1G hugepages=4 default_hugepagesz=1G intel_iommu=off isolcpus=1-19"`

8. NUMA/Cluster On Die

   Ideally inter-NUMA datapaths should be avoided where possible as packets
   will go across QPI and there may be a slight performance penalty when
   compared with intra-NUMA datapaths. On Intel Xeon Processor E5 v3,
   Cluster On Die is introduced on models that have 10 cores or more.
   This makes it possible to logically split a socket into two NUMA regions,
   and again it is preferred where possible to keep critical datapaths
   within the one cluster.

   It is good practice to ensure that threads that are in the datapath are
   pinned to cores in the same NUMA area, e.g. pmd threads and QEMU vCPUs
   responsible for forwarding. One way to check locality is sketched below.

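   As an illustrative check only, the NUMA node of a DPDK-bound NIC and the
   NUMA layout of the cores can be read as follows (the PCI address is a
   placeholder):

   ```
   cat /sys/bus/pci/devices/0000:04:00.0/numa_node
   lscpu | grep NUMA
   ```
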
9. Rx Mergeable buffers

   Rx mergeable buffers is a virtio feature that allows chaining of multiple
   virtio descriptors to handle large packet sizes. As such, large packets
   are handled by reserving and chaining multiple free descriptors
   together. Mergeable buffer support is negotiated between the virtio
   driver and virtio device and is supported by the DPDK vhost library.
   This behavior is typically supported and enabled by default. However,
   in the case where the user knows that rx mergeable buffers are not needed,
   i.e. jumbo frames are not needed, it can be forced off by adding
   mrg_rxbuf=off to the QEMU command line options (see the sketch below).
   By not reserving multiple chains of descriptors, more individual virtio
   descriptors are available for rx to the guest using dpdkvhost ports, and
   this can improve performance.

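   As an illustrative example, the virtio-net-pci device line from the
   vhost-user section above would become (device names and MAC are taken
   from that example):

   ```
   -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off
   ```
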
10. Packet processing in the guest

    Whether simply forwarding packets from one interface to another or doing
    more complex packet processing in the guest, it is good practice to
    ensure that the thread performing this work has as much CPU occupancy as
    possible. For example, when the DPDK sample application `testpmd` is used
    to forward packets in the guest, multiple QEMU vCPU threads can be
    created. taskset can then be used to affinitize the vCPU thread
    responsible for forwarding to a dedicated core not used for other general
    processing on the host system (a minimal sketch follows).

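    A minimal sketch of such pinning on the host (the QEMU PID, vCPU thread
    id and core number are placeholders):

    ```
    ps -T -p <qemu-pid>            # list QEMU threads; identify the vCPU thread
    taskset -pc <core> <vcpu-tid>  # pin that vCPU thread to a dedicated core
    ```
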
11. DPDK virtio pmd in the guest

    dpdkvhostcuse or dpdkvhostuser ports can be used to accelerate the path
    to the guest using the DPDK vhost library. This library is compatible
    with virtio-net drivers in the guest, but significantly better
    performance can be observed when using the DPDK virtio pmd driver in the
    guest. The DPDK `testpmd` application can be used in the guest as an
    example application that forwards packets from one DPDK vhost port to
    another. An example of running `testpmd` in the guest can be seen below.

    `./testpmd -c 0x3 -n 4 --socket-mem 512 -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan --forward-mode=io --auto-start`

    See below for information on dpdkvhostcuse and dpdkvhostuser ports.
    See [DPDK Docs] for more information on `testpmd`.

DPDK Rings:
-----------

Following the steps above to create a bridge, you can now add dpdk rings
as a port to the vswitch. OVS will expect the DPDK ring device name to
start with dpdkr and end with a portid.

`ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr`

DPDK rings client test application

Included in the test directory is a sample DPDK application for testing
the rings. It is taken from the base DPDK directory and modified to work
with the ring naming used within OVS.

Location: `tests/ovs_client`

To run the client:

```
cd /usr/src/ovs/tests/
ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"
```

In the case of the dpdkr example above, the "port id you gave dpdkr" is 0.

It is essential to have `--proc-type=secondary`.

The application simply receives an mbuf on the receive queue of the
ethernet ring and then places that same mbuf on the transmit ring of
the ethernet ring. It is a trivial loopback application.

DPDK rings in VM (IVSHMEM shared memory communications)
--------------------------------------------------------

In addition to executing the client in the host, you can execute it within
a guest VM. To do so you will need a patched QEMU. You can download the
patch and getting started guide at:

https://01.org/packet-processing/downloads

A general rule of thumb for better performance is that the client
application should not be assigned the same dpdk core mask "-c" as
vswitchd.

DPDK vhost:
-----------

DPDK 2.2 supports two types of vhost:

1. vhost-user
2. vhost-cuse

Whichever type of vhost is enabled in the specified DPDK build is the type
that will be enabled in OVS. By default, vhost-user is enabled in DPDK.
Therefore, unless vhost-cuse has been enabled in DPDK, vhost-user ports
will be enabled in OVS.
Please note that support for vhost-cuse is intended to be deprecated in OVS
in a future release.

DPDK vhost-user:
----------------

The following sections describe the use of vhost-user 'dpdkvhostuser' ports
with OVS.

DPDK vhost-user Prerequisites:
------------------------------

1. DPDK 2.2 with vhost support enabled as documented in the "Building and
   Installing" section.

2. QEMU version v2.1.0+

   QEMU v2.1.0 will suffice, but it is recommended to use v2.2.0 if providing
   your VM with memory greater than 1GB due to potential issues with memory
   mapping larger areas.

Adding DPDK vhost-user ports to the Switch:
-------------------------------------------

Following the steps above to create a bridge, you can now add DPDK vhost-user
as a port to the vswitch. Unlike DPDK ring ports, DPDK vhost-user ports can
have arbitrary names, except that forward and backward slashes are prohibited
in the names.

- For vhost-user, the name of the port type is `dpdkvhostuser`

  ```
  ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1
      type=dpdkvhostuser
  ```

  This action creates a socket located at
  `/usr/local/var/run/openvswitch/vhost-user-1`, which you must provide
  to your VM on the QEMU command line. More instructions on this can be
  found in the next section, "DPDK vhost-user VM configuration".
  Note: If you wish for the vhost-user sockets to be created in a
  directory other than `/usr/local/var/run/openvswitch`, you may specify
  another location on the ovs-vswitchd command line like so:

  `./vswitchd/ovs-vswitchd --dpdk -vhost_sock_dir /my-dir -c 0x1 ...`

DPDK vhost-user VM configuration:
---------------------------------
Follow the steps below to attach vhost-user port(s) to a VM.

1. Configure sockets.
   Pass the following parameters to QEMU to attach a vhost-user device:

   ```
   -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1
   -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce
   -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1
   ```

   ...where vhost-user-1 is the name of the vhost-user port added
   to the switch.
   Repeat the above parameters for multiple devices, changing the
   chardev path and id as necessary. Note that a separate and different
   chardev path needs to be specified for each vhost-user device. For
   example, if you have a second vhost-user port named 'vhost-user-2', you
   append your QEMU command line with an additional set of parameters:

   ```
   -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
   -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce
   -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2
   ```

2. Configure huge pages.
   QEMU must allocate the VM's memory on hugetlbfs. vhost-user ports access
   a virtio-net device's virtual rings and packet buffers mapping the VM's
   physical memory on hugetlbfs. To enable vhost-user ports to map the VM's
   memory into their process address space, pass the following parameters
   to QEMU:

   ```
   -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
   share=on
   -numa node,memdev=mem -mem-prealloc
   ```

3. Optional: Enable multiqueue support
   The vhost-user interface must be configured in Open vSwitch with the
   desired number of queues with:

   ```
   ovs-vsctl set Interface vhost-user-2 options:n_rxq=<requested queues>
   ```

   QEMU needs to be configured as well.
   The $q below should match the queues requested in OVS (if $q is more,
   packets will not be received).
   The $v is the number of vectors, which is '$q x 2 + 2'.

   ```
   -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
   -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce,queues=$q
   -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mq=on,vectors=$v
   ```

   If one wishes to use multiple queues for an interface in the guest, the
   driver in the guest operating system must be configured to do so. It is
   recommended that the number of queues configured be equal to '$q'.

   For example, this can be done for the Linux kernel virtio-net driver with:

   ```
   ethtool -L <DEV> combined <$q>
   ```

   A note on the command above:

   `-L`: Changes the number of channels of the specified network device

   `combined`: Changes the number of multi-purpose channels.

DPDK vhost-cuse:
----------------

The following sections describe the use of vhost-cuse 'dpdkvhostcuse' ports
with OVS.

DPDK vhost-cuse Prerequisites:
------------------------------

1. DPDK 2.2 with vhost support enabled as documented in the "Building and
   Installing" section.
   As an additional step, you must enable vhost-cuse in DPDK by setting the
   following additional flag in `config/common_linuxapp`:

   `CONFIG_RTE_LIBRTE_VHOST_USER=n`

   Following this, rebuild DPDK as per the instructions in the "Building and
   Installing" section. Finally, rebuild OVS as per step 3 in the "Building
   and Installing" section - OVS will detect that DPDK has vhost-cuse
   libraries compiled and in turn will enable support for it in the switch
   and disable vhost-user support.

2. Insert the Cuse module:

   `modprobe cuse`

3. Build and insert the `eventfd_link` module:

   ```
   cd $DPDK_DIR/lib/librte_vhost/eventfd_link/
   make
   insmod $DPDK_DIR/lib/librte_vhost/eventfd_link.ko
   ```

4. QEMU version v2.1.0+

   vhost-cuse will work with QEMU v2.1.0 and above, however it is recommended
   to use v2.2.0 if providing your VM with memory greater than 1GB due to
   potential issues with memory mapping larger areas.
   Note: QEMU v1.6.2 will also work, with slightly different command line
   parameters, which are specified later in this document.

Adding DPDK vhost-cuse ports to the Switch:
-------------------------------------------

Following the steps above to create a bridge, you can now add DPDK vhost-cuse
as a port to the vswitch. Unlike DPDK ring ports, DPDK vhost-cuse ports can
have arbitrary names.

- For vhost-cuse, the name of the port type is `dpdkvhostcuse`

  ```
  ovs-vsctl add-port br0 vhost-cuse-1 -- set Interface vhost-cuse-1
      type=dpdkvhostcuse
  ```

  When attaching vhost-cuse ports to QEMU, the name provided during the
  add-port operation must match the ifname parameter on the QEMU command
  line. More instructions on this can be found in the next section.

DPDK vhost-cuse VM configuration:
---------------------------------

vhost-cuse ports use a Linux* character device to communicate with QEMU.
By default it is set to `/dev/vhost-net`. It is possible to reuse this
standard device for DPDK vhost, which makes setup a little simpler, but it
is better practice to specify an alternative character device in order to
avoid any conflicts if kernel vhost is to be used in parallel.

1. This step is only needed if using an alternative character device.

   The new character device filename must be specified on the vswitchd
   commandline:

   `./vswitchd/ovs-vswitchd --dpdk --cuse_dev_name my-vhost-net -c 0x1 ...`

   Note that the `--cuse_dev_name` argument and associated string must be the
   first arguments after `--dpdk` and come before the EAL arguments. In the
   example above, the character device to be used will be `/dev/my-vhost-net`.

2. This step is only needed if reusing the standard character device. It will
   conflict with the kernel vhost character device so the user must first
   remove it.

   `rm -rf /dev/vhost-net`

3a. Configure virtio-net adaptors:
    The following parameters must be passed to the QEMU binary:

    ```
    -netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on
    -device virtio-net-pci,netdev=net1,mac=<mac>
    ```

    Repeat the above parameters for multiple devices.

    The DPDK vhost library will negotiate its own features, so they
    need not be passed in as command line params. Note that as offloads are
    disabled this is the equivalent of setting:

    `csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off`

3b. If using an alternative character device, it must also be explicitly
    passed to QEMU using the `vhostfd` argument:

    ```
    -netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on,
    vhostfd=<open_fd>
    -device virtio-net-pci,netdev=net1,mac=<mac>
    ```

    The open file descriptor must be passed to QEMU running as a child
    process. This could be done with a simple python script:

    ```
    #!/usr/bin/python
    import os, subprocess

    # The child process inherits the open descriptor named on the command line.
    fd = os.open("/dev/usvhost", os.O_RDWR)
    subprocess.call("qemu-system-x86_64 .... -netdev tap,id=vhostnet0,"
                    "vhost=on,vhostfd=" + str(fd) + " ...", shell=True)
    ```

    Alternatively, the `qemu-wrap.py` script can be used to automate the
    requirements specified above and can be used in conjunction with libvirt
    if desired. See the "DPDK vhost VM configuration with QEMU wrapper"
    section below.

4. Configure huge pages:
   QEMU must allocate the VM's memory on hugetlbfs. Vhost ports access a
   virtio-net device's virtual rings and packet buffers mapping the VM's
   physical memory on hugetlbfs. To enable vhost-ports to map the VM's
   memory into their process address space, pass the following parameters
   to QEMU:

   `-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
   share=on -numa node,memdev=mem -mem-prealloc`

   Note: For use with an earlier QEMU version such as v1.6.2, use the
   following to configure hugepages instead:

   `-mem-path /dev/hugepages -mem-prealloc`

DPDK vhost-cuse VM configuration with QEMU wrapper:
---------------------------------------------------
The QEMU wrapper script automatically detects and calls QEMU with the
necessary parameters. It performs the following actions:

  * Automatically detects the location of the hugetlbfs and inserts this
    into the command line parameters.
  * Automatically opens file descriptors for each virtio-net device and
    inserts these into the command line parameters.
  * Calls QEMU passing both the command line parameters passed to the
    script itself and those it has auto-detected.

Before use, you **must** edit the configuration parameters section of the
script to point to the correct emulator location and set additional
settings. Of these settings, `emul_path` and `us_vhost_path` **must** be
set. All other settings are optional.

To use directly from the command line simply pass the wrapper some of the
QEMU parameters: it will configure the rest. For example:

```
qemu-wrap.py -cpu host -boot c -hda <disk image> -m 4096 -smp 4
  --enable-kvm -nographic -vnc none -net none -netdev tap,id=net1,
  script=no,downscript=no,ifname=if1,vhost=on -device virtio-net-pci,
  netdev=net1,mac=00:00:00:00:00:01
```

DPDK vhost-cuse VM configuration with libvirt:
----------------------------------------------

If you are using libvirt, you must enable libvirt to access the character
device by adding it to the controllers cgroup for libvirtd using the
following steps.

1. In `/etc/libvirt/qemu.conf` add/edit the following lines:

   ```
   1) clear_emulator_capabilities = 0
   2) user = "root"
   3) group = "root"
   4) cgroup_device_acl = [
          "/dev/null", "/dev/full", "/dev/zero",
          "/dev/random", "/dev/urandom",
          "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
          "/dev/rtc", "/dev/hpet", "/dev/net/tun",
          "/dev/<my-vhost-device>",
          "/dev/hugepages"]
   ```

   <my-vhost-device> refers to "vhost-net" if using the `/dev/vhost-net`
   device. If you have specified a different name on the ovs-vswitchd
   commandline using the "--cuse_dev_name" parameter, please specify that
   filename instead.

2. Disable SELinux or set it to permissive mode.

3. Restart the libvirtd process.
   For example, on Fedora:

   `systemctl restart libvirtd.service`

After successfully editing the configuration, you may launch your
vhost-enabled VM. The XML describing the VM can be configured like so
within the <qemu:commandline> section:

1. Set up shared hugepages:

   ```
   <qemu:arg value='-object'/>
   <qemu:arg value='memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on'/>
   <qemu:arg value='-numa'/>
   <qemu:arg value='node,memdev=mem'/>
   <qemu:arg value='-mem-prealloc'/>
   ```

2. Set up your tap devices:

   ```
   <qemu:arg value='-netdev'/>
   <qemu:arg value='type=tap,id=net1,script=no,downscript=no,ifname=vhost0,vhost=on'/>
   <qemu:arg value='-device'/>
   <qemu:arg value='virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01'/>
   ```

   Repeat for as many devices as are desired, modifying the id, ifname
   and mac as necessary.

   Again, if you are using an alternative character device (other than
   `/dev/vhost-net`), please specify the file descriptor like so:

   `<qemu:arg value='type=tap,id=net3,script=no,downscript=no,ifname=vhost0,vhost=on,vhostfd=<open_fd>'/>`

   Where <open_fd> refers to the open file descriptor of the character device.
   Instructions on how to retrieve the file descriptor can be found in the
   "DPDK vhost VM configuration" section.
   Alternatively, the process is automated with the qemu-wrap.py script,
   detailed in the next section.

Now you may launch your VM using virt-manager, or like so:

`virsh create my_vhost_vm.xml`

DPDK vhost-cuse VM configuration with libvirt and QEMU wrapper:
----------------------------------------------------------------

To use the qemu-wrapper script in conjunction with libvirt, follow the
steps in the previous section before proceeding with the following steps:

1. Place `qemu-wrap.py` in libvirtd's binary search PATH ($PATH).
   Ideally in the same directory that the QEMU binary is located.

2. Ensure that the script has the same owner/group and file permissions
   as the QEMU binary.

3. Update the VM xml file using "virsh edit VM.xml"

   1. Set the VM to use the launch script.
      Set the emulator path contained in the `<emulator></emulator>` tags.
      For example, replace:

      `<emulator>/usr/bin/qemu-kvm</emulator>`

      with:

      `<emulator>/usr/bin/qemu-wrap.py</emulator>`

4. Edit the Configuration Parameters section of the script to point to
   the correct emulator location and set any additional options. If you are
   using an alternative character device name, please set "us_vhost_path" to
   the location of that device. The script will automatically detect and
   insert the correct "vhostfd" value in the QEMU command line arguments.

5. Use virt-manager to launch the VM.

Running ovs-vswitchd with DPDK backend inside a VM
--------------------------------------------------

Please note that additional configuration is required if you want to run
ovs-vswitchd with the DPDK backend inside a QEMU virtual machine. ovs-vswitchd
creates separate DPDK TX queues for each CPU core available. This operation
fails inside a QEMU virtual machine because, by default, the VirtIO NIC
provided to the guest is configured to support only a single TX queue and a
single RX queue. To change this behavior, you need to turn on the 'mq'
(multiqueue) property of all virtio-net-pci devices emulated by QEMU and used
by DPDK. You may do it manually (by changing the QEMU command line) or, if you
use libvirt, by adding the following string:

`<driver name='vhost' queues='N'/>`

to <interface> sections of all network devices used by DPDK. Parameter 'N'
determines how many queues can be used by the guest.

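As an illustrative example only, the manual QEMU equivalent for a tap/vhost
device would look something like the following (interface id, MAC and queue
count are placeholders; vectors is 2 x queues + 2):

```
-netdev tap,id=net0,vhost=on,queues=4,script=no,downscript=no
-device virtio-net-pci,netdev=net0,mac=00:00:00:00:00:01,mq=on,vectors=10
```
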
Restrictions:
-------------

- Works with 1500 MTU only; a few changes are needed in the DPDK lib to fix
  this issue.
- Currently the DPDK port does not make use of any offload functionality.
- DPDK-vHost support works with 1G huge pages.

ivshmem:
- If you run Open vSwitch with smaller page sizes (e.g. 2MB), you may be
  unable to share any rings or mempools with a virtual machine.
  This is because the current implementation of ivshmem works by sharing
  a single 1GB huge page from the host operating system to any guest
  operating system through the QEMU ivshmem device. When using smaller
  page sizes, multiple pages may be required to hold the ring descriptors
  and buffer pools. The QEMU ivshmem device does not allow you to share
  multiple file descriptors to the guest operating system. However, if you
  want to share dpdkr rings with other processes on the host, you can do
  this with smaller page sizes.

Platform and Network Interface:
- By default with DPDK 2.2, a maximum of 64 TX queues can be used with an
  Intel XL710 Network Interface on a platform with more than 64 logical
  cores. If a user attempts to add an XL710 interface as a DPDK port type to
  a system as described above, an error will be reported that initialization
  failed for the 65th queue. OVS will then roll back to the previous
  successful queue initialization and use that value as the total number of
  TX queues available with queue locking. If a user wishes to use more than
  64 queues and avoid locking, then the
  `CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF` config parameter in DPDK must be
  increased to the desired number of queues. Both DPDK and OVS must be
  recompiled for this change to take effect.

Bug Reporting:
--------------

Please report problems to bugs@openvswitch.org.

[INSTALL.userspace.md]: INSTALL.userspace.md
[INSTALL.md]: INSTALL.md
[DPDK Linux GSG]: http://www.dpdk.org/doc/guides/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-igb-uioor-vfio-modules
[DPDK Docs]: http://dpdk.org/doc