1 Using Open vSwitch with DPDK
2 ============================
Open vSwitch can use the Intel(R) DPDK library to operate entirely in
userspace. This file explains how to install and use Open vSwitch in
such a mode.
8 The DPDK support of Open vSwitch is considered experimental.
9 It has not been thoroughly tested.
This version of Open vSwitch should be built manually with `configure`
and `make`.
14 OVS needs a system with 1GB hugepages support.
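A quick way to confirm 1GB hugepage support (a rough sanity check, not part of the official steps; paths are typical for Linux):

```
grep -o pdpe1gb /proc/cpuinfo | head -1   # x86_64 CPU flag for 1GB pages
grep Hugepagesize /proc/meminfo           # current default hugepage size
```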
16 Building and Installing:
17 ------------------------
21 1. Configure build & install DPDK:
`export DPDK_DIR=/usr/src/dpdk-1.7.1`
Update `config/common_linuxapp` so that DPDK generates a single lib file
(this modification is also required for the IVSHMEM build):
32 `CONFIG_RTE_BUILD_COMBINE_LIBS=y`
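If you prefer to script that edit, something along these lines should work (a sketch; it assumes the option is present in `config/common_linuxapp` and currently set to `n`):

```
sed -i 's/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' \
    $DPDK_DIR/config/common_linuxapp
```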
Then run `make install` to build and install the library.
35 For default install without IVSHMEM:
37 `make install T=x86_64-native-linuxapp-gcc`
39 To include IVSHMEM (shared memory):
41 `make install T=x86_64-ivshmem-linuxapp-gcc`
43 For further details refer to http://dpdk.org/
45 2. Configure & build the Linux kernel:
Refer to intel-dpdk-getting-started-guide.pdf to understand the
DPDK kernel requirements.
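As a rough local check of the kernel options DPDK typically relies on (a sketch; the getting started guide is the authoritative list, and the config file location varies by distribution):

```
grep -E 'CONFIG_UIO=|CONFIG_HUGETLBFS=|CONFIG_PROC_PAGE_MONITOR=|CONFIG_HPET=' \
    /boot/config-$(uname -r)
```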
50 3. Configure & build OVS:
For a build without IVSHMEM:

`export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`

For an IVSHMEM build:

`export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`

Then:
cd $OVS_DIR/openvswitch
63 ./configure --with-dpdk=$DPDK_BUILD
For better performance one can enable aggressive compiler optimizations and
use special instructions (popcnt, crc32) that may not be available on all
machines. Instead of typing `make`, type:
71 `make CFLAGS='-O3 -march=native'`
73 Refer to [INSTALL.userspace.md] for general requirements of building userspace OVS.
75 Using the DPDK with ovs-vswitchd:
76 ---------------------------------
1. Setup system boot: add the following options to the kernel bootline:
81 `default_hugepagesz=1GB hugepagesz=1G hugepages=1`
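After rebooting, you can confirm these options took effect (a quick check, not part of the original steps):

```
cat /proc/cmdline                    # should include the hugepage options above
grep HugePages_Total /proc/meminfo   # should report at least 1
```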
83 2. Setup DPDK devices:
84 1. insert uio.ko: `modprobe uio`
85 2. insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
86 3. Bind network device to igb_uio: `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`
3. Mount the hugetlbfs filesystem:
90 `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`
Refer to http://www.dpdk.org/doc/quick-start for verifying the DPDK setup.
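A couple of local sanity checks that may also help at this point (the `--status` option belongs to the same binding script used in step 2):

```
mount | grep hugetlbfs                      # the 1G mount on /dev/hugepages should be listed
$DPDK_DIR/tools/dpdk_nic_bind.py --status   # eth1 should show as using the igb_uio driver
```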
94 4. Start ovsdb-server as discussed in [INSTALL.md] doc:
1. First time only, create (or clear) the database:
98 mkdir -p /usr/local/etc/openvswitch
99 mkdir -p /usr/local/var/run/openvswitch
100 rm /usr/local/etc/openvswitch/conf.db
102 ./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
103 ./vswitchd/vswitch.ovsschema
2. Start ovsdb-server:
110 ./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
111 --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
112 --private-key=db:Open_vSwitch,SSL,private_key \
--certificate=db:Open_vSwitch,SSL,certificate \
114 --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
117 3. First time after db creation, initialize:
121 ./utilities/ovs-vsctl --no-wait init
5. Start vswitchd:

DPDK configuration arguments can be passed to vswitchd via the `--dpdk`
argument. This needs to be the first argument passed to the vswitchd process.
The dpdk `-c` argument is ignored by ovs-dpdk, but it is a required parameter
for dpdk initialization.
131 export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
132 ./vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach
If you have allocated more than one GB hugepage (as for IVSHMEM), set the
amount and use NUMA node 0 memory:
137 ./vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
138 -- unix:$DB_SOCK --pidfile --detach
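To confirm the daemon came up, a simple check is `ps -C ovs-vswitchd`; add `--log-file` to the ovs-vswitchd command above if you want a dedicated log file in addition to syslog.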
140 6. Add bridge & ports
142 To use ovs-vswitchd with DPDK, create a bridge with datapath_type
143 "netdev" in the configuration database. For example:
145 `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`
Now you can add dpdk devices. OVS expects DPDK device names to start with
"dpdk" and end with a port id. vswitchd should print (in the log file) the
number of dpdk devices found.
151 ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
152 ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
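To confirm the bridge and ports were created, `ovs-vsctl show` will list them (a simple check, not from the original guide).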
Once the first DPDK port is added to vswitchd, it creates a polling thread
and polls the dpdk device in a continuous loop. Therefore CPU utilization
for that thread is always 100%.
7. Add test flows

Test flow script across NICs (assuming ovs is in /usr/src/ovs):
165 # Move to command directory
166 cd /usr/src/ovs/utilities/
168 # Clear current flows
169 ./ovs-ofctl del-flows br0
171 # Add flows between port 1 (dpdk0) to port 2 (dpdk1)
172 ./ovs-ofctl add-flow br0 in_port=1,action=output:2
173 ./ovs-ofctl add-flow br0 in_port=2,action=output:1
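You can verify the flows were installed with `./ovs-ofctl dump-flows br0` (not part of the original script).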
176 8. Performance tuning
With pmd multi-threading support, OVS creates one pmd thread for each
numa node by default.
180 interfaces on the same numa node. The following two commands can be used
181 to configure the multi-threading behavior.
183 ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>
185 The command above asks for a CPU mask for setting the affinity of pmd threads.
186 A set bit in the mask means a pmd thread is created and pinned to the
187 corresponding CPU core. For more information, please refer to
188 `man ovs-vswitchd.conf.db`
190 ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<integer>
192 The command above sets the number of rx queues of each DPDK interface. The
193 rx queues are assigned to pmd threads on the same numa node in round-robin
194 fashion. For more information, please refer to `man ovs-vswitchd.conf.db`
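For example, to give each DPDK interface 4 rx queues (an illustrative value only, not a recommendation):

`ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=4`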
Ideally, for maximum throughput, the pmd thread should not be scheduled out,
which temporarily halts its execution. The following affinitization methods
can help achieve this.
Let's pick cores 4, 6, 8 and 10 for the pmd threads to run on. Also assume a
dual 8-core Sandy Bridge system with hyperthreading enabled, where CPU1 has
cores 0,...,7 and 16,...,23 and CPU2 has cores 8,...,15 and 24,...,31. (A
different CPU configuration could have different core mask requirements.)
To the kernel bootline, add a core isolation list for those cores and their
associated hyperthread siblings (e.g. `isolcpus=4,20,6,22,8,24,10,26`). Reboot
the system for the isolation to take effect, then restart everything.
Configure pmd threads on cores 4, 6, 8 and 10 using 'pmd-cpu-mask':
211 ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=00000550
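As a sketch of how that value is derived, the mask is simply the OR of the bits for cores 4, 6, 8 and 10:

```
printf '%#x\n' $(( (1<<4) | (1<<6) | (1<<8) | (1<<10) ))   # prints 0x550
```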
You should be able to check that the pmd threads are pinned to the correct
cores via:
216 top -p `pidof ovs-vswitchd` -H -d1
218 Note, the pmd threads on a numa node are only created if there is at least
219 one DPDK interface from the numa node that has been added to OVS.
Note, core 0 is always reserved for non-pmd threads and should never be set
in the cpu mask.
DPDK Rings:
-----------

Following the steps above to create a bridge, you can now add dpdk rings
as a port to the vswitch. OVS will expect the DPDK ring device name to
start with dpdkr and end with a portid.
231 ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr
233 DPDK rings client test application
235 Included in the test directory is a sample DPDK application for testing
236 the rings. This is from the base dpdk directory and modified to work
237 with the ring naming used within ovs.
Location: tests/ovs_client

To run the client:
243 cd /usr/src/ovs/tests/
244 ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"
In the case of the dpdkr example above, the "port id you gave dpdkr" is 0.
It is essential to have `--proc-type=secondary`.
250 The application simply receives an mbuf on the receive queue of the
251 ethernet ring and then places that same mbuf on the transmit ring of
252 the ethernet ring. It is a trivial loopback application.
254 DPDK rings in VM (IVSHMEM shared memory communications)
255 -------------------------------------------------------
257 In addition to executing the client in the host, you can execute it within
258 a guest VM. To do so you will need a patched qemu. You can download the
patch and getting started guide at:
261 https://01.org/packet-processing/downloads
A general rule of thumb for better performance is that the client
application should not be assigned the same dpdk core mask `-c` as the
vswitchd.
Restrictions:
-------------

- This support is for physical NICs. I have tested with Intel NICs only.
- Works with 1500 MTU; a few changes are needed in the DPDK lib to fix this issue.
- Currently the DPDK port does not make use of any offload functionality.
- The shared memory is currently restricted to the use of 1GB huge pages.
- All huge pages are shared amongst the host, clients, virtual machines etc.
283 Please report problems to bugs@openvswitch.org.
[INSTALL.userspace.md]: INSTALL.userspace.md
[INSTALL.md]: INSTALL.md