Using Open vSwitch with DPDK
============================

Open vSwitch can use the Intel(R) DPDK library to operate entirely in
userspace. This file explains how to install and use Open vSwitch in
such a mode.

The DPDK support of Open vSwitch is considered experimental.
It has not been thoroughly tested.

This version of Open vSwitch should be built manually with `configure`
and `make`.

OVS needs a system with 1GB hugepages support.

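To check in advance whether the CPU and kernel support 1GB pages (a quick
sanity check; the `pdpe1gb` flag and the hugepage counters are standard
Linux interfaces):

```
# CPU must advertise the pdpe1gb flag for 1GB pages
grep -o pdpe1gb /proc/cpuinfo | sort -u

# Current hugepage pool sizes and counts
grep Huge /proc/meminfo
```
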
Building and Installing:
------------------------

DPDK 1.7 is required.

1. Configure build & install DPDK:
   1. Set `$DPDK_DIR`

      ```
      export DPDK_DIR=/usr/src/dpdk-1.7.1
      cd $DPDK_DIR
      ```

   2. Update `config/common_linuxapp` so that DPDK generates a single lib
      file. (This modification is also required for the IVSHMEM build.)

      `CONFIG_RTE_BUILD_COMBINE_LIBS=y`

      Then run `make install` to build and install the library.
      For the default install without IVSHMEM:

      `make install T=x86_64-native-linuxapp-gcc`

      To include IVSHMEM (shared memory):

      `make install T=x86_64-ivshmem-linuxapp-gcc`

      For further details refer to http://dpdk.org/

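   If you script this step, the config change can be applied
   non-interactively; a minimal sketch, assuming the DPDK 1.7 source layout
   and that the option currently reads `CONFIG_RTE_BUILD_COMBINE_LIBS=n`:

   ```
   cd $DPDK_DIR
   # Flip the combined-library option in place, then build and install
   sed -i 's/^CONFIG_RTE_BUILD_COMBINE_LIBS=n$/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' \
       config/common_linuxapp
   make install T=x86_64-native-linuxapp-gcc
   ```
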
2. Configure & build the Linux kernel:

   Refer to intel-dpdk-getting-started-guide.pdf to understand the DPDK
   kernel requirements.

3. Configure & build OVS:

   * Non-IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`

   * IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`

   ```
   cd $OVS_DIR/openvswitch
   ./boot.sh
   ./configure --with-dpdk=$DPDK_BUILD
   make
   ```

To have better performance one can enable aggressive compiler optimizations
and use special instructions (popcnt, crc32) that may not be available on
all machines. Instead of typing `make`, type:

`make CFLAGS='-O3 -march=native'`

Refer to [INSTALL.userspace.md] for general requirements of building
userspace OVS.

Using the DPDK with ovs-vswitchd:
---------------------------------

1. Setup system boot

   Add the following options to the kernel bootline:

   `default_hugepagesz=1GB hugepagesz=1G hugepages=1`

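   On a GRUB-based system this usually means editing the kernel command line
   and regenerating the bootloader config; a sketch (file paths and the
   update command vary by distro, e.g. `update-grub` on Debian/Ubuntu):

   ```
   # In /etc/default/grub, append to GRUB_CMDLINE_LINUX:
   #   default_hugepagesz=1GB hugepagesz=1G hugepages=1
   update-grub
   reboot
   ```
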
2. Setup DPDK devices:
   1. Insert uio.ko: `modprobe uio`
   2. Insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
   3. Bind the network device to igb_uio:
      `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`

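   You can verify the binding with the same tool's status listing:

   ```
   $DPDK_DIR/tools/dpdk_nic_bind.py --status
   ```
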
3. Mount the hugetlbfs filesystem:

   `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`

   Refer to http://www.dpdk.org/doc/quick-start for verifying the DPDK setup.

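   To make the mount persistent across reboots, an `/etc/fstab` entry along
   these lines should work (a sketch; adjust the mount point to your layout):

   ```
   # /etc/fstab
   nodev /dev/hugepages hugetlbfs pagesize=1G 0 0
   ```
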
4. Follow the instructions in [INSTALL.md] to install only the
   userspace daemons and utilities (via `make install`).
   1. First time only, create (or clear) the database:

      ```
      mkdir -p /usr/local/etc/openvswitch
      mkdir -p /usr/local/var/run/openvswitch
      rm /usr/local/etc/openvswitch/conf.db
      ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
          /usr/local/share/openvswitch/vswitch.ovsschema
      ```

   2. Start ovsdb-server:

      ```
      ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
          --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
          --private-key=db:Open_vSwitch,SSL,private_key \
          --certificate=db:Open_vSwitch,SSL,certificate \
          --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
      ```

   3. First time after db creation, initialize:

      ```
      ovs-vsctl --no-wait init
      ```

5. Start vswitchd:

   DPDK configuration arguments can be passed to vswitchd via the `--dpdk`
   argument, which must be the first argument passed to the vswitchd
   process. The dpdk `-c` argument is ignored by ovs-dpdk, but it is a
   required parameter for dpdk initialization.

   ```
   export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
   ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach
   ```

   If you have allocated more than one GB hugepage (as for IVSHMEM), set
   the amount and use NUMA node 0 memory:

   ```
   ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
       -- unix:$DB_SOCK --pidfile --detach
   ```

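   To confirm that vswitchd came up and initialized DPDK, check its log
   (a sketch; this path assumes the default from-source install prefix of
   /usr/local):

   ```
   tail /usr/local/var/log/openvswitch/ovs-vswitchd.log
   ```
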
6. Add bridge & ports

   To use ovs-vswitchd with DPDK, create a bridge with datapath_type
   "netdev" in the configuration database. For example:

   `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`

   Now you can add dpdk devices. OVS expects DPDK device names to start
   with "dpdk" and end with a port id. vswitchd should print (in the log
   file) the number of dpdk devices found.

   ```
   ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
   ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
   ```

   Once the first DPDK port is added to vswitchd, it creates a polling
   thread and polls the dpdk device in a continuous loop. Therefore CPU
   utilization for that thread is always 100%.

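   A quick way to confirm that the bridge and ports were accepted:

   ```
   ovs-vsctl show
   ```
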
7. Add test flows

   Test flow script across NICs (assuming ovs is in /usr/src/ovs).
   Execute the script:

   ```
   #! /bin/sh
   # Move to command directory
   cd /usr/src/ovs/utilities/

   # Clear current flows
   ./ovs-ofctl del-flows br0

   # Add flows between port 1 (dpdk0) and port 2 (dpdk1)
   ./ovs-ofctl add-flow br0 in_port=1,action=output:2
   ./ovs-ofctl add-flow br0 in_port=2,action=output:1
   ```

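   To verify that the flows were installed (from the same utilities
   directory):

   ```
   ./ovs-ofctl dump-flows br0
   ```
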
8. Performance tuning

   With pmd multi-threading support, OVS creates one pmd thread for each
   numa node by default. The pmd thread handles the I/O of all DPDK
   interfaces on the same numa node. The following two commands can be used
   to configure the multi-threading behavior.

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`

   The command above asks for a CPU mask for setting the affinity of pmd
   threads. A set bit in the mask means a pmd thread is created and pinned
   to the corresponding CPU core. For more information, please refer to
   `man ovs-vswitchd.conf.db`

   `ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<integer>`

   The command above sets the number of rx queues for each DPDK interface.
   The rx queues are assigned to pmd threads on the same numa node in a
   round-robin fashion. For more information, please refer to
   `man ovs-vswitchd.conf.db`

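   For example, to give each DPDK interface two rx queues (a concrete
   instance of the command above):

   ```
   ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2
   ```
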
   Ideally, for maximum throughput, the pmd thread should not be scheduled
   out, as that temporarily halts its execution. The following
   affinitization methods can help.

   Let's pick cores 4,6,8,10 for the pmd threads to run on. Also assume a
   dual 8-core Sandy Bridge system with hyperthreading enabled, where CPU1
   has cores 0,...,7 and 16,...,23 and CPU2 has cores 8,...,15 and
   24,...,31. (A different cpu configuration could have different core mask
   requirements.)

   To the kernel bootline, add a core isolation list for these cores and
   their associated hyperthread siblings (e.g. isolcpus=4,20,6,22,8,24,10,26).
   Reboot the system for the isolation to take effect, then restart
   everything.

   Configure the pmd threads on cores 4,6,8,10 using 'pmd-cpu-mask':

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=00000550`

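   The mask value follows from setting bits 4, 6, 8 and 10; you can compute
   it in the shell as a quick check:

   ```
   # (1<<4)|(1<<6)|(1<<8)|(1<<10) = 0x550
   printf '%x\n' $(( (1<<4) | (1<<6) | (1<<8) | (1<<10) ))   # prints 550
   ```
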
   You should be able to check that the pmd threads are pinned to the
   correct cores via:

   ```
   top -p `pidof ovs-vswitchd` -H -d1
   ```

   Note: the pmd threads on a numa node are only created if there is at
   least one DPDK interface from that numa node that has been added to OVS.

   Note: core 0 is always reserved for non-pmd threads and should never be
   set in the cpu mask.

DPDK Rings:
-----------

Following the steps above to create a bridge, you can now add dpdk rings
as a port to the vswitch. OVS will expect the DPDK ring device name to
start with dpdkr and end with a portid.

`ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr`

DPDK rings client test application

Included in the tests directory is a sample DPDK application for testing
the rings. It is taken from the base dpdk directory and modified to work
with the ring naming used within ovs.

Location: tests/ovs_client

To run the client:

```
cd /usr/src/ovs/tests/
ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"
```

In the case of the dpdkr example above, the "port id you gave dpdkr" is 0.

It is essential to pass `--proc-type=secondary`.

The application simply receives an mbuf on the receive queue of the
ethernet ring and then places that same mbuf on the transmit ring of
the ethernet ring. It is a trivial loopback application.

DPDK rings in VM (IVSHMEM shared memory communications)
-------------------------------------------------------

In addition to executing the client in the host, you can execute it within
a guest VM. To do so you will need a patched qemu. You can download the
patch and a getting started guide at:

https://01.org/packet-processing/downloads

A general rule of thumb for better performance is that the client
application should not be assigned the same dpdk core mask "-c" as
the vswitchd.

Restrictions:
-------------

  - This support is for physical NICs only; it has been tested with Intel
    NICs only.
  - Works with a 1500 MTU only; a few changes in the DPDK lib are needed
    to fix this issue.
  - Currently the DPDK port does not make use of any offload functionality.

  ivshmem:
  - The shared memory is currently restricted to the use of 1GB
    huge pages.
  - All huge pages are shared amongst the host, clients, virtual
    machines etc.

Bug Reporting:
--------------

Please report problems to bugs@openvswitch.org.

[INSTALL.userspace.md]: INSTALL.userspace.md
[INSTALL.md]: INSTALL.md