Using Open vSwitch with DPDK
============================

Open vSwitch can use the Intel(R) DPDK library to operate entirely in
userspace. This file explains how to install and use Open vSwitch in
such a mode.

The DPDK support of Open vSwitch is considered experimental.
It has not been thoroughly tested.

This version of Open vSwitch should be built manually with `configure`
and `make`.

OVS needs a system with 1GB hugepage support.
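
Whether the CPU can back 1GB hugepages can be checked up front. A minimal
sketch, assuming an x86_64 host, where the `pdpe1gb` CPU flag indicates 1GB
page support:

```
# Check for 1GB hugepage support before going further.
if grep -q pdpe1gb /proc/cpuinfo; then
    echo "1GB hugepages supported"
else
    echo "1GB hugepages NOT supported"
fi
```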

Building and Installing:
------------------------

DPDK 1.7 is required.

1. Configure, build & install DPDK:
   1. Set `$DPDK_DIR`

      ```
      export DPDK_DIR=/usr/src/dpdk-1.7.1
      cd $DPDK_DIR
      ```

   2. Update `config/common_linuxapp` so that DPDK generates a single
      library file. (This modification is also required for the IVSHMEM
      build.)

      `CONFIG_RTE_BUILD_COMBINE_LIBS=y`

      Then run `make install` to build and install the library.
      For a default install without IVSHMEM:

      `make install T=x86_64-native-linuxapp-gcc`

      To include IVSHMEM (shared memory):

      `make install T=x86_64-ivshmem-linuxapp-gcc`

      For further details refer to http://dpdk.org/
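
      After the build it may be worth confirming that the single combined
      library was actually produced. A minimal check, assuming the
      non-IVSHMEM target and that this DPDK release names the combined
      archive `libintel_dpdk.a` (adjust the path for the IVSHMEM target):

      ```
      # Verify that CONFIG_RTE_BUILD_COMBINE_LIBS took effect.
      ls -l $DPDK_DIR/x86_64-native-linuxapp-gcc/lib/libintel_dpdk.a
      ```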

2. Configure & build the Linux kernel:

   Refer to intel-dpdk-getting-started-guide.pdf to understand the
   DPDK kernel requirements.

3. Configure & build OVS:

   * Non-IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc/`

   * IVSHMEM:

     `export DPDK_BUILD=$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/`

   ```
   cd $OVS_DIR
   ./boot.sh
   ./configure --with-dpdk=$DPDK_BUILD
   make
   ```

   For better performance one can enable aggressive compiler optimizations
   and use special instructions (popcnt, crc32) that may not be available
   on all machines. Instead of typing `make`, type:

   `make CFLAGS='-O3 -march=native'`
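
   Before using `-march=native`, it may help to confirm that the target CPU
   actually offers those instruction set extensions. A small check (the
   `popcnt` and `sse4_2` flags in /proc/cpuinfo cover the popcnt and crc32
   instructions):

   ```
   # List the optional instruction set extensions mentioned above, if present.
   egrep -o 'popcnt|sse4_2' /proc/cpuinfo | sort -u
   ```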

Refer to [INSTALL.userspace.md] for general requirements of building
userspace OVS.

Using the DPDK with ovs-vswitchd:
---------------------------------

1. Set up system boot.

   Add the following options to the kernel bootline:

   `default_hugepagesz=1GB hugepagesz=1G hugepages=1`
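
   After rebooting, the hugepage allocation can be verified before
   continuing. A quick check (the counters come from the kernel's standard
   /proc/meminfo interface):

   ```
   # Expect HugePages_Total: 1 and Hugepagesize: 1048576 kB
   grep -i huge /proc/meminfo
   ```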

2. Set up DPDK devices:
   1. Insert uio.ko: `modprobe uio`
   2. Insert igb_uio.ko: `insmod $DPDK_BUILD/kmod/igb_uio.ko`
   3. Bind the network device to igb_uio:
      `$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio eth1`
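
   The binding can be confirmed with the same script (eth1 is just the
   example device used above):

   ```
   # The bound device should be listed under "Network devices using DPDK-compatible driver".
   $DPDK_DIR/tools/dpdk_nic_bind.py --status
   ```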

3. Mount the hugetlbfs filesystem:

   `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`
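
   To make the mount persistent across reboots, an /etc/fstab entry along
   these lines can be used (a sketch; the paths and options mirror the
   mount command above):

   ```
   # hugetlbfs mount for DPDK (1GB pages)
   none /dev/hugepages hugetlbfs pagesize=1G 0 0
   ```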

   Refer to http://www.dpdk.org/doc/quick-start for verifying the DPDK setup.

4. Start ovsdb-server as discussed in the [INSTALL.md] doc:
   1. The first time only, create the database (or clear it):

      ```
      mkdir -p /usr/local/etc/openvswitch
      mkdir -p /usr/local/var/run/openvswitch
      rm /usr/local/etc/openvswitch/conf.db
      cd $OVS_DIR
      ./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
          ./vswitchd/vswitch.ovsschema
      ```

   2. Start ovsdb-server:

      ```
      cd $OVS_DIR
      ./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
          --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
          --private-key=db:Open_vSwitch,SSL,private_key \
          --certificate=db:Open_vSwitch,SSL,certificate \
          --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
      ```

   3. The first time after creating the database, initialize it:

      ```
      cd $OVS_DIR
      ./utilities/ovs-vsctl --no-wait init
      ```
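
   At this point the database server can be sanity-checked. A quick check
   (it should print the Open_vSwitch UUID and no bridges yet):

   ```
   cd $OVS_DIR
   ./utilities/ovs-vsctl show
   ```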

5. Start vswitchd:

   DPDK configuration arguments can be passed to vswitchd via the `--dpdk`
   argument. This needs to be the first argument passed to the vswitchd
   process. The DPDK `-c` (coremask) argument is ignored by ovs-dpdk, but
   it is a required parameter for DPDK initialization.

   ```
   export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
   ./vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile --detach
   ```

   If more than one GB of hugepages is allocated (as for IVSHMEM), set the
   amount of memory to use and take it from NUMA node 0:

   ```
   ./vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 \
       -- unix:$DB_SOCK --pidfile --detach
   ```
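
   Whether vswitchd came up can be checked via its pidfile and log. A quick
   check (the paths below are the defaults for a /usr/local install; adjust
   them if configured differently):

   ```
   # ovs-vswitchd should be running; the log will later report the DPDK ports found.
   cat /usr/local/var/run/openvswitch/ovs-vswitchd.pid
   tail /usr/local/var/log/openvswitch/ovs-vswitchd.log
   ```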

6. Add bridge & ports:

   To use ovs-vswitchd with DPDK, create a bridge with datapath_type
   "netdev" in the configuration database. For example:

   `ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev`

   Now you can add DPDK devices. OVS expects DPDK device names to start
   with "dpdk" and end with a port id. vswitchd should print (in the log
   file) the number of dpdk devices found.

   ```
   ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
   ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
   ```

   Once the first DPDK port is added to vswitchd, it creates a polling
   thread and polls the dpdk devices in a continuous loop. Therefore CPU
   utilization for that thread is always 100%.
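
   The result can be inspected from the database. A quick check (both
   interfaces should show up under br0 with type dpdk):

   ```
   ovs-vsctl show
   ovs-vsctl list Interface dpdk0 dpdk1
   ```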

7. Add test flows:

   Test flow script across the NICs (assuming OVS is in /usr/src/ovs).
   Execute the script:

   ```
   #! /bin/sh
   # Move to command directory
   cd /usr/src/ovs/utilities/

   # Clear current flows
   ./ovs-ofctl del-flows br0

   # Add flows between port 1 (dpdk0) and port 2 (dpdk1)
   ./ovs-ofctl add-flow br0 in_port=1,action=output:2
   ./ovs-ofctl add-flow br0 in_port=2,action=output:1
   ```
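
   The installed flows can then be verified with:

   ```
   # Both flows should be listed, with packet counters increasing under traffic.
   ./ovs-ofctl dump-flows br0
   ```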

8. Performance tuning:

   With pmd multi-threading support, OVS creates one pmd thread for each
   NUMA node by default. The pmd thread handles the I/O of all DPDK
   interfaces on the same NUMA node. The following two commands can be
   used to configure the multi-threading behavior.

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=<hex string>`

   The command above asks for a CPU mask for setting the affinity of pmd
   threads. A set bit in the mask means a pmd thread is created and pinned
   to the corresponding CPU core. For more information, please refer to
   `man ovs-vswitchd.conf.db`.

   `ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<integer>`

   The command above sets the number of rx queues for each DPDK interface.
   The rx queues are assigned to pmd threads on the same NUMA node in a
   round-robin fashion. For more information, please refer to
   `man ovs-vswitchd.conf.db`.

   Ideally, for maximum throughput, the pmd thread should not be scheduled
   out, as that temporarily halts its execution. The following
   affinitization methods can help.

   Let's pick cores 4, 6, 8 and 10 for the pmd threads to run on. Also
   assume a dual 8-core Sandy Bridge system with hyperthreading enabled,
   where CPU1 has cores 0..7 and 16..23, and CPU2 has cores 8..15 and
   24..31. (A different CPU configuration could have different core mask
   requirements.)

   Add a core isolation list for these cores and their hyperthread siblings
   to the kernel bootline (e.g. isolcpus=4,20,6,22,8,24,10,26). Reboot the
   system for the isolation to take effect, then restart everything.

   Configure the pmd threads on cores 4, 6, 8 and 10 using 'pmd-cpu-mask':

   `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=00000550`
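
   For a different set of cores, the mask is simply the bitwise OR of
   `1 << core` for each chosen core. A small sketch (a hypothetical helper,
   not part of OVS):

   ```
   # Compute a pmd-cpu-mask from a list of core IDs.
   cores="4 6 8 10"
   mask=0
   for c in $cores; do
       mask=$((mask | (1 << c)))
   done
   printf 'pmd-cpu-mask=%08x\n' $mask    # -> pmd-cpu-mask=00000550
   ```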

   You should be able to check that the pmd threads are pinned to the
   correct cores via:

   ```
   top -p `pidof ovs-vswitchd` -H -d1
   ```

   Note: the pmd threads on a NUMA node are only created if there is at
   least one DPDK interface from that NUMA node added to OVS.

   Note: core 0 is always reserved for non-pmd threads and should never be
   set in the CPU mask.

DPDK Rings:
-----------

Following the steps above to create a bridge, you can now add DPDK rings
as a port to the vswitch. OVS will expect the DPDK ring device name to
start with "dpdkr" and end with a port id.

`ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr`

DPDK rings client test application

Included in the test directory is a sample DPDK application for testing
the rings. It comes from the base DPDK directory and has been modified to
work with the ring naming used within OVS.

Location: tests/ovs_client

To run the client:

```
cd /usr/src/ovs/tests/
ovsclient -c 1 -n 4 --proc-type=secondary -- -n "port id you gave dpdkr"
```

In the case of the dpdkr example above, the "port id you gave dpdkr" is 0.

It is essential to have `--proc-type=secondary`.

The application simply receives an mbuf on the receive queue of the
Ethernet ring and then places that same mbuf on the transmit ring of
the Ethernet ring. It is a trivial loopback application.

DPDK rings in VM (IVSHMEM shared memory communications)
--------------------------------------------------------

In addition to executing the client in the host, you can execute it within
a guest VM. To do so you will need a patched QEMU. You can download the
patch and a getting started guide at:

https://01.org/packet-processing/downloads

A general rule of thumb for better performance is that the client
application should not be assigned the same DPDK core mask `-c` as
the vswitchd.
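
For example, since the vswitchd above was started with `-c 0x1` (core 0),
the client could use a non-overlapping mask. A sketch (the mask value is
just an illustration, not a requirement):

```
# vswitchd owns core 0 (-c 0x1), so run the client on core 1 instead.
ovsclient -c 0x2 -n 4 --proc-type=secondary -- -n 0
```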

Restrictions:
-------------

  - This support is for physical NICs only; it has been tested with Intel
    NICs only.
  - Only a 1500 MTU works; a few changes are needed in the DPDK lib to fix
    this issue.
  - Currently the DPDK port does not make use of any offload functionality.

  ivshmem:
  - The shared memory is currently restricted to the use of 1GB
    huge pages.
  - All huge pages are shared amongst the host, clients, virtual
    machines etc.

Bug Reporting:
--------------

Please report problems to bugs@openvswitch.org.

[INSTALL.userspace.md]: INSTALL.userspace.md
[INSTALL.md]: INSTALL.md