OVS DPDK ADVANCED INSTALL GUIDE
===============================

## Contents

1. [Overview](#overview)
2. [Building Shared Library](#build)
3. [System configuration](#sysconf)
4. [Performance Tuning](#perftune)
5. [OVS Testcases](#ovstc)
6. [Vhost Walkthrough](#vhost)
7. [QOS](#qos)
8. [Rate Limiting](#rl)
9. [Vsperf](#vsperf)

## <a name="overview"></a> 1. Overview

The Advanced Install Guide explains how to improve OVS performance when using
the DPDK datapath. It also provides information on tuning, system configuration,
troubleshooting, static code analysis and testcases.

## <a name="build"></a> 2. Building Shared Library

DPDK can be built as a static or a shared library and linked by applications
that use the DPDK datapath. This section lists the steps to build DPDK as a
shared library and dynamically link it against OVS.

Note: A minor performance loss is seen with OVS when using the shared DPDK
library as compared to the static library.

See the [INSTALL DPDK] and [INSTALL OVS] sections of INSTALL.DPDK for download
instructions for DPDK and OVS.

* Configure the DPDK library

  Set `CONFIG_RTE_BUILD_SHARED_LIB=y` in `config/common_base`
  to generate the shared DPDK library.

* Build and install DPDK

  For the default install (without IVSHMEM), set `export DPDK_TARGET=x86_64-native-linuxapp-gcc`.
  For the IVSHMEM case, set `export DPDK_TARGET=x86_64-ivshmem-linuxapp-gcc`.

  ```
  export DPDK_DIR=/usr/src/dpdk-16.04
  export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
  cd $DPDK_DIR
  make install T=$DPDK_TARGET DESTDIR=install
  ```

* Build, Install and Setup OVS

  Export the DPDK shared library location and set up OVS as listed in
  section 3.3 of INSTALL.DPDK.

  `export LD_LIBRARY_PATH=$DPDK_DIR/x86_64-native-linuxapp-gcc/lib`

## <a name="sysconf"></a> 3. System Configuration

To achieve optimal OVS performance, the system should be configured appropriately.
This includes BIOS tweaks, GRUB cmdline additions, an understanding of the NUMA
topology, and careful selection of PCIe slots for NIC placement.

### 3.1 Recommended BIOS settings

```
| Settings                                     | Values      |
|----------------------------------------------|-------------|
| C3 power state                               | Disabled    |
| C6 power state                               | Disabled    |
| MLC Streamer                                 | Enabled     |
| MLC Spatial Prefetcher                       | Enabled     |
| DCU Data Prefetcher                          | Enabled     |
| DCA                                          | Enabled     |
| CPU power and performance                    | Performance |
| Memory RAS and perf config -> NUMA optimized | Enabled     |
```

### 3.2 PCIe Slot Selection

The fastpath performance also depends on factors like NIC placement,
channel speeds between the PCIe slot and the CPU, and the proximity of the
PCIe slot to the CPU cores running the DPDK application. Listed below are the
steps to identify the right PCIe slot.

- Retrieve host details using the cmd `dmidecode -t baseboard | grep "Product Name"`
- Download the technical specification for the product listed, e.g. S2600WT2.
- Check the Product Architecture Overview for the riser slot placement,
  CPU sharing info and also PCIe channel speeds.

  Example: On the S2600WT, CPU1 and CPU2 share Riser Slot 1, with a channel speed of
  32GB/s between CPU1 and Riser Slot 1 and 16GB/s between CPU2 and Riser Slot 1.
  Running the DPDK app on CPU1 cores with the NIC inserted into the riser card slots
  will optimize OVS performance in this case.

- Check the Riser Card #1 - Root Port mapping information for the available slots
  and individual bus speeds. On the S2600WT, slots 1 and 2 have high bus speeds and
  are potential slots for NIC placement.

### 3.3 Advanced Hugepage setup

Allocate and mount 1G huge pages:

- For persistent allocation of huge pages, add the following options to the kernel bootline:

  Add `default_hugepagesz=1GB hugepagesz=1G hugepages=N`

  For platforms supporting multiple huge page sizes, add the options

  `default_hugepagesz=<size> hugepagesz=<size> hugepages=N`
  where 'N' = number of huge pages requested and 'size' = huge page size with an
  optional suffix [kKmMgG].

- For run-time allocation of huge pages:

  `echo N > /sys/devices/system/node/nodeX/hugepages/hugepages-1048576kB/nr_hugepages`
  where 'N' = number of huge pages requested and 'X' = the NUMA node.

  Note: For run-time allocation of 1G huge pages, the Contiguous Memory Allocator
  (CONFIG_CMA) has to be supported by the kernel; check your Linux distro.

- Mount the huge pages:

  `mount -t hugetlbfs -o pagesize=1G none /dev/hugepages`

  Note: Mount hugepages if not already mounted by default.

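To verify the allocation, the kernel counters can be inspected; a quick check
(this sketch assumes 1G pages on NUMA node 0):

```
# Overall hugepage counters for all page sizes
grep -i huge /proc/meminfo

# Number of 1G pages currently allocated on NUMA node 0
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
```
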
### 3.4 Enable Hyperthreading

Requires BIOS changes.

With HT/SMT enabled, a physical core appears as two logical cores.
SMT can be utilized to spawn worker threads on logical cores of the same
physical core, thereby saving additional cores.

With DPDK, when pinning pmd threads to logical cores, care must be taken
to set the correct bits in the pmd-cpu-mask to ensure that the pmd threads are
pinned to SMT siblings.

Example system configuration:
Dual socket machine, 2x 10 core processors, HT enabled, 40 logical cores.

To use two logical cores which share the same physical core for pmd threads,
the following command can be used to identify a pair of logical cores:

`cat /sys/devices/system/cpu/cpuN/topology/thread_siblings_list`, where N is the
logical core number.
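
For instance, on the example system above, the query for logical core 1 might
return the following (the second line is the output; the exact pairing depends
on the platform's core enumeration):

```
cat /sys/devices/system/cpu/cpu1/topology/thread_siblings_list
1,21
```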

In this example, it would show that cores 1 and 21 share the same physical core.
The pmd-cpu-mask to enable two pmd threads running on these two logical cores
(one physical core) is:

`ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=200002`

### 3.5 Isolate cores

The 'isolcpus' kernel boot option can be used to isolate cores from the Linux
scheduler. The isolated cores can then be dedicated to running HPC
applications/threads. This helps achieve better application performance due to
zero context switching and minimal cache thrashing. To run platform logic on
core 0 and isolate cores 1 to 19 from the scheduler, add `isolcpus=1-19` to the
GRUB cmdline.
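
One common way to apply this, assuming a grub2-based distro (file names and
paths vary by distribution):

```
# /etc/default/grub: append to the existing kernel command line
GRUB_CMDLINE_LINUX="... isolcpus=1-19"

# Regenerate the grub configuration and reboot
grub2-mkconfig -o /boot/grub2/grub.cfg
```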

Note: In some circumstances core isolation has been found to offer only minimal
advantage, due to the maturity of the Linux scheduler.

### 3.6 NUMA/Cluster on Die

Ideally inter-NUMA datapaths should be avoided where possible as packets
will go across QPI and there may be a slight performance penalty when
compared with intra-NUMA datapaths. On the Intel Xeon Processor E5 v3,
Cluster On Die is introduced on models that have 10 cores or more.
This makes it possible to logically split a socket into two NUMA regions,
and again it is preferred where possible to keep critical datapaths
within one cluster.

It is good practice to ensure that threads that are in the datapath are
pinned to cores in the same NUMA area, e.g. pmd threads and QEMU vCPUs
responsible for forwarding. If DPDK is built with
CONFIG_RTE_LIBRTE_VHOST_NUMA=y, vHost User ports automatically
detect the NUMA socket of the QEMU vCPUs and will be serviced by a PMD
from the same node provided a core on this node is enabled in the
pmd-cpu-mask.

### 3.7 Compiler Optimizations

The default compiler optimization level is '-O2'. Changing this to a
more aggressive compiler optimization such as '-O3 -march=native'
with gcc (verified on 5.3.1) can produce performance gains, though not
significant. '-march=native' produces code optimized for the local machine
and should only be used when the software is compiled on the testbed itself.
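
For example, assuming the standard OVS autoconf flow from INSTALL.DPDK, the
flags can be passed at configure time:

```
./configure --with-dpdk=$DPDK_BUILD CFLAGS="-O3 -march=native"
make
```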

## <a name="perftune"></a> 4. Performance Tuning

### 4.1 Affinity

For superior performance, DPDK pmd threads and Qemu vCPU threads
need to be affinitized accordingly.

* PMD thread Affinity

  A poll mode driver (pmd) thread handles the I/O of all DPDK
  interfaces assigned to it. A pmd thread shall poll the ports
  for incoming packets, switch the packets and send them to the tx port.
  A pmd thread is CPU bound, and needs to be affinitized to isolated
  cores for optimum performance.

  By setting a bit in the mask, a pmd thread is created and pinned
  to the corresponding CPU core. e.g. to run a pmd thread on core 2:

  `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=4`

  Note: A pmd thread on a NUMA node is only created if there is
  at least one DPDK interface from that NUMA node added to OVS.

* Qemu vCPU thread Affinity

  A VM performing simple packet forwarding or running complex packet
  pipelines has to ensure that the vCPU threads performing the work have
  as much CPU occupancy as possible.

  Example: On a multicore VM, multiple QEMU vCPU threads will be spawned.
  When the DPDK 'testpmd' application that does packet forwarding
  is invoked, the 'taskset' cmd should be used to affinitize the vCPU threads
  to the dedicated isolated cores on the host system.
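
  A minimal sketch of this, assuming hypothetical thread IDs and isolated host
  cores 4 and 5:

  ```
  # List the QEMU threads; under load the vCPU threads are the busiest ones
  ps -T -o pid,tid,psr,comm -p $(pidof qemu-system-x86_64)

  # Pin two vCPU threads (TIDs 12345 and 12346 are placeholders) to isolated cores
  taskset -pc 4 12345
  taskset -pc 5 12346
  ```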

### 4.2 Multiple poll mode driver threads

With pmd multi-threading support, OVS creates one pmd thread
for each NUMA node by default. However, in cases where there are
multiple ports/rxq's producing traffic, performance can be improved
by creating multiple pmd threads running on separate cores. These pmd
threads can then share the workload by each being responsible for
different ports/rxq's. Assignment of ports/rxq's to pmd threads is
done automatically.

A set bit in the mask means a pmd thread is created and pinned
to the corresponding CPU core. e.g. to run pmd threads on cores 1 and 2:

`ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=6`

For example, when using dpdk and dpdkvhostuser ports in a bi-directional
VM loopback as shown below, spreading the workload over 2 or 4 pmd
threads shows significant improvements as there will be more total CPU
occupancy available.

NIC port0 <-> OVS <-> VM <-> OVS <-> NIC port 1

### 4.3 DPDK physical port Rx Queues

`ovs-vsctl set Interface <DPDK interface> options:n_rxq=<integer>`

The command above sets the number of rx queues for the specified DPDK physical
interface. The rx queues are assigned to pmd threads on the same NUMA node in a
round-robin fashion.
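
For example, to give a physical port two rx queues (the port name `dpdk0`
follows the naming used in INSTALL.DPDK):

```
ovs-vsctl set Interface dpdk0 options:n_rxq=2
```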

### 4.4 Exact Match Cache

Each pmd thread contains one EMC. After initial flow setup in the
datapath, the EMC contains a single table and provides the lowest level
(fastest) switching for DPDK ports. If there is a miss in the EMC then
the next level where switching will occur is the datapath classifier.
Missing in the EMC and looking up in the datapath classifier incurs a
significant performance penalty. If lookup misses occur in the EMC
because it is too small to handle the number of flows, its size can
be increased. The EMC size can be modified by editing the define
EM_FLOW_HASH_SHIFT in lib/dpif-netdev.c.

As mentioned above, an EMC is per pmd thread. So an alternative way of
increasing the aggregate number of possible flow entries in the EMC and
avoiding datapath classifier lookups is to have multiple pmd threads
running. This can be done as described in section 4.2.
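
The resize is a one-line source edit followed by a rebuild; an illustrative
sketch, assuming the default shift value is 13 in this release:

```
# Double the per-pmd EMC size by bumping the shift from 13 to 14, then rebuild OVS
sed -i 's/#define EM_FLOW_HASH_SHIFT 13/#define EM_FLOW_HASH_SHIFT 14/' lib/dpif-netdev.c
make && make install
```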

### 4.5 Rx Mergeable buffers

Rx mergeable buffers is a virtio feature that allows chaining of multiple
virtio descriptors to handle large packet sizes. Large packets
are handled by reserving and chaining multiple free descriptors
together. Mergeable buffer support is negotiated between the virtio
driver and virtio device and is supported by the DPDK vhost library.
This behavior is typically supported and enabled by default, however
in the case where the user knows that rx mergeable buffers are not needed,
i.e. jumbo frames are not needed, it can be forced off by adding
mrg_rxbuf=off to the QEMU command line options. By not reserving multiple
chains of descriptors, more individual virtio descriptors are made
available for rx to the guest using dpdkvhost ports, and this can improve
performance.
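
For example, mergeable buffers can be disabled on a vhost-user device (the
netdev and mac values below match the vhost-user example in section 6.1):

```
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off
```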

## <a name="ovstc"></a> 5. OVS Testcases
### 5.1 PHY-VM-PHY [VHOST LOOPBACK]

Section 5.2 of the INSTALL.DPDK guide lists the steps for the PVP loopback testcase
and packet forwarding using the DPDK testpmd application in the guest VM.
For users wanting to do packet forwarding using the kernel stack instead, the
steps are listed below (run inside the guest).

```
ifconfig eth1 1.1.1.2/24
ifconfig eth2 1.1.2.2/24
systemctl stop firewalld.service
systemctl stop iptables.service
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.eth1.rp_filter=0
sysctl -w net.ipv4.conf.eth2.rp_filter=0
route add -net 1.1.2.0/24 eth2
route add -net 1.1.1.0/24 eth1
arp -s 1.1.2.99 DE:AD:BE:EF:CA:FE
arp -s 1.1.1.99 DE:AD:BE:EF:CA:EE
```

### 5.2 PHY-VM-PHY [IVSHMEM]

The steps (1-5) in section 3.3 of the INSTALL.DPDK guide create & initialize the DB,
start vswitchd and add DPDK devices to bridge br0.

1. Add a DPDK ring port to the bridge

   ```
   ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr
   ```

2. Build the modified Qemu (Qemu-2.2.1 + ivshmem-qemu-2.2.1.patch)

   ```
   cd /usr/src/
   wget http://wiki.qemu.org/download/qemu-2.2.1.tar.bz2
   tar -jxvf qemu-2.2.1.tar.bz2
   cd /usr/src/qemu-2.2.1
   wget https://raw.githubusercontent.com/netgroup-polito/un-orchestrator/master/orchestrator/compute_controller/plugins/kvm-libvirt/patches/ivshmem-qemu-2.2.1.patch
   patch -p1 < ivshmem-qemu-2.2.1.patch
   ./configure --target-list=x86_64-softmmu --enable-debug --extra-cflags='-g'
   make -j 4
   ```

3. Generate the Qemu command line

   ```
   mkdir -p /usr/src/cmdline_generator
   cd /usr/src/cmdline_generator
   wget https://raw.githubusercontent.com/netgroup-polito/un-orchestrator/master/orchestrator/compute_controller/plugins/kvm-libvirt/cmdline_generator/cmdline_generator.c
   wget https://raw.githubusercontent.com/netgroup-polito/un-orchestrator/master/orchestrator/compute_controller/plugins/kvm-libvirt/cmdline_generator/Makefile
   export RTE_SDK=/usr/src/dpdk-16.04
   export RTE_TARGET=x86_64-ivshmem-linuxapp-gcc
   make
   ./build/cmdline_generator -m -p dpdkr0 XXX
   cmdline=`cat OVSMEMPOOL`
   ```

4. Start the guest VM

   ```
   export VM_NAME=ivshmem-vm
   export QCOW2_IMAGE=/root/CentOS7_x86_64.qcow2
   export QEMU_BIN=/usr/src/qemu-2.2.1/x86_64-softmmu/qemu-system-x86_64

   taskset 0x20 $QEMU_BIN -cpu host -smp 2,cores=2 -hda $QCOW2_IMAGE -m 4096 --enable-kvm -name $VM_NAME -nographic -vnc :2 -pidfile /tmp/vm1.pid $cmdline
   ```

5. Run the sample "dpdk ring" app in the VM

   ```
   echo 1024 > /proc/sys/vm/nr_hugepages
   mount -t hugetlbfs nodev /dev/hugepages  # if not already mounted

   # Build the DPDK ring application in the VM
   export RTE_SDK=/root/dpdk-16.04
   export RTE_TARGET=x86_64-ivshmem-linuxapp-gcc
   make

   # Run the dpdkring application; "-n 0" refers to ring '0', i.e. dpdkr0
   ./build/dpdkr -c 1 -n 4 -- -n 0
   ```

## <a name="vhost"></a> 6. Vhost Walkthrough

DPDK 16.04 supports two types of vhost:

1. vhost-user - enabled by default

2. vhost-cuse - legacy, disabled by default

### 6.1 vhost-user

- Prerequisites:

  QEMU version >= 2.2

- Adding vhost-user ports to Switch

  Unlike DPDK ring ports, DPDK vhost-user ports can have arbitrary names,
  except that forward and backward slashes are prohibited in the names.

  For vhost-user, the name of the port type is `dpdkvhostuser`.

  ```
  ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1
  type=dpdkvhostuser
  ```

  This action creates a socket located at
  `/usr/local/var/run/openvswitch/vhost-user-1`, which you must provide
  to your VM on the QEMU command line. More instructions on this can be
  found in the next section, "Adding vhost-user ports to VM".

  Note: If you wish for the vhost-user sockets to be created in a
  sub-directory of `/usr/local/var/run/openvswitch`, you may specify
  this directory in the ovsdb like so:

  `./utilities/ovs-vsctl --no-wait \
    set Open_vSwitch . other_config:vhost-sock-dir=subdir`

- Adding vhost-user ports to VM

  1. Configure sockets

     Pass the following parameters to QEMU to attach a vhost-user device:

     ```
     -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1
     -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce
     -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1
     ```

     where vhost-user-1 is the name of the vhost-user port added
     to the switch.
     Repeat the above parameters for multiple devices, changing the
     chardev path and id as necessary. Note that a separate and different
     chardev path needs to be specified for each vhost-user device. For
     example, if you have a second vhost-user port named 'vhost-user-2', you
     append your QEMU command line with an additional set of parameters:

     ```
     -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
     -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce
     -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2
     ```

  2. Configure huge pages

     QEMU must allocate the VM's memory on hugetlbfs. vhost-user ports access
     a virtio-net device's virtual rings and packet buffers by mapping the VM's
     physical memory on hugetlbfs. To enable vhost-user ports to map the VM's
     memory into their process address space, pass the following parameters
     to QEMU:

     ```
     -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
     share=on -numa node,memdev=mem -mem-prealloc
     ```

  3. Enable multiqueue support (OPTIONAL)

     QEMU needs to be configured to use multiqueue.
     The $q below is the number of queues.
     The $v is the number of vectors, which is '$q x 2 + 2'.

     ```
     -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
     -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce,queues=$q
     -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mq=on,vectors=$v
     ```

     The vhost-user interface will be automatically reconfigured with the
     required number of rx and tx queues after the virtio device connects.
     Manual configuration of `n_rxq` is not supported because OVS will work
     properly only if `n_rxq` matches the number of queues configured in QEMU.

     At least 2 PMDs should be configured for the vswitch when using multiqueue.
     Using a single PMD will cause traffic to be enqueued to the same vhost
     queue rather than being distributed among different vhost queues for a
     vhost-user interface.

     If traffic destined for a VM configured with multiqueue arrives at the
     vswitch via a physical DPDK port, then the number of rxqs should also be
     set to at least 2 for that physical DPDK port. This is required to increase
     the probability that a different PMD will handle the multiqueue
     transmission to the guest using a different vhost queue.

     If one wishes to use multiple queues for an interface in the guest, the
     driver in the guest operating system must be configured to do so. It is
     recommended that the number of queues configured be equal to '$q'.

     For example, this can be done for the Linux kernel virtio-net driver with:

     ```
     ethtool -L <DEV> combined <$q>
     ```

     where `-L` changes the number of channels of the specified network device
     and `combined` changes the number of multi-purpose channels.

- VM Configuration with libvirt

  * Change the user/group, access control policy and restart libvirtd.

    - In `/etc/libvirt/qemu.conf` add/edit the following lines:

      ```
      user = "root"
      group = "root"
      ```

    - Disable SELinux or set it to permissive mode

      `setenforce 0`

    - Restart the libvirtd process. For example, on Fedora:

      `systemctl restart libvirtd.service`

  * Instantiate the VM

    - Copy the xml configuration from [Guest VM using libvirt] into your workspace.

    - Start the VM.

      `virsh create demovm.xml`

    - Connect to the guest console

      `virsh console demovm`

  * VM configuration

    The demovm xml configuration is aimed at achieving out of the box performance
    on the VM.

    - The vcpus are pinned to the cores of CPU socket 0 using vcpupin.

    - The NUMA cell and shared memory are configured using memAccess='shared'.

    - Mergeable rx buffers are disabled with mrg_rxbuf='off'.

  Note: For information on libvirt and further tuning refer to [libvirt].

### 6.2 vhost-cuse

- Prerequisites:

  QEMU version >= 2.2

- Enable vhost-cuse support

  1. Enable vhost-cuse support in DPDK

     Set `CONFIG_RTE_LIBRTE_VHOST_USER=n` in config/common_linuxapp and follow the
     steps in section 2.2 of the INSTALL.DPDK guide to build DPDK with cuse support.
     OVS will detect that DPDK has the vhost-cuse libraries compiled and in turn will
     enable support for it in the switch and disable vhost-user support.

  2. Insert the Cuse module

     `modprobe cuse`

  3. Build and insert the `eventfd_link` module

     ```
     cd $DPDK_DIR/lib/librte_vhost/eventfd_link/
     make
     insmod $DPDK_DIR/lib/librte_vhost/eventfd_link/eventfd_link.ko
     ```

- Adding vhost-cuse ports to Switch

  Unlike DPDK ring ports, DPDK vhost-cuse ports can have arbitrary names.
  For vhost-cuse, the name of the port type is `dpdkvhostcuse`.

  ```
  ovs-vsctl add-port br0 vhost-cuse-1 -- set Interface vhost-cuse-1
  type=dpdkvhostcuse
  ```

  When attaching vhost-cuse ports to QEMU, the name provided during the
  add-port operation must match the ifname parameter on the QEMU cmd line.

- Adding vhost-cuse ports to VM

  vhost-cuse ports use a Linux* character device to communicate with QEMU.
  By default it is set to `/dev/vhost-net`. It is possible to reuse this
  standard device for DPDK vhost, which makes setup a little simpler, but it
  is better practice to specify an alternative character device in order to
  avoid any conflicts if kernel vhost is to be used in parallel.

  1. This step is only needed if using an alternative character device.

     ```
     ./utilities/ovs-vsctl --no-wait set Open_vSwitch . \
       other_config:cuse-dev-name=my-vhost-net
     ```

     In the example above, the character device to be used will be
     `/dev/my-vhost-net`.

  2. If you are reusing the kernel vhost character device, there will be a
     conflict and the user should remove it first:

     `rm -rf /dev/vhost-net`

  3. Configure virtio-net adapters

     The following parameters must be passed to the QEMU binary; repeat
     the parameters below for multiple devices.

     ```
     -netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on
     -device virtio-net-pci,netdev=net1,mac=<mac>
     ```

     The DPDK vhost library will negotiate its own features, so they
     need not be passed in as command line params. Note that as offloads
     are disabled this is the equivalent of setting

     `csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off`

     When using an alternative character device, it must be explicitly
     passed to QEMU using the `vhostfd` argument:

     ```
     -netdev tap,id=<id>,script=no,downscript=no,ifname=<name>,vhost=on,
     vhostfd=<open_fd> -device virtio-net-pci,netdev=net1,mac=<mac>
     ```

     The open file descriptor must be passed to QEMU running as a child
     process. This could be done with a simple python script such as:

     ```
     #!/usr/bin/python
     import os
     import subprocess

     fd = os.open("/dev/usvhost", os.O_RDWR)
     subprocess.call("qemu-system-x86_64 .... -netdev tap,id=vhostnet0,"
                     "vhost=on,vhostfd=" + str(fd) + " ...", shell=True)
     ```

  4. Configure huge pages

     QEMU must allocate the VM's memory on hugetlbfs. Vhost ports access a
     virtio-net device's virtual rings and packet buffers by mapping the VM's
     physical memory on hugetlbfs. To enable vhost ports to map the VM's
     memory into their process address space, pass the following parameters
     to QEMU:

     `-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,
     share=on -numa node,memdev=mem -mem-prealloc`

- VM Configuration with QEMU wrapper

  The QEMU wrapper script automatically detects and calls QEMU with the
  necessary parameters. It performs the following actions:

  * Automatically detects the location of the hugetlbfs and inserts this
    into the command line parameters.
  * Automatically opens file descriptors for each virtio-net device and
    inserts these into the command line parameters.
  * Calls QEMU passing both the command line parameters passed to the
    script itself and those it has auto-detected.

  Before use, you **must** edit the configuration parameters section of the
  script to point to the correct emulator location and set additional
  settings. Of these settings, `emul_path` and `us_vhost_path` **must** be
  set. All other settings are optional.

  To use directly from the command line, simply pass the wrapper some of the
  QEMU parameters: it will configure the rest. For example:

  ```
  qemu-wrap.py -cpu host -boot c -hda <disk image> -m 4096 -smp 4
  --enable-kvm -nographic -vnc none -net none -netdev tap,id=net1,
  script=no,downscript=no,ifname=if1,vhost=on -device virtio-net-pci,
  netdev=net1,mac=00:00:00:00:00:01
  ```

- VM Configuration with libvirt

  If you are using libvirt, you must enable libvirt to access the character
  device by adding it to libvirtd's device cgroup ACL (`cgroup_device_acl`)
  using the following steps.

  1. In `/etc/libvirt/qemu.conf` add/edit the following lines:

     ```
     clear_emulator_capabilities = 0
     user = "root"
     group = "root"
     cgroup_device_acl = [
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
        "/dev/rtc", "/dev/hpet", "/dev/net/tun",
        "/dev/<my-vhost-device>",
        "/dev/hugepages"]
     ```

     <my-vhost-device> refers to "vhost-net" if using the `/dev/vhost-net`
     device. If you have specified a different name in the database
     using the "other_config:cuse-dev-name" parameter, please specify that
     filename instead.

  2. Disable SELinux or set it to permissive mode

  3. Restart the libvirtd process.
     For example, on Fedora:

     `systemctl restart libvirtd.service`

  After successfully editing the configuration, you may launch your
  vhost-enabled VM. The XML describing the VM can be configured like so
  within the <qemu:commandline> section:

  1. Set up shared hugepages:

     ```
     <qemu:arg value='-object'/>
     <qemu:arg value='memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on'/>
     <qemu:arg value='-numa'/>
     <qemu:arg value='node,memdev=mem'/>
     <qemu:arg value='-mem-prealloc'/>
     ```

  2. Set up your tap devices:

     ```
     <qemu:arg value='-netdev'/>
     <qemu:arg value='type=tap,id=net1,script=no,downscript=no,ifname=vhost0,vhost=on'/>
     <qemu:arg value='-device'/>
     <qemu:arg value='virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01'/>
     ```

  Repeat for as many devices as are desired, modifying the id, ifname
  and mac as necessary.

  Again, if you are using an alternative character device (other than
  `/dev/vhost-net`), please specify the file descriptor like so:

  `<qemu:arg value='type=tap,id=net3,script=no,downscript=no,ifname=vhost0,vhost=on,vhostfd=<open_fd>'/>`

  Where <open_fd> refers to the open file descriptor of the character device.
  Instructions on how to retrieve the file descriptor can be found in the
  "DPDK vhost VM configuration" section.
  Alternatively, the process is automated with the qemu-wrap.py script,
  detailed in the next section.

  Now you may launch your VM using virt-manager, or like so:

  `virsh create my_vhost_vm.xml`

- VM Configuration with libvirt & QEMU wrapper

  To use the qemu-wrapper script in conjunction with libvirt, follow the
  steps in the previous section before proceeding with the following steps:

  1. Place `qemu-wrap.py` in libvirtd's binary search PATH ($PATH),
     ideally in the same directory as the QEMU binary.

  2. Ensure that the script has the same owner/group and file permissions
     as the QEMU binary.

  3. Update the VM xml file using `virsh edit VM.xml`

     Set the VM to use the launch script.
     Set the emulator path contained in the `<emulator></emulator>` tags.
     For example, replace `<emulator>/usr/bin/qemu-kvm</emulator>` with
     `<emulator>/usr/bin/qemu-wrap.py</emulator>`

  4. Edit the Configuration Parameters section of the script to point to
     the correct emulator location and set any additional options. If you are
     using an alternative character device name, please set "us_vhost_path" to
     the location of that device. The script will automatically detect and
     insert the correct "vhostfd" value in the QEMU command line arguments.

  5. Use virt-manager to launch the VM

### 6.3 DPDK backend inside VM

Please note that additional configuration is required if you want to run
ovs-vswitchd with the DPDK backend inside a QEMU virtual machine. ovs-vswitchd
creates separate DPDK TX queues for each CPU core available. This operation
fails inside a QEMU virtual machine because, by default, the VirtIO NIC provided
to the guest is configured to support only a single TX queue and a single RX
queue. To change this behavior, you need to turn on the 'mq' (multiqueue)
property of all virtio-net-pci devices emulated by QEMU and used by DPDK.
You may do it manually (by changing the QEMU command line) or, if you use
Libvirt, by adding the following string:

`<driver name='vhost' queues='N'/>`

to the <interface> sections of all network devices used by DPDK. Parameter 'N'
determines how many queues can be used by the guest. This may not work with
old versions of QEMU found in some distros; QEMU version >= 2.2 is required.
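
A sketch of the resulting libvirt interface definition, assuming a standard
virtio interface with four queues (the network name and queue count are
placeholders):

```
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
</interface>
```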

## <a name="qos"></a> 7. QOS

Here is an example of QoS usage.
Assuming you have a vhost-user port transmitting traffic consisting of
packets of size 64 bytes, the following command would limit the egress
transmission rate of the port to ~1,000,000 packets per second:

`ovs-vsctl set port vhost-user0 qos=@newqos -- --id=@newqos create qos
type=egress-policer other-config:cir=46000000 other-config:cbs=2048`

(The cir value is in bytes per second; 46,000,000 corresponds to ~1,000,000
packets per second because the policer counts 46 bytes per 64-byte frame,
i.e. the frame excluding the 14-byte Ethernet header and 4-byte CRC.)

To examine the QoS configuration of the port:

`ovs-appctl -t ovs-vswitchd qos/show vhost-user0`

To clear the QoS configuration from the port and ovsdb use the following:

`ovs-vsctl destroy QoS vhost-user0 -- clear Port vhost-user0 qos`

For more details regarding egress-policer parameters please refer to
vswitch.xml.

## <a name="rl"></a> 8. Rate Limiting

Here is an example of ingress policing usage.
Assuming you have a vhost-user port receiving traffic consisting of
packets of size 64 bytes, the following command would limit the reception
rate of the port to ~1,000,000 packets per second:

`ovs-vsctl set interface vhost-user0 ingress_policing_rate=368000
ingress_policing_burst=1000`

(The rate is in kbits per second; using the same 46 bytes per 64-byte frame
accounting as above, 46 bytes x 8 bits x 1,000,000 packets/sec = 368,000 kbps.)

To examine the ingress policer configuration of the port:

`ovs-vsctl list interface vhost-user0`

To clear the ingress policer configuration from the port use the following:

`ovs-vsctl set interface vhost-user0 ingress_policing_rate=0`

For more details regarding ingress policing see vswitch.xml.

## <a name="vsperf"></a> 9. Vsperf

The goal of the Vsperf project is to develop a vSwitch test framework that can
be used to validate the suitability of different vSwitch implementations in a
Telco deployment environment. More information can be found at the link below:

https://wiki.opnfv.org/display/vsperf/VSperf+Home


Bug Reporting:
--------------

Please report problems to bugs@openvswitch.org.


[INSTALL.userspace.md]: INSTALL.userspace.md
[INSTALL.md]: INSTALL.md
[DPDK Linux GSG]: http://www.dpdk.org/doc/guides/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-igb-uioor-vfio-modules
[DPDK Docs]: http://dpdk.org/doc
[libvirt]: http://libvirt.org/formatdomain.html
[Guest VM using libvirt]: INSTALL.DPDK.md#ovstc
[INSTALL DPDK]: INSTALL.DPDK.md#build
[INSTALL OVS]: INSTALL.DPDK.md#build