.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2016 Intel Corporation.

Flow Bifurcation How-to Guide
=============================

Flow Bifurcation is a mechanism that uses hardware-capable Ethernet devices
to split traffic between Linux user space and kernel space. Since it is a
hardware-assisted feature, this approach can provide line-rate processing
capability. Unlike :ref:`KNI <kni>`, the software only needs to enable the
device configuration; there is no need to take care of the packet movement
during the traffic split. This can yield better performance with less CPU
overhead.

The Flow Bifurcation splits the incoming data traffic between user space
applications (such as DPDK applications) and/or kernel space programs (such
as the Linux kernel stack). It can direct some traffic, for example data
plane traffic, to DPDK, while directing some other traffic, for example
control plane traffic, to the traditional Linux networking stack.

There are a number of technical options to achieve this. A typical example
is to combine SR-IOV with packet classification filtering.

SR-IOV is a PCI standard that allows the same physical adapter to be
presented as multiple virtual functions. Each virtual function (VF) has its
own queues, separate from those of the physical function (PF). The network
adapter directs traffic to a virtual function with a matching destination
MAC address. In a sense, SR-IOV provides the capability for queue division.

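As an illustration, virtual functions can typically be created through the
sysfs interface of the kernel PF driver, and given distinct MAC addresses so
the NIC can steer traffic by destination MAC. The PCI address and interface
name below are hypothetical examples:

.. code-block:: console

   # Create 2 VFs on the PF (PCI address is an example)
   echo 2 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs

   # Assign a distinct MAC address to VF 0 so the NIC can steer by MAC
   ip link set ens786f0 vf 0 mac a0:36:9f:7b:c0:00
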
Packet classification filtering is a hardware capability available on most
network adapters. Filters can be configured to direct specific flows to a
given receive queue in hardware. Different NICs may provide different filter
types to direct flows to a virtual function or to a queue that belongs to
it.

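As a sketch, such filters can often be programmed from Linux with
``ethtool``. The interface name and queue number below are examples, and the
filter types actually supported depend on the NIC and its driver:

.. code-block:: console

   # Direct UDP traffic with destination port 4789 (VXLAN) to queue 16
   ethtool -N ens786f0 flow-type udp4 dst-port 4789 action 16

   # List the currently configured receive flow classification rules
   ethtool -n ens786f0
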
In this way the Linux networking stack can receive specific traffic through
the kernel driver while a DPDK application can receive specific traffic
bypassing the Linux kernel by using drivers like VFIO or the DPDK
``igb_uio`` module.

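For example, a VF intended for the DPDK application can be bound to
``vfio-pci`` with the ``dpdk-devbind.py`` tool shipped with DPDK (the PCI
address is a hypothetical example):

.. code-block:: console

   # Bind the VF to the vfio-pci driver for DPDK use
   ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:01:10.0

   # Show which devices are bound to DPDK-compatible and kernel drivers
   ./usertools/dpdk-devbind.py --status
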
.. _figure_flow_bifurcation_overview:

.. figure:: img/flow_bifurcation_overview.*

   Flow Bifurcation Overview


Using Flow Bifurcation on Mellanox ConnectX
-------------------------------------------

The Mellanox devices are :ref:`natively bifurcated <bifurcated_driver>`,
so there is no need to split into SR-IOV PF/VF
in order to get the flow bifurcation mechanism.
The full device is already shared with the kernel driver.

The DPDK application can set up some flow steering rules,
and let the rest go to the kernel stack.
In order to define the filters strictly with flow rules,
the :ref:`flow_isolated_mode` can be configured.

There are no specific instructions to follow.
The recommended reading is the :doc:`../prog_guide/rte_flow` guide.
Below is an example of testpmd commands
for receiving VXLAN 42 in 4 queues of the DPDK port 0,
while all other packets go to the kernel:

.. code-block:: console

   testpmd> flow isolate 0 true
   testpmd> flow create 0 ingress pattern eth / ipv4 / udp / vxlan vni is 42 / end \
            actions rss queues 0 1 2 3 end / end
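
Once created, the rule can be inspected or removed with the standard testpmd
flow commands (the rule ID shown assumes this is the first rule created on
port 0):

.. code-block:: console

   testpmd> flow list 0
   testpmd> flow destroy 0 rule 0
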