.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2019 Cesnet
   Copyright 2019 Netcope Technologies

NFB poll mode driver library
============================

The NFB poll mode driver library implements support for the Netcope
FPGA Boards (**NFB-***), FPGA-based programmable NICs.
The NFB PMD uses an interface provided by the libnfb library to communicate
with the NFB cards over the nfb layer.

More information about the
`NFB cards <http://www.netcope.com/en/products/fpga-boards>`_
and the technology used
(`Netcope Development Kit <http://www.netcope.com/en/products/fpga-development-kit>`_)
can be found on the `Netcope Technologies website <http://www.netcope.com/>`_.

.. note::

   This driver has external dependencies.
   Therefore it is disabled in default configuration files.
   It can be enabled by setting ``CONFIG_RTE_LIBRTE_NFB_PMD=y``
   and recompiling.

.. note::

   Currently the driver is supported only on the x86_64 architecture.
   Only x86_64 versions of the external libraries are provided.

Prerequisites
-------------

This PMD requires kernel modules which are responsible for the initialization
and allocation of resources needed for the nfb layer to function.
Communication between the PMD and the kernel modules is mediated by the libnfb
library.
These kernel modules and the library are not part of DPDK and must be installed
separately:

* **libnfb library**

  The library provides an API for initializing nfb transfers and for
  receiving and transmitting data segments.

* **Kernel modules**

  * nfb

  The kernel modules manage the initialization of the hardware and the
  allocation and sharing of resources for user space applications.

Dependencies can be found here:
`Netcope common <https://www.netcope.com/en/company/community-support/dpdk-libsze2#NFB>`_.

Versions of the packages
~~~~~~~~~~~~~~~~~~~~~~~~

The minimum versions of the provided packages:

* for DPDK 19.05 and newer

Configuration
-------------

These configuration options can be modified before compilation in the
``.config`` file:

* ``CONFIG_RTE_LIBRTE_NFB_PMD`` default value: **n**

  Value **y** enables compilation of the nfb PMD.

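For the legacy make-based build system, one way to switch the option on is to
edit the generated ``.config`` before building. A minimal sketch (the target
name and build directory follow the standard DPDK make workflow and may differ
on your system):

.. code-block:: console

   make config T=x86_64-native-linuxapp-gcc
   sed -i 's/CONFIG_RTE_LIBRTE_NFB_PMD=n/CONFIG_RTE_LIBRTE_NFB_PMD=y/' build/.config
   make
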
Using the NFB PMD
-----------------

The kernel modules have to be loaded before running the DPDK application,
for example as sketched below.

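Assuming the Netcope packages install the kernel module under its default name
``nfb`` and create the usual device nodes, loading and verifying it could look
like this (illustrative only, follow the vendor documentation for the exact
module and device names):

.. code-block:: console

   sudo modprobe nfb      # load the nfb kernel module
   lsmod | grep nfb       # confirm that the module is loaded
   ls /dev/nfb*           # device nodes created for user space access
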
NFB card architecture
---------------------

The NFB cards are multi-port multi-queue cards, where (generally) data from any
Ethernet port may be sent to any queue.
They are represented in DPDK as a single port.

The NFB-200G2QL card employs an add-on cable which allows it to be connected to
two physical PCI-E slots at the same time (see the diagram below).
This is done to allow 200 Gbps of traffic to be transferred through the PCI-E
bus (note that a single PCI-E 3.0 x16 slot provides only about 125 Gbps of
theoretical throughput).
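
As a rough sanity check on that figure (approximate numbers, ignoring
transaction-layer overhead), a PCI-E 3.0 x16 link carries:

.. math::

   16 \text{ lanes} \times 8\,\mathrm{GT/s} \times \frac{128}{130} \approx 126\,\mathrm{Gbps}

which matches the roughly 125 Gbps quoted above and is well below the 200 Gbps
the card can handle, hence the second PCI-E endpoint.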

Although each slot may be connected to a different CPU and therefore to a
different NUMA node, the card is represented as a single port in DPDK.
To work with data from the individual queues on the right NUMA node, check
which NUMA node the first and the last queue are connected to (each NUMA node
serves half of the queues).

.. figure:: img/szedata2_nfb200g_architecture.*
    :align: center

    NFB-200G2QL high-level diagram

Limitations
-----------

The driver is usable only on Linux, namely on CentOS.

Since a card is always represented as a single port but can be connected to two
NUMA nodes, it is necessary to check manually which NUMA node the master and
which the slave PCI-E endpoint is connected to.

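One way to perform this check (a sketch, assuming the endpoints appear as
ordinary PCI devices; the address below is taken from the example output in the
next section and will differ on other systems) is to read the NUMA node of each
endpoint from sysfs:

.. code-block:: console

   cat /sys/bus/pci/devices/0000:06:00.0/numa_node

The command prints the NUMA node index of the given endpoint, or ``-1`` if the
platform does not report one.
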
Example of usage
----------------

Read packets from receive queues 0 and 1 and write them to transmit queues
0 and 1:

.. code-block:: console

   $RTE_TARGET/app/testpmd -l 0-3 -n 2 \
       -- --port-topology=chained --rxq=2 --txq=2 --nb-cores=2 -i -a

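If other NICs are present in the system, the EAL PCI whitelist option can be
used to restrict probing to the NFB device only. A sketch (``-w`` is the
standard EAL whitelist switch; the PCI address is taken from the output below
and will differ on other systems):

.. code-block:: console

   $RTE_TARGET/app/testpmd -l 0-3 -n 2 -w 0000:06:00.0 \
       -- --port-topology=chained --rxq=2 --txq=2 --nb-cores=2 -i -a
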
Example output:

.. code-block:: console

   [...]
   EAL: PCI device 0000:06:00.0 on NUMA socket -1
   EAL:   probe driver: 1b26:c1c1 net_nfb
   PMD: Initializing NFB device (0000:06:00.0)
   PMD: Available DMA queues RX: 8 TX: 8
   PMD: NFB device (0000:06:00.0) successfully initialized
   Interactive-mode selected
   Auto-start selected
   Configuring Port 0 (socket 0)
   Port 0: 00:11:17:00:00:00
   Checking link statuses...
   Port 0 Link Up - speed 10000 Mbps - full-duplex
   Done
   Start automatic packet forwarding
   io packet forwarding - CRC stripping disabled - packets/burst=32
   nb forwarding cores=2 - nb forwarding ports=1
   RX queues=2 - RX desc=128 - RX free threshold=0
   RX threshold registers: pthresh=0 hthresh=0 wthresh=0
   TX queues=2 - TX desc=512 - TX free threshold=0
   TX threshold registers: pthresh=0 hthresh=0 wthresh=0
   TX RS bit threshold=0 - TXQ flags=0x0
   testpmd>