.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2017 NXP


DPAA Poll Mode Driver
=====================

The DPAA NIC PMD (**librte_pmd_dpaa**) provides poll mode driver
support for the inbuilt NIC found in the **NXP DPAA** SoC family.

More information can be found at `NXP Official Website
<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.

NXP DPAA (Data Path Acceleration Architecture - Gen 1)
------------------------------------------------------

This section provides an overview of the NXP DPAA architecture
and how it is integrated into the DPDK.

Contents summary

- DPAA overview
- DPAA driver architecture overview

.. _dpaa_overview:

DPAA Overview
~~~~~~~~~~~~~

Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.

The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
components on specific QorIQ series multicore processors. This architecture
provides the infrastructure to support simplified sharing of networking
interfaces and accelerators by multiple CPU cores, and the accelerators
themselves.

DPAA includes:

- Cores
- Network and packet I/O
- Hardware offload accelerators
- Infrastructure required to facilitate flow of packets between the components above

Infrastructure components are:

- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
  It allows CPUs and other accelerators connected to the SoC datapath to
  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
  data exchange among CPUs and datapath accelerators.
- The Buffer Manager (BMan) is a hardware buffer pool management block that
  allows software and accelerators on the datapath to acquire and release
  buffers in order to build frames.

Hardware accelerators are:

- SEC - Cryptographic accelerator
- PME - Pattern matching engine

The Network and packet I/O component:

- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
  distribution and policing. Each frame can be parsed and classified, and the
  results may be attached to the frame. This metadata can be used to select
  the particular QMan queue to which the packet is forwarded.


DPAA DPDK - Poll Mode Driver Overview
-------------------------------------

This section provides an overview of the drivers for DPAA:

* Bus driver and associated "DPAA infrastructure" drivers
* Functional object drivers (such as Ethernet).

A brief description of each driver is provided in the layout below, as well
as in the following sections.

.. code-block:: console

                            +------------+
                            | DPDK DPAA  |
                            |    PMD     |
                            +-----+------+
                                  |
                            +-----+------+       +---------------+
                            :  Ethernet  :.......| DPDK DPAA     |
       . . . . . . . . . . .:   (FMAN)   :       | Mempool driver|
      .                     +---+---+----+       |  (BMAN)       |
      .                         ^   |            +-----+---------+
      .                         |   |<enqueue,         .
      .                         |   | dequeue>         .
      .                         |   |                  .
      .                     +---+---V----+             .
      . . . . . . . . . . ..: Portal drv :             .
      . .                   :            :             .
      . .                   +-----+------+             .
      . .                   :   QMAN     :             .
      . .                   :  Driver    :             .
   +----+------+-------+    +-----+------+             .
   |   DPDK DPAA Bus   |          |                    .
   |   driver          |..........|....................
   |   /bus/dpaa       |          |
   +-------------------+          |
                                  |
   ================ HARDWARE =====|========================
                                 PHY
   ==============================|========================

In the above representation, solid lines represent components which interface
with the DPDK RTE framework, and dotted lines represent DPAA internal components.

DPAA Bus driver
~~~~~~~~~~~~~~~

The DPAA bus driver is a ``rte_bus`` driver which scans the platform-like bus.
Key functions include:

- Scanning and parsing the various objects and adding them to their respective
  device lists.
- Performing a probe for the available drivers against each scanned device.
- Creating the necessary Ethernet instances before passing control to the PMD.

DPAA NIC Driver (PMD)
~~~~~~~~~~~~~~~~~~~~~

The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
between the RTE framework and DPAA internal components/drivers.

- Once devices have been identified by the DPAA bus, each device is associated
  with the PMD.
- The PMD is responsible for implementing the necessary glue layer between the
  RTE APIs and the lower-level QMan and FMan blocks.
  The Ethernet driver is bound to an FMAN port and implements the interfaces
  needed to connect the DPAA network interface to the network stack.
  Each FMAN port corresponds to a DPDK network interface.


Features
^^^^^^^^

Features of the DPAA PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- Checksum offload
- Promiscuous mode

DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~

DPAA has a hardware-offloaded buffer pool manager, called BMan, or Buffer
Manager.

- Using the standard mempool operations RTE API, the mempool driver interfaces
  with RTE to service mempool creation, deletion, buffer allocation and
  deallocation requests.
- Each FMAN instance has a BMan pool attached to it during initialization.
  Each Tx frame can be automatically released by hardware if it was allocated
  from this pool.


Whitelisting & Blacklisting
---------------------------

To blacklist a DPAA device, the following command can be used.

.. code-block:: console

   <dpdk app> <EAL args> -b "dpaa_bus:fmX-macY" -- ...
   e.g. "dpaa_bus:fm1-mac4"
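
As a sketch, a small shell helper (hypothetical, not part of DPDK) can compose
the ``-b`` argument in the ``dpaa_bus:fmX-macY`` form shown above:

.. code-block:: shell

   # Hypothetical helper: build a DPAA blacklist argument "dpaa_bus:fmX-macY"
   # from an FMAN instance number and a MAC index, for use with the -b option.
   make_blacklist_arg() {
       fman=$1; mac=$2
       printf 'dpaa_bus:fm%s-mac%s\n' "$fman" "$mac"
   }

   make_blacklist_arg 1 4   # prints: dpaa_bus:fm1-mac4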

Supported DPAA SoCs
-------------------

- LS1043A/LS1023A
- LS1046A/LS1026A

Prerequisites
-------------

See :doc:`../platform/dpaa` for setup information.


- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
  to set up the basic DPDK environment.

.. note::

   Some parts of the dpaa bus code (the qbman and fman library routines) are
   dual licensed (BSD & GPLv2); however, they are used under BSD in DPDK in
   userspace.

Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.

- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``y``)

  Toggle compilation of the ``librte_bus_dpaa`` driver.

- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_dpaa`` driver.

- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)

  Toggles display of bus configurations and enables a debugging queue to
  fetch error (Rx/Tx) packets to the driver. By default, packets with errors
  (like a wrong checksum) are dropped by the hardware.

- ``CONFIG_RTE_LIBRTE_DPAA_HWDEBUG`` (default ``n``)

  Enables debugging of the Queue and Buffer Manager layer which interacts
  with the DPAA hardware.
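
As an illustration of this workflow (a sketch only: a temporary stand-in file
is used in place of the real DPDK ``config`` file, whose path depends on the
build system), a debug option can be flipped from ``n`` to ``y`` before
compiling:

.. code-block:: shell

   # Sketch: enable CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER in a config file.
   # A temporary stand-in file is used here for illustration only.
   cfg=$(mktemp)
   printf 'CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n\n' > "$cfg"
   sed -i 's/^CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=n$/CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=y/' "$cfg"
   cat "$cfg"   # prints: CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER=y
   rm -f "$cfg"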


Environment Variables
~~~~~~~~~~~~~~~~~~~~~

The DPAA drivers use the following environment variables to configure their
state during application initialization:

- ``DPAA_NUM_RX_QUEUES`` (default 1)

  This defines the number of Rx queues configured for an application, per
  port. The hardware distributes received packets across this many queues.
  If the application uses fewer queues than the number configured above,
  packet loss may result (because of the distribution).

- ``DPAA_PUSH_QUEUES_NUMBER`` (default 4)

  This defines the number of high-performance queues to be used for ethdev Rx.
  These queues use one private HW portal per configured queue, so their number
  is limited in the system. The first configured ethdev queues will
  automatically be assigned from these high-performance PUSH queues; any queue
  configuration beyond that will use standard Rx queues. The application can
  choose to change their number if HW portals are limited.
  The valid values are from '0' to '4'. The value shall be set to '0' if the
  application wants to use eventdev with the DPAA device.
  Currently these queues are not used on the LS1023/LS1043 platform by default.
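
For example (illustrative values only), the variables can be exported in the
shell before launching a DPDK application; '0' PUSH queues is the setting
required for eventdev use, as noted above:

.. code-block:: shell

   # Illustrative settings: 4 Rx queues per port and no PUSH queues
   # (the value required when using eventdev with a DPAA device).
   export DPAA_NUM_RX_QUEUES=4
   export DPAA_PUSH_QUEUES_NUMBER=0
   echo "rx=$DPAA_NUM_RX_QUEUES push=$DPAA_PUSH_QUEUES_NUMBER"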


Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ./arm64-dpaa-linux-gcc/testpmd -c 0xff -n 1 \
        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx

      .....
      EAL: Registered [pci] bus.
      EAL: Registered [dpaa] bus.
      EAL: Detected 4 lcore(s)
      .....
      EAL: dpaa: Bus scan completed
      .....
      Configuring Port 0 (socket 0)
      Port 0: 00:00:00:00:00:01
      Configuring Port 1 (socket 0)
      Port 1: 00:00:00:00:00:02
      .....
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Port 1 Link Up - speed 10000 Mbps - full-duplex
      Done
      testpmd>

Limitations
-----------

Platform Requirement
~~~~~~~~~~~~~~~~~~~~

DPAA drivers for DPDK can only work on NXP SoCs listed under
``Supported DPAA SoCs``.

Maximum packet length
~~~~~~~~~~~~~~~~~~~~~

The DPAA SoC family supports a maximum frame length of 10240 bytes (jumbo
frames). This value is fixed and cannot be changed. So, even when the
``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
lower than 10240, frames up to 10240 bytes can still reach the host interface.

Multiprocess Support
~~~~~~~~~~~~~~~~~~~~

The current version of the DPAA driver doesn't support multi-process
applications where I/O is performed using secondary processes. This feature
will be implemented in subsequent versions.