..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2015-2018 Intel Corporation.

Internet Protocol (IP) Pipeline Application
===========================================

Application overview
--------------------

The *Internet Protocol (IP) Pipeline* application is intended to be a vehicle for rapid development of packet processing
applications on multi-core CPUs.

Following OpenFlow and P4 design principles, the application can be used to create functional blocks called pipelines out
of input/output ports, tables and actions in a modular way. Multiple pipelines can be inter-connected through packet queues
to create complete applications (super-pipelines).

The pipelines are mapped to application threads, with each pipeline executed by a single thread and each thread able to run
one or several pipelines. The possibilities of creating pipelines out of ports, tables and actions, connecting multiple
pipelines together and mapping the pipelines to execution threads are endless, therefore this application can be seen as
a true application generator.

Pipelines are created and managed through the Command Line Interface (CLI):

* Any standard TCP client (e.g. telnet, netcat, custom script, etc.) can connect to the application, send
  commands over the network and wait for the response before pushing the next command.

* All the application objects are created and managed through CLI commands:

  * *Primitive* objects used to create pipeline ports: memory pools, links (i.e. network interfaces), SW queues, traffic managers, etc.
  * Action profiles: used to define the actions to be executed by pipeline input/output ports and tables.
  * Pipeline components: input/output ports, tables, pipelines, mapping of pipelines to execution threads.

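As a minimal illustration of this object hierarchy, a CLI script typically creates the primitive objects first, then the pipeline and its components, and finally maps the pipeline to a data plane thread. The object names below (``MEMPOOL0``, ``LINK0``, ``PIPELINE0``) and all numeric parameters are arbitrary placeholder values, not a complete working configuration::

   ; Primitive objects
   mempool MEMPOOL0 buffer 2304 pool 32K cache 256 cpu 0
   link LINK0 dev 0000:02:00.0 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on

   ; Pipeline with a simple pass-through (stub) table
   pipeline PIPELINE0 period 10 offset_port_id 0 cpu 0
   pipeline PIPELINE0 port in bsz 32 link LINK0 rxq 0
   pipeline PIPELINE0 port out bsz 32 link LINK0 txq 0
   pipeline PIPELINE0 table match stub
   pipeline PIPELINE0 port in 0 table 0

   ; Map the pipeline to a data plane thread
   thread 1 pipeline PIPELINE0 enable
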
Running the application
-----------------------

The application startup command line is::

   ip_pipeline [EAL_ARGS] -- [-s SCRIPT_FILE] [-h HOST] [-p PORT]

The application startup arguments are:

``-s SCRIPT_FILE``

* Optional: Yes

* Default: Not present

* Argument: Path to the CLI script file to be run at application startup.
  No CLI script file will run at startup if this argument is not present.

``-h HOST``

* Optional: Yes

* Default: ``0.0.0.0``

* Argument: IP address on which the application listens for connections from
  remote TCP-based clients (telnet, netcat, etc.).

``-p PORT``

* Optional: Yes

* Default: ``8086``

* Argument: TCP port number on which the application listens.
  Remote TCP clients (such as telnet or netcat) should connect to this port.

Refer to the *DPDK Getting Started Guide* for general information on running applications and the Environment Abstraction Layer (EAL) options.

The following is an example command to run the application configured for equal-cost multi-path (ECMP) routing:

.. code-block:: console

   $ ./build/ip_pipeline -c 0x3 -- -s examples/route_ecmp.cli

The application should start successfully and display as follows:

.. code-block:: console

   EAL: Detected 40 lcore(s)
   EAL: Detected 2 NUMA nodes
   EAL: Multi-process socket /var/run/.rte_unix
   EAL: Probing VFIO support...
   EAL: PCI device 0000:02:00.0 on NUMA socket 0
   EAL: probe driver: 8086:10fb net_ixgbe
   ...

To run a remote client (e.g. telnet) to communicate with the ip pipeline application:

.. code-block:: console

   $ telnet 127.0.0.1 8086

When running the telnet client as above, the command prompt is displayed:

.. code-block:: console

   Trying 127.0.0.1...
   Connected to 127.0.0.1.
   Escape character is '^]'.

   Welcome to IP Pipeline!

   pipeline>

Once the application and the telnet client are running, commands can be sent from the client to the application.
At any stage, the telnet client can be terminated using the quit command.
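
Since the CLI is exposed as a plain TCP service, commands can also be sent non-interactively. For example, piping a statistics command through netcat (illustrative only; option syntax varies between netcat variants, and the pipeline name ``PIPELINE0`` is assumed to exist):

.. code-block:: console

   $ echo "pipeline PIPELINE0 port in 0 stats read" | nc 127.0.0.1 8086
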

Application stages
------------------

Initialization
~~~~~~~~~~~~~~

During this stage, the EAL layer is initialized and the application specific arguments are parsed. Furthermore, the data structures
(i.e. linked lists) for the application objects are initialized. In case of any initialization error, an error message
is displayed and the application is terminated.

.. _ip_pipeline_runtime:

Run-time
~~~~~~~~

The master thread creates and manages all the application objects based on CLI input.

Each data plane thread runs one or several pipelines previously assigned to it in round-robin order. Each data plane thread
executes two tasks in time-sharing mode:

1. *Packet processing task*: Process bursts of input packets read from the pipeline input ports.

2. *Message handling task*: Periodically, the data plane thread pauses the packet processing task and polls for request
   messages sent by the master thread. Examples: add/remove pipeline to/from current data plane thread, add/delete rules
   to/from given table of a specific pipeline owned by the current data plane thread, read statistics, etc.

Examples
--------

.. _table_examples:

.. tabularcolumns:: |p{3cm}|p{5cm}|p{4cm}|p{4cm}|

.. table:: Pipeline examples provided with the application

   +-----------------------+----------------------+----------------+------------------------------------+
   | Name                  | Table(s)             | Actions        | Messages                           |
   +=======================+======================+================+====================================+
   | L2fwd                 | Stub                 | Forward        | 1. Mempool create                  |
   |                       |                      |                | 2. Link create                     |
   | Note: Implemented     |                      |                | 3. Pipeline create                 |
   | using pipeline with   |                      |                | 4. Pipeline port in/out            |
   | a simple pass-through |                      |                | 5. Pipeline table                  |
   | connection between    |                      |                | 6. Pipeline port in table          |
   | input and output      |                      |                | 7. Pipeline enable                 |
   | ports.                |                      |                | 8. Pipeline table rule add         |
   +-----------------------+----------------------+----------------+------------------------------------+
   | Flow classification   | Exact match          | Forward        | 1. Mempool create                  |
   |                       |                      |                | 2. Link create                     |
   |                       | * Key = byte array   |                | 3. Pipeline create                 |
   |                       |   (16 bytes)         |                | 4. Pipeline port in/out            |
   |                       | * Offset = 278       |                | 5. Pipeline table                  |
   |                       | * Table size = 64K   |                | 6. Pipeline port in table          |
   |                       |                      |                | 7. Pipeline enable                 |
   |                       |                      |                | 8. Pipeline table rule add default |
   |                       |                      |                | 9. Pipeline table rule add         |
   +-----------------------+----------------------+----------------+------------------------------------+
   | KNI                   | Stub                 | Forward        | 1. Mempool create                  |
   |                       |                      |                | 2. Link create                     |
   |                       |                      |                | 3. Pipeline create                 |
   |                       |                      |                | 4. Pipeline port in/out            |
   |                       |                      |                | 5. Pipeline table                  |
   |                       |                      |                | 6. Pipeline port in table          |
   |                       |                      |                | 7. Pipeline enable                 |
   |                       |                      |                | 8. Pipeline table rule add         |
   +-----------------------+----------------------+----------------+------------------------------------+
   | Firewall              | ACL                  | Allow/Drop     | 1. Mempool create                  |
   |                       |                      |                | 2. Link create                     |
   |                       | * Key = n-tuple      |                | 3. Pipeline create                 |
   |                       | * Offset = 270       |                | 4. Pipeline port in/out            |
   |                       | * Table size = 4K    |                | 5. Pipeline table                  |
   |                       |                      |                | 6. Pipeline port in table          |
   |                       |                      |                | 7. Pipeline enable                 |
   |                       |                      |                | 8. Pipeline table rule add default |
   |                       |                      |                | 9. Pipeline table rule add         |
   +-----------------------+----------------------+----------------+------------------------------------+
   | IP routing            | LPM (IPv4)           | Forward        | 1. Mempool create                  |
   |                       |                      |                | 2. Link create                     |
   |                       | * Key = IP dest addr |                | 3. Pipeline create                 |
   |                       | * Offset = 286       |                | 4. Pipeline port in/out            |
   |                       | * Table size = 4K    |                | 5. Pipeline table                  |
   |                       |                      |                | 6. Pipeline port in table          |
   |                       |                      |                | 7. Pipeline enable                 |
   |                       |                      |                | 8. Pipeline table rule add default |
   |                       |                      |                | 9. Pipeline table rule add         |
   +-----------------------+----------------------+----------------+------------------------------------+
   | Equal-cost multi-path | LPM (IPv4)           | Forward,       | 1. Mempool create                  |
   | routing (ECMP)        |                      | load balance,  | 2. Link create                     |
   |                       | * Key = IP dest addr | encap ether    | 3. Pipeline create                 |
   |                       | * Offset = 286       |                | 4. Pipeline port in/out            |
   |                       | * Table size = 4K    |                | 5. Pipeline table (LPM)            |
   |                       |                      |                | 6. Pipeline table (Array)          |
   |                       |                      |                | 7. Pipeline port in table (LPM)    |
   |                       | Array                |                | 8. Pipeline enable                 |
   |                       |                      |                | 9. Pipeline table rule add default |
   |                       | * Key = Array index  |                | 10. Pipeline table rule add (LPM)  |
   |                       | * Offset = 256       |                | 11. Pipeline table rule add (Array)|
   |                       | * Size = 64K         |                |                                    |
   +-----------------------+----------------------+----------------+------------------------------------+

Command Line Interface (CLI)
----------------------------

Link
~~~~

Link configuration ::

   link <link_name>
    dev <device_name> | port <port_id>
    rxq <n_queues> <queue_size> <mempool_name>
    txq <n_queues> <queue_size>
    promiscuous on | off
    [rss <qid_0> ... <qid_n>]

Note: The PCI device name must be specified in the Domain:Bus:Device.Function format.

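For example, a link bound to a PCI device, with one RX queue and one TX queue (the PCI address and the mempool name ``MEMPOOL0`` are illustrative)::

   link LINK0 dev 0000:02:00.0 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
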

Mempool
~~~~~~~

Mempool create ::

   mempool <mempool_name> buffer <buffer_size>
    pool <pool_size> cache <cache_size> cpu <cpu_id>

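For example (the values are illustrative; the buffer size must cover the expected frame size plus mbuf metadata and headroom)::

   mempool MEMPOOL0 buffer 2304 pool 32K cache 256 cpu 0
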

Software queue
~~~~~~~~~~~~~~

Create software queue ::

   swq <swq_name> size <size> cpu <cpu_id>


Traffic manager
~~~~~~~~~~~~~~~

Add traffic manager subport profile ::

   tmgr subport profile
    <tb_rate> <tb_size>
    <tc0_rate> <tc1_rate> <tc2_rate> <tc3_rate>
    <tc_period>


Add traffic manager pipe profile ::

   tmgr pipe profile
    <tb_rate> <tb_size>
    <tc0_rate> <tc1_rate> <tc2_rate> <tc3_rate>
    <tc_period>
    <tc_ov_weight> <wrr_weight0..15>

Create traffic manager port ::

   tmgr <tmgr_name>
    rate <rate>
    spp <n_subports_per_port>
    pps <n_pipes_per_subport>
    qsize <qsize_tc0>
     <qsize_tc1> <qsize_tc2> <qsize_tc3>
    fo <frame_overhead> mtu <mtu> cpu <cpu_id>

Configure traffic manager subport ::

   tmgr <tmgr_name>
    subport <subport_id>
    profile <subport_profile_id>

Configure traffic manager pipe ::

   tmgr <tmgr_name>
    subport <subport_id>
    pipe from <pipe_id_first> to <pipe_id_last>
    profile <pipe_profile_id>

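Putting these commands together, the sketch below sets up an illustrative traffic manager: one subport profile, one pipe profile, a port at 10 Gbps (1250000000 bytes/s), and the mapping of all pipes to the profiles. All rates, sizes, periods, weights and profile IDs are placeholder values, not recommendations::

   tmgr subport profile 1250000000 1000000 1250000000 1250000000 1250000000 1250000000 10
   tmgr pipe profile 305175 1000000 305175 305175 305175 305175 40 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
   tmgr TMGR0 rate 1250000000 spp 1 pps 4096 qsize 64 64 64 64 fo 24 mtu 1522 cpu 0
   tmgr TMGR0 subport 0 profile 0
   tmgr TMGR0 subport 0 pipe from 0 to 4095 profile 0
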

Tap
~~~

Create tap port ::

   tap <name>


Kni
~~~

Create kni port ::

   kni <kni_name>
    link <link_name>
    mempool <mempool_name>
    [thread <thread_id>]


Cryptodev
~~~~~~~~~

Create cryptodev port ::

   cryptodev <cryptodev_name>
    dev <DPDK Cryptodev PMD name>
    queue <n_queues> <queue_size>

Action profile
~~~~~~~~~~~~~~

Create action profile for pipeline input port ::

   port in action profile <profile_name>
    [filter match | mismatch offset <key_offset> mask <key_mask> key <key_value> port <port_id>]
    [balance offset <key_offset> mask <key_mask> port <port_id0> ... <port_id15>]

Create action profile for the pipeline table ::

   table action profile <profile_name>
    ipv4 | ipv6
    offset <ip_offset>
    fwd
    [balance offset <key_offset> mask <key_mask> outoffset <out_offset>]
    [meter srtcm | trtcm
     tc <n_tc>
     stats none | pkts | bytes | both]
    [tm spp <n_subports_per_port> pps <n_pipes_per_subport>]
    [encap ether | vlan | qinq | mpls | pppoe]
    [nat src | dst
     proto udp | tcp]
    [ttl drop | fwd
     stats none | pkts]
    [stats pkts | bytes | both]
    [sym_crypto cryptodev <cryptodev_name>
     mempool_create <mempool_name> mempool_init <mempool_name>]
    [time]

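For example, a minimal table action profile that matches on IPv4 and only supports the forward action (the profile name ``AP0`` is illustrative; 270 is a typical IP header offset for Ethernet frames in this application, given the default 256-byte packet data offset plus the 14-byte Ethernet header)::

   table action profile AP0 ipv4 offset 270 fwd
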

Pipeline
~~~~~~~~

Create pipeline ::

   pipeline <pipeline_name>
    period <timer_period_ms>
    offset_port_id <offset_port_id>
    cpu <cpu_id>

Create pipeline input port ::

   pipeline <pipeline_name> port in
    bsz <burst_size>
    link <link_name> rxq <queue_id>
    | swq <swq_name>
    | tmgr <tmgr_name>
    | tap <tap_name> mempool <mempool_name> mtu <mtu>
    | kni <kni_name>
    | source mempool <mempool_name> file <file_name> bpp <n_bytes_per_pkt>
    [action <port_in_action_profile_name>]
    [disabled]

Create pipeline output port ::

   pipeline <pipeline_name> port out
    bsz <burst_size>
    link <link_name> txq <txq_id>
    | swq <swq_name>
    | tmgr <tmgr_name>
    | tap <tap_name>
    | kni <kni_name>
    | sink [file <file_name> pkts <max_n_pkts>]

Create pipeline table ::

   pipeline <pipeline_name> table
    match
     acl
      ipv4 | ipv6
      offset <ip_header_offset>
      size <n_rules>
     | array
      offset <key_offset>
      size <n_keys>
     | hash
      ext | lru
      key <key_size>
      mask <key_mask>
      offset <key_offset>
      buckets <n_buckets>
      size <n_keys>
     | lpm
      ipv4 | ipv6
      offset <ip_header_offset>
      size <n_rules>
     | stub
    [action <table_action_profile_name>]

Connect pipeline input port to table ::

   pipeline <pipeline_name> port in <port_id> table <table_id>

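For example, an IPv4 routing pipeline built from these commands with an LPM table, matching the "IP routing" row of the examples table above (the names ``ROUTE0``, ``LINK0`` and the action profile ``AP0`` are illustrative and assumed to exist where required)::

   pipeline ROUTE0 period 10 offset_port_id 0 cpu 0
   pipeline ROUTE0 port in bsz 32 link LINK0 rxq 0
   pipeline ROUTE0 port out bsz 32 link LINK0 txq 0
   pipeline ROUTE0 table match lpm ipv4 offset 286 size 4096 action AP0
   pipeline ROUTE0 port in 0 table 0
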
Display statistics for specific pipeline input port, output port
or table ::

   pipeline <pipeline_name> port in <port_id> stats read [clear]
   pipeline <pipeline_name> port out <port_id> stats read [clear]
   pipeline <pipeline_name> table <table_id> stats read [clear]

Enable given input port for specific pipeline instance ::

   pipeline <pipeline_name> port in <port_id> enable

Disable given input port for specific pipeline instance ::

   pipeline <pipeline_name> port in <port_id> disable

Add default rule to table for specific pipeline instance ::

   pipeline <pipeline_name> table <table_id> rule add
    match
     default
    action
     fwd
      drop
      | port <port_id>
      | meta
      | table <table_id>

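For example, a default rule that drops every packet not matched by any other table rule (the pipeline name and table ID are illustrative)::

   pipeline PIPELINE0 table 0 rule add match default action fwd drop
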
Add rule to table for specific pipeline instance ::

   pipeline <pipeline_name> table <table_id> rule add

    match
     acl
      priority <priority>
      ipv4 | ipv6 <sa> <sa_depth> <da> <da_depth>
       <sp0> <sp1> <dp0> <dp1> <proto>
     | array <pos>
     | hash
      raw <key>
      | ipv4_5tuple <sa> <da> <sp> <dp> <proto>
      | ipv6_5tuple <sa> <da> <sp> <dp> <proto>
      | ipv4_addr <addr>
      | ipv6_addr <addr>
      | qinq <svlan> <cvlan>
     | lpm
      ipv4 | ipv6 <addr> <depth>

    action
     fwd
      drop
      | port <port_id>
      | meta
      | table <table_id>
     [balance <out0> ... <out7>]
     [meter
      tc0 meter <meter_profile_id> policer g <pa> y <pa> r <pa>
      [tc1 meter <meter_profile_id> policer g <pa> y <pa> r <pa>
       tc2 meter <meter_profile_id> policer g <pa> y <pa> r <pa>
       tc3 meter <meter_profile_id> policer g <pa> y <pa> r <pa>]]
     [tm subport <subport_id> pipe <pipe_id>]
     [encap
      ether <da> <sa>
      | vlan <da> <sa> <pcp> <dei> <vid>
      | qinq <da> <sa> <pcp> <dei> <vid> <pcp> <dei> <vid>
      | mpls unicast | multicast
       <da> <sa>
       label0 <label> <tc> <ttl>
       [label1 <label> <tc> <ttl>
       [label2 <label> <tc> <ttl>
       [label3 <label> <tc> <ttl>]]]
      | pppoe <da> <sa> <session_id>]
     [nat ipv4 | ipv6 <addr> <port>]
     [ttl dec | keep]
     [stats]
     [time]
     [sym_crypto
      encrypt | decrypt
      type
      | cipher
       cipher_algo <algo> cipher_key <key> cipher_iv <iv>
      | cipher_auth
       cipher_algo <algo> cipher_key <key> cipher_iv <iv>
       auth_algo <algo> auth_key <key> digest_size <size>
      | aead
       aead_algo <algo> aead_key <key> aead_iv <iv> aead_aad <aad>
       digest_size <size>
      data_offset <data_offset>]

   where:
    <pa> ::= g | y | r | drop

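For example, an illustrative firewall-style ACL rule that forwards TCP traffic (protocol 6) from 100.0.0.0/24 to 200.0.0.0/24, on any source or destination port, to output port 0 (all names, addresses and IDs are placeholders)::

   pipeline PIPELINE0 table 0 rule add match acl priority 0 ipv4 100.0.0.0 24 200.0.0.0 24 0 65535 0 65535 6 action fwd port 0
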
Add bulk rules to table for specific pipeline instance ::

   pipeline <pipeline_name> table <table_id> rule add bulk <file_name> <n_rules>

Where:

- file_name = path to a text file
- File line format = match <match> action <action>

Delete table rule for specific pipeline instance ::

   pipeline <pipeline_name> table <table_id> rule delete
    match <match>

Delete default table rule for specific pipeline instance ::

   pipeline <pipeline_name> table <table_id> rule delete
    match
     default

Add meter profile to the table for specific pipeline instance ::

   pipeline <pipeline_name> table <table_id> meter profile <meter_profile_id>
    add srtcm cir <cir> cbs <cbs> ebs <ebs>
    | trtcm cir <cir> pir <pir> cbs <cbs> pbs <pbs>

Delete meter profile from the table for specific pipeline instance ::

   pipeline <pipeline_name> table <table_id>
    meter profile <meter_profile_id> delete


Update the dscp table for meter or traffic manager action for specific
pipeline instance ::

   pipeline <pipeline_name> table <table_id> dscp <file_name>

Where:

- file_name = path to a text file containing exactly 64 lines (one per DSCP value)
- File line format = <tc_id> <tc_queue_id> <color>, with <color> as: g | y | r

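For illustration, a DSCP translation file that maps every DSCP value to traffic class 0, queue 0, color green would consist of 64 identical lines::

   0 0 g
   0 0 g
   0 0 g

(and 61 more identical lines, one per remaining DSCP value).
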

Pipeline enable/disable
~~~~~~~~~~~~~~~~~~~~~~~

Enable given pipeline instance for specific data plane thread ::

   thread <thread_id> pipeline <pipeline_name> enable


Disable given pipeline instance for specific data plane thread ::

   thread <thread_id> pipeline <pipeline_name> disable