.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2010-2014 Intel Corporation.

.. _l2_fwd_event_app:

L2 Forwarding Eventdev Sample Application
=========================================

The L2 Forwarding eventdev sample application is a simple example of packet
processing using the Data Plane Development Kit (DPDK) to demonstrate the
poll and event mode packet I/O mechanisms.

Overview
--------

The L2 Forwarding eventdev sample application performs L2 forwarding for each
packet that is received on an RX_PORT. The destination port is the adjacent port
from the enabled portmask, that is, if the first four ports are enabled (portmask=0x0f),
ports 1 and 2 forward into each other, and ports 3 and 4 forward into each other.
Also, if MAC address updating is enabled, the MAC addresses are affected as
follows (a sketch of such an update is given after the list):

* The source MAC address is replaced by the TX_PORT MAC address

* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID

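The snippet below is a minimal sketch of such an update, modeled on the
``l2fwd_mac_updating()`` helper used by the l2fwd samples (the exact field
names, e.g. ``d_addr``/``s_addr``, and the ``l2fwd_ports_eth_addr[]`` table
are assumptions based on that code):

.. code-block:: c

    static void
    l2fwd_mac_updating(struct rte_mbuf *m, unsigned int dest_portid)
    {
        struct rte_ether_hdr *eth;
        void *tmp;

        eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);

        /* Destination MAC: 02:00:00:00:00:xx, where xx is the TX port id */
        tmp = &eth->d_addr.addr_bytes[0];
        *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);

        /* Source MAC: the MAC address of the TX port */
        rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr);
    }
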
The application receives packets from the RX_PORT using one of the following
methods:

* Poll mode

* Eventdev mode (default)

This application can be used to benchmark performance using a traffic-generator,
as shown in the :numref:`figure_l2fwd_event_benchmark_setup`.

.. _figure_l2fwd_event_benchmark_setup:

.. figure:: img/l2_fwd_benchmark_setup.*

   Performance Benchmark Setup (Basic Environment)

Compiling the Application
-------------------------

To compile the sample application, see :doc:`compiling`.

The application is located in the ``l2fwd-event`` sub-directory.

Running the Application
-----------------------

The application requires a number of command line options:

.. code-block:: console

    ./build/l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sched=SCHED_MODE

where,

* -p PORTMASK: A hexadecimal bitmask of the ports to configure

* -q NQ: A number of queues (=ports) per lcore (default is 1)

* --[no-]mac-updating: Enable or disable MAC address updating (enabled by default).

* --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default.

* --eventq-sched=SCHED_MODE: Event queue schedule mode, Ordered, Atomic or Parallel. Atomic by default.

* --config: Configure forwarding port pair mapping. Alternate port pairs by default.

Sample commands to run the application in different modes are given below.

To run in poll mode with 4 lcores, 16 ports, 8 RX queues per lcore and MAC
address updating enabled, issue the command:

.. code-block:: console

    ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=poll

To run in eventdev mode with 4 lcores, 16 ports, the ordered scheduling method
and MAC address updating enabled, issue the command:

.. code-block:: console

    ./build/l2fwd-event -l 0-3 -n 4 -- -p ffff --eventq-sched=ordered

or

.. code-block:: console

    ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=eventdev --eventq-sched=ordered

Refer to the *DPDK Getting Started Guide* for general information on running
applications and the Environment Abstraction Layer (EAL) options.

To run the application with the S/W scheduler, the following DPDK services
are used:

* Software scheduler
* Rx adapter service function
* Tx adapter service function

The application needs service cores to run the above services. Service cores
must be provided as EAL parameters along with ``--vdev=event_sw0`` to enable
the S/W scheduler. The following is a sample command:

.. code-block:: console

    ./build/l2fwd-event -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -q 8 -p ffff --mode=eventdev --eventq-sched=ordered

Explanation
-----------

The following sections provide an explanation of the code.

.. _l2_fwd_event_app_cmd_arguments:

Command Line Arguments
~~~~~~~~~~~~~~~~~~~~~~

The L2 Forwarding eventdev sample application takes specific parameters,
in addition to Environment Abstraction Layer (EAL) arguments.
The preferred way to parse parameters is to use the getopt() function,
since it is part of a well-defined and portable library.

The parsing of arguments is done in the **l2fwd_parse_args()** function for
non-eventdev parameters and in **parse_eventdev_args()** for eventdev
parameters. The method of argument parsing is not described here. Refer to the
*glibc getopt(3)* man page for details.

EAL arguments are parsed first, then application-specific arguments.
This is done at the beginning of the main() function; eventdev parameters
are parsed in the eventdev_resource_setup() function during eventdev setup:

.. code-block:: c

    /* init EAL */

    ret = rte_eal_init(argc, argv);
    if (ret < 0)
        rte_panic("Invalid EAL arguments\n");

    argc -= ret;
    argv += ret;

    /* parse application arguments (after the EAL ones) */

    ret = l2fwd_parse_args(argc, argv);
    if (ret < 0)
        rte_panic("Invalid L2FWD arguments\n");

    ...

    /* Parse eventdev command line options */
    ret = parse_eventdev_args(argc, argv);
    if (ret < 0)
        return ret;

.. _l2_fwd_event_app_mbuf_init:

Mbuf Pool Initialization
~~~~~~~~~~~~~~~~~~~~~~~~

Once the arguments are parsed, the mbuf pool is created.
The mbuf pool contains a set of mbuf objects that will be used by the driver
and the application to store network packet data:

.. code-block:: c

    /* create the mbuf pool */

    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
                                                 MEMPOOL_CACHE_SIZE, 0,
                                                 RTE_MBUF_DEFAULT_BUF_SIZE,
                                                 rte_socket_id());
    if (l2fwd_pktmbuf_pool == NULL)
        rte_panic("Cannot init mbuf pool\n");

The rte_mempool is a generic structure used to handle pools of objects.
In this case, it is necessary to create a pool that will be used by the driver.
The number of allocated pkt mbufs is NB_MBUF, with a data room size of
RTE_MBUF_DEFAULT_BUF_SIZE each.
A per-lcore cache of 32 mbufs is kept.
The memory is allocated on the NUMA socket of the initializing lcore
(rte_socket_id()), but it is possible to extend this code to allocate one
mbuf pool per socket, as sketched below.

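A minimal sketch of that extension, assuming the application's NB_MBUF and
MEMPOOL_CACHE_SIZE constants and that socket indexes map one-to-one to socket
IDs (init_per_socket_pools() is a hypothetical helper):

.. code-block:: c

    /* Hypothetical extension: one mbuf pool per NUMA socket. */
    static struct rte_mempool *pktmbuf_pool[RTE_MAX_NUMA_NODES];

    static void
    init_per_socket_pools(void)
    {
        char name[RTE_MEMPOOL_NAMESIZE];
        unsigned int socket;

        for (socket = 0; socket < rte_socket_count(); socket++) {
            snprintf(name, sizeof(name), "mbuf_pool_%u", socket);
            pktmbuf_pool[socket] = rte_pktmbuf_pool_create(name, NB_MBUF,
                    MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                    socket);
            if (pktmbuf_pool[socket] == NULL)
                rte_panic("Cannot init mbuf pool on socket %u\n", socket);
        }
    }
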
The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
An advanced application may want to use the mempool API to create the
mbuf pool with more control.

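As an illustration of that advanced path, the following sketch builds an
equivalent pool step by step with the raw mempool API (reusing the constants
above; this is an illustration, not the sample's code):

.. code-block:: c

    struct rte_mempool *mp;

    /* raw pool: element = mbuf header + data room; not yet initialized */
    mp = rte_mempool_create_empty("mbuf_pool_adv", NB_MBUF,
            sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
            MEMPOOL_CACHE_SIZE, sizeof(struct rte_pktmbuf_pool_private),
            rte_socket_id(), 0);
    if (mp == NULL)
        rte_panic("Cannot create mempool\n");

    /* select the preferred mempool ops before populating */
    rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(), NULL);
    if (rte_mempool_populate_default(mp) < 0)
        rte_panic("Cannot populate mempool\n");

    /* apply the standard pktmbuf pool and per-object initializers */
    rte_pktmbuf_pool_init(mp, NULL);
    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
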
.. _l2_fwd_event_app_drv_init:

Driver Initialization
~~~~~~~~~~~~~~~~~~~~~

The main part of the code in the main() function relates to the initialization
of the driver. To fully understand this code, it is recommended to study the
chapters related to the Poll Mode Driver and the Event Mode Driver in the
*DPDK Programmer's Guide* and the *DPDK API Reference*.

.. code-block:: c

    /* reset l2fwd_dst_ports */

    for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
        l2fwd_dst_ports[portid] = 0;

    last_port = 0;

    /*
     * Each logical core is assigned a dedicated TX queue on each port.
     */

    RTE_ETH_FOREACH_DEV(portid) {
        /* skip ports that are not enabled */
        if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
            continue;

        if (nb_ports_in_mask % 2) {
            l2fwd_dst_ports[portid] = last_port;
            l2fwd_dst_ports[last_port] = portid;
        }
        else
            last_port = portid;

        nb_ports_in_mask++;

        rte_eth_dev_info_get((uint8_t)portid, &dev_info);
    }

The next step is to configure the RX and TX queues. For each port, there is only
one RX queue (only one lcore is able to poll a given port). The number of TX
queues depends on the number of available lcores. The rte_eth_dev_configure()
function is used to configure the number of queues for a port:

.. code-block:: c

    ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
    if (ret < 0)
        rte_panic("Cannot configure device: err=%d, port=%u\n",
                  ret, portid);

.. _l2_fwd_event_app_rx_init:

RX Queue Initialization
~~~~~~~~~~~~~~~~~~~~~~~

The application uses one lcore to poll one or several ports, depending on the -q
option, which specifies the number of queues per lcore.

For example, if the user specifies -q 4, the application is able to poll four
ports with one lcore. If there are 16 ports on the target (and if the portmask
argument is -p ffff), the application will need four lcores to poll all the
ports.

.. code-block:: c

    ret = rte_eth_rx_queue_setup((uint8_t)portid, 0, nb_rxd, SOCKET0,
                                 &rx_conf, l2fwd_pktmbuf_pool);
    if (ret < 0)
        rte_panic("rte_eth_rx_queue_setup: err=%d, port=%u\n",
                  ret, portid);

The list of queues that must be polled for a given lcore is stored in a private
structure called struct lcore_queue_conf:

.. code-block:: c

    struct lcore_queue_conf {
        unsigned n_rx_port;
        unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
        struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
    } __rte_cache_aligned;

    struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];

The values n_rx_port and rx_port_list[] are used in the main packet processing
loop (see :ref:`l2_fwd_event_app_rx_tx_packets`).

.. _l2_fwd_event_app_tx_init:

TX Queue Initialization
~~~~~~~~~~~~~~~~~~~~~~~

Each lcore should be able to transmit on any port. For every port, a single TX
queue is initialized.

.. code-block:: c

    /* init one TX queue on each port */

    fflush(stdout);

    ret = rte_eth_tx_queue_setup((uint8_t)portid, 0, nb_txd,
                                 rte_eth_dev_socket_id(portid), &tx_conf);
    if (ret < 0)
        rte_panic("rte_eth_tx_queue_setup:err=%d, port=%u\n",
                  ret, (unsigned)portid);

To configure eventdev support, the application sets up the following
components:

* Event dev
* Event queue
* Event port
* Rx/Tx adapters
* Ethernet ports

.. _l2_fwd_event_app_event_dev_init:

Event device Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The application can use either a H/W or a S/W based event device scheduler
implementation and supports a single event device instance. It configures the
event device as per the below configuration:

.. code-block:: c

    struct rte_event_dev_config event_d_conf = {
        .nb_event_queues = ethdev_count, /* Dedicated to each Ethernet port */
        .nb_event_ports = num_workers, /* Dedicated to each lcore */
        .nb_events_limit = 4096,
        .nb_event_queue_flows = 1024,
        .nb_event_port_dequeue_depth = 128,
        .nb_event_port_enqueue_depth = 128
    };

    ret = rte_event_dev_configure(event_d_id, &event_d_conf);
    if (ret < 0)
        rte_panic("Error in configuring event device\n");

In case of the S/W scheduler, the application runs the eventdev scheduler
service on a service core. The application retrieves the service ID and finds
the best possible service core to run the S/W scheduler:

.. code-block:: c

    rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
    if (!(evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)) {
        ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
                                           &service_id);
        if (ret != -ESRCH && ret != 0)
            rte_panic("Error in starting eventdev service\n");
        l2fwd_event_service_enable(service_id);
    }

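l2fwd_event_service_enable() maps the service to a service core and starts it.
A minimal sketch of such a helper (the real application selects the
least-loaded service core; this hypothetical version simply uses the first
one):

.. code-block:: c

    static void
    service_enable(uint32_t service_id)
    {
        uint32_t slcores[RTE_MAX_LCORE];
        int32_t n;

        n = rte_service_lcore_list(slcores, RTE_MAX_LCORE);
        if (n <= 0)
            rte_panic("No service cores available\n");

        /* map the service to the core, then start the core and service */
        rte_service_map_lcore_set(service_id, slcores[0], 1);
        rte_service_lcore_start(slcores[0]);
        rte_service_runstate_set(service_id, 1);
    }
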
.. _l2_fwd_app_event_queue_init:

Event queue Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~
Each Ethernet device is assigned a dedicated event queue, which will be linked
to all available event ports, i.e., each lcore can dequeue packets from any of
the Ethernet ports.

.. code-block:: c

    struct rte_event_queue_conf event_q_conf = {
        .nb_atomic_flows = 1024,
        .nb_atomic_order_sequences = 1024,
        .event_queue_cfg = 0,
        .schedule_type = RTE_SCHED_TYPE_ATOMIC,
        .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST
    };

    /* User requested sched mode */
    event_q_conf.schedule_type = eventq_sched_mode;
    for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
        ret = rte_event_queue_setup(event_d_id, event_q_id,
                                    &event_q_conf);
        if (ret < 0)
            rte_panic("Error in configuring event queue\n");
    }

In case of the S/W scheduler, an extra event queue is created, which is used
by the Tx adapter service function for the enqueue operation.

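A sketch of setting up that extra queue, under the assumption that it reuses
event_q_conf and is placed after the per-port queues as a single-link queue:

.. code-block:: c

    /* Assumption: queue id ethdev_count is the extra Tx event queue */
    event_q_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
    ret = rte_event_queue_setup(event_d_id, ethdev_count, &event_q_conf);
    if (ret < 0)
        rte_panic("Error in configuring Tx event queue\n");
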
.. _l2_fwd_app_event_port_init:

Event port Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~
Each worker thread is assigned a dedicated event port for enqueue/dequeue
operations to/from the event device. All event ports are linked with all
available event queues.

.. code-block:: c

    struct rte_event_port_conf event_p_conf = {
        .dequeue_depth = 32,
        .enqueue_depth = 32,
        .new_event_threshold = 4096
    };

    for (event_p_id = 0; event_p_id < num_workers; event_p_id++) {
        ret = rte_event_port_setup(event_d_id, event_p_id,
                                   &event_p_conf);
        if (ret < 0)
            rte_panic("Error in configuring event port %d\n", event_p_id);

        ret = rte_event_port_link(event_d_id, event_p_id, NULL,
                                  NULL, 0);
        if (ret < 0)
            rte_panic("Error in linking event port %d to queue\n",
                      event_p_id);
    }

In case of the S/W scheduler, an extra event port is created by the DPDK
library; the application retrieves it and the Tx adapter service uses it:

.. code-block:: c

    ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
    if (ret)
        rte_panic("Failed to get Tx adapter port id: %d\n", ret);

    ret = rte_event_port_link(event_d_id, tx_port_id,
                              &evt_rsrc.evq.event_q_id[
                                  evt_rsrc.evq.nb_queues - 1],
                              NULL, 1);
    if (ret != 1)
        rte_panic("Unable to link Tx adapter port to Tx queue:err=%d\n",
                  ret);

.. _l2_fwd_event_app_adapter_init:

Rx/Tx adapter Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With the H/W scheduler, each Ethernet port is assigned a dedicated Rx/Tx
adapter. Each Ethernet port's Rx queues are connected to its respective event
queue at priority 0 via the Rx adapter configuration, and the Ethernet port's
Tx queues are connected via the Tx adapter.

.. code-block:: c

    RTE_ETH_FOREACH_DEV(port_id) {
        if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
            continue;
        ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
                                              &evt_rsrc->def_p_conf);
        if (ret)
            rte_panic("Failed to create rx adapter[%d]\n",
                      adapter_id);

        /* Configure user requested sched type */
        eth_q_conf.ev.sched_type = rsrc->sched_type;
        eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
        ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
                                                 -1, &eth_q_conf);
        if (ret)
            rte_panic("Failed to add queues to Rx adapter\n");

        ret = rte_event_eth_rx_adapter_start(adapter_id);
        if (ret)
            rte_panic("Rx adapter[%d] start Failed\n", adapter_id);

        evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
        adapter_id++;
        if (q_id < evt_rsrc->evq.nb_queues)
            q_id++;
    }

    adapter_id = 0;
    RTE_ETH_FOREACH_DEV(port_id) {
        if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
            continue;
        ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
                                              &evt_rsrc->def_p_conf);
        if (ret)
            rte_panic("Failed to create tx adapter[%d]\n",
                      adapter_id);

        ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
                                                 -1);
        if (ret)
            rte_panic("Failed to add queues to Tx adapter\n");

        ret = rte_event_eth_tx_adapter_start(adapter_id);
        if (ret)
            rte_panic("Tx adapter[%d] start Failed\n", adapter_id);

        evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
        adapter_id++;
    }

For the S/W scheduler, instead of dedicated adapters, common Rx/Tx adapters
are configured and shared among all the Ethernet ports. Also, the DPDK library
needs service cores to run the internal services for the Rx/Tx adapters. The
application gets the service IDs for the Rx/Tx adapters and, after successful
setup, runs the services on dedicated service cores.

.. code-block:: c

    for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
        ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
                evt_rsrc->rx_adptr.rx_adptr[i], &caps);
        if (ret < 0)
            rte_panic("Failed to get Rx adapter[%d] caps\n",
                      evt_rsrc->rx_adptr.rx_adptr[i]);
        ret = rte_event_eth_rx_adapter_service_id_get(
                evt_rsrc->event_d_id,
                &service_id);
        if (ret != -ESRCH && ret != 0)
            rte_panic("Error in starting Rx adapter[%d] service\n",
                      evt_rsrc->rx_adptr.rx_adptr[i]);
        l2fwd_event_service_enable(service_id);
    }

    for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
        ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
                evt_rsrc->tx_adptr.tx_adptr[i], &caps);
        if (ret < 0)
            rte_panic("Failed to get Tx adapter[%d] caps\n",
                      evt_rsrc->tx_adptr.tx_adptr[i]);
        ret = rte_event_eth_tx_adapter_service_id_get(
                evt_rsrc->event_d_id,
                &service_id);
        if (ret != -ESRCH && ret != 0)
            rte_panic("Error in starting Tx adapter[%d] service\n",
                      evt_rsrc->tx_adptr.tx_adptr[i]);
        l2fwd_event_service_enable(service_id);
    }

.. _l2_fwd_event_app_rx_tx_packets:

Receive, Process and Transmit Packets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the **l2fwd_main_loop()** function, the main task is to read ingress packets from
the RX queues. This is done using the following code:

.. code-block:: c

    /*
     * Read packet from RX queues
     */

    for (i = 0; i < qconf->n_rx_port; i++) {
        portid = qconf->rx_port_list[i];
        nb_rx = rte_eth_rx_burst((uint8_t)portid, 0, pkts_burst,
                                 MAX_PKT_BURST);

        for (j = 0; j < nb_rx; j++) {
            m = pkts_burst[j];
            rte_prefetch0(rte_pktmbuf_mtod(m, void *));
            l2fwd_simple_forward(m, portid);
        }
    }

Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst()
function writes the mbuf pointers in a local table and returns the number of
available mbufs in the table.

Then, each mbuf in the table is processed by the l2fwd_simple_forward()
function. The processing is very simple: determine the TX port from the RX
port, then replace the source and destination MAC addresses if MAC address
updating is enabled.

During the initialization process, a static array of destination ports
(l2fwd_dst_ports[]) is filled such that for each source port, a destination
port is assigned that is either the next or previous enabled port from the
portmask. If the number of ports in the portmask is odd, the packet from the
last port is forwarded to the first port, i.e., if portmask=0x07, forwarding
takes place as p0--->p1, p1--->p2, p2--->p0.

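A minimal sketch of the chained mapping used when the port count is odd
(fill_dst_ports() is a hypothetical helper for illustration; the even-count
pairing was shown under Driver Initialization):

.. code-block:: c

    /* Chain each enabled port to the next enabled one and wrap the last
     * enabled port back to the first, as described above. */
    static void
    fill_dst_ports(uint32_t port_mask)
    {
        uint32_t first_port = UINT32_MAX, prev_port = UINT32_MAX;
        uint32_t portid;

        for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
            if ((port_mask & (1 << portid)) == 0)
                continue;
            if (first_port == UINT32_MAX)
                first_port = portid;
            else
                l2fwd_dst_ports[prev_port] = portid;
            prev_port = portid;
        }
        if (prev_port != UINT32_MAX)
            l2fwd_dst_ports[prev_port] = first_port;
    }
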
Also, to optimize the enqueue operation, l2fwd_simple_forward() buffers
incoming mbufs up to MAX_PKT_BURST. Once the limit is reached, all packets are
transmitted to the destination ports.

.. code-block:: c

    static void
    l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
    {
        uint32_t dst_port;
        int32_t sent;
        struct rte_eth_dev_tx_buffer *buffer;

        dst_port = l2fwd_dst_ports[portid];

        if (mac_updating)
            l2fwd_mac_updating(m, dst_port);

        buffer = tx_buffer[dst_port];
        sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
        if (sent)
            port_statistics[dst_port].tx += sent;
    }

For this test application, the processing is exactly the same for all packets
arriving on the same RX port. Therefore, it would have been possible to call
the rte_eth_tx_buffer() function directly from the main loop to send all the
received packets on the same TX port, using the burst-oriented send function,
which is more efficient.

However, in real-life applications (such as L3 routing),
packet N is not necessarily forwarded on the same port as packet N-1.
The application is implemented to illustrate that, so the same approach can be
reused in a more complex application.

To ensure that no packets remain in the tables, each lcore drains its TX
queues in its main loop. This technique introduces some latency when there are
not many packets to send, but it improves performance:

.. code-block:: c

    cur_tsc = rte_rdtsc();

    /*
     * TX burst queue drain
     */
    diff_tsc = cur_tsc - prev_tsc;
    if (unlikely(diff_tsc > drain_tsc)) {
        for (i = 0; i < qconf->n_rx_port; i++) {
            portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
            buffer = tx_buffer[portid];
            sent = rte_eth_tx_buffer_flush(portid, 0,
                                           buffer);
            if (sent)
                port_statistics[portid].tx += sent;
        }

        /* if timer is enabled */
        if (timer_period > 0) {
            /* advance the timer */
            timer_tsc += diff_tsc;

            /* if timer has reached its timeout */
            if (unlikely(timer_tsc >= timer_period)) {
                /* do this only on master core */
                if (lcore_id == rte_get_master_lcore()) {
                    print_stats();
                    /* reset the timer */
                    timer_tsc = 0;
                }
            }
        }

        prev_tsc = cur_tsc;
    }

In the **l2fwd_event_loop()** function, the main task is to read ingress
packets from the event ports. This is done using the following code:

.. code-block:: c

    /* Read packet from eventdev */
    nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
                                    events, deq_len, 0);
    if (nb_rx == 0) {
        rte_pause();
        continue;
    }

    for (i = 0; i < nb_rx; i++) {
        mbuf[i] = events[i].mbuf;
        rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
    }

Before reading packets, deq_len is fetched so that the dequeue burst does not
exceed the length allowed by the eventdev (a sketch of fetching it is given
below). The rte_event_dequeue_burst() function writes the mbuf pointers in a
local table and returns the number of available mbufs in the table.

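One way to derive deq_len, assuming the event port's configured dequeue depth
is the limiting factor:

.. code-block:: c

    /* Cap the burst size at the event port's configured dequeue depth. */
    uint32_t deq_len = MAX_PKT_BURST;
    uint32_t depth;

    if (rte_event_port_attr_get(event_d_id, event_p_id,
                                RTE_EVENT_PORT_ATTR_DEQ_DEPTH, &depth) == 0 &&
        depth < deq_len)
        deq_len = depth;
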
Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
function. The processing is very simple: determine the TX port from the RX
port, then replace the source and destination MAC addresses if MAC address
updating is enabled. The destination port mapping (l2fwd_dst_ports[]) is the
same one described above for the poll-mode loop.

l2fwd_eventdev_forward() does not buffer incoming mbufs. Packets are forwarded
to the destination ports via the Tx adapter or the generic eventdev enqueue
API, depending on whether the H/W or S/W scheduler is used. The Tx adapter
path is shown below; a sketch of the generic enqueue path follows it.

.. code-block:: c

    nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id, port_id, ev,
                                             nb_rx);
    while (nb_tx < nb_rx && !rsrc->force_quit)
        nb_tx += rte_event_eth_tx_adapter_enqueue(
                event_d_id, port_id,
                ev + nb_tx, nb_rx - nb_tx);
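
A sketch of the generic enqueue path used with the S/W scheduler, assuming
tx_q_id identifies the extra Tx event queue created earlier:

.. code-block:: c

    /* S/W scheduler path (sketch): forward the events to the Tx event
     * queue; the Tx adapter service dequeues and transmits them. */
    for (i = 0; i < nb_rx; i++) {
        events[i].queue_id = tx_q_id;
        events[i].op = RTE_EVENT_OP_FORWARD;
    }

    nb_tx = rte_event_enqueue_burst(event_d_id, event_p_id, events, nb_rx);
    while (nb_tx < nb_rx && !rsrc->force_quit)
        nb_tx += rte_event_enqueue_burst(event_d_id, event_p_id,
                                         events + nb_tx, nb_rx - nb_tx);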