.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2017 Intel Corporation.

KNI Poll Mode Driver
====================

The KNI PMD is a wrapper to the :ref:`librte_kni <kni>` library.

This PMD enables using KNI without a KNI-specific application: any forwarding
application can use the PMD interface for KNI.

Sending packets to any DPDK controlled interface or to the Linux networking
stack will be transparent to the DPDK application.

To create a KNI device, the ``net_kni#`` device name should be used, and this
will create a ``kni#`` Linux virtual network interface.

There is no physical device backend for the virtual KNI device.

Packets sent to the KNI Linux interface will be received by the DPDK
application, and the DPDK application may forward packets to a physical NIC
or to a virtual device (like another KNI interface or a PCAP interface).

To forward any traffic from a physical NIC to the Linux networking stack, an
application should control a physical port, create one virtual KNI port, and
forward between the two.
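
As a rough illustration of that forwarding pattern, the sketch below moves
bursts of packets between the physical port and the KNI port using the generic
ethdev burst API. The port numbers, burst size and helper names are
assumptions made for this sketch, not part of the PMD.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32  /* burst size chosen for this sketch */

    /* Forward one burst from src_port to dst_port; free what cannot be sent. */
    static void
    forward_burst(uint16_t src_port, uint16_t dst_port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(src_port, 0, bufs, BURST_SIZE);
        uint16_t nb_tx = rte_eth_tx_burst(dst_port, 0, bufs, nb_rx);

        while (nb_tx < nb_rx)
            rte_pktmbuf_free(bufs[nb_tx++]);
    }

    /* Main loop: port 0 is the physical NIC, port 1 is the net_kni vdev. */
    static void
    forward_loop(void)
    {
        for (;;) {
            forward_burst(0, 1); /* NIC -> Linux stack via kni# */
            forward_burst(1, 0); /* Linux stack -> NIC */
        }
    }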

Using this PMD requires the KNI kernel module (``rte_kni.ko``) to be inserted.


Usage
-----

The EAL ``--vdev`` argument can be used to create a KNI device instance, for
example::

    testpmd --vdev=net_kni0 --vdev=net_kni1 -- -i

The above command will create the ``kni0`` and ``kni1`` Linux network
interfaces, which can be controlled by standard Linux tools.

When testpmd forwarding starts, any packets sent to the ``kni0`` interface are
forwarded to the ``kni1`` interface and vice versa.

There is no hard limit on the number of interfaces that can be created.
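
It should also be possible to create an interface after EAL initialization by
hot-plugging the virtual device from application code. A hedged sketch is
shown below; the helper name is an assumption, while ``rte_vdev_init()`` and
``rte_eth_dev_get_port_by_name()`` are the generic DPDK calls.

.. code-block:: c

    #include <rte_bus_vdev.h>
    #include <rte_ethdev.h>

    /* Create a net_kni vdev at run time (instead of via --vdev) and
     * return its ethdev port id, e.g. name = "net_kni0". */
    static int
    create_kni_port(const char *name, uint16_t *port_id)
    {
        int ret = rte_vdev_init(name, "");
        if (ret != 0)
            return ret;
        return rte_eth_dev_get_port_by_name(name, port_id);
    }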


Default interface configuration
-------------------------------

``librte_kni`` can create Linux network interfaces with different features,
and the feature set is controlled by a configuration struct. The KNI PMD uses
a fixed configuration:

.. code-block:: console

    Interface name: kni#
    force bind kernel thread to a core : NO
    mbuf size: MAX_PACKET_SZ

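In terms of the ``librte_kni`` API, that fixed configuration corresponds
roughly to the sketch below. The helper name and the mbuf size constant are
assumptions for this sketch; the driver's actual ``MAX_PACKET_SZ`` value is
internal to the PMD.

.. code-block:: c

    #include <stdio.h>
    #include <string.h>
    #include <rte_kni.h>
    #include <rte_mempool.h>

    #define KNI_MBUF_SIZE 2048  /* stand-in for the driver's MAX_PACKET_SZ */

    static struct rte_kni *
    alloc_fixed_kni(struct rte_mempool *pktmbuf_pool, unsigned int idx)
    {
        struct rte_kni_conf conf;

        memset(&conf, 0, sizeof(conf));
        snprintf(conf.name, RTE_KNI_NAMESIZE, "kni%u", idx); /* kni# */
        conf.force_bind = 0;            /* do not pin the kernel thread */
        conf.mbuf_size = KNI_MBUF_SIZE;

        /* No rte_kni_ops are registered, so there is no control path. */
        return rte_kni_alloc(pktmbuf_pool, &conf, NULL);
    }
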
The KNI control path is not supported with the PMD, since there is no physical
backend device by default.


PMD arguments
-------------

``no_request_thread``: by default the PMD creates a pthread for each KNI
interface to handle Linux network interface control commands, like
``ifconfig kni0 up``.

With the ``no_request_thread`` option, the pthread is not created and control
commands are not handled by the PMD.

By default the request thread is enabled, and this argument should not be used
most of the time, unless this PMD is used with a customized DPDK application
that handles the requests itself.
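
For reference, an application that drives ``librte_kni`` directly (and thus
owns the ``struct rte_kni`` handle, which the net_kni PMD keeps internal)
services those control commands by polling ``rte_kni_handle_request()``. A
minimal sketch; the loop structure and poll interval are assumptions:

.. code-block:: c

    #include <unistd.h>
    #include <rte_kni.h>

    static void *
    kni_request_loop(void *arg)
    {
        struct rte_kni *kni = arg;

        for (;;) {
            /* Service pending requests from the Linux side, e.g. link
             * up/down or MTU changes triggered by "ifconfig kni0 up". */
            rte_kni_handle_request(kni);
            usleep(1000);
        }
        return NULL;
    }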

Argument usage::

    testpmd --vdev "net_kni0,no_request_thread=1" -- -i


PMD log messages
----------------

If the KNI kernel module (``rte_kni.ko``) is not inserted, the following error
log is printed::

    "KNI: KNI subsystem has not been initialized. Invoke rte_kni_init() first"


PMD testing
-----------

It is possible to test the PMD quickly using the KNI kernel module loopback
feature:

* Insert the KNI kernel module with loopback support:

  .. code-block:: console

     insmod build/kmod/rte_kni.ko lo_mode=lo_mode_fifo_skb

* Start testpmd with no physical device but two KNI virtual devices:

  .. code-block:: console

     ./testpmd --vdev net_kni0 --vdev net_kni1 -- -i

  .. code-block:: console

     ...
     Configuring Port 0 (socket 0)
     KNI: pci: 00:00:00 c580:b8
     Port 0: 1A:4A:5B:7C:A2:8C
     Configuring Port 1 (socket 0)
     KNI: pci: 00:00:00 600:b9
     Port 1: AE:95:21:07:93:DD
     Checking link statuses...
     Port 0 Link Up - speed 10000 Mbps - full-duplex
     Port 1 Link Up - speed 10000 Mbps - full-duplex
     Done
     testpmd>

* Observe the Linux interfaces:

  .. code-block:: console

     $ ifconfig kni0 && ifconfig kni1
     kni0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
             ether ae:8e:79:8e:9b:c8  txqueuelen 1000  (Ethernet)
             RX packets 0  bytes 0 (0.0 B)
             RX errors 0  dropped 0  overruns 0  frame 0
             TX packets 0  bytes 0 (0.0 B)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     kni1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
             ether 9e:76:43:53:3e:9b  txqueuelen 1000  (Ethernet)
             RX packets 0  bytes 0 (0.0 B)
             RX errors 0  dropped 0  overruns 0  frame 0
             TX packets 0  bytes 0 (0.0 B)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

* Start forwarding with tx_first:

  .. code-block:: console

     testpmd> start tx_first

* Quit and check forwarding stats:

  .. code-block:: console

     testpmd> quit
     Telling cores to stop...
     Waiting for lcores to finish...

     ---------------------- Forward statistics for port 0 ----------------------
     RX-packets: 35637905       RX-dropped: 0             RX-total: 35637905
     TX-packets: 35637947       TX-dropped: 0             TX-total: 35637947
     ----------------------------------------------------------------------------

     ---------------------- Forward statistics for port 1 ----------------------
     RX-packets: 35637915       RX-dropped: 0             RX-total: 35637915
     TX-packets: 35637937       TX-dropped: 0             TX-total: 35637937
     ----------------------------------------------------------------------------

     +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
     RX-packets: 71275820       RX-dropped: 0             RX-total: 71275820
     TX-packets: 71275884       TX-dropped: 0             TX-total: 71275884
     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++