dt-bindings: net: dsa: add new MT7531 binding to support MT7531
Add a devicetree binding to support the MediaTek MT7531 switch.
Signed-off-by: Sean Wang <sean.wang@mediatek.com> Signed-off-by: Landen Chao <landen.chao@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: dsa: mt7530: Extend device data to be ready for adding new hardware
Add a structure holding the operations required for each device, such as
device initialization, PHY port read or write, a check for whether a PHY
interface is supported on a certain port, and MAC port setup for either the
bus pad or a specific PHY interface.
This prepares for adding the new MT7531 hardware while keeping the same
setup logic for the existing hardware.
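The per-device structure described above is, in shape, something like the
following (a hedged sketch; member names are illustrative and may not match
mt7530.h exactly):

struct mt753x_info {
	int	(*sw_setup)(struct dsa_switch *ds);
	int	(*phy_read)(struct dsa_switch *ds, int port, int regnum);
	int	(*phy_write)(struct dsa_switch *ds, int port, int regnum,
			     u16 val);
	/* bus pad setup for a given PHY interface mode */
	int	(*pad_setup)(struct dsa_switch *ds, phy_interface_t interface);
	/* whether a PHY interface mode is supported on a certain port */
	bool	(*phy_mode_supported)(struct dsa_switch *ds, int port,
				      const struct phylink_link_state *state);
	/* MAC port setup for a specific PHY interface */
	int	(*mac_port_config)(struct dsa_switch *ds, int port,
				   unsigned int mode,
				   phy_interface_t interface);
};

Each supported chip then provides one instance of this structure, and the
common code dispatches through it instead of branching on the chip id.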
Signed-off-by: Landen Chao <landen.chao@mediatek.com> Signed-off-by: Sean Wang <sean.wang@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Refine the Kconfig message, fixing a typo and explicitly mentioning MT7621 support.
Signed-off-by: Landen Chao <landen.chao@mediatek.com> Signed-off-by: Sean Wang <sean.wang@mediatek.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Xie He [Sat, 12 Sep 2020 02:18:07 +0000 (19:18 -0700)]
drivers/net/wan/x25_asy: Remove an unnecessary x25_type_trans call
x25_type_trans only needs to be called before we call netif_rx to pass
the skb to upper layers.
It does not need to be called before lapb_data_received. The LAPB module
does not need the fields that are set by calling it.
In the other two X.25 drivers - lapbether and hdlc_x25 - x25_type_trans
is only called before netif_rx and not before lapb_data_received.
Cc: Martin Schiller <ms@dev.tdt.de> Signed-off-by: Xie He <xie.he.0141@gmail.com> Acked-by: Martin Schiller <ms@dev.tdt.de> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Thu, 10 Sep 2020 21:33:18 +0000 (23:33 +0200)]
net: try to avoid unneeded backlog flush
flush_all_backlogs() may cause deadlock on systems
running processes with FIFO scheduling policy.
The above is critical in -RT scenarios, where user-space
specifically ensures no network activity is scheduled on
the CPU running the mentioned FIFO process, but still gets
stuck.
This commit tries to address the problem by checking the
backlog status on the remote CPUs before scheduling the
flush operation. If the backlog is empty, we can skip it.
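A hedged sketch of that check (field names follow net/core/dev.c, but this
is an illustration, not the exact upstream hunk):

static bool flush_required(int cpu)
{
	struct softnet_data *sd = &per_cpu(softnet_data, cpu);
	bool do_flush;

	local_irq_disable();
	rps_lock(sd);
	/* If both backlog queues are empty, this CPU needs no flush. */
	do_flush = !skb_queue_empty(&sd->input_pkt_queue) ||
		   !skb_queue_empty(&sd->process_queue);
	rps_unlock(sd);
	local_irq_enable();

	return do_flush;
}

flush_all_backlogs() would then queue the flush work only on CPUs for which
flush_required() returns true, clearing the others from its cpumask.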
v1 -> v2:
- explicitly clear flushed cpu mask - Eric
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
mlxsw: Derive SBIB from maximum port speed & MTU
Petr says:
Internal buffer is a part of port headroom used for packets that are
mirrored due to triggers that the Spectrum ASIC considers "egress". Besides
ACL mirroring on port egress, this also includes packets mirrored due to
ECN marking.
This patchset changes the way the internal mirroring buffer is reserved.
Currently the buffer reflects port MTU and speed accurately. In the future,
mlxsw should support dcbnl_setbuffer hook to allow the users to set buffer
sizes by hand. In that case, there might not be enough space for growth of
the internal mirroring buffer due to MTU and speed changes. While vetoing
MTU changes would be merely confusing, port speed changes cannot be vetoed,
and such change would simply lead to issues in packet mirroring.
For these reasons, with these patches the internal mirroring buffer is
derived from maximum MTU and maximum speed achievable on the port.
Patches #1 and #2 introduce a new callback to determine the maximum speed a
given port can achieve.
With patches #3 and #4, the information about, respectively, maximum MTU
and maximum port speed, is kept in struct mlxsw_sp_port.
In patch #5, maximum MTU and maximum speed are used to determine the size
of the internal buffer. MTU update and speed update hooks are dropped,
because they are no longer necessary.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 13 Sep 2020 15:46:09 +0000 (18:46 +0300)]
mlxsw: spectrum_span: Derive SBIB from maximum port speed & MTU
The SBIB register configures the size of an internal buffer that the
Spectrum ASICs use when mirroring traffic on egress. This size should be
taken into account when validating that the port headroom buffers are not
larger than the chip can handle. Up until now this was not done, which is
incidentally not a problem, because the priority group buffers that mlxsw
auto-configures are small enough that the boundary condition could not be
violated.
However when dcbnl_setbuffer is implemented, the user has control over
sizes of PG buffers, and they might overshoot the headroom capacity.
However the size of the SBIB buffer depends on port speed, and that cannot
be vetoed. Therefore SBIB size should be deduced from maximum port speed.
Additionally, once the buffers are configured by hand, the user could get
into an uncomfortable situation where their MTU change requests get vetoed,
because the SBIB does not fit anymore. Therefore derive SBIB size from
maximum permissible MTU as well.
Remove all the code that adjusted the SBIB size whenever speed or MTU
changed.
Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 13 Sep 2020 15:46:08 +0000 (18:46 +0300)]
mlxsw: spectrum: Keep maximum speed around
The maximum port speed depends on link modes supported by the port, and for
Ethernet ports is constant. The maximum speed will be handy when setting
SBIB, the internal buffer used for traffic mirroring. Therefore, keep it in
struct mlxsw_sp_port for easy access.
Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 13 Sep 2020 15:46:07 +0000 (18:46 +0300)]
mlxsw: spectrum: Keep maximum MTU around
The maximum port MTU depends on port type. On Spectrum, mlxsw configures
all ports as Ethernet ports, and the maximum MTU therefore never changes.
Besides checking MTU configuration, maximum MTU will also be handy when
setting SBIB, the internal buffer used for traffic mirroring. Therefore,
keep it in struct mlxsw_sp_port for easy access.
Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The SBIB register configures the size of an internal buffer that the
Spectrum ASICs use when mirroring traffic on egress. This size should be
taken into account when validating that the port headroom buffers are not
larger than the chip can handle. Up until now this was not done, which is
incidentally not a problem, because the priority group buffers that mlxsw
auto-configures are small enough that the boundary condition could not be
violated.
When dcbnl_setbuffer is implemented, the user gets control over sizes of PG
buffers, and they might overshoot the headroom capacity. However the size
of the SBIB buffer depends on port speed, which cannot be vetoed. There is
obviously no way to retroactively push back on requests for overlarge PG
buffers, or reject an overlarge MTU, or cancel losslessness of a certain
PG.
Therefore, instead of taking into account the current speed when
calculating SBIB buffer size, take into account the maximum speed that a
port with given Ethernet protocol capabilities can have.
To that end, add a new ethtool callback, ptys_max_speed, which determines
this maximum speed.
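Conceptually, the callback reduces the PTYS Ethernet protocol capability
mask to the highest speed it implies (a hedged sketch; the bit assignments
and helper name are illustrative, not the actual mlxsw register layout):

static u32 ptys_max_speed(u32 eth_proto_cap)
{
	static const struct {
		u32 mask;	/* PTYS eth_proto_cap bit(s) */
		u32 speed;	/* in Mb/s */
	} modes[] = {
		{ BIT(0), 1000 },	/* 1000BASE-* modes */
		{ BIT(1), 10000 },	/* 10GBASE-* modes */
		{ BIT(2), 100000 },	/* 100GBASE-* modes */
		/* ... one entry per supported link mode ... */
	};
	u32 max_speed = 0;
	int i;

	for (i = 0; i < ARRAY_SIZE(modes); i++)
		if (eth_proto_cap & modes[i].mask)
			max_speed = max(max_speed, modes[i].speed);

	return max_speed;
}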
Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Sun, 13 Sep 2020 15:46:05 +0000 (18:46 +0300)]
mlxsw: spectrum_ethtool: Extract a helper to get Ethernet attributes
In order to allow reusing the logic, extract from
mlxsw_sp_port_get_link_ksettings() the code to obtain Ethernet protocol
attributes, mlxsw_sp_port_ptys_query().
Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 14 Sep 2020 21:07:58 +0000 (14:07 -0700)]
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Tony Nguyen says:
====================
40GbE Intel Wired LAN Driver Updates 2020-09-14
This series contains updates to i40e driver only.
Li RongQing removes binding the affinity mask to a fixed CPU and makes
the prefetch of the Rx buffer page occur conditionally.
Björn provides AF_XDP performance improvements by not prefetching HW
descriptors, using 16 byte descriptors, and moving buffer allocation
out of Rx processing loop.
v2: Define prefetch_page_address in a common header for patch 2.
Dropped previous patch 5 as it is being reworked to be more
generalized.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Luo bin [Mon, 14 Sep 2020 13:48:23 +0000 (21:48 +0800)]
hinic: add vxlan segmentation and cs offload support
Add NETIF_F_GSO_UDP_TUNNEL and NETIF_F_GSO_UDP_TUNNEL_CSUM features
to support vxlan segmentation and checksum offload. IPIP and IPv6
tunnel packets are regarded as non-tunnel packets by the hw; for other
types of tunnel packets, checksum offload is disabled.
Signed-off-by: Luo bin <luobin9@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: qlcnic: remove unused variable 'val' in qlcnic_83xx_cam_unlock()
Fixes the following W=1 kernel build warning(s):
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c:661:6: warning:
variable 'val' set but not used [-Wunused-but-set-variable]
661 | u32 val;
| ^~~
After commit 7f9664525f9c ("qlcnic: 83xx memory map and HW access
routines"), variable 'val' is never used in qlcnic_83xx_cam_unlock(), so
remove it to avoid the build warning.
Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: pxa168_eth: remove unused variable 'retval' in pxa168_eth_change_mtu()
Fixes the following W=1 kernel build warning(s):
drivers/net/ethernet/marvell/pxa168_eth.c:1190:6: warning:
variable 'retval' set but not used [-Wunused-but-set-variable]
1190 | int retval;
| ^~~~~~
Function pxa168_eth_change_mtu() always returns zero, so variable 'retval'
is redundant; just remove it.
Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: fec: ptp: remove unused variable 'ns' in fec_time_keep()
Fixes the following W=1 kernel build warning(s):
drivers/net/ethernet/freescale/fec_ptp.c:523:6: warning:
variable 'ns' set but not used [-Wunused-but-set-variable]
523 | u64 ns;
| ^~
After commit 6605b730c061 ("FEC: Add time stamping code and a PTP
hardware clock"), variable 'ns' is never used in fec_time_keep(),
so remove it to avoid the build warning.
Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com> Acked-by: Fugang Duan <fugang.duan@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: dnet: remove unused variable 'tx_status' in dnet_start_xmit()
Fixes the following W=1 kernel build warning(s):
drivers/net/ethernet/dnet.c:510:6: warning:
variable 'tx_status' set but not used [-Wunused-but-set-variable]
u32 tx_status, irq_enable;
^~~~~~~~~
After commit 4796417417a6 ("dnet: Dave DNET ethernet controller driver
(updated)"), variable 'tx_status' is never used in dnet_start_xmit(),
so remove it to avoid the build warning.
Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 14 Sep 2020 10:20:27 +0000 (03:20 -0700)]
tcp: remove SOCK_QUEUE_SHRUNK
SOCK_QUEUE_SHRUNK is currently used by TCP as a temporary state
that remembers if some room has been made in the rtx queue
by an incoming ACK packet.
This is later used from tcp_check_space() before
considering sending EPOLLOUT.
Problem is: if we receive SACK packets, and no packet
is removed from the RTX queue, we can send fresh packets, thus
moving them from the write queue to the RTX queue and eventually
emptying the write queue.
This stall can happen if TCP_NOTSENT_LOWAT is used.
With this fix, we no longer risk stalling sends while holes
are repaired, and we can fully use socket sndbuf.
This also removes a cache line dirtying for typical RPC
workloads.
Fixes: c9bee3b7fdec ("tcp: TCP_NOTSENT_LOWAT socket option") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Xie He [Mon, 14 Sep 2020 07:41:54 +0000 (00:41 -0700)]
net/packet: Fix a comment about hard_header_len and headroom allocation
This comment is outdated and no longer reflects the actual implementation
of af_packet.c.
Reasons for the new comment:
1.
In af_packet.c, the function packet_snd first reserves a headroom of
length (dev->hard_header_len + dev->needed_headroom).
Then if the socket is a SOCK_DGRAM socket, it calls dev_hard_header,
which calls dev->header_ops->create, to create the link layer header.
If the socket is a SOCK_RAW socket, it "un-reserves" a headroom of
length (dev->hard_header_len), and checks if the user has provided a
header sized between (dev->min_header_len) and (dev->hard_header_len)
(in dev_validate_header).
This shows the developers of af_packet.c expect hard_header_len to
be consistent with header_ops.
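Condensed, that logic looks roughly like this (a hedged sketch, not a
verbatim excerpt of packet_snd):

	/* Headroom for the LL header plus whatever the device needs. */
	reserve = dev->hard_header_len + dev->needed_headroom;
	skb_reserve(skb, reserve);

	if (sock->type == SOCK_DGRAM) {
		/* Kernel builds the LL header via header_ops->create(). */
		err = dev_hard_header(skb, dev, ntohs(proto),
				      addr, NULL, len);
	} else {	/* SOCK_RAW */
		/* "Un-reserve": the user supplies the LL header. */
		skb_reserve(skb, -dev->hard_header_len);
		if (!dev_validate_header(dev, skb->data, len))
			err = -EINVAL;
	}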
2.
In af_packet.c, the function packet_sendmsg_spkt has a FIXME comment.
That comment states that prepending an LL header internally in a driver
is considered a bug. I believe this bug can be fixed by setting
hard_header_len to 0, making the internal header completely invisible
to af_packet.c (and requesting the headroom in needed_headroom instead).
3.
There is a commit for a WiFi driver:
commit 9454f7a895b8 ("mwifiex: set needed_headroom, not hard_header_len")
According to the discussion about it at:
https://patchwork.kernel.org/patch/11407493/
The author tried to set the WiFi driver's hard_header_len to the Ethernet
header length, and to request the additional header space needed internally
by setting needed_headroom.
This means this usage is already adopted by driver developers.
Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Brian Norris <briannorris@chromium.org> Cc: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Xie He <xie.he.0141@gmail.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
mptcp: introduce support for real multipath xmit
This series enable MPTCP socket to transmit data on multiple subflows
concurrently in a load balancing scenario.
First the receive code path is refactored to better deal with out-of-order
data (patches 1-7). An RB-tree is introduced to queue MPTCP-level out-of-order
data, closely resembling the TCP level OoO handling.
When data is sent on multiple subflows, the peer can easily see OoO - "future"
data at the MPTCP level, especially if speeds, delay, or jitter are not
symmetric.
The other major change regards the netlink PM, which is extended to allow
creating non backup subflows in patches 9-11.
There are a few smaller additions, like the introduction of OoO related mibs,
send buffer autotuning and better ack handling.
Finally, a bunch of new self-tests is introduced. The new feature is tested
ensuring that the B/W used by an MPTCP socket using multiple subflows matches
the link aggregated B/W - we use low B/W virtual links, to ensure the tests
are not CPU bound.
v1 -> v2:
- fix 32 bit build breakage
- fix a bunch of checkpatch issues
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:19 +0000 (10:01 +0200)]
mptcp: simult flow self-tests
Add a bunch of test-cases for multiple subflow xmit:
create multiple subflows simulating different link
conditions via netem and verify that the msk is able
to fully use the aggregated bandwidth.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:18 +0000 (10:01 +0200)]
mptcp: call tcp_cleanup_rbuf on subflows
That is needed to let the subflows announce promptly when new
space is available in the receive buffer.
tcp_cleanup_rbuf() is currently a static function, drop the
scope modifier and add a declaration in the TCP header.
Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:17 +0000 (10:01 +0200)]
mptcp: allow picking different xmit subflows
Update the scheduler to a less trivial heuristic: cache
the last used subflow, and try to send on it a reasonably
long burst of data.
When the burst or the subflow send space is exhausted, pick
the subflow with the lower ratio between write space and
send buffer - that is, the subflow with the greater relative
amount of free space.
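A hedged sketch of that selection criterion (simplified; not the exact
mptcp_subflow_get_send() hunk):

	struct mptcp_subflow_context *subflow;
	struct sock *best = NULL;
	u64 best_ratio = U64_MAX;

	mptcp_for_each_subflow(msk, subflow) {
		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
		/* Queued bytes scaled by sndbuf: a lower ratio means a
		 * greater relative amount of free space. The << 32 keeps
		 * precision through the integer division. */
		u64 ratio = div_u64((u64)READ_ONCE(ssk->sk_wmem_queued) << 32,
				    READ_ONCE(ssk->sk_sndbuf));

		if (ratio < best_ratio) {
			best_ratio = ratio;
			best = ssk;
		}
	}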
v1 -> v2:
- fix 32 bit build breakage due to 64bits div
- fix checkpatch issues (uint64_t -> u64)
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:16 +0000 (10:01 +0200)]
mptcp: allow creating non-backup subflows
Currently the 'backup' attribute of the local endpoint
is ignored. Let's use it for the MP_JOIN handshake.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:15 +0000 (10:01 +0200)]
mptcp: move address attribute into mptcp_addr_info
So that it can be accessed easily from the subflow creation
helper. No functional change intended.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:14 +0000 (10:01 +0200)]
mptcp: add OoO related mibs
Add a bunch of MPTCP mibs related to MPTCP OoO data
processing.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:13 +0000 (10:01 +0200)]
mptcp: cleanup mptcp_subflow_discard_data()
There is no need to use tcp_read_sock(); we can
simply drop the skb. Additionally, try to look at the
next buffer for in-order data.
This both simplifies the code and avoids unneeded indirect
calls.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:12 +0000 (10:01 +0200)]
mptcp: move ooo skbs into msk out of order queue.
Add an RB-tree to cope with OoO (at MPTCP level) data.
__mptcp_move_skb() inserts "future" data into the RB tree,
eventually coalescing skbs as allowed by the
MPTCP DSN.
To simplify sequence accounting, move the DSN inside
the cb.
After successfully enqueuing in sequence data, check
if we can use any data from the RB tree.
Additionally move the data_fin check after spooling
data from the OoO tree, otherwise we could miss shutdown
events.
The RB tree code is copied as verbatim as possible
from tcp_data_queue_ofo(), with a few simplifications
due to the fact that MPTCP doesn't need to cope with
sacks. All bugs here are added by me.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:11 +0000 (10:01 +0200)]
mptcp: introduce and use mptcp_try_coalesce()
Factor-out existing code, will be re-used by the
next patch.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:10 +0000 (10:01 +0200)]
mptcp: basic sndbuf autotuning
Let the msk sendbuf track the size of the largest subflow's
send window, so that we ensure mptcp_sendmsg() does not
exceed the MPTCP-level send window.
The update is performed just before trying to send any data.
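A hedged sketch of the update (helper name is illustrative):

static void mptcp_sndbuf_autotune(struct mptcp_sock *msk)
{
	struct mptcp_subflow_context *subflow;
	struct sock *sk = (struct sock *)msk;
	int sndbuf = READ_ONCE(sk->sk_sndbuf);

	mptcp_for_each_subflow(msk, subflow) {
		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);

		/* Track the largest subflow send buffer. */
		sndbuf = max(sndbuf, READ_ONCE(ssk->sk_sndbuf));
	}
	WRITE_ONCE(sk->sk_sndbuf, sndbuf);
}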
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:09 +0000 (10:01 +0200)]
mptcp: trigger msk processing even for OoO data
This is a prerequisite to allow receiving data from multiple
subflows without re-injection.
Instead of dropping the OoO - "future" data in
subflow_check_data_avail(), call into __mptcp_move_skbs()
and let the msk drop that.
To avoid code duplication factor out the mptcp_subflow_discard_data()
helper.
Note that __mptcp_move_skbs() can now find multiple subflows
with data avail (comprising to-be-discarded data), so must
update the byte counter incrementally.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:08 +0000 (10:01 +0200)]
mptcp: set data_ready status bit in subflow_check_data_avail()
This simplifies mptcp_subflow_data_available() and will
make follow-up patches simpler.
Additionally, remove the unneeded checks on subflow copied_seq:
we always move whole skbs out of subflows.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Mon, 14 Sep 2020 08:01:07 +0000 (10:01 +0200)]
mptcp: rethink 'is writable' conditional
Currently, when checking for the 'msk is writable' condition, we
look at the individual subflows write space.
That works well while we send data via a single subflow, but will
not as soon as we enable concurrent xmit on multiple subflows.
With this change msk becomes writable when the following conditions
hold:
- the socket has some free write space
- there is at least one subflow with free write space
Additionally we need to set the NOSPACE bit on all subflows
before blocking.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
ethernet: convert tasklets to use new tasklet_setup API
Commit 12cc923f1ccc ("tasklet: Introduce new initialization API")
introduced a new tasklet initialization API. This series converts
all the ethernet drivers to use the new tasklet_setup() API.
This series is based on v5.9-rc5.
v3:
fix subject prefix
use backpointer instead of fragile priv to netdev.
v2:
fix kdoc reported by Jakub Kicinski.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:39 +0000 (12:59 +0530)]
net: smc91x: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
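The conversion pattern applied by this and the following patches is roughly
(a hedged illustration with placeholder names, not an excerpt from smc91x):

struct my_priv {
	struct net_device *ndev;	/* backpointer, not a cast of 'data' */
	struct tasklet_struct tx_task;
};

/* Old style:
 *	tasklet_init(&lp->tx_task, my_tx_task, (unsigned long)lp);
 * New style:
 *	tasklet_setup(&lp->tx_task, my_tx_task);
 * The callback now receives the tasklet pointer and recovers its
 * container with from_tasklet(), a container_of() wrapper. */
static void my_tx_task(struct tasklet_struct *t)
{
	struct my_priv *lp = from_tasklet(lp, t, tx_task);

	netif_wake_queue(lp->ndev);
}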
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:38 +0000 (12:59 +0530)]
net: silan: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:37 +0000 (12:59 +0530)]
qed: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:36 +0000 (12:59 +0530)]
net: nixge: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:35 +0000 (12:59 +0530)]
nfp: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:34 +0000 (12:59 +0530)]
net: natsemi: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:33 +0000 (12:59 +0530)]
net: micrel: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:32 +0000 (12:59 +0530)]
net: mlx: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:31 +0000 (12:59 +0530)]
net: skge: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:30 +0000 (12:59 +0530)]
net: jme: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:29 +0000 (12:59 +0530)]
ibmvnic: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:28 +0000 (12:59 +0530)]
net: ehea: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:27 +0000 (12:59 +0530)]
net: hinic: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:26 +0000 (12:59 +0530)]
net: sundance: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:25 +0000 (12:59 +0530)]
chelsio: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:24 +0000 (12:59 +0530)]
liquidio: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:23 +0000 (12:59 +0530)]
net: macb: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:22 +0000 (12:59 +0530)]
cnic: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:21 +0000 (12:59 +0530)]
net: amd-xgbe: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allen Pais [Mon, 14 Sep 2020 07:29:20 +0000 (12:59 +0530)]
net: alteon: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
Signed-off-by: Romain Perier <romain.perier@gmail.com> Signed-off-by: Allen Pais <apais@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Björn Töpel [Tue, 25 Aug 2020 11:35:56 +0000 (13:35 +0200)]
i40e, xsk: move buffer allocation out of the Rx processing loop
Instead of checking in each iteration of the Rx packet processing
loop, move the allocation out of the loop and do it once for each napi
activation.
For AF_XDP the rx_drop benchmark was improved by 6%.
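Structurally, the change is along these lines (hedged; the receive body is
elided and the helper is assumed to return true on success):

	while (likely(total_rx_packets < (unsigned int)budget)) {
		/* ... receive one descriptor, bump cleaned_count ... */
	}

	/* One allocation pass per napi activation instead of a check in
	 * every iteration of the loop above. */
	failure = !i40e_alloc_rx_buffers_zc(rx_ring, cleaned_count);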
Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Björn Töpel [Tue, 25 Aug 2020 11:35:55 +0000 (13:35 +0200)]
i40e: use 16B HW descriptors instead of 32B
The i40e NIC supports two flavors of HW descriptors, 16 and 32
byte. The latter has, obviously, room for more offloading
information. However, the only fields of the 32B HW descriptor that are
being used by the driver are also available in the 16B descriptor.
In other words, reading and writing 32 bytes instead of 16 bytes is a
waste of bus bandwidth.
This commit starts using 16 byte descriptors instead of 32 byte
descriptors.
For AF_XDP the rx_drop benchmark was improved by 2%.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Björn Töpel [Tue, 25 Aug 2020 11:35:54 +0000 (13:35 +0200)]
i40e, xsk: remove HW descriptor prefetch in AF_XDP path
The software prefetching of HW descriptors has a negative impact on
the performance. Therefore, it is now removed.
Performance for the rx_drop benchmark increased by 2%.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Li RongQing [Tue, 18 Aug 2020 07:07:57 +0000 (15:07 +0800)]
i40e: optimise prefetch page refcount
Originally, the refcount of the rx_buffer page was incremented here, so
prefetchw was needed. After commit 1793668c3b8c ("i40e/i40evf: Update code
to better handle incrementing page count"), the refcount is no longer
incremented every time, so change prefetchw to prefetch.
Now it mainly serves page_address(), which accesses struct page only when
WANT_PAGE_VIRTUAL or HASHED_PAGE_VIRTUAL is defined; otherwise it returns
an address based on the offset, so we prefetch it conditionally.
Jakub suggested defining prefetch_page_address in a common header.
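The resulting helper is shaped like this (a hedged paraphrase of the
include/linux/prefetch.h addition):

#if defined(WANT_PAGE_VIRTUAL) || defined(HASHED_PAGE_VIRTUAL)
/* page_address() dereferences struct page: prefetching helps. */
#define prefetch_page_address(page) prefetch(page)
#else
/* page_address() is pure pointer arithmetic: nothing to prefetch. */
#define prefetch_page_address(page) do { } while (0)
#endif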
Reported-by: kernel test robot <lkp@intel.com> Suggested-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Li RongQing <lirongqing@baidu.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Li RongQing [Wed, 1 Jul 2020 08:54:53 +0000 (16:54 +0800)]
i40e: don't compute affinity_mask for IRQ
After commit 759dc4a7e605 ("i40e: initialize our affinity_mask
based on cpu_possible_mask"), the NAPI IRQ affinity_mask is bound to
all possible CPUs, not a fixed CPU.
Signed-off-by: Li RongQing <lirongqing@baidu.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
David Howells [Mon, 14 Sep 2020 14:58:14 +0000 (15:58 +0100)]
rxrpc: Fix an overget of the conn bundle when setting up a client conn
When setting up a client connection, a second ref is accidentally obtained
on the connection bundle (we get one when allocating the conn and a second
one when adding the conn to the bundle).
Fix it to only use the ref obtained by rxrpc_alloc_client_connection() and
not to add a second when adding the candidate conn to the bundle.
Fixes: 245500d853e9 ("rxrpc: Rewrite the client connection manager") Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Mon, 14 Sep 2020 12:10:00 +0000 (13:10 +0100)]
rxrpc: Fix rxrpc_bundle::alloc_error to be signed
The alloc_error field in the rxrpc_bundle struct should be signed as it has
negative error codes assigned to it. Checks directly on it may then fail,
and may produce a warning like this:
net/rxrpc/conn_client.c:662 rxrpc_wait_for_channel()
warn: 'bundle->alloc_error' is unsigned
Fixes: 245500d853e9 ("rxrpc: Rewrite the client connection manager")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David Howells <dhowells@redhat.com>
Fixes: 245500d853e9 ("rxrpc: Rewrite the client connection manager") Reported-by: Marc Dionne <marc.dionne@auristor.com> Signed-off-by: David Howells <dhowells@redhat.com> Tested-by: Marc Dionne <marc.dionne@auristor.com>
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'tulip_init_one()' GFP_KERNEL can be used
because it is a probe function and no lock is taken in between.
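The mechanical part of the conversion, plus the hand-picked GFP flag,
follows this pattern (a hedged illustration, not the driver's exact hunk):

	/* Before: the compat wrapper always implied GFP_ATOMIC. */
	ring = pci_alloc_consistent(pdev, size, &ring_dma);
	pci_free_consistent(pdev, size, ring, ring_dma);

	/* After: GFP_KERNEL chosen by hand because the call site can sleep. */
	ring = dma_alloc_coherent(&pdev->dev, size, &ring_dma, GFP_KERNEL);
	dma_free_coherent(&pdev->dev, size, ring, ring_dma);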
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'de_alloc_rings()' GFP_KERNEL can be used
because it is only called from 'de_open()' which is a '.ndo_open'
function. Such functions are synchronized using the rtnl_lock() semaphore
and no lock is taken in between.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Acked-by: Moritz Fischer <mdf@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'dmfe_init_one()' GFP_KERNEL can be used
because it is a probe function and no lock is taken in between.
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'uli526x_init_one()' GFP_KERNEL can be used
because it is a probe function and no lock is taken in between.
Russell King [Sun, 13 Sep 2020 07:05:52 +0000 (08:05 +0100)]
net: mvpp2: set SKBTX_IN_PROGRESS
Richard Cochran points out that SKBTX_IN_PROGRESS should be set when
the skbuff is queued for timestamping. Add this.
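A hedged sketch of the change (the surrounding condition is illustrative,
not the exact mvpp2 hunk):

	if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
		/* Flag the skb before queueing it to the PTP unit, so the
		 * stack knows a hardware timestamp is on its way. */
		skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
		/* ... hand the skb to the timestamping queue ... */
	}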
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
tulip: winbond-840: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'alloc_ringdesc()' GFP_KERNEL can be used
because it is only called from 'netdev_open()' which is a '.ndo_open'
function. Such functions are synchronized using the rtnl_lock() semaphore
and no lock is taken in between.
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'rio_probe1()' GFP_KERNEL can be used because
it is a probe function and no lock is taken in between.
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'alloc_ring()' (natsemi.c) GFP_KERNEL can be
used because it is only called from 'netdev_open()', which is a '.ndo_open'
function. Such functions are synchronized with the rtnl_lock() semaphore.
When memory is allocated in 'ns83820_init_one()' (ns83820.c) GFP_KERNEL can
be used because it is a probe function and no lock is taken in between.
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'bdx_fifo_init()' GFP_ATOMIC must be used
because it can be called from a '.ndo_set_rx_mode' function. Such functions
hold the 'netif_addr_lock' spinlock.
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'rocker_dma_ring_create()' GFP_KERNEL can be
used because it is already used in the same function just a few lines
above.
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'sc92031_open()' GFP_KERNEL can be used
because it is a '.ndo_open' function and no lock is taken in between.
'.ndo_open' functions are synchronized with the rtnl_lock() semaphore.
The wrappers in include/linux/pci-dma-compat.h should go away.
The patch has been generated with the coccinelle script below and has been
hand modified to replace GFP_ with a correct flag.
It has been compile tested.
When memory is allocated in 'tlan_init()' GFP_KERNEL can be used because
it is only called from a probe function or a module_init function and no
lock is taken in between.
The call chain is:
tlan_probe (module_init function)
--> tlan_eisa_probe
or
tlan_init_one (probe function)
Luo Jiaxing [Sat, 12 Sep 2020 08:08:15 +0000 (16:08 +0800)]
net: ethernet: mlx4: Avoid assigning a value to ring_cons that is not used anymore in mlx4_en_xmit()
We found a set-but-not-used variable 'ring_cons' in mlx4_en_xmit(); it
causes a warning when building the kernel. After checking the commit
history of this function, we found that it was introduced by a previous
patch, so we delete this redundant assignment.
Fixes: 488a9b48e398 ("net/mlx4_en: Wake TX queues only when there's enough room") Signed-off-by: Luo Jiaxing <luojiaxing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
LAN8814 is a low-power, quad-port, triple-speed (10BASE-T/100BASE-TX/1000BASE-T)
Ethernet physical layer transceiver (PHY). It supports transmission and
reception of data on standard CAT-5, as well as CAT-5e and CAT-6, unshielded
twisted pair (UTP) cables.
LAN8814 supports industry-standard QSGMII (Quad Serial Gigabit Media
Independent Interface) and Q-USGMII (Quad Universal Serial Gigabit Media
Independent Interface) providing chip-to-chip connection to four Gigabit
Ethernet MACs using a single serialized link (differential pair) in each
direction.
The LAN8814 SKU supports high-accuracy timestamping functions to
support IEEE-1588 solutions using Microchip Ethernet switches, as well as
customer solutions based on SoCs and FPGAs.
The LAN8804 SKU has the same features as the LAN8814 SKU except that it does
not support 1588, SyncE, or Q-USGMII with PCH/MCH.
This adds support for 10BASE-T, 100BASE-TX, and 1000BASE-T, and for a
QSGMII link with the MAC.
Signed-off-by: Divya Koppera <divya.koppera@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Barry Song [Fri, 11 Sep 2020 01:55:10 +0000 (13:55 +1200)]
net: hns: use IRQ_NOAUTOEN to avoid the irq being enabled by request_irq
Rather than doing request_irq and then disabling the irq immediately, it
should be safer to use the IRQ_NOAUTOEN flag for the irq. This removes any
gap between request_irq() and disable_irq().
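A hedged sketch of the pattern (handler and name are placeholders):

	/* Keep the line disabled across request_irq(); the driver will
	 * enable_irq() explicitly once it is ready to handle interrupts. */
	irq_set_status_flags(irq, IRQ_NOAUTOEN);
	ret = request_irq(irq, ring_irq_handler, 0, "hns-ring", priv);
	if (ret)
		return ret;
	/* ... later, once the ring is set up ... */
	enable_irq(irq);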
Cc: Salil Mehta <salil.mehta@huawei.com> Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Both existing and newly introduced CPSW ALE versions have differences in
supported features and ALE table formats. This is especially true for the
recent AM65x/J721E/J7200 and future AM64x SoCs, which support more
features, like auto-aging, classifiers, link aggregation, and additional HW
filtering.
The existing ALE configuration interface is not practical in terms of
adding new features and requires consumers to program a lot of static
parameters. Any attempt to add new features would cause endless adding
and maintaining of different combinations of flags and options. Because the
CPSW ALE configuration is static and fixed per SoC (or set of SoCs), it is
reasonable to add support for static ALE configurations inside the ALE module.
This series introduces static ALE configuration table for different ALE
variants and provides option for consumers to select required ALE
configuration by providing ALE const char *dev_id identifier (Patch 2).
All existing drivers have been switched to use the new approach (Patches 3-6).
After this, the ALE HW auto-ageing feature can be enabled for the AM65x CPSW
ALE variant (Patch 7).
Finally, Patches 8-9 introduce tables to describe the ALE VLAN entry
fields, as the ALE VLAN entries differ too much between different TI
CPSW ALE versions, and handling them using flags, defines, and get/set
functions has become over-complicated.
net: ethernet: ti: ale: add support for multi port k3 cpsw versions
The TI J721E (CPSW9g) ALE version is similar, in general, to the Sitara
AM3/4/5 CPSW ALE, but has more extended functions and a different ALE VLAN
entry format.
This patch adds support for the multi-port TI J721E (CPSW9g) ALE variant.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: ethernet: ti: ale: switch to use tables for vlan entry description
The ALE VLAN entries differ too much between different TI CPSW ALE
versions, so handling them using flags, defines, and get/set functions
became over-complicated.
This patch introduces tables to describe the ALE VLAN entry fields, which
differ between TI CPSW ALE versions, and new get/set access functions.
It also allows detecting incorrect accesses to unavailable ALE
entry fields.
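A hedged sketch of the table-driven idea (field ids, bit positions, and
names are illustrative, not the actual cpsw_ale.c layout):

struct ale_entry_fld {
	u8 start_bit;	/* LSB position of the field in the ALE entry */
	u8 num_bits;	/* field width; 0 = not present on this version */
};

enum { ALE_FLD_VID, ALE_FLD_MEMBER_LIST, ALE_FLD_UNREG_MCAST, ALE_FLD_MAX };

/* One table per ALE variant; generic get/set helpers index it by field
 * id, and a zero-width entry exposes accesses to unavailable fields. */
static const struct ale_entry_fld vlan_entry_cpsw[ALE_FLD_MAX] = {
	[ALE_FLD_VID]		= { .start_bit = 48, .num_bits = 12 },
	[ALE_FLD_MEMBER_LIST]	= { .start_bit = 0,  .num_bits = 3 },
	[ALE_FLD_UNREG_MCAST]	= { .start_bit = 8,  .num_bits = 3 },
};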
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: ethernet: ti: am65-cpsw: enable hw auto ageing
The AM65x ALE supports HW auto-ageing, which can be enabled by programming
the ageing interval in the ALE_AGING_TIMER register. For this, the CPSW
fck_clk frequency has to be known by the ALE.
This patch extends cpsw_ale_params with a bus_freq field and enables ALE HW
auto-ageing for the AM65x CPSW2G ALE version.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: ethernet: ti: ale: make usage of ale dev_id mandatory
Since all existing drivers have been updated to use the ALE dev_id, its
usage can be made mandatory, and cpsw_ale_create() can be updated to use
the "features" property from the ALE static configuration.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: ethernet: ti: am65-cpsw: use dev_id for ale configuration
The previous patch introduced the possibility of selecting the CPSW ALE by
using the ALE dev_id identifier. Switch the TI AM65x/J721E CPSW NUSS driver
to use dev_id.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: netcp: ethss: use dev_id for ale configuration
The previous patch introduced the possibility of selecting the CPSW ALE by
using the ALE dev_id identifier. Switch the TI Keystone 2 NETCP driver to
use dev_id and clean up by removing the "ale_entries" configuration code.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: ethernet: ti: cpsw: use dev_id for ale configuration
The previous patch introduced the possibility of selecting the CPSW ALE by
using the ALE dev_id identifier. Switch the TI cpsw driver to use
dev_id="cpsw" and clean up by removing the "ale_entries" configuration code.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Both existing and newly introduced CPSW ALE versions have differences in
supported features and ALE table formats. This is especially true for the
recent AM65x/J721E/J7200 SoCs and the future AM64x, which support features
like auto-aging, classifiers, link aggregation, additional HW filtering,
etc.
The existing ALE configuration interface is not practical in terms of adding
new features and requires consumers to program a lot of static parameters.
Any attempt to add new options would cause endless adding and maintaining of
different combinations of flags and options.
Since the CPSW ALE configuration is static and fixed per SoC (or set of
SoCs), it is reasonable to add support for static ALE configurations inside
the ALE module. This patch adds a static ALE configuration table for
different ALE versions and provides an option for consumers to select the
required ALE configuration by providing the ALE const char *dev_id
identifier.
This feature is not enabled by default until the existing CPSW drivers are
modified by follow-up patches.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 12 Sep 2020 00:30:43 +0000 (17:30 -0700)]
Merge branch 'DSA-tag_8021q-cleanup'
Vladimir Oltean says:
====================
DSA tag_8021q cleanup
This small series tries to consolidate the VLAN handling in DSA a little
bit. It reworks tag_8021q to be minimally invasive to the dsa_switch_ops
structure. This makes the rest of the code a bit easier to follow.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
dsa_slave_vlan_rx_add_vid   dsa_port_setup_8021q_tagging
            |                            |
            |                            |
            |              +-------------+
            |              |
            v              v
      dsa_port_vid_add          dsa_slave_port_obj_add
            |                            |
            +---------+        +---------+
                      |        |
                      v        v
                 dsa_port_vlan_add
Now that tag_8021q has its own ops structure, it no longer relies on
dsa_port_vid_add, and therefore on the dsa_switch_ops to install its
VLANs.
So dsa_port_vid_add now only has one single caller. So we can simplify
the call graph to what it was before, aka:
dsa_slave_vlan_rx_add_vid       dsa_slave_port_obj_add
            |                            |
            +---------+        +---------+
                      |        |
                      v        v
                 dsa_port_vlan_add
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Thu, 10 Sep 2020 16:48:56 +0000 (19:48 +0300)]
net: dsa: tag_8021q: add a context structure
While working on another tag_8021q driver implementation, some things
became apparent:
- It is not mandatory for a DSA driver to offload the tag_8021q VLANs by
using the VLAN table per se. For example, it can add custom TCAM rules
that simply encapsulate RX traffic, and redirect & decapsulate rules
for TX traffic. For such a driver, it makes no sense to receive the
tag_8021q configuration through the same callback as it receives the
VLAN configuration from the bridge and the 8021q modules.
- Currently, sja1105 (the only tag_8021q user) sets a
priv->expect_dsa_8021q variable to distinguish between the bridge
calling, and tag_8021q calling. That can be improved, to say the
least.
- The crosschip bridging operations are, in fact, stateful already. The
list of crosschip_links must be kept by the caller and passed to the
relevant tag_8021q functions.
So it would be nice if the tag_8021q configuration was more
self-contained. This patch attempts to do that.
Create a struct dsa_8021q_context which encapsulates a struct
dsa_switch, and has 2 function pointers for adding and deleting a VLAN.
These will replace the previous channel to the driver, which was through
the .port_vlan_add and .port_vlan_del callbacks of dsa_switch_ops.
Also put the list of crosschip_links into this dsa_8021q_context.
Drivers that don't support cross-chip bridging can simply omit to
initialize this list, as long as they don't call any cross-chip function.
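A hedged sketch of the resulting structures (close to, but not guaranteed
identical with, the final include/linux/dsa/8021q.h):

struct dsa_8021q_ops {
	int (*vlan_add)(struct dsa_switch *ds, int port, u16 vid, u16 flags);
	int (*vlan_del)(struct dsa_switch *ds, int port, u16 vid);
};

struct dsa_8021q_context {
	const struct dsa_8021q_ops *ops;
	struct dsa_switch *ds;
	struct list_head crosschip_links;
};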
The sja1105_vlan_add and sja1105_vlan_del functions are refactored into
a smaller sja1105_vlan_add_one, which now has 2 entry points:
- sja1105_vlan_add, from struct dsa_switch_ops
- sja1105_dsa_8021q_vlan_add, from the tag_8021q ops
But even this change is fairly trivial. It just reflects the fact that
for sja1105, the VLANs from these 2 channels end up in the same hardware
table. However that is not necessarily true in the general sense (and
that's the reason for making this change).
The rest of the patch is mostly plain refactoring of "ds" -> "ctx". The
dsa_8021q_context structure needs to be propagated because adding a VLAN
is now done through the ops function pointers inside of it.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Thu, 10 Sep 2020 16:48:55 +0000 (19:48 +0300)]
net: dsa: tag_8021q: setup tagging via a single function call
There is no point in calling dsa_port_setup_8021q_tagging for each
individual port. Additionally, it will become more difficult to do that
when we have a context structure for tag_8021q (next patch). So
refactor this now.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Thu, 10 Sep 2020 16:48:54 +0000 (19:48 +0300)]
net: dsa: tag_8021q: include missing refcount.h
The previous assumption was that the caller would already have this
header file included.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
crypto/chcr: move nic TLS functionality to drivers/net
This patch moves the complete NIC TLS offload (kTLS) code from the crypto
directory to the drivers/net/ethernet/chelsio/inline_crypto/ch_ktls
directory. NIC TLS is made a separate ULD of cxgb4.
Signed-off-by: Rohit Maheshwari <rohitm@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 12 Sep 2020 00:15:22 +0000 (17:15 -0700)]
Merge branch 'sfc-encap-offloads-on-EF10'
Edward Cree says:
====================
sfc: encap offloads on EF10
EF10 NICs from the 8000 series onwards support TX offloads (checksumming,
TSO) on VXLAN- and NVGRE-encapsulated packets. This series adds driver
support for these offloads.
Changes from v1:
* Fix 'no TXQ of type' error handling in patch #1 (and clear up the
misleading comment that inspired the wrong version)
* Add comment in patch #5 explaining what the deal with TSOv3 is
====================
Signed-off-by: David S. Miller <davem@davemloft.net>