Brett Creeley [Thu, 2 Dec 2021 16:38:43 +0000 (08:38 -0800)]
ice: Refactor vf->port_vlan_info to use ice_vlan
The current vf->port_vlan_info variable is a packed u16 that contains
the port VLAN ID and QoS/prio value. This is fine, but changes are
incoming that allow for an 802.1ad port VLAN. Add flexibility by
changing the vf->port_vlan_info member to be an ice_vlan structure.
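For illustration, the packed u16 roughly follows the 802.1Q TCI layout, so the old code had to mask and shift (a sketch using the generic VLAN macros from linux/if_vlan.h, not necessarily the exact ice code):

    /* Old representation: VID and QoS packed into one u16 */
    u16 vid = vf->port_vlan_info & VLAN_VID_MASK;                       /* bits 0-11  */
    u8 qos = (vf->port_vlan_info & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;  /* bits 13-15 */

With an ice_vlan structure, the VID, priority, and (later) TPID become separately addressable fields instead of bitfields in a u16.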
Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Brett Creeley [Thu, 2 Dec 2021 16:38:42 +0000 (08:38 -0800)]
ice: Introduce ice_vlan struct
Add a new struct for VLAN-related information. Currently this holds the
VLAN ID and priority values, but it will be expanded to hold the TPID
value. This reduces the changes necessary if any other values are added
in the future. Remove the action argument from these calls as it's
always ICE_FWD_VSI.
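A minimal sketch of such a structure (field names assumed for illustration; the TPID field is added by a later patch):

    struct ice_vlan {
        u16 vid;   /* VLAN ID */
        u8 prio;   /* 802.1p priority / QoS */
    };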
Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Brett Creeley [Thu, 2 Dec 2021 16:38:41 +0000 (08:38 -0800)]
ice: Add new VSI VLAN ops
Incoming changes to support 802.1Q and/or 802.1ad VLAN filtering and
offloads require more flexibility when configuring VLANs. The VSI VLAN
interface will allow flexibility for configuring VLANs for all VSI
types. Add new files to separate the VSI VLAN ops and move functions to
make the code more organized.
Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Brett Creeley [Thu, 2 Dec 2021 16:38:40 +0000 (08:38 -0800)]
ice: Add helper function for adding VLAN 0
There are multiple places where VLAN 0 is being added. Create a helper
function to be called from all of them, in order to avoid duplicated
code and to minimize changes as the implementation is expanded to
support double VLAN.
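A sketch of what such a helper might look like (name and signature assumed for illustration, not the literal ice code):

    /* Add the implicit VLAN 0 filter so untagged and priority-tagged
     * traffic keeps flowing; callers no longer open-code this. */
    static int ice_vsi_add_vlan_zero(struct ice_vsi *vsi)
    {
        struct ice_vlan vlan = { .vid = 0, .prio = 0 };

        return ice_vsi_add_vlan(vsi, &vlan);
    }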
Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Brett Creeley [Thu, 2 Dec 2021 16:38:39 +0000 (08:38 -0800)]
ice: Refactor spoofcheck configuration functions
Add functions to configure Tx VLAN antispoof based on iproute
configuration and/or VLAN mode and VF driver support. This is needed
later so the driver can control when spoof checking can be
configured. Also, add
functions that can be used to enable and disable MAC and VLAN
spoofcheck. Move spoofchk configuration during VSI setup into the
SR-IOV initialization path and into the post VSI rebuild flow for VF
VSIs.
Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Wang Qing [Wed, 9 Feb 2022 08:39:19 +0000 (00:39 -0800)]
net: ethernet: cavium: use div64_u64() instead of do_div()
do_div() does a 64-by-32 division. When the divisor is u64, do_div()
truncates it to 32 bits; this means the divisor can test as non-zero
yet be truncated to zero for the division.
Fix the do_div.cocci warning:
do_div() does a 64-by-32 division, please consider using div64_u64 instead.
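A minimal illustration of the difference (both helpers come from linux/math64.h / asm/div64.h):

    u64 total = 1000;
    u64 divisor = 0x100000000ULL;   /* non-zero, but needs more than 32 bits */

    /* do_div() divides in place by a 32-bit divisor; the cast below
     * truncates the divisor to 0 -- a divide-by-zero in disguise. */
    do_div(total, (u32)divisor);

    /* div64_u64() performs a full 64-by-64 division. */
    total = div64_u64(total, divisor);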
Signed-off-by: Wang Qing <wangqing@vivo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Po Liu [Wed, 9 Feb 2022 12:33:02 +0000 (20:33 +0800)]
net: enetc: allocate command BD ring data memory in a standalone function
Separate the CBDR data memory allocation into a standalone function.
This makes it convenient for other parts of the driver to use, for
example the ENETC QOS part.
Reported-and-suggested-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Po Liu <po.liu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 9 Feb 2022 13:15:35 +0000 (13:15 +0000)]
Merge branch 'dpaa2-eth-sw-TSO'
Ioana Ciornei says:
====================
dpaa2-eth: add support for software TSO
This series adds support for driver level TSO in the dpaa2-eth driver.
The first 5 patches lay the groundwork for the actual feature:
rearranging some variable declarations, cleaning up the interaction
with the S/G Table buffer cache, etc.
The 6th patch adds the actual driver-level software TSO support by using
the usual tso_build_hdr()/tso_build_data() APIs and creating the S/G FDs.
With this patch set we can see the following improvement in a TCP flow
running on a single A72@2.2GHz core of the LX2160A SoC: 6.38 Gbit/s
before, 8.48 Gbit/s after.
Ioana Ciornei [Wed, 9 Feb 2022 09:23:35 +0000 (11:23 +0200)]
soc: fsl: dpio: read the consumer index from the cache inhibited area
Once we added support to dpaa2-eth for driver-level software TSO, we
observed the following situation: if the EQCR CI (consumer index) is
read from the cache-enabled area, we sometimes end up with a computed
number of available enqueue entries bigger than the size of the ring.
This eventually leads to the same FD being enqueued multiple times,
which makes the same FD end up on the Tx confirmation path more than
once and the same skb be freed twice.
Just read the consumer index from the cache inhibited area so that we
avoid this situation.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:34 +0000 (11:23 +0200)]
dpaa2-eth: add support for software TSO
This patch adds support for driver-level TSO in the dpaa2-eth driver
using the TSO API.
There is not much to say about this specific implementation. We use the
usual tso_build_hdr() and tso_build_data() to create each data segment,
and we create an array of S/G FDs where the first S/G entry references
the header data and the remaining ones the data portion.
For the S/G Table buffer we use the same cache of buffers used in the
other, non-GSO cases - dpaa2_eth_sgt_get() and dpaa2_eth_sgt_recycle().
We cannot keep a DMA-coherent buffer for all the TSO headers because the
DPAA2 architecture does not work in a ring-based fashion, so we just
allocate a buffer each time.
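For reference, the core of a software-TSO Tx path built on this API looks roughly like the sketch below (alloc_header_buffer() and add_sg_entry() are hypothetical placeholders for the driver-specific header and S/G FD handling described above):

    #include <net/tso.h>

    struct tso_t tso;
    int hdr_len, total_len, data_left, size;

    hdr_len = tso_start(skb, &tso);      /* returns the L2+L3+L4 header length */
    total_len = skb->len - hdr_len;
    while (total_len > 0) {
        char *hdr = alloc_header_buffer();   /* hypothetical, per-segment */

        data_left = min_t(int, skb_shinfo(skb)->gso_size, total_len);
        total_len -= data_left;

        /* write this segment's headers; fixes up IP id, TCP seq/flags */
        tso_build_hdr(skb, hdr, &tso, data_left, total_len == 0);

        while (data_left > 0) {
            size = min_t(int, tso.size, data_left);
            add_sg_entry(tso.data, size);    /* hypothetical S/G entry */
            data_left -= size;
            tso_build_data(skb, &tso, size); /* advance the payload cursor */
        }
        /* ... build and enqueue one S/G FD per segment ... */
    }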
Even with these limitations we get the following improvement in TCP
termination on the LX2160A SoC, on a single A72 core running at 2.2GHz.
before: 6.38Gbit/s
after: 8.48Gbit/s
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:33 +0000 (11:23 +0200)]
dpaa2-eth: work with an array of FDs
Up until now, the __dpaa2_eth_tx function used a single FD on the stack
to construct the structure to be enqueued. Since we are now preparing
the groundwork to add support for TSO done in software at the driver
level, the same function needs to work with an array of FDs and enqueue
as many as the build_*_fd functions create.
Make the necessary adjustments in order to do this. These include:
keeping an array of FDs in a percpu structure, cleaning up the necessary
FDs before populating it, and then retrying the enqueue process until
all the generated FDs are enqueued or the maximum number of retries is
reached.
This patch does not change the fact that only a single FD will result
from a __dpaa2_eth_tx call, but rather just makes the changes necessary
for the next patch.
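The retry logic described above, as a rough sketch (symbol names are illustrative, not the actual dpaa2-eth ones):

    int enqueued = 0, retries = 0, num_enqueued, err;

    while (enqueued < num_fds && retries < MAX_ENQUEUE_RETRIES) {
        num_enqueued = 0;
        err = enqueue_fds(priv, &fds[enqueued], num_fds - enqueued,
                          &num_enqueued);    /* hypothetical helper */
        if (err == -EBUSY) {
            retries++;
            continue;
        }
        enqueued += num_enqueued;
    }
    /* any FDs not enqueued at this point are cleaned up and dropped */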
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:32 +0000 (11:23 +0200)]
dpaa2-eth: use the S/G table cache also for the normal S/G path
Instead of allocating memory for an S/G table each time a nonlinear skb
is processed, and then freeing it on the Tx confirmation path, use the
S/G table cache in order to reuse the memory.
For this to work we have to change the size of the cached buffers so
that they can hold the maximum number of scatterlist entries.
Other than that, each allocate/free call is replaced by a call to the
dpaa2_eth_sgt_get/dpaa2_eth_sgt_recycle functions, introduced in the
previous patch.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:31 +0000 (11:23 +0200)]
dpaa2-eth: extract the S/G table buffer cache interaction into functions
In certain circumstances, the dpaa2-eth driver uses a buffer cache for
the S/G tables needed in case of an S/G FD. At the moment, the
interaction with the cache is open-coded and cannot be reused easily.
Add two new functions - dpaa2_eth_sgt_get and dpaa2_eth_sgt_recycle -
which help with code reusability.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ioana Ciornei [Wed, 9 Feb 2022 09:23:29 +0000 (11:23 +0200)]
dpaa2-eth: rearrange variable declaration in __dpaa2_eth_tx
In the next patches we'll be moving things around in the mentioned
function and also adding some new variable declarations. Before all
this, clean up the variable declaration order.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 9 Feb 2022 13:02:33 +0000 (13:02 +0000)]
Merge branch 'octeontx2-af-priority-flow-control'
Hariprasad Kelam says:
====================
Priority flow control support for RVU netdev
Under network congestion, instead of pausing all traffic on a link,
PFC allows the user to selectively pause traffic according to its
class. This series of patches adds PFC support for the RVU netdev
drivers.
Patch 1 adds support to disable pause frames by default, since
with PFC the user can enable either PFC or 802.3x pause frames.
Patches 2 & 3 add resource management support for flow control
and configure the necessary registers for PFC.
Patch 4 adds dcb ops registration for the netdev drivers.
Data Center Bridging is designed to eliminate packet loss due to
queue overflow by adding enhancements to Ethernet, such as
priority flow control. This patch adds support for management
of Priority Flow Control (PFC) on Octeontx2 and CN10K interfaces.
To enable PFC for all priorities
dcb pfc set dev eth0 prio-pfc all:on/off
To enable PFC on selected priorities
dcb pfc set dev eth0 prio-pfc 0:on/off 1:on/off ..7:on/off
With the ntuple commands, the user can map a priority to receive queues.
On queue overflow, NIX will assert backpressure such that PFC pause
frames are generated with the mapped priority.
To map priority 7 to Queue 1
ethtool -U eth0 flow-type ether dst xx:xx:xx:xx:xx:xx vlan 0xe00a m 0x1fff queue 1
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The CN10K MAC block (RPM) and the Octeontx2 MAC block (CGX) both support
PFC flow control and 802.3x flow control pause frames.
Each MAC block supports a maximum of 4 LMACs, and the AF driver assigns
the same (MAC, LMAC) pair to a PF and its VFs. As a PF and its VFs share
the same (MAC, LMAC) pair, we need resource management to address the
scenarios below:
1. Keep PFC and 802.3x pause frames mutually exclusive.
2. Reject a request to disable flow control if another PF or VF has
enabled it.
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
octeontx2-af: Priority flow control configuration support
The priority-based flow control (802.1Qbb) mechanism is similar to
Ethernet pause frames (802.3x), but instead of pausing all traffic on a
link, PFC allows the user to selectively pause traffic according to its
class. The Octeontx2 MAC block (CGX) and the CN10K MAC block (RPM) both
support PFC. As the upper-layer mbox handler is the same for both MACs,
this patch configures PFC by calling the appropriate callbacks.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
octeontx2-af: Don't enable Pause frames by default
In the current implementation, 802.3x pause frames are enabled by
default. As the CGX and RPM blocks also support PFC (priority flow
control), instead of the driver enabling one of the two, enable either
one upon request from a PF or its VFs.
Also add support to disable pause frames on driver unbind.
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 9 Feb 2022 12:00:11 +0000 (12:00 +0000)]
Merge branch 'MCTP-tag-control-interface'
Jeremy Kerr says:
====================
MCTP tag control interface
This series implements a small interface for userspace-controlled
message tag allocation for the MCTP protocol. Rather than leaving the
kernel to allocate per-message tag values, userspace can explicitly
allocate (and release) message tags through two new ioctls:
SIOCMCTPALLOCTAG and SIOCMCTPDROPTAG.
In order to do this, we first introduce some minor changes to the tag
handling, including a couple of new tests for the route input paths.
As always, any comments/queries/etc are most welcome.
v2:
- make mctp_lookup_prealloc_tag static
- minor checkpatch formatting fixes
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Matt Johnston [Wed, 9 Feb 2022 04:05:57 +0000 (12:05 +0800)]
mctp: Add SIOCMCTP{ALLOC,DROP}TAG ioctls for tag control
This change adds a couple of new ioctls for mctp sockets:
SIOCMCTPALLOCTAG and SIOCMCTPDROPTAG. These ioctls provide facilities
for explicit allocation / release of tags, overriding the automatic
allocate-on-send/release-on-reply and timeout behaviours. This allows
userspace more control over messages that may not fit a simple
request/response model.
In order to indicate a pre-allocated tag to the sendmsg() syscall, we
introduce a new flag to the struct sockaddr_mctp.smctp_tag value:
MCTP_TAG_PREALLOC.
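A userspace sketch of the intended flow (based on the uapi additions described here; the peer EID, message type, and buffer are illustrative, and error handling is omitted):

    #include <sys/socket.h>
    #include <sys/ioctl.h>
    #include <linux/mctp.h>

    int sd = socket(AF_MCTP, SOCK_DGRAM, 0);
    char buf[64] = { 0 };

    struct mctp_ioc_tag_ctl ctl = {
        .peer_addr = 8,                  /* remote EID */
    };

    /* Allocate a tag for this peer; the kernel returns it in ctl.tag
     * with MCTP_TAG_OWNER and MCTP_TAG_PREALLOC set. */
    ioctl(sd, SIOCMCTPALLOCTAG, &ctl);

    struct sockaddr_mctp addr = {
        .smctp_family = AF_MCTP,
        .smctp_addr.s_addr = 8,
        .smctp_type = 1,                 /* illustrative message type */
        .smctp_tag = ctl.tag,            /* use the preallocated tag */
    };
    sendto(sd, buf, sizeof(buf), 0, (struct sockaddr *)&addr, sizeof(addr));

    /* ... receive as many responses carrying this tag as needed ... */

    ioctl(sd, SIOCMCTPDROPTAG, &ctl);    /* release the tag when done */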
Additional changes from Jeremy Kerr <jk@codeconstruct.com.au>.
Contains a fix that was: Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Matt Johnston <matt@codeconstruct.com.au> Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
Jeremy Kerr [Wed, 9 Feb 2022 04:05:54 +0000 (12:05 +0800)]
mctp: tests: Add key state tests
This change adds a few more tests to check the key/tag lookups on route
input. We add a specific entry to the keys lists, route a packet with
specific header values, and check for key match/mismatch.
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
Jeremy Kerr [Wed, 9 Feb 2022 04:05:53 +0000 (12:05 +0800)]
mctp: tests: Rename FL_T macro to FL_TO
This is a definition for the tag-owner flag, which has TO as a standard
abbreviation. We'll want to add a helper for the actual tag value in a
future change.
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 9 Feb 2022 11:57:54 +0000 (11:57 +0000)]
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
40GbE Intel Wired LAN Driver Updates 2022-02-08
Joe Damato says:
This patch set makes several updates to the i40e driver stats collection
and reporting code to help users of i40e get a better sense of how the
driver is performing and interacting with the rest of the kernel.
These patches include some new stats (like waived and busy) which were
inspired by other drivers that track stats using the same nomenclature.
The new stats and an existing stat, rx_reuse, are now accessible with
ethtool to make harvesting this data more convenient for users.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Tianyu Lan [Tue, 8 Feb 2022 14:26:52 +0000 (09:26 -0500)]
Netvsc: Call hv_unmap_memory() in netvsc_device_remove()
netvsc_device_remove() calls vunmap(), which should not be called in
interrupt context. The current code calls hv_unmap_memory() in
free_netvsc_device(), which is an RCU callback and may be called in
interrupt context. This will trigger BUG_ON(in_interrupt()) in vunmap().
Fix it by moving hv_unmap_memory() to netvsc_device_remove().
Fixes: 846da38de0e8 ("net: netvsc: Add Isolation VM support for netvsc driver") Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Document the Gigabit Ethernet IP found on the RZ/G2UL SoC. The Gigabit
Ethernet Interface is identical to the one found on the RZ/G2L SoC. No
driver changes are required, as the generic compatible string
"renesas,rzg2l-gbeth" will be used as a fallback.
Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Document the Gigabit Ethernet IP found on the RZ/V2L SoC. The Gigabit
Ethernet Interface is identical to the one found on the RZ/G2L SoC. No
driver changes are required, as the generic compatible string
"renesas,rzg2l-gbeth" will be used as a fallback.
Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com> Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Acked-by: Rob Herring <robh@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: David S. Miller <davem@davemloft.net>
In this series, I made network namespace deletions more scalable,
by 4x on the little benchmark described in this cover letter.
- Remove a bottleneck in ipv6 addrconf, by replacing a global
hash table with a per-netns one.
- Rework many (struct pernet_operations)->exit() handlers into
exit_batch() ones. This removes many rtnl acquisitions,
and gives cleanup_net() a kind of priority over rtnl
ownership.
Tested on a host with 24 cpus (48 HT)
Test script:
for nr in {1..10}
do
(for i in {1..10000}; do unshare -n /bin/bash -c "ifconfig lo up"; done) &
done
wait
for i in {1..10}
do
sleep 1
echo 3 >/proc/sys/vm/drop_caches
grep net_namespace /proc/slabinfo
done
Before: we can see the host struggling to clean up the netns instances, even after there are no new creations.
Memory cost is high, because each netns consumes a good amount of memory.
Eric Dumazet [Tue, 8 Feb 2022 04:50:37 +0000 (20:50 -0800)]
bonding: switch bond_net_exit() to batch mode
cleanup_net() is competing with other rtnl users.
Batching bond_net_exit() factorizes all rtnl acquisitions
into a single one, giving cleanup_net() a chance
to progress much faster, while holding rtnl a bit longer.
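The general shape of such a conversion (a sketch; bond_destroy_all() is a hypothetical stand-in for the per-netns teardown):

    static void __net_exit bond_net_exit_batch(struct list_head *net_list)
    {
        struct net *net;
        LIST_HEAD(kill_list);

        rtnl_lock();                /* one acquisition for the whole batch */
        list_for_each_entry(net, net_list, exit_list)
            bond_destroy_all(net, &kill_list);   /* hypothetical helper */
        unregister_netdevice_many(&kill_list);
        rtnl_unlock();
    }

    static struct pernet_operations bond_net_ops = {
        .init       = bond_net_init,
        .exit_batch = bond_net_exit_batch,       /* instead of .exit */
        .id         = &bond_net_id,
        .size       = sizeof(struct bond_net),
    };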
Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jay Vosburgh <j.vosburgh@gmail.com> Cc: Veaceslav Falico <vfalico@gmail.com> Cc: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:36 +0000 (20:50 -0800)]
can: gw: switch cangw_pernet_exit() to batch mode
cleanup_net() is competing with other rtnl users.
Avoiding acquiring rtnl for each netns before calling
cgw_remove_all_jobs() gives cleanup_net() a chance
to progress much faster, while holding rtnl a bit longer.
Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Oliver Hartkopp <socketcan@hartkopp.net> Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:35 +0000 (20:50 -0800)]
ipmr: introduce ipmr_net_exit_batch()
cleanup_net() is competing with other rtnl users.
Avoiding acquiring rtnl for each netns before calling
ipmr_rules_exit() gives cleanup_net() a chance
to progress much faster, while holding rtnl a bit longer.
Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:34 +0000 (20:50 -0800)]
ip6mr: introduce ip6mr_net_exit_batch()
cleanup_net() is competing with other rtnl users.
Avoiding acquiring rtnl for each netns before calling
ip6mr_rules_exit() gives cleanup_net() a chance
to progress much faster, while holding rtnl a bit longer.
Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:33 +0000 (20:50 -0800)]
ipv6: change fib6_rules_net_exit() to batch mode
cleanup_net() is competing with other rtnl users.
fib6_rules_net_exit() seems a good candidate for exit_batch(),
as this gives cleanup_net() a chance to progress much faster,
while holding rtnl a bit longer.
Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Tue, 8 Feb 2022 04:50:29 +0000 (20:50 -0800)]
ipv6/addrconf: use one delayed work per netns
The next step for using a per-netns inet6_addr_lst
is to have a per-netns work item that ultimately
calls addrconf_verify_rtnl() and addrconf_verify()
with a new 'struct net *' argument.
Everything is still using the global inet6_addr_lst[] table.
Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Joe Damato [Fri, 17 Dec 2021 19:35:19 +0000 (11:35 -0800)]
i40e: Add a stat for tracking busy rx pages
In some cases, pages cannot be reused by i40e because the page is busy. Add
a counter for this event.
Busy page count is accessible via ethtool.
Signed-off-by: Joe Damato <jdamato@fastly.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Joe Damato [Fri, 17 Dec 2021 19:35:18 +0000 (11:35 -0800)]
i40e: Add a stat for tracking pages waived
In some cases, pages cannot be reused because they are not associated with
the correct NUMA zone. Knowing how often pages are waived helps users to
understand the interaction between the driver's memory usage and their
system.
Pass rx_stats through to i40e_can_reuse_rx_page to allow tracking when
pages are waived.
The page waive count is accessible via ethtool.
Signed-off-by: Joe Damato <jdamato@fastly.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Joe Damato [Fri, 17 Dec 2021 19:35:17 +0000 (11:35 -0800)]
i40e: Add a stat tracking new RX page allocations
Add a counter for new page allocations in the i40e RX path. This stat is
accessible with ethtool.
Signed-off-by: Joe Damato <jdamato@fastly.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Joe Damato [Fri, 17 Dec 2021 19:35:16 +0000 (11:35 -0800)]
i40e: Aggregate and export RX page reuse stat
RX page reuse was already being tracked by the i40e driver per RX ring.
Aggregate the counts and make them accessible via ethtool.
Signed-off-by: Joe Damato <jdamato@fastly.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Joe Damato [Fri, 17 Dec 2021 19:35:15 +0000 (11:35 -0800)]
i40e: Remove rx page reuse double count
Page reuse was being tracked from two locations:
- i40e_reuse_rx_page (via i40e_clean_rx_irq), and
- i40e_alloc_mapped_page
Remove the double count and only count reuse from i40e_alloc_mapped_page
when the page is about to be reused.
Signed-off-by: Joe Damato <jdamato@fastly.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
====================
inet: Separate DSCP from ECN bits using new dscp_t type
The networking stack currently doesn't clearly distinguish between DSCP
and ECN bits. The entire DSCP+ECN bits are stored in u8 variables (or
structure fields), and each part of the stack handles them in its own
way, using different macros. This has created several bugs in the past
and some uncommon code paths are still unfixed.
Such bugs generally manifest as invalid route selection, because ECN
bits interfere with FIB route and rule lookups (more details in the
LPC 2021 talk[1] and in the RFC of this series[2]).
This patch series aims at preventing the introduction of such bugs (and
detecting existing ones), by introducing a dscp_t type, representing
"sanitised" DSCP values (that is, with no ECN information), as opposed
to plain u8 values that contain both DSCP and ECN information. dscp_t
makes it clear for the reader what we're working on, and Sparse can
flag invalid interactions between dscp_t and plain u8.
This series converts only a few variables and structures:
* Patch 1 converts the tclass field of struct fib6_rule. It
effectively forbids the use of ECN bits in the tos/dsfield option
of ip -6 rule. Rules now match packets solely based on their DSCP
bits, so ECN doesn't influence the result any more. This contrasts
with the previous behaviour where all 8 bits of the Traffic Class
field were used. It is believed that this change is acceptable as
matching ECN bits wasn't usable for IPv4, so only IPv6-only
deployments could be depending on it. Also the previous behaviour
made DSCP-based ip6-rules fail for packets with both a DSCP and an
ECN mark, which is another reason why any such deployment is unlikely.
* Patch 2 converts the tos field of struct fib4_rule. This one too
effectively forbids defining ECN bits, this time in ip -4 rule.
Before that, setting ECN bit 1 was accepted, while ECN bit 0 was
rejected. But even when accepted, the rule would never match, as
the packets would have their ECN bits cleared before doing the
rule lookup.
* Patch 3 converts the fc_tos field of struct fib_config. This is
equivalent to patch 2, but for IPv4 routes. Routes using a
tos/dsfield option with any ECN bit set is now rejected. Before
this patch, they were accepted but, as with ip4 rules, these routes
couldn't match any packet, since their ECN bits are cleared before
the lookup.
* Patch 4 converts the fa_tos field of struct fib_alias. This one is
pure internal u8 to dscp_t conversion. While patches 1-3 had user
facing consequences, this patch shouldn't have any side effect and
is there to give an overview of what future conversion patches will
look like. Conversions are quite mechanical, but imply some code
churn, which is the price for the extra clarity and the possibility of
type checking.
To summarise, all the behaviour changes required for the dscp_t type
approach to work should be contained in patches 1-3. These changes are
edge cases of ip-route and ip-rule that don't currently work properly.
So they should be safe. Also, a kernel selftest is added for each of
them.
Finally, this work also paves the way for allowing the usage of the 3
high order DSCP bits in IPv4 (a few call paths already handle them, but
in general the stack clears them before IPv4 rule and route lookups).
References:
[1] LPC 2021 talk:
- https://linuxplumbersconf.org/event/11/contributions/943/
- Direct link to slide deck:
https://linuxplumbersconf.org/event/11/contributions/943/attachments/901/1780/inet_tos_lpc2021.pdf
[2] RFC version of this series:
- https://lore.kernel.org/netdev/cover.1638814614.git.gnault@redhat.com/
====================
Guillaume Nault [Fri, 4 Feb 2022 13:58:19 +0000 (14:58 +0100)]
ipv4: Use dscp_t in struct fib_alias
Use the new dscp_t type to replace the fa_tos field of fib_alias. This
ensures ECN bits are ignored and makes the field compatible with the
fc_dscp field of struct fib_config.
Converting old *tos variables and fields to dscp_t allows sparse to
flag incorrect uses of DSCP and ECN bits. This patch is entirely about
type annotation and shouldn't change any existing behaviour.
Signed-off-by: Guillaume Nault <gnault@redhat.com> Acked-by: David Ahern <dsahern@kernel.org> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Guillaume Nault [Fri, 4 Feb 2022 13:58:16 +0000 (14:58 +0100)]
ipv4: Reject routes specifying ECN bits in rtm_tos
Use the new dscp_t type to replace the fc_tos field of fib_config, to
ensure IPv4 routes aren't influenced by ECN bits when configured with
non-zero rtm_tos.
Before this patch, IPv4 routes specifying an rtm_tos with some of the
ECN bits set were accepted. However they wouldn't work (never match) as
IPv4 normally clears the ECN bits with IPTOS_RT_MASK before doing a FIB
lookup (although a few buggy code paths don't).
After this patch, IPv4 routes specifying an rtm_tos with any ECN bit
set are rejected.
Note: IPv6 routes ignore rtm_tos altogether, any rtm_tos is accepted,
but treated as if it were 0.
Signed-off-by: Guillaume Nault <gnault@redhat.com> Acked-by: David Ahern <dsahern@kernel.org> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Guillaume Nault [Fri, 4 Feb 2022 13:58:14 +0000 (14:58 +0100)]
ipv4: Stop taking ECN bits into account in fib4-rules
Use the new dscp_t type to replace the tos field of struct fib4_rule,
so that fib4-rules consistently ignore ECN bits.
Before this patch, fib4-rules did accept rules with the high order ECN
bit set (but not the low order one). Also, it relied on its callers
masking the ECN bits of ->flowi4_tos to prevent those from influencing
the result. This was brittle and a few call paths still do the lookup
without masking the ECN bits first.
After this patch fib4-rules only compare the DSCP bits. ECN can't
influence the result anymore, even if the caller didn't mask these
bits. Also, fib4-rules now must have both ECN bits cleared or they will
be rejected.
Signed-off-by: Guillaume Nault <gnault@redhat.com> Acked-by: David Ahern <dsahern@kernel.org> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Guillaume Nault [Fri, 4 Feb 2022 13:58:11 +0000 (14:58 +0100)]
ipv6: Define dscp_t and stop taking ECN bits into account in fib6-rules
Define a dscp_t type and its appropriate helpers that ensure ECN bits
are not taken into account when handling DSCP.
Use this new type to replace the tclass field of struct fib6_rule, so
that fib6-rules don't get influenced by ECN bits anymore.
Before this patch, fib6-rules didn't make any distinction between the
DSCP and ECN bits. Therefore, rules specifying a DSCP (tos or dsfield
options in iproute2) stopped working as soon as a packet had at least
one of its ECN bits set (as a workaround, one could create four rules
for each DSCP value to match, one for each possible ECN value).
After this patch fib6-rules only compare the DSCP bits. ECN doesn't
influence the result anymore. Also, fib6-rules now must have the ECN
bits cleared or they will be rejected.
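The new type and its helpers look roughly like this (abridged sketch of the header added by this patch):

    /* A __bitwise u8, so Sparse flags any mixing of dscp_t with
     * plain dsfield/tos values. */
    typedef u8 __bitwise dscp_t;

    #define INET_DSCP_MASK 0xfc    /* high 6 bits: DSCP; low 2 bits: ECN */

    static inline dscp_t inet_dsfield_to_dscp(__u8 dsfield)
    {
        return (__force dscp_t)(dsfield & INET_DSCP_MASK);  /* drop ECN */
    }

    static inline __u8 inet_dscp_to_dsfield(dscp_t dscp)
    {
        return (__force __u8)dscp;
    }

    static inline bool inet_validate_dscp(__u8 val)
    {
        return !(val & ~INET_DSCP_MASK);  /* reject any set ECN bit */
    }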
Signed-off-by: Guillaume Nault <gnault@redhat.com> Acked-by: David Ahern <dsahern@kernel.org> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Yannick Vignon [Fri, 4 Feb 2022 13:55:44 +0000 (14:55 +0100)]
net: stmmac: optimize locking around PTP clock reads
Reading the PTP clock is a simple operation requiring only 3 register
reads. Under a PREEMPT_RT kernel, protecting those reads with a spin_lock
is counter-productive: if the 2nd task preempting the 1st has a higher prio
but needs to read time as well, it will require 2 context switches, which
will pretty much always be more costly than just disabling preemption for
the duration of the reads. Moreover, with the code logic recently added
to get_systime(), disabling preemption is not even required anymore:
reads and writes just need to be protected from each other, to prevent a
clock read while the clock is being updated.
Improve the above situation by replacing the PTP spinlock by a rwlock, and
using read_lock for PTP clock reads so simultaneous reads do not block
each other.
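A sketch of the resulting pattern (field and function names are illustrative):

    rwlock_t ptp_lock;    /* was a spinlock_t */
    unsigned long flags;
    u64 ns;

    /* clock reads: concurrent readers do not block each other */
    read_lock_irqsave(&ptp_lock, flags);
    ns = read_ptp_registers();            /* hypothetical: 3 register reads */
    read_unlock_irqrestore(&ptp_lock, flags);

    /* clock updates take the write lock and exclude all readers */
    write_lock_irqsave(&ptp_lock, flags);
    update_ptp_clock();                   /* hypothetical */
    write_unlock_irqrestore(&ptp_lock, flags);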
Corinna Vinschen [Wed, 19 Jan 2022 14:52:59 +0000 (15:52 +0100)]
igb: refactor XDP registration
When changing the RX ring parameters, igb uses a hack to avoid a warning
when calling xdp_rxq_info_reg via igb_setup_rx_resources: it just
clears the struct xdp_rxq_info content.
Instead, change this to unregister if we're already registered. Align
the code to the igc code.
igc_ethtool_set_ringparam() copies the igc_ring structure but neglects to
reset the xdp_rxq_info member before calling igc_setup_rx_resources().
This in turn calls xdp_rxq_info_reg() with an already registered xdp_rxq_info.
Make sure to unregister the xdp_rxq_info structure first in
igc_setup_rx_resources.
Fixes: 73f1071c1d29 ("igc: Add support for XDP_TX action") Reported-by: Lennert Buytenhek <buytenh@arista.com> Signed-off-by: Corinna Vinschen <vinschen@redhat.com> Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Dan Carpenter [Mon, 7 Feb 2022 08:24:39 +0000 (11:24 +0300)]
net: dsa: mv88e6xxx: Unlock on error in mv88e6xxx_port_bridge_join()
Call mv88e6xxx_reg_unlock(chip) before returning on this error path.
Fixes: 7af4a361a62f ("net: dsa: mv88e6xxx: Improve isolation of standalone ports") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Carpenter [Mon, 7 Feb 2022 08:22:53 +0000 (11:22 +0300)]
net: dsa: mv88e6xxx: Fix off by one in mv88e6185_phylink_get_caps()
The <= ARRAY_SIZE() needs to be < ARRAY_SIZE() to prevent an
out-of-bounds error.
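The classic pattern, for illustration (use() is a placeholder):

    size_t i;

    /* arr has ARRAY_SIZE(arr) elements; valid indices are
     * 0 .. ARRAY_SIZE(arr) - 1 */
    for (i = 0; i <= ARRAY_SIZE(arr); i++)   /* BAD: reads arr[ARRAY_SIZE(arr)] */
        use(arr[i]);

    for (i = 0; i < ARRAY_SIZE(arr); i++)    /* OK */
        use(arr[i]);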
Fixes: d4ebf12bcec4 ("net: dsa: mv88e6xxx: populate supported_interfaces and mac_capabilities") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
Yufeng Mo [Mon, 7 Feb 2022 01:44:23 +0000 (09:44 +0800)]
net: hns3: add support for TX push mode
For devices that support the TX push capability, the BD can be copied
directly to device memory. However, due to hardware restrictions, push
mode can be used only when there are no more than two BDs; otherwise,
the doorbell mode based on device memory is used.
Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Pavel Skripkin [Sun, 6 Feb 2022 18:05:16 +0000 (21:05 +0300)]
net: asix: add proper error handling of usb read errors
Syzbot once again hit an uninit-value in the asix driver. The problem is
still the same -- asix_read_cmd() reads fewer bytes than were requested
by the caller.
Since all read requests are performed via asix_read_cmd(), let's catch
USB-related errors there and add the __must_check annotation to be sure
all callers actually check the return value.
So, this patch adds a sanity check inside asix_read_cmd() that simply
checks that the number of bytes read is not less than was requested,
and adds the missing error handling of asix_read_cmd() all across the
driver code.
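A sketch of the kind of check described (do_usb_read() stands in for the actual USB control transfer; not the literal patch):

    static int __must_check asix_read_cmd(struct usbnet *dev, u8 cmd,
                                          u16 value, u16 index, u16 size,
                                          void *data, int in_pm)
    {
        int ret = do_usb_read(dev, cmd, value, index, size, data, in_pm);

        /* A short read would leave part of 'data' uninitialized, so
         * report it as an error instead of returning success. */
        if (ret >= 0 && ret < size)
            ret = -ENODATA;
        return ret;
    }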
Fixes: d9fe64e51114 ("net: asix: Add in_pm parameter") Reported-and-tested-by: syzbot+6ca9f7867b77c2d316ac@syzkaller.appspotmail.com Signed-off-by: Pavel Skripkin <paskripkin@gmail.com> Tested-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Sun, 6 Feb 2022 16:07:13 +0000 (17:07 +0100)]
r8169: factor out redundant RTL8168d PHY config functionality to rtl8168d_1_common()
rtl8168d_2_hw_phy_config() shares quite some functionality with
rtl8168d_1_hw_phy_config(), so let's factor out the common part to a
new function rtl8168d_1_common(). In addition improve the code a little.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This was fine because ip6mr_sk_done() would not reach the point of
decreasing net->ipv6.devconf_all->mc_forwarding until my patch to
ip6mr_sk_done().
To fix this without changing the struct pernet_operations ordering,
we can clear net->ipv6.devconf_dflt and net->ipv6.devconf_all
when they are freed from addrconf_exit_net().
BUG: KASAN: use-after-free in instrument_atomic_read include/linux/instrumented.h:71 [inline]
BUG: KASAN: use-after-free in atomic_read include/linux/atomic/atomic-instrumented.h:27 [inline]
BUG: KASAN: use-after-free in ip6mr_sk_done+0x11b/0x410 net/ipv6/ip6mr.c:1578
Read of size 4 at addr ffff88801ff08688 by task kworker/u4:4/963
Fixes: f2f2325ec799 ("ip6mr: ip6mr_sk_done() can exit early in common cases") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 7 Feb 2022 11:59:57 +0000 (11:59 +0000)]
Merge branch 'mlxsw-dip-sip-mangling'
Ido Schimmel says:
====================
mlxsw: Add SIP and DIP mangling support
Danielle says:
On Spectrum-2 onwards, it is possible to overwrite the SIP and DIP
addresses of an IPv4 or IPv6 packet in the ACL engine. That corresponds
to pedit munges of, respectively, the ip src and ip dst fields, and
likewise for ip6.
Offload these munges on the systems where they are supported.
Patchset overview:
Patch #1 introduces SIP_DIP_ACTION and its fields.
Patches #2-#3 add the new pedit fields and dispatch on them on
Spectrum-2 and above.
Patch #4 adds a selftest.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Sun, 6 Feb 2022 15:36:13 +0000 (17:36 +0200)]
selftests: forwarding: Add a test for pedit munge SIP and DIP
Add a test that checks that pedit adjusts source and destination
addresses of IPv4 and IPv6 packets.
Output example:
$ ./pedit_ip.sh
TEST: ping [ OK ]
TEST: ping6 [ OK ]
TEST: dev swp2 ingress pedit ip src set 198.51.100.1 [ OK ]
TEST: dev swp3 egress pedit ip src set 198.51.100.1 [ OK ]
TEST: dev swp2 ingress pedit ip dst set 198.51.100.1 [ OK ]
TEST: dev swp3 egress pedit ip dst set 198.51.100.1 [ OK ]
TEST: dev swp2 ingress pedit ip6 src set 2001:db8:2::1 [ OK ]
TEST: dev swp3 egress pedit ip6 src set 2001:db8:2::1 [ OK ]
TEST: dev swp2 ingress pedit ip6 dst set 2001:db8:2::1 [ OK ]
TEST: dev swp3 egress pedit ip6 dst set 2001:db8:2::1 [ OK ]
Signed-off-by: Danielle Ratson <danieller@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Sun, 6 Feb 2022 15:36:12 +0000 (17:36 +0200)]
mlxsw: Support FLOW_ACTION_MANGLE for SIP and DIP IPv6 addresses
Spectrum-2 supports an ACL action, SIP_DIP, which allows changing the
IPv4 and IPv6 source and destination addresses. Offload suitable
mangles to the IPv6 address change action.
Signed-off-by: Danielle Ratson <danieller@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Sun, 6 Feb 2022 15:36:11 +0000 (17:36 +0200)]
mlxsw: Support FLOW_ACTION_MANGLE for SIP and DIP IPv4 addresses
Spectrum-2 supports an ACL action, SIP_DIP, which allows changing the
IPv4 and IPv6 source and destination addresses. Offload suitable
mangles to the IPv4 address change action.
Signed-off-by: Danielle Ratson <danieller@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Danielle Ratson [Sun, 6 Feb 2022 15:36:10 +0000 (17:36 +0200)]
mlxsw: core_acl_flex_actions: Add SIP_DIP_ACTION
Add fields related to SIP_DIP_ACTION, which is used for changing the
SIP and DIP addresses.
Signed-off-by: Danielle Ratson <danieller@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 7 Feb 2022 11:18:49 +0000 (11:18 +0000)]
Merge branch 'ipv6-kfree_skb_reason'
Menglong Dong says:
====================
net: use kfree_skb_reason() for ip/udp packet receive
In this series of patches, kfree_skb() is replaced with
kfree_skb_reason() in the ipv4 and udp4 packet receive paths, and
several new drop reasons are introduced.
TCP is more complex, so I left it for the next series.
I just figured out how __print_symbolic() works. It isn't based on the
array index, but searches for symbols in a loop, so I'm a little
worried about its performance.
Changes since v3:
- fix some small problems in the third patch (net: ipv4: use
kfree_skb_reason() in ip_rcv_core()), as David Ahern said
Changes since v2:
- use SKB_DROP_REASON_PKT_TOO_SMALL for a path in ip_rcv_core()
Changes since v1:
- add document for all drop reasons, as David advised
- remove unreleated cleanup
- remove EARLY_DEMUX and IP_ROUTE_INPUT drop reason
- replace {UDP, TCP}_FILTER with SOCKET_FILTER
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Menglong Dong [Sat, 5 Feb 2022 07:47:38 +0000 (15:47 +0800)]
net: udp: use kfree_skb_reason() in udp_queue_rcv_one_skb()
Replace kfree_skb() with kfree_skb_reason() in udp_queue_rcv_one_skb().
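The conversion pattern is mechanical; for example (the reason value here is illustrative):

    /* before */
    kfree_skb(skb);

    /* after: the same free, but the drop tracepoint now carries a reason */
    kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_FILTER);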
Signed-off-by: Menglong Dong <imagedong@tencent.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 5 Feb 2022 17:27:11 +0000 (09:27 -0800)]
ref_tracker: remove filter_irq_stacks() call
After commit e94006608949 ("lib/stackdepot: always do filter_irq_stacks()
in stack_depot_save()") it became unnecessary to filter the stack
before calling stack_depot_save().
Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Marco Elver <elver@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 5 Feb 2022 17:01:25 +0000 (09:01 -0800)]
net: initialize init_net earlier
While testing a patch that will follow later
("net: add netns refcount tracker to struct nsproxy")
I found that devtmpfs_init() was called before init_net
was initialized.
This is a bug, because devtmpfs_setup() calls
ksys_unshare(CLONE_NEWNS);
This has the effect of increasing the init_net refcount,
which will later be overwritten to 1 as part of setup_net(&init_net).
We had too many prior patches [1] trying to work around the root cause.
Really, make sure init_net is in the BSS section, and that net_ns_init()
is called earlier at boot time.
Note that another patch ("vfs: add netns refcount tracker
to struct fs_context") will also need net_ns_init() to be called
before vfs_caches_init().
As a bonus, this patch saves around 4KB in the .data section.
[1]
f8c46cb39079 ("netns: do not call pernet ops for not yet set up init_net namespace")
b5082df8019a ("net: Initialise init_net.count to 1")
734b65417b24 ("net: Statically initialize init_net.dev_base_head")
v2: fixed a build error reported by kernel build bots (CONFIG_NET=n)
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Juhee Kang [Sat, 5 Feb 2022 15:40:38 +0000 (15:40 +0000)]
net: hsr: use hlist_head instead of list_head for mac addresses
Currently, HSR manages the MAC addresses of known HSR nodes by using
list_head. Finding a node with a specific MAC address takes a lot of
time when there are many registered nodes, because it uses a linear
search. We can reduce the time by using hlist. Thus, this patch moves
list_head to hlist_head for the MAC addresses, which allows for further
improvement of network performance.
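A sketch of the lookup this enables (the hash helper and field names are illustrative):

    static struct hsr_node *find_node_by_addr_A(struct hlist_head *node_db,
                                                const unsigned char addr[ETH_ALEN])
    {
        struct hsr_node *node;
        u32 hash = hsr_mac_hash(addr);   /* hypothetical hash helper */

        /* walk one short bucket chain instead of the whole node list */
        hlist_for_each_entry_rcu(node, &node_db[hash], mac_list)
            if (ether_addr_equal(node->macaddress_A, addr))
                return node;
        return NULL;
    }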
net: sundance: Replace one-element array with non-array object
It seems this one-element array is not actually being used as an
array of variable size, so we can replace it with just a
non-array object of type struct desc_frag and refactor the
rest of the code a bit.
This helps with the ongoing efforts to globally enable -Warray-bounds
and get us closer to being able to tighten the FORTIFY_SOURCE routines
on memcpy().
This issue was found with the help of Coccinelle and audited and fixed,
manually.
Link: https://github.com/KSPP/linux/issues/79 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
bnx2x: Replace one-element array with flexible-array member
There is a regular need in the kernel to provide a way to declare having
a dynamically sized set of trailing elements in a structure. Kernel code
should always use “flexible array members”[1] for these cases. The older
style of one-element or zero-length arrays should no longer be used[2].
This helps with the ongoing efforts to globally enable -Warray-bounds
and get us closer to being able to tighten the FORTIFY_SOURCE routines
on memcpy().
This issue was found with the help of Coccinelle and audited and fixed,
manually.
Link: https://github.com/KSPP/linux/issues/79 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Haiyang Zhang [Fri, 4 Feb 2022 22:45:45 +0000 (14:45 -0800)]
net: mana: Remove unnecessary check of cqe_type in mana_process_rx_cqe()
The switch statement already ensures cqe_type == CQE_RX_OKAY at that
point.
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Dexuan Cui <decui@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>