As stated in [1], dma_set_mask() with a 64-bit mask will never fail if
dev->dma_mask is non-NULL.
So, if it fails, the 32-bit case will also fail for the same reason.
So, if dma_set_mask_and_coherent() succeeds, 'using_dac' is known to be
'true'. This variable can be removed.
Simplify code and remove some dead code accordingly.
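Concretely, the probe-time DMA setup collapses to a single call; a minimal
sketch of the resulting pattern (not the exact hunk of any one driver):

err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
if (err) {
	dev_err(&pdev->dev, "No usable DMA configuration, aborting\n");
	return err;
}
/* no 32-bit fallback and no 'using_dac' bookkeeping needed */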
As stated in [1], dma_set_mask() with a 64-bit mask will never fail if
dev->dma_mask is non-NULL.
So, if it fails, the 32-bit case will also fail for the same reason.
If dma_set_mask_and_coherent() succeeds, 'ap->pci_using_dac' is known to be
1. So 'pci_using_dac' can be removed from the 'struct ace_private'.
Simplify code and remove some dead code accordingly.
As stated in [1], dma_set_mask() with a 64-bit mask will never fail if
dev->dma_mask is non-NULL.
So, if it fails, the 32-bit case will also fail for the same reason.
If dma_set_mask_and_coherent() succeeds, 'dac_enabled' is known to be 1.
Simplify code and remove some dead code accordingly.
As stated in [1], dma_set_mask() with a 64-bit mask will never fail if
dev->dma_mask is non-NULL.
So, if it fails, the 32-bit case will also fail for the same reason.
So qlcnic_set_dma_mask() (in qlcnic_main.c) can be simplified a lot and
inlined directly in its only caller.
If dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1.
So it can be removed from the whole calling chain.
qlcnic_setup_netdev() can finally be simplified as well.
Yunsheng Lin [Fri, 7 Jan 2022 09:00:42 +0000 (17:00 +0800)]
page_pool: remove spinlock in page_pool_refill_alloc_cache()
page_pool_refill_alloc_cache() is only called by
__page_pool_get_cached(), which assumes non-concurrent access,
as suggested by the comment in __page_pool_get_cached(), and
ptr_ring already allows concurrent access between consumer and producer,
so remove the spinlock in page_pool_refill_alloc_cache().
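The refill loop then becomes, roughly (a simplified sketch that omits the
NUMA handling of the real function; the single-consumer guarantee is what
makes the lock-free __ptr_ring_consume() safe here):

do {
	page = __ptr_ring_consume(&pool->ring);
	if (!page)
		break;
	pool->alloc.cache[pool->alloc.count++] = page;
} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);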
In this patch series, the interface kfree_skb_with_reason() is
introduced, which is used to collect the skb drop reason and pass
it to the 'kfree_skb' tracepoint. Therefore, 'drop_monitor' or eBPF is
able to monitor abnormal skbs with a detailed reason.
In fact, this patch series comes out of the intelligence of David
and Steve, I'm just a truck man :/
In the first patch, kfree_skb_with_reason() is introduced and
the 'reason' field is added to the 'kfree_skb' tracepoint. In the
second patch, 'kfree_skb()' is replaced with 'kfree_skb_with_reason()'
in tcp_v4_rcv(). In the third patch, 'kfree_skb_with_reason()' is
used in __udp4_lib_rcv().
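As an example of the new calling convention (shown with the final
kfree_skb_reason() name and an illustrative reason value), a drop site in
tcp_v4_rcv() changes from kfree_skb(skb) to:

kfree_skb_reason(skb, SKB_DROP_REASON_NO_SOCKET);

The 'kfree_skb' tracepoint then carries this reason alongside the skb.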
Changes since v3:
- fix some code style problems in skb.h
Changes since v2:
- rename kfree_skb_with_reason() to kfree_skb_reason()
- make kfree_skb() static inline, as Jakub suggested
Changes since v1:
- rename some drop reasons, as David suggested
- add the third patch
====================
Jakub Kicinski [Sun, 9 Jan 2022 21:33:21 +0000 (13:33 -0800)]
net/mlx5e: Fix build error in fec_set_block_stats()
Build bot reports:
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c: In function 'fec_set_block_stats':
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c:1235:48: error: 'outl' undeclared (first use in this function); did you mean 'out'?
1235 | if (mlx5_core_access_reg(mdev, in, sz, outl, sz, MLX5_REG_PPCNT, 0, 0))
| ^~~~
| out
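The fix is to reference the correctly spelled output buffer; the call
becomes (error handling elided):

if (mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0))
	return;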
Jakub Kicinski [Mon, 10 Jan 2022 00:27:26 +0000 (16:27 -0800)]
Merge branch 'bnxt_en-update-for-net-next'
Michael Chan says:
====================
bnxt_en: Update for net-next
This series adds better error and debug logging for firmware messages.
We now also use the firmware provided timeout value for long running
commands instead of capping it to 40 seconds.
====================
Edwin Peer [Sun, 9 Jan 2022 23:54:45 +0000 (18:54 -0500)]
bnxt_en: improve firmware timeout messaging
While it has always been possible to infer that an HWRM command was
abandoned due to an unhealthy firmware status by the shortened timeout
reported, this change improves the log messaging to account for this
case explicitly. In the interests of further clarity, the firmware
status is now also reported in these new messages.
v2: Remove inline keyword for hwrm_wait_must_abort() in .c file.
Reviewed-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Edwin Peer [Sun, 9 Jan 2022 23:54:44 +0000 (18:54 -0500)]
bnxt_en: use firmware provided max timeout for messages
Some older devices cannot accommodate the 40 seconds timeout
cap for long running commands (such as NVRAM commands) due to
hardware limitations. Allow these devices to request more time for
these long running commands, but print a warning, since the longer
timeout may cause the hung task watchdog to trigger. In the case of a
firmware update operation, this is preferable to failing outright.
v2: Use bp->hwrm_cmd_max_timeout directly without the constants.
Fixes: 881d8353b05e ("bnxt_en: Add an upper bound for all firmware command timeouts.") Signed-off-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Edwin Peer [Sun, 9 Jan 2022 23:54:43 +0000 (18:54 -0500)]
bnxt_en: improve VF error messages when PF is unavailable
The current driver design relies on the PF netdev being open in order
to intercept the following HWRM commands from a VF:
- HWRM_FUNC_VF_CFG
- HWRM_CFA_L2_FILTER_ALLOC
- HWRM_PORT_PHY_QCFG (only if FW_CAP_LINK_ADMIN is not supported)
If the PF is closed, then VFs are subjected to rather inscrutable error
messages in response to any configuration requests involving the above
command types. Recent firmware distinguishes this problem case from
other errors by returning HWRM_ERR_CODE_PF_UNAVAILABLE. In most cases,
the appropriate course of action is still to fail, but this can now be
accomplished with the aid of more user-informative log messages. For L2
filter allocations that are already asynchronous, an automatic retry
seems more appropriate.
v2: Delete extra newline.
Signed-off-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Edwin Peer [Sun, 9 Jan 2022 23:54:42 +0000 (18:54 -0500)]
bnxt_en: add dynamic debug support for HWRM messages
Add logging of firmware messages. These can be useful for diagnosing
issues in the field, but due to their verbosity are only appropriate
at a debug message level.
Signed-off-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
Netfilter updates for net-next
The following patchset contains Netfilter updates for net-next. This
includes one patch to update ovs and act_ct to use nf_ct_put() instead
of nf_conntrack_put().
1) Add netns_tracker to nfnetlink_log and masquerade, from Eric Dumazet.
2) Remove redundant rcu read-size lock in nf_tables packet path.
3) Replace BUG() by WARN_ON_ONCE() in nft_payload.
4) Consolidate rule verdict tracing.
5) Replace WARN_ON() by WARN_ON_ONCE() in nf_tables core.
6) Make counter support built-in in nf_tables.
7) Add new field to conntrack object to identify locally generated
traffic, from Florian Westphal.
8) Prevent NAT from shadowing well-known ports, from Florian Westphal.
9) Merge nf_flow_table_{ipv4,ipv6} into nf_flow_table_inet, also from
Florian.
10) Remove redundant pointer in nft_pipapo AVX2 support, from Colin Ian King.
11) Replace opencoded max() in conntrack, from Jiapeng Chong.
12) Update conntrack to use refcount_t API, from Florian Westphal.
13) Move ip_ct_attach indirection into the nf_ct_hook structure.
14) Constify several pointer object in the netfilter codebase,
from Florian Westphal.
15) Tree-wide replacement of nf_conntrack_put() by nf_ct_put(), also
from Florian.
16) Fix egress splat due to incorrect rcu notation, from Florian.
17) Move stateful fields of connlimit, last, quota, numgen and limit
out of the expression data area.
18) Build a blob to represent the ruleset in nf_tables, this is a
requirement of the new register tracking infrastructure.
19) Add NFT_REG32_NUM to define the maximum number of 32-bit registers.
20) Add register tracking infrastructure to skip redundant
store-to-register operations, this includes support for payload,
meta and bitwise expressions.
* git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next: (32 commits)
netfilter: nft_meta: cancel register tracking after meta update
netfilter: nft_payload: cancel register tracking after payload update
netfilter: nft_bitwise: track register operations
netfilter: nft_meta: track register operations
netfilter: nft_payload: track register operations
netfilter: nf_tables: add register tracking infrastructure
netfilter: nf_tables: add NFT_REG32_NUM
netfilter: nf_tables: add rule blob layout
netfilter: nft_limit: move stateful fields out of expression data
netfilter: nft_limit: rename stateful structure
netfilter: nft_numgen: move stateful fields out of expression data
netfilter: nft_quota: move stateful fields out of expression data
netfilter: nft_last: move stateful fields out of expression data
netfilter: nft_connlimit: move stateful fields out of expression data
netfilter: egress: avoid a lockdep splat
net: prefer nf_ct_put instead of nf_conntrack_put
netfilter: conntrack: avoid useless indirection during conntrack destruction
netfilter: make function op structures const
netfilter: core: move ip_ct_attach indirection to struct nf_ct_hook
netfilter: conntrack: convert to refcount_t api
...
====================
Check if the destination register already contains the data that this
bitwise expression generates, which allows this redundant operation to
be skipped.
If the destination contains a different bitwise operation, cancel the
register tracking information. If the destination contains no bitwise
operation, update the register tracking information.
Update the payload and meta expressions to check whether this bitwise
operation has already been performed on the register. Hence, both the
payload/meta and the bitwise expressions are reduced.
There is also a special case: If source register != destination register
and source register is not updated by a previous bitwise operation, then
transfer selector from the source register to the destination register.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Check if the destination register already contains the data that this
meta store expression generates, which allows this redundant operation
to be skipped. If the destination contains a different selector, update
the register tracking information.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Check if the destination register already contains the data that this
payload store expression generates, which allows this redundant operation
to be skipped. If the destination contains a different selector, update
the register tracking information.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
This patch adds new infrastructure to skip redundant selector store
operations on the same register, to achieve a performance boost in
the packet path.
This is particularly noticeable in pure linear rulesets, but it also
helps in rulesets which already rely heavily on maps to avoid linear
ruleset inspection.
The idea is to keep data about the most recurrent store operations on
registers so that it can be reused by cmp and lookup expressions.
This infrastructure allows for dynamic ruleset updates, since the
ruleset blob reduction happens in the kernel.
Userspace still needs to be updated to maximize register utilization,
in order to improve register data reuse and reduce the number of
store-on-register operations.
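At its core, the tracking state is a small per-evaluation scratch area; a
rough sketch (the field layout shown here is indicative only):

struct nft_regs_track {
	struct {
		const struct nft_expr *selector; /* last payload/meta store */
		const struct nft_expr *bitwise;  /* bitwise op applied on top */
	} regs[NFT_REG32_NUM];
};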
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
The new structure nft_rule_dp represents the rule in a more compact way
(smaller memory footprint) compared to the control-plane nft_rule
structure.
The ruleset blob is a read-only data structure. The first field contains
the blob size, followed by the rules containing expressions. There is a
trailing rule, used by the tracing infrastructure, which is equivalent to
the NULL rule marker in the previous representation. The blob size field
does not include the size of this trailing rule marker.
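A rough sketch of the compact per-rule header (the exact bit widths are
indicative only):

struct nft_rule_dp {
	u64 is_last:1,	/* trailing rule marker */
	    dlen:12,	/* length of expression data */
	    handle:42;	/* used by tracing only */
	unsigned char data[];
};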
The ruleset blob is generated from the commit path.
This patch reuses the infrastructure available since 0cbc06b3faba
("netfilter: nf_tables: remove synchronize_rcu in commit phase") to
build the array of rules per chain.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
It's the same as nf_conntrack_put(), but without the
need for an indirect call. The downside is a module dependency on
nf_conntrack, but all of these already depend on conntrack anyway.
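After this series the helper boils down to roughly the following sketch:

static inline void nf_ct_put(struct nf_conn *ct)
{
	if (ct && refcount_dec_and_test(&ct->ct_general.use))
		nf_ct_destroy(&ct->ct_general);
}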
Cc: Paul Blakey <paulb@mellanox.com> Cc: dev@openvswitch.org Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
There are two _put helpers:
nf_ct_put and nf_conntrack_put. The latter is what should be used in
code that MUST NOT cause a linker dependency on the conntrack module
(e.g. calls from core network stack).
Everyone else should call nf_ct_put() instead.
A followup patch will convert a few nf_conntrack_put() calls to
nf_ct_put(), in particular from modules that already have a conntrack
dependency such as act_ct or even nf_conntrack itself.
Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Convert nf_conn reference counting from atomic_t to refcount_t based api.
refcount_t api provides more runtime sanity checks and will warn on
certain constructs, e.g. refcount_inc() on a zero reference count, which
usually indicates use-after-free.
For this reason, template allocation is changed to init the refcount to
1; the subsequent add operations are removed.
Likewise, init_conntrack() is changed to set the initial refcount to 1
instead of calling refcount_inc().
This is safe because the new entry is not (yet) visible to other cpus.
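A sketch of the conversion (the surrounding context is illustrative):

/* template / init_conntrack(): entry not yet visible, plain set is enough */
refcount_set(&ct->ct_general.use, 1);

/* lookup path: only take a reference if the entry is still alive */
if (unlikely(!refcount_inc_not_zero(&ct->ct_general.use)))
	continue;	/* entry is being freed, skip it */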
Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Jakub Kicinski [Sun, 9 Jan 2022 22:14:08 +0000 (14:14 -0800)]
Merge tag 'for-net-next-2022-01-07' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next
Luiz Augusto von Dentz says:
====================
bluetooth-next pull request for net-next:
- Add support for Foxconn QCA 0xe0d0
- Fix HCI init sequence on MacBook Air 8,1 and 8,2
- Fix Intel firmware loading on legacy ROM devices
* tag 'for-net-next-2022-01-07' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next:
Bluetooth: hci_sock: fix endian bug in hci_sock_setsockopt()
Bluetooth: L2CAP: uninitialized variables in l2cap_sock_setsockopt()
Bluetooth: btqca: sequential validation
Bluetooth: btusb: Add support for Foxconn QCA 0xe0d0
Bluetooth: btintel: Fix broken LED quirk for legacy ROM devices
Bluetooth: hci_event: Rework hci_inquiry_result_with_rssi_evt
Bluetooth: btbcm: disable read tx power for MacBook Air 8,1 and 8,2
Bluetooth: hci_qca: Fix NULL vs IS_ERR_OR_NULL check in qca_serdev_probe
Bluetooth: hci_bcm: Check for error irq
====================
This is a pull request of 22 patches for net-next/master.
The first patch is by Tom Rix and fixes an uninitialized variable in
the janz-ican3 driver (introduced in linux-can-next-for-5.17-20220105).
The next 13 patches are by me and target the mcp251xfd driver: first
several cleanup patches, then the driver is prepared for the upcoming
ethtool ring parameter and IRQ coalescing support, which will be added
in a later pull request.
The remaining 8 patches are by Dario Binacchi and me and enhance the
flexcan driver. The driver is moved into a subdirectory. An ethtool
private flag is added to optionally disable CAN RTR frame reception,
to make use of more RX buffers. The resulting RX buffer configuration
can be read by ethtool ring parameter support. Finally documentation
for the ethtool private flag is added to the
Documentation/networking/device_drivers/can directory.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Dario Binacchi [Sat, 8 Jan 2022 18:16:33 +0000 (19:16 +0100)]
can: flexcan: add ethtool support to get rx/tx ring parameters
This patch adds ethtool support to get the number of message buffers
configured for reception/transmission, which may also depend on
runtime configuration such as the state of the 'rx-rtr' flag.
can: flexcan: add ethtool support to change rx-rtr setting during runtime
This patch adds a private flag to the flexcan driver to switch the
"rx-rtr" setting on and off.
"rx-rtr" on - Receive RTR frames. (default)
The CAN controller can and will receive RTR frames.
On some IP cores the controller cannot receive RTR
frames in the more performant "RX mailbox" mode and
will use "RX FIFO" mode instead.
"rx-rtr" off - Waive ability to receive RTR frames. (not supported on all IP cores)
This mode activates the "RX mailbox mode" for better
performance, on some IP cores RTR frames cannot be
received anymore.
The "RX FIFO" mode uses a FIFO with a depth of 6 CAN frames.
The "RX mailbox" mode uses up to 62 mailboxes.
can: flexcan: add more quirks to describe RX path capabilities
Most flexcan IP cores support 2 RX modes:
- FIFO
- mailbox
Some IP core versions cannot receive CAN RTR messages via mailboxes.
This patch adds quirks to document this.
This information will be used in a later patch to switch from FIFO to
more performant mailbox mode at the expense of losing the ability to
receive RTR messages. This trade off is beneficial in certain use
cases.
Most flexcan IP cores support 2 RX modes:
- FIFO
- mailbox
The names for these modes were chosen to reflect the name of the
rx-offload mode they are using.
The names of the RX modes should better reflect their difference with
regard to the flexcan IP core. So this patch renames the various
occurrences of OFF_FIFO to RX_FIFO and OFF_TIMESTAMP to RX_MAILBOX.
can: mcp251xfd: mcp251xfd.h: sort function prototypes
The .c files in the Makefile are ordered alphabetically. This patch
groups the function prototypes by their corresponding .c file and
brings them into the same order.
An RX overflow usually happens during high system load. Printing
overflow messages to the kernel log, which on embedded systems is often
output on the serial console, increases the system load even further.
To decrease the system load in these situations, demote the messages
to debug level and wrap them with net_ratelimit().
can: mcp251xfd: mcp251xfd_open(): make use of pm_runtime_resume_and_get()
With patch
| dd8088d5a896 PM: runtime: Add pm_runtime_resume_and_get to deal with usage counter
the usual pm_runtime_get_sync() and pm_runtime_put_noidle()
in-case-of-error dance is no longer needed. Convert the mcp251xfd
driver to use this function.
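The conversion follows the usual pattern; a sketch, with 'priv->dev'
standing in for the device pointer:

/* before */
err = pm_runtime_get_sync(priv->dev);
if (err < 0) {
	pm_runtime_put_noidle(priv->dev);
	return err;
}

/* after */
err = pm_runtime_resume_and_get(priv->dev);
if (err < 0)
	return err;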
can: mcp251xfd: mcp251xfd_open(): open_candev() first
This patch exchanges the order of open_candev() and
pm_runtime_get_sync(), so that open_candev() is called first.
A usual reason why open_candev() fails is missing CAN bit rate
configuration. It makes no sense to resume the device from PM sleep
first just to put it to sleep if the bit rate is not configured.
Tom Rix [Sat, 8 Jan 2022 14:33:19 +0000 (06:33 -0800)]
can: janz-ican3: initialize dlc variable
Clang static analysis reports this problem
janz-ican3.c:1311:2: warning: Undefined or garbage value returned to caller
return dlc;
^~~~~~~~~~
dlc is only set with this conditional
if (!(cf->can_id & CAN_RTR_FLAG))
dlc = cf->len;
But is always returned. So initialize dlc to 0.
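So the fix is simply (the u8 type of the declaration is assumed here):

u8 dlc = 0;

if (!(cf->can_id & CAN_RTR_FLAG))
	dlc = cf->len;

return dlc;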
Fixes: cc4b08c31b5c ("can: do not increase tx_bytes statistics for RTR frames") Link: https://lore.kernel.org/all/20220108143319.3986923-1-trix@redhat.com Signed-off-by: Tom Rix <trix@redhat.com> Acked-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
====================
ENA: capabilities field and cosmetic changes
Add a new capabilities bitmask field to get an indication of the
capabilities supported by the device. Use the capabilities
field to query the device for ENI stats support.
The other patches are cosmetic changes like fixing readme
mistakes, removing unused variables, etc.
====================
This struct was used to pass data from a callee function to its caller.
Its usage can be avoided.
Removing it results in less code without harming code readability.
It also allows the ring size calculation to be consolidated into a
single function (ena_calc_io_queue_size()).
Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: ena: Change ENI stats support check to use capabilities field
Use the capabilities field to query the device for ENI stats
support.
This replaces the previous method that tried to get the ENI stats
during ena_probe() and used the success or failure as an indication
for support by the device.
Remove the eni_stats_supported field from struct ena_adapter. This field
was used for the previous method of querying for ENI stats support.
Change the severity level of the print in case of
ena_com_get_eni_stats() failure from info to error.
With the previous method of querying for ENI stats support, failure
to get ENI stats was normal for devices that don't support it.
With the use of the capabilities field such a failure is unexpected,
as it is called only if the device reported that it supports ENI
stats.
Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: ena: Add capabilities field with support for ENI stats capability
This bitmask field indicates what capabilities are supported by the
device.
The capabilities field differs from the 'supported_features' field which
indicates what sub-commands for the set/get feature commands are
supported. The sub-commands are specified in the 'feature_id' field of
the 'ena_admin_set_feat_cmd' struct.
The 'capabilities' field, on the other hand, specifies different
capabilities of the device. For example, whether the device supports
querying of ENI stats.
Also add an enumerator which contains all the capabilities. The
first added capability macro is for the ENI stats feature.
Capabilities are queried along with the other device attributes (in
ena_com_get_dev_attr_feat()) during device initialization and are stored
in the ena_com_dev struct. They can be later queried using the
ena_com_get_cap() helper function.
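A sketch of the helper's shape (the enum name used for the capability id
is an assumption based on the description above):

static inline bool ena_com_get_cap(struct ena_com_dev *ena_dev,
				   enum ena_admin_aq_caps_id cap_id)
{
	return !!(ena_dev->capabilities & BIT(cap_id));
}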
Signed-off-by: Shay Agroskin <shayagr@amazon.com> Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Fri, 7 Jan 2022 18:39:53 +0000 (10:39 -0800)]
af_packet: fix tracking issues in packet_do_bind()
It appears that my changes in packet_do_bind() were
slightly wrong.
syzbot found that calling bind() twice would trigger
a false positive.
Remove proto_curr/dev_curr variables and rewrite things
to be less confusing (like not having to use netdev_tracker_alloc(),
and instead use the standard dev_hold_track()).
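In other words, the bind path can now take its tracked reference in one
step; a sketch, with the tracker field name taken from the cited commit:

if (dev)
	dev_hold_track(dev, &po->prot_hook.dev_tracker, GFP_ATOMIC);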
Fixes: f1d9268e0618 ("net: add net device refcount tracker to struct packet_type") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot <syzkaller@googlegroups.com> Link: https://lore.kernel.org/r/20220107183953.3886647-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
mptcp: Refactoring for one selftest and csum validation
Patch 1 changes the MPTCP join self tests to depend more on events
rather than delays, so the script runs faster and has more consistent
results.
Patches 2 and 3 get rid of some duplicate code in MPTCP's checksum
validation by modifying and leveraging an existing helper function.
====================
Paolo Abeni [Fri, 7 Jan 2022 19:25:22 +0000 (11:25 -0800)]
selftests: mptcp: more stable join tests-cases
MPTCP join self-tests are a bit fragile as they rely on
delays instead of events to catch up with the expected
socket states.
Replace the delay with state checking where possible and
reduce the number of sleeps in the most complex scenarios.
This will both reduce the tests' run-time and improve
stability.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Vladimir Oltean [Fri, 7 Jan 2022 14:42:29 +0000 (16:42 +0200)]
net: dsa: felix: add port fast age support
Add support for flushing the MAC table on a given port in the ocelot
switch library, and use this functionality in the felix DSA driver.
This operation is needed when a port leaves a bridge to become
standalone, when learning is disabled, and when the STP state
changes to a state where no FDB entry should be present.
Vladimir Oltean [Fri, 7 Jan 2022 16:43:32 +0000 (18:43 +0200)]
net: mscc: ocelot: fix incorrect balancing with down LAG ports
Assuming the test setup described here:
https://patchwork.kernel.org/project/netdevbpf/cover/20210205130240.4072854-1-vladimir.oltean@nxp.com/
(swp1 and swp2 are in bond0, and bond0 is in a bridge with swp0)
it can be seen that when swp1 goes down (on either board A or B), then
traffic that should go through that port isn't forwarded anywhere.
In other words, in the "bad" configuration, the attempt is to remove the
inactive swp1 from the destination ports via PGID_DST. But when a MAC
table entry is learned, it is learned towards PGID_DST 1, because that
is the logical port id of the LAG itself (it is equal to the lowest
numbered member port). So when swp1 becomes inactive, if we set
PGID_DST[1] to contain just swp1 and not swp2, the packet will not have
any chance to reach the destination via swp2.
The "correct" way to remove swp1 as a destination is via PGID_AGGR
(remove swp1 from the aggregation port groups for all aggregation
codes). This means that PGID_DST[1] and PGID_DST[2] must still contain
both swp1 and swp2. This makes the MAC table still treat packets
destined towards the single-port LAG as "multicast", and the inactive
ports are removed via the aggregation code tables.
The change presented here is a design one: the ocelot_get_bond_mask()
function used to take an "only_active_ports" argument. We don't need
that. The only call site that specifies only_active_ports=true,
ocelot_set_aggr_pgids(), must retrieve the entire bonding mask, because
it must program that into PGID_DST. Additionally, it must also clear the
inactive ports from the bond mask here, which it can't do if bond_mask
just contains the active ports:
ac = ocelot_read_rix(ocelot, ANA_PGID_PGID, i);
ac &= ~bond_mask; <---- here
/* Don't do division by zero if there was no active
* port. Just make all aggregation codes zero.
*/
if (num_active_ports)
ac |= BIT(aggr_idx[i % num_active_ports]);
ocelot_write_rix(ocelot, ac, ANA_PGID_PGID, i);
So it becomes the responsibility of ocelot_set_aggr_pgids() to take
ocelot_port->lag_tx_active into consideration when populating the
aggr_idx array.
Jakub Kicinski [Sat, 8 Jan 2022 02:51:46 +0000 (18:51 -0800)]
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
40GbE Intel Wired LAN Driver Updates 2022-01-07
This series contains updates to i40e and iavf drivers.
Karen limits per-VF MAC filters so that one VF does not consume all
filters for i40e.
Jedrzej reduces busy wait time for admin queue calls for i40e.
Mateusz updates firmware versions to reflect new supported NVM images
and renames an error to remove non-inclusive language for i40e.
Yang Li fixes a set but not used warning for i40e.
Jason Wang removes an unneeded variable for iavf.
* '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
iavf: remove an unneeded variable
i40e: remove variables set but not used
i40e: Remove non-inclusive language
i40e: Update FW API version
i40e: Minimize amount of busy-waiting during AQ send
i40e: Add ensurance of MacVlan resources for every trusted VF
====================
Gal Pressman [Sun, 2 Jan 2022 08:12:53 +0000 (10:12 +0200)]
net/tls: Fix skb memory leak when running kTLS traffic
The cited Fixes commit introduced a memory leak when running kTLS
traffic (with/without hardware offloads).
I'm running nginx on the server side and wrk on the client side and get
the following:
I'm not familiar with these areas of the code, but I've added this
sk_defer_free_flush() to tls_sw_recvmsg() based on a hunch and it
resolved the issue.
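That is, the change amounts to flushing the deferred-free list on the way
out of the receive path; a sketch:

/* free skbs whose freeing was deferred while the socket lock was held */
sk_defer_free_flush(sk);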
Fixes: f35f821935d8 ("tcp: defer skb freeing after socket lock is released") Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20220102081253.9123-1-gal@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jason Wang [Sun, 12 Dec 2021 08:10:01 +0000 (16:10 +0800)]
iavf: remove an unneeded variable
The variable `ret_code' used for returning is never changed in the
function `iavf_shutdown_adminq'. So it can be removed, and its initial
value 0 can simply be returned at the end of `iavf_shutdown_adminq'.
Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Yang Li [Mon, 13 Dec 2021 03:11:07 +0000 (11:11 +0800)]
i40e: remove variables set but not used
The code that uses variables pe_cntx_size and pe_filt_size
has been removed, so they should be removed as well.
Eliminate the following clang warnings:
drivers/net/ethernet/intel/i40e/i40e_common.c:4139:20:
warning: variable 'pe_filt_size' set but not used.
drivers/net/ethernet/intel/i40e/i40e_common.c:4139:6:
warning: variable 'pe_cntx_size' set but not used.
Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
i40e: Minimize amount of busy-waiting during AQ send
i40e_asq_send_command() will now use a non-blocking usleep_range() if
possible (non-atomic context), instead of a busy-waiting udelay(). The
usleep_range() function uses hrtimers to provide better performance and
removes the negative impact of busy-waiting in time-critical
environments.
1. Rename i40e_asq_send_command to i40e_asq_send_command_atomic
and add a 5th parameter indicating whether it is called from an atomic
context. Inside, call usleep_range() (if non-atomic) or udelay() (if atomic).
2. Change i40e_asq_send_command to invoke
i40e_asq_send_command_atomic(..., false).
3. Change two functions:
- i40e_aq_set_vsi_uc_promisc_on_vlan
- i40e_aq_set_vsi_mc_promisc_on_vlan
to explicitly use i40e_asq_send_command_atomic(..., true)
instead of i40e_asq_send_command, as they use spinlocks and do some
work in an atomic context.
All other calls to i40e_asq_send_command remain unchanged.
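The wait inside the send loop then becomes, schematically (the parameter
name and delay bounds here are illustrative):

if (is_atomic_context)
	udelay(50);
else
	usleep_range(40, 60);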
Signed-off-by: Dawid Lukwinski <dawid.lukwinski@intel.com> Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com> Tested-by: Tony Brelinski <tony.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Karen Sornek [Thu, 17 Jun 2021 07:19:26 +0000 (09:19 +0200)]
i40e: Add ensurance of MacVlan resources for every trusted VF
A trusted VF can use up every available resource, leaving nothing
for other trusted VFs.
Introduce a define which calculates the available MacVlan resources
based on the maximum available MacVlan resources, the bare minimum for
each VF and the number of currently allocated VFs.
Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com> Signed-off-by: Karen Sornek <karen.sornek@intel.com> Tested-by: Tony Brelinski <tony.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Kevin Bracey [Thu, 6 Jan 2022 21:56:37 +0000 (23:56 +0200)]
sch_cake: revise Diffserv docs
Documentation incorrectly stated that CS1 is equivalent to LE for
diffserv8. But when LE was added to the table, CS1 was pushed into tin
1, leaving only LE in tin 0.
Also "TOS1" no longer exists, as that is the same codepoint as LE.
Make other tweaks properly distinguishing codepoints from classes and
putting current Diffserv codepoints ahead of legacy ones.
David S. Miller [Fri, 7 Jan 2022 11:27:07 +0000 (11:27 +0000)]
Merge branch 'mptcp-next'
Mat Martineau says:
====================
mptcp: New features and cleanup
These patches have been tested in the MPTCP tree for a longer than usual
time (thanks to holiday schedules), and are ready for the net-next
branch. Changes include feature updates, small fixes, refactoring, and
some selftest changes.
Patch 1 fixes an OUTQ ioctl issue with TCP fallback sockets.
Patches 2, 3, and 6 add support of the MPTCP fastclose option (quick
shutdown of the full MPTCP connection, similar to TCP RST in regular
TCP), and a related self test.
Patch 4 cleans up some accept and poll code that is no longer needed
after the fastclose changes.
Patch 5 adds userspace disconnect using AF_UNSPEC, which is used when
testing fastclose and makes the MPTCP socket's handling of AF_UNSPEC in
connect() more TCP-like.
Patches 7-11 refactor subflow creation to make better use of multiple
local endpoints and to better handle individual connection failures when
creating multiple subflows. Includes self test updates.
Patch 12 cleans up the way subflows are added to the MPTCP connection
list, eliminating the need for calls throughout the MPTCP code that had
to check the intermediate "join list" for entries to shift over to the
main "connection list".
Patch 13 refactors the MPTCP release_cb flags to use separate storage
for values only accessed with the socket lock held (no atomic ops
needed), and for values that need atomic operations.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Fri, 7 Jan 2022 00:20:26 +0000 (16:20 -0800)]
mptcp: avoid atomic bit manipulation when possible
Currently the msk->flags bitmask carries both state for the
mptcp_release_cb() - mostly touched under the mptcp data lock
- and others state info touched even outside such lock scope.
As a consequence, msk->flags is always manipulated with
atomic operations.
This change splits such a bitmask into two separate fields, so
that we can use plain bit operations when touching the
cb-related info.
The MPTCP_PUSH_PENDING bit needs additional care, as it is the
only CB related field currently accessed either under the mptcp
data lock or the mptcp socket lock.
Let's add another mask just for such bit's sake.
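Schematically (the flag names below are examples):

/* cb_flags: only touched under the mptcp data lock, plain bitops suffice */
__set_bit(MPTCP_CLEAN_UNA, &msk->cb_flags);

/* flags: may be set from other contexts, keep atomic bitops */
set_bit(MPTCP_ERROR_REPORT, &msk->flags);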
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Fri, 7 Jan 2022 00:20:25 +0000 (16:20 -0800)]
mptcp: cleanup MPJ subflow list handling
We can simplify the join list handling leveraging the
mptcp_release_cb(): if we can acquire the msk socket
lock at mptcp_finish_join time, move the new subflow
directly into the conn_list, otherwise place it on join_list and
let the release_cb process such list.
Since pending MPJ connections are now always processed
in a timely way, we can avoid flushing the join list
every time we have to process all the current subflows.
Additionally we can now use the mptcp data lock to protect
the join_list, removing the additional spin lock.
Finally, the MPJ handshake is now always finalized under the
msk socket lock, we can drop the additional synchronization
between mptcp_finish_join() and mptcp_close().
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Fri, 7 Jan 2022 00:20:24 +0000 (16:20 -0800)]
selftests: mptcp: add tests for subflow creation failure
Verify that, when multiple endpoints are available, subflow
creation proceeds even when the first additional subflow creation
fails - due to packet drop on the relevant link.
Co-developed-by: Geliang Tang <geliang.tang@suse.com> Signed-off-by: Geliang Tang <geliang.tang@suse.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Fri, 7 Jan 2022 00:20:23 +0000 (16:20 -0800)]
mptcp: do not block subflows creation on errors
If the MPTCP configuration allows for the creation of multiple
subflows, and the first additional subflow never reaches
the fully established status - e.g. due to packet drops or
resets - the in-kernel path manager does not move to the
next subflow.
This patch introduces a new PM helper to cope with MPJ
subflow creation failure and delay, and hooks it where appropriate.
Such a helper triggers additional subflow creation as needed
and updates the PM subflow counter if the current one is
closing.
Additionally, start all the needed additional subflows
as soon as the MPTCP socket is fully established, so we don't
have to cope with a slow MPJ handshake blocking the next subflow
creation.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Fri, 7 Jan 2022 00:20:22 +0000 (16:20 -0800)]
mptcp: keep track of local endpoint still available for each msk
Include into the path manager status a bitmap tracking the list
of local endpoints still available - not yet used - for the
relevant mptcp socket.
Keep such a map updated at endpoint creation/deletion time, so
that we can easily skip already-used endpoints at local address
selection time.
The endpoint used by the initial subflow is lazily accounted at
subflow creation time: the usage bitmap is up to date before
endpoint selection, and we avoid such an unneeded task in some relevant
scenarios - e.g. busy servers accepting incoming subflows but
not creating any additional ones nor announcing additional addresses.
Overall this allows for fair local endpoints usage in case of
subflow failure.
As a side effect, this patch also enforces that each endpoint
is used at most once for each mptcp connection.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Fri, 7 Jan 2022 00:20:21 +0000 (16:20 -0800)]
mptcp: clean-up MPJ option writing
Check for all MPJ variants at once; this reduces the number
of conditionals traversed on average and will simplify the
next patch.
No functional change intended.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Fri, 7 Jan 2022 00:20:20 +0000 (16:20 -0800)]
mptcp: fix per socket endpoint accounting
Since full-mesh endpoint support, the reception of a single ADD_ADDR
option can cause the creation of multiple subflows. When such an option is
accepted we increment 'add_addr_accepted' by one. When we received
a paired RM_ADDR option, we deleted all the relevant subflows,
decrementing 'add_addr_accepted' by one for each of them.
We have a similar issue for 'local_addr_used'
Fix them by moving the PM endpoint accounting outside the subflow
traversal.
Fixes: 1a0d6136c5f0 ("mptcp: local addresses fullmesh") Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Fri, 7 Jan 2022 00:20:19 +0000 (16:20 -0800)]
selftests: mptcp: add disconnect tests
Perform several disconnect/reconnect cycles on the same socket,
ensuring the overall transfer is successful.
The new test leverages ioctl(SIOCOUTQ) to ensure all the
pending data is acked before disconnecting.
Additionally, order the test program's argument list alphabetically
for better maintainability.
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Fri, 7 Jan 2022 00:20:18 +0000 (16:20 -0800)]
mptcp: implement support for user-space disconnect
Explicitly handle AF_UNSPEC in mptcp_stream_connect() to
allow user-space to disconnect established MPTCP connections.
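From user-space the disconnect then follows the familiar TCP idiom; a
minimal sketch:

struct sockaddr sa = { .sa_family = AF_UNSPEC };

/* tears down the established MPTCP connection; the socket can be reused */
connect(fd, &sa, sizeof(sa));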
Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>