Mengyuan Lou [Fri, 3 Feb 2023 09:11:27 +0000 (17:11 +0800)]
net: ngbe: Add irqs request flow
Add request_irq for tx/rx rings and misc other events.
If the request is successful, configure the vectors for the interrupts.
Enable the base interrupt mask bits in ngbe_irq_enable.
Signed-off-by: Mengyuan Lou <mengyuanlou@net-swift.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Mengyuan Lou [Fri, 3 Feb 2023 09:11:26 +0000 (17:11 +0800)]
net: libwx: Add irq flow functions
Add irq flow functions for ngbe and txgbe.
Allocate PCIe MSI-X irqs for the drivers, otherwise fall back to MSI/legacy interrupts.
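Roughly, the allocation-with-fallback flow can be sketched as follows (a hedged
illustration using the generic PCI IRQ helpers, not the exact libwx code; the
'wx' handle and requested vector count are placeholders):

    /* Try MSI-X first; fall back to MSI or legacy INTx if that fails. */
    ret = pci_alloc_irq_vectors(wx->pdev, 1, nvecs_wanted, PCI_IRQ_MSIX);
    if (ret < 0)
            ret = pci_alloc_irq_vectors(wx->pdev, 1, 1,
                                        PCI_IRQ_MSI | PCI_IRQ_LEGACY);
    if (ret < 0)
            return ret;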
Signed-off-by: Mengyuan Lou <mengyuanlou@net-swift.com> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Qingfang DENG [Fri, 3 Feb 2023 01:16:11 +0000 (09:16 +0800)]
net: page_pool: use in_softirq() instead
We use BH context only for synchronization, so we don't care if it's
actually serving softirq or not.
As a side note, in case of threaded NAPI, in_serving_softirq() will
return false because it's in process context with BH off, making
page_pool_recycle_in_cache() unreachable.
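The relaxed condition can be sketched roughly as below (a hedged fragment
approximating the recycle path, not the exact upstream diff):

    /* Direct recycling into the lockless per-CPU cache only needs BH to be
     * disabled, which also holds for threaded NAPI; it does not need to be
     * actually serving a softirq. */
    if (allow_direct && in_softirq() &&     /* was: in_serving_softirq() */
        page_pool_recycle_in_cache(page, pool))
            return NULL;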
Signed-off-by: Qingfang DENG <qingfang.deng@siflower.com.cn> Tested-by: Felix Fietkau <nbd@nbd.name> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 6 Feb 2023 09:09:23 +0000 (09:09 +0000)]
Merge tag 'mlx5-updates-2023-02-04' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2023-02-04
This series provides misc updates to mlx5 driver:
1) Trivial LAG code cleanup patches from Roi
2) Rahul improves mlx5's documentation structure
Separates the documentation into multiple pages related to different
components in the device driver. Adds Kconfig parameters, devlink
parameters, and tracepoints that were previously introduced but not added
to the documentation. Introduces a new page on ethtool statistics counters
with information about counters previously implemented in the mlx5_core
driver but not documented in the kernel tree.
3) From Raed, policy/state selector support for IPSec.
4) From Fragos, add support for XDR speed in IPoIB mlx5 netdev
5) Few more misc cleanups and trivial changes
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Parav Pandit [Fri, 3 Feb 2023 13:37:38 +0000 (15:37 +0200)]
virtio-net: Maintain reverse cleanup order
To make the code easier to audit, it is better to keep the device stop()
sequence a mirror of the device open() sequence.
Acked-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Parav Pandit <parav@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 6 Feb 2023 08:48:27 +0000 (08:48 +0000)]
Merge branch 'bridge-mdb-limit'
Petr Machata says:
====================
bridge: Limit number of MDB entries per port, port-vlan
The MDB maintained by the bridge is limited. When the bridge is configured
for IGMP / MLD snooping, a buggy or malicious client can easily exhaust its
capacity. In SW datapath, the capacity is configurable through the
IFLA_BR_MCAST_HASH_MAX parameter, but ultimately is finite. Obviously a
similar limit exists in the HW datapath for purposes of offloading.
In order to prevent the issue of unilateral exhaustion of MDB resources,
introduce two parameters in each of two contexts:
- Per-port and (when BROPT_MCAST_VLAN_SNOOPING_ENABLED is enabled)
per-port-VLAN number of MDB entries that the port is member in.
- Per-port and (when BROPT_MCAST_VLAN_SNOOPING_ENABLED is enabled)
per-port-VLAN maximum permitted number of MDB entries, or 0 for
no limit.
Per-port number of entries keeps track of the total number of MDB entries
configured on a given port. The per-port-VLAN value then keeps track of the
subset of MDB entries configured specifically for the given VLAN, on that
port. The number is adjusted as port_groups are created and deleted, and
therefore under multicast lock.
A maximum value, if non-zero, then places a limit on the number of entries
that can be configured in a given context. Attempts to add entries above
the maximum are rejected.
Rejection reason of netlink-based requests to add MDB entries is
communicated through extack. This channel is unavailable for rejections
triggered from the control path. To address this lack of visibility, the
patchset adds a tracepoint, bridge:br_mdb_full:
# perf record -e bridge:br_mdb_full &
# [...]
# perf script | cut -d: -f4-
dev v2 af 2 src ::ffff:0.0.0.0 grp ::ffff:239.1.1.112/00:00:00:00:00:00 vid 0
dev v2 af 10 src :: grp ff0e::112/00:00:00:00:00:00 vid 0
dev v2 af 2 src ::ffff:0.0.0.0 grp ::ffff:239.1.1.112/00:00:00:00:00:00 vid 10
dev v2 af 10 src 2001:db8:1::1 grp ff0e::1/00:00:00:00:00:00 vid 10
dev v2 af 2 src ::ffff:192.0.2.1 grp ::ffff:239.1.1.1/00:00:00:00:00:00 vid 10
Another option is to consume the tracepoint through e.g. the bpftrace tool.
Note that this tracepoint is triggered for mcast_hash_max exhaustions as well.
The following is an example of how the feature is used. A more extensive
example is available in patch #8:
# bridge vlan set dev v1 vid 1 mcast_max_groups 1
# bridge mdb add dev br port v1 grp 230.1.2.3 temp vid 1
# bridge mdb add dev br port v1 grp 230.1.2.4 temp vid 1
Error: bridge: Port-VLAN is already in 1 groups, and mcast_max_groups=1.
The patchset progresses as follows:
- In patch #1, set strict_start_type at two bridge-related policies. The
reason is we are adding a new attribute to one of these, and want the new
attribute to be parsed strictly. The other was adjusted for completeness'
sake.
- In patches #2 to #5, br_mdb and br_multicast code is adjusted to make the
following additions smoother.
- In patch #6, add the tracepoint.
- In patch #7, the code to maintain number of MDB entries is added as
struct net_bridge_mcast_port::mdb_n_entries. The maximum is added, too,
as struct net_bridge_mcast_port::mdb_max_entries, however at this point
there is no way to set the value yet, and since 0 is treated as "no
limit", the functionality doesn't change at this point. Note however,
that mcast_hash_max violations already do trigger at this point.
- In patch #8, netlink plumbing is added: reading of number of entries, and
reading and writing of maximum.
The per-port values are passed through RTM_NEWLINK / RTM_GETLINK messages
in IFLA_BRPORT_MCAST_N_GROUPS and _MAX_GROUPS, inside IFLA_PROTINFO nest.
The per-port-vlan values are passed through RTM_GETVLAN / RTM_NEWVLAN
messages in BRIDGE_VLANDB_ENTRY_MCAST_N_GROUPS, _MAX_GROUPS, inside
BRIDGE_VLANDB_ENTRY.
The following patches deal with the selftest:
- Patches #9 and #10 clean up and move around some selftest code.
- Patches #11 to #14 add helpers and generalize the existing IGMP / MLD
support to allow generating packets with configurable group addresses and
varying source lists for (S,G) memberships.
- Patch #15 adds code to generate IGMP leave and MLD done packets.
- Patch #16 finally adds the selftest itself.
v3:
- Patch #7:
- Access mdb_max_/_n_entries through READ_/WRITE_ONCE
- Move extack setting to br_multicast_port_ngroups_inc_one().
Since we use NL_SET_ERR_MSG_FMT_MOD, the correct context
(port / port-vlan) can be passed through an argument.
This also removes the need for more READ/WRITE_ONCE's
at the extack-setting site.
- Patch #8:
- Move the br_multicast_port_ctx_vlan_disabled() check
out to the _vlan_ helpers callers. Thus these helpers
cannot fail, which makes them very similar to the
_port_ helpers. Have them take the MC context directly
and unify them.
v2:
- Cover letter:
- Add an example of a bpftrace-based probe script
- Patch #6:
- Report IPv4 as an IPv6-mapped address through the IPv6 buffer
as well, to save ring buffer space.
- Patch #7:
- In br_multicast_port_ngroups_inc_one(), bounce
if n>=max, not if n==max
- Adjust extack messages to mention ngroups, now
that the bounces appear when n>=max, not n==max
- In __br_multicast_enable_port_ctx(), do not reset
max to 0. Also do not count number of entries by
going through _inc, as that would end up incorrectly
bouncing the entries.
- Patch #8:
- Drop locks around accesses in
br_multicast_{port,vlan}_ngroups_{get,set_max}(),
- Drop bounces due to max<n in
br_multicast_{port,vlan}_ngroups_set_max().
- Patch #12:
- In the comment at payload_template_calc_checksum(),
s/%#02x/%02x/, that's the mausezahn payload format.
- Patch #16:
- Adjust the tests that check setting max below n and
reset of max on VLAN snooping enablement
- Make test naming uniform
- Enable testing of control path (IGMP/MLD) in
mcast_vlan_snooping bridge
- Reorganize the code so that test instances (per bridge
type and configuration type) always come right after
the test, in order of {d,q,qvs}{4,6}{cfg,ctl}.
Then groups of selftests are at the end of the file.
Similarly adjust invocation order of the tests.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:34 +0000 (18:59 +0100)]
selftests: forwarding: bridge_mdb_max: Add a new selftest
Add a suite covering mcast_n_groups and mcast_max_groups bridge features.
Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:33 +0000 (18:59 +0100)]
selftests: forwarding: lib: Add helpers to build IGMP/MLD leave packets
The testsuite that checks for mcast_max_groups functionality will need to
wipe the added groups as well. Add helpers to build IGMP or MLD packets
announcing that a host is leaving a given group.
Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:32 +0000 (18:59 +0100)]
selftests: forwarding: lib: Allow list of IPs for IGMPv3/MLDv2
The testsuite that checks for mcast_max_groups functionality will need
to generate IGMP and MLD packets with configurable number of (S,G)
addresses. To that end, further extend igmpv3_is_in_get() and
mldv2_is_in_get() to allow a list of IP addresses instead of one
address.
Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
In order to generate IGMPv3 and MLDv2 packets on the fly, the
functions that generate these packets need to be able to generate
packets for different groups and different sources. Generating MLDv2
packets further needs the source address of the packet for purposes of
checksum calculation. Add the necessary parameters, and generate the
payload accordingly by dispatching to helpers added in the previous
patches.
Adjust the sole client, bridge_mdb.sh, as well.
Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:30 +0000 (18:59 +0100)]
selftests: forwarding: lib: Add helpers for checksum handling
In order to generate IGMPv3 and MLDv2 packets on the fly, we will need
helpers to calculate the packet checksum.
The approach presented in this patch revolves around payload templates
for mausezahn. These are mausezahn-like payload strings (01:23:45:...)
with possibly one 2-byte sequence replaced with the word PAYLOAD. The
main function is payload_template_calc_checksum(), which calculates
RFC 1071 checksum of the message. There are further helpers to then
convert the checksum to the payload format, and to expand it.
For IPv6, MLDv2 message checksum is computed using a pseudoheader that
differs from the header used in the payload itself. The fact that the
two messages are different means that the checksum needs to be
returned as a separate quantity, instead of being expanded in-place in
the payload itself. Furthermore, the pseudoheader includes a length of
the message. Much like the checksum, this needs to be expanded in
mausezahn format. And likewise for number of addresses for (S,G)
entries. Thus we have several places where a computed quantity needs
to be presented in the payload format. Add a helper u16_to_bytes(),
which will be used in all these cases.
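For reference, the arithmetic involved is the plain RFC 1071 one's-complement
sum; a minimal C rendering of it is below (the selftest helpers implement the
same computation in shell over the mausezahn payload bytes):

    static u16 rfc1071_checksum(const u8 *buf, size_t len)
    {
            u32 sum = 0;
            size_t i;

            /* Sum 16-bit big-endian words, padding an odd trailing byte. */
            for (i = 0; i + 1 < len; i += 2)
                    sum += (buf[i] << 8) | buf[i + 1];
            if (len & 1)
                    sum += buf[len - 1] << 8;
            /* Fold the carries and return the one's complement. */
            while (sum >> 16)
                    sum = (sum & 0xffff) + (sum >> 16);
            return ~sum & 0xffff;
    }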
Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:29 +0000 (18:59 +0100)]
selftests: forwarding: lib: Add helpers for IP address handling
In order to generate IGMPv3 and MLDv2 packets on the fly, we will need
helpers to expand IPv4 and IPv6 addresses given as parameters in
mausezahn payload notation. Add helpers that do it.
Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:28 +0000 (18:59 +0100)]
selftests: forwarding: bridge_mdb: Fix a typo
Add the letter missing from the word "INCLUDE".
Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:27 +0000 (18:59 +0100)]
selftests: forwarding: Move IGMP- and MLD-related functions to lib
These functions will be helpful for other testsuites as well. Extract them
to a common place.
Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:26 +0000 (18:59 +0100)]
net: bridge: Add netlink knobs for number / maximum MDB entries
The previous patch added accounting for number of MDB entries per port and
per port-VLAN, and the logic to verify that these values stay within
configured bounds. However it didn't provide means to actually configure
those bounds or read the occupancy. This patch does that.
Two new netlink attributes are added for the MDB occupancy:
IFLA_BRPORT_MCAST_N_GROUPS for the per-port occupancy and
BRIDGE_VLANDB_ENTRY_MCAST_N_GROUPS for the per-port-VLAN occupancy.
And another two for the maximum number of MDB entries:
IFLA_BRPORT_MCAST_MAX_GROUPS for the per-port maximum, and
BRIDGE_VLANDB_ENTRY_MCAST_MAX_GROUPS for the per-port-VLAN one.
Note that the two new IFLA_BRPORT_ attributes prompt bumping of
RTNL_SLAVE_MAX_TYPE to size the slave attribute tables large enough.
The new attributes are used like this:
# ip link add name br up type bridge vlan_filtering 1 mcast_snooping 1 \
mcast_vlan_snooping 1 mcast_querier 1
# ip link set dev v1 master br
# bridge vlan add dev v1 vid 2
# bridge vlan set dev v1 vid 1 mcast_max_groups 1
# bridge mdb add dev br port v1 grp 230.1.2.3 temp vid 1
# bridge mdb add dev br port v1 grp 230.1.2.4 temp vid 1
Error: bridge: Port-VLAN is already in 1 groups, and mcast_max_groups=1.
# bridge link set dev v1 mcast_max_groups 1
# bridge mdb add dev br port v1 grp 230.1.2.3 temp vid 2
Error: bridge: Port is already in 1 groups, and mcast_max_groups=1.
# bridge -d link show
5: v1@v2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br [...]
[...] mcast_n_groups 1 mcast_max_groups 1
Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:25 +0000 (18:59 +0100)]
net: bridge: Maintain number of MDB entries in net_bridge_mcast_port
The MDB maintained by the bridge is limited. When the bridge is configured
for IGMP / MLD snooping, a buggy or malicious client can easily exhaust its
capacity. In SW datapath, the capacity is configurable through the
IFLA_BR_MCAST_HASH_MAX parameter, but ultimately is finite. Obviously a
similar limit exists in the HW datapath for purposes of offloading.
In order to prevent the issue of unilateral exhaustion of MDB resources,
introduce two parameters in each of two contexts:
- Per-port and per-port-VLAN number of MDB entries that the port
is member in.
- Per-port and (when BROPT_MCAST_VLAN_SNOOPING_ENABLED is enabled)
per-port-VLAN maximum permitted number of MDB entries, or 0 for
no limit.
The per-port multicast context is used for tracking of MDB entries for the
port as a whole. This is available for all bridges.
The per-port-VLAN multicast context is then only available on
VLAN-filtering bridges on VLANs that have multicast snooping on.
With these changes in place, it will be possible to configure MDB limit for
bridge as a whole, or any one port as a whole, or any single port-VLAN.
Note that unlike the global limit, exhaustion of the per-port and
per-port-VLAN maximums does not cause disablement of multicast snooping.
It is also permitted to configure the local limit larger than hash_max,
even though that is not useful.
In this patch, introduce only the accounting for number of entries, and the
max field itself, but not the means to toggle the max. The next patch
introduces the netlink APIs to toggle and read the values.
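A hedged sketch of the accounting described above (the field and helper names
follow the cover letter, and the error code is illustrative; this is not the
exact upstream code):

    static int br_multicast_port_ngroups_inc_one(struct net_bridge_mcast_port *pmctx,
                                                 struct netlink_ext_ack *extack,
                                                 const char *what)
    {
            u32 max = READ_ONCE(pmctx->mdb_max_entries);
            u32 n = READ_ONCE(pmctx->mdb_n_entries);

            /* max == 0 means no limit; otherwise bounce when n >= max. */
            if (max && n >= max) {
                    NL_SET_ERR_MSG_FMT_MOD(extack,
                                           "%s is already in %u groups, and mcast_max_groups=%u",
                                           what, n, max);
                    return -E2BIG;
            }

            WRITE_ONCE(pmctx->mdb_n_entries, n + 1);
            return 0;
    }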
Signed-off-by: Petr Machata <petrm@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:24 +0000 (18:59 +0100)]
net: bridge: Add a tracepoint for MDB overflows
The following patch will add two more maximum MDB allowances to the global
one, mcast_hash_max, that exists today. In all these cases, attempts to add
MDB entries above the configured maximums through netlink fail noisily and
obviously. Such visibility is missing when entries are added through
control plane traffic, i.e. IGMP or MLD packets.
To improve visibility in those cases, add a trace point that reports the
violation, including the relevant netdevice (be it a slave or the bridge
itself), and the MDB entry parameters:
# perf record -e bridge:br_mdb_full &
# [...]
# perf script | cut -d: -f4-
dev v2 af 2 src ::ffff:0.0.0.0 grp ::ffff:239.1.1.112/00:00:00:00:00:00 vid 0
dev v2 af 10 src :: grp ff0e::112/00:00:00:00:00:00 vid 0
dev v2 af 2 src ::ffff:0.0.0.0 grp ::ffff:239.1.1.112/00:00:00:00:00:00 vid 10
dev v2 af 10 src 2001:db8:1::1 grp ff0e::1/00:00:00:00:00:00 vid 10
dev v2 af 2 src ::ffff:192.0.2.1 grp ::ffff:239.1.1.1/00:00:00:00:00:00 vid 10
CC: Steven Rostedt <rostedt@goodmis.org> CC: linux-trace-kernel@vger.kernel.org Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:23 +0000 (18:59 +0100)]
net: bridge: Change a cleanup in br_multicast_new_port_group() to goto
This function will have more to clean up in the following patches.
Structuring the cleanups in one labeled block will allow reusing the same
cleanup from several places.
Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:22 +0000 (18:59 +0100)]
net: bridge: Add br_multicast_del_port_group()
Since cleaning up the effects of br_multicast_new_port_group() just
consists of delisting and freeing the memory, the function
br_mdb_add_group_star_g() inlines the corresponding code. In the following
patches, number of per-port and per-port-VLAN MDB entries is going to be
maintained, and that counter will have to be updated. Because that logic
is going to be hidden in the br_multicast module, introduce a new hook
intended to again remove a newly-created group.
Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:21 +0000 (18:59 +0100)]
net: bridge: Move extack-setting to br_multicast_new_port_group()
Now that br_multicast_new_port_group() takes an extack argument, move
setting the extack there. The downside is that the error messages end
up being less specific (the function cannot distinguish between (S,G)
and (*,G) groups). However, the alternative is to check in the caller
whether the callee set the extack, and if it didn't, set it. But that
is only done when the callee is not exactly known. (E.g. in case of a
notifier invocation.)
Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:20 +0000 (18:59 +0100)]
net: bridge: Add extack to br_multicast_new_port_group()
Make it possible to set an extack in br_multicast_new_port_group().
Eventually, this function will check for per-port and per-port-vlan
MDB maximums, and will use the extack to communicate the reason for
the bounce.
Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Petr Machata [Thu, 2 Feb 2023 17:59:19 +0000 (18:59 +0100)]
net: bridge: Set strict_start_type at two policies
Make any attributes newly-added to br_port_policy or vlan_tunnel_policy
parsed strictly, to prevent userspace from passing garbage. Note that this
patchset only touches the former policy. The latter was adjusted for
completeness' sake. There do not appear to be other _deprecated calls
with non-NULL policies.
Suggested-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 6 Feb 2023 08:26:26 +0000 (08:26 +0000)]
Merge branch 'sparx5-PSFP-support'
Daniel Machon says:
====================
net: Add support for PSFP in Sparx5
================================================================================
Add support for Per-Stream Filtering and Policing (802.1Q-2018, 8.6.5.1).
================================================================================
The VCAP CLM (VCAP IS0 ingress classifier) classifies streams,
identified by ISDX (Ingress Service Index, frame metadata), and maps
ISDX to streams.
Flow meters are also classified by ISDX, and implemented using service
policers (Service Dual Leaky Buckets, SDLB). Leaky buckets are linked
together in a leak chain of a leak group. Leak groups are preconfigured to serve
buckets within a certain rate interval.
Stream gates are time-based policers used by PSFP. Frames are dropped
based on the gate state (OPEN/ CLOSE), whose state will be altered based
on the Gate Control List (GCL) and current PTP time. Apart from
time-based policing, stream gates can alter egress queue selection for
the frames that pass through the Gate. This is done through Internal
Priority Selector (IPS). Stream gates are mapped from stream filters.
Support for the tc actions gate and police has been added to the VCAP IS0 set of
supported actions.
================================================================================
Patches
================================================================================
Patch #1: Adds new register needed for PSFP.
Patch #2: Adds resource pools to control PSFP needed chip resources.
Patch #3: Adds support for SDLB's needed for flow-meters.
Patch #4: Adds support for service policers.
Patch #5: Adds support for PSFP flow-meters, using service policers.
Patch #6: Adds a new function to calculate basetime, required by flow-meters.
Patch #7: Adds support for PSFP stream gates.
Patch #8: Adds support for PSFP stream filters.
Patch #9: Adds a function to initialize flow-meters, stream gates and stream
filters.
Patch #10: Adds the required flower code to configure PSFP using the tc command.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Machon [Thu, 2 Feb 2023 10:43:55 +0000 (11:43 +0100)]
sparx5: add support for configuring PSFP via tc
Add support for tc actions gate and police, in order to implement
support for configuring PSFP through tc.
Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Machon [Thu, 2 Feb 2023 10:43:54 +0000 (11:43 +0100)]
net: microchip: sparx5: initialize PSFP
Initialize the SDLB's, stream gates and stream filters.
Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Machon [Thu, 2 Feb 2023 10:43:53 +0000 (11:43 +0100)]
net: microchip: sparx5: add support for PSFP stream filters
Add support for configuring PSFP stream filters (IEEE 802.1Q-2018,
8.6.5.1.1).
The VCAP CLM (VCAP IS0 ingress classifier) classifies streams,
identified by ISDX (Ingress Service Index, frame metadata), and maps
ISDX to streams.
Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Machon [Thu, 2 Feb 2023 10:43:52 +0000 (11:43 +0100)]
net: microchip: sparx5: add support for PSFP stream gates
Add support for configuring PSFP stream gates (IEEE 802.1Q-2018,
8.6.5.1.2).
Stream gates are time-based policers used by PSFP. Frames are dropped
based on the gate state (OPEN/ CLOSE), whose state will be altered based
on the Gate Control List (GCL) and current PTP time. Apart from
time-based policing, stream gates can alter egress queue selection for
the frames that pass through the Gate. This is done through Internal
Priority Selector (IPS). Stream gates are mapped from stream filters.
Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Machon [Thu, 2 Feb 2023 10:43:51 +0000 (11:43 +0100)]
net: microchip: sparx5: add function for calculating PTP basetime
Add a new function for calculating PTP basetime, required by the stream
gate scheduler to calculate gate state (open / close).
Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Machon [Thu, 2 Feb 2023 10:43:50 +0000 (11:43 +0100)]
net: microchip: sparx5: add support for PSFP flow-meters
Add support for configuring PSFP flow-meters (IEEE 802.1Q-2018,
8.6.5.1.3).
The VCAP CLM (VCAP IS0 ingress classifier) classifies streams,
identified by ISDX (Ingress Service Index, frame metadata), and maps
ISDX to flow-meters. SDLB's provide the flow-meter parameters.
Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Machon [Thu, 2 Feb 2023 10:43:49 +0000 (11:43 +0100)]
net: microchip: sparx5: add support for service policers
Add an initial API for configuring policers. This patch adds support for
service policers.
Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Machon [Thu, 2 Feb 2023 10:43:48 +0000 (11:43 +0100)]
net: microchip: sparx5: add support for Service Dual Leaky Buckets
Add support for Service Dual Leaky Buckets (SDLB), used to implement
PSFP flow-meters. Buckets are linked together in a leak chain of a leak
group. Leak groups are preconfigured to serve buckets within a certain
rate interval.
Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Machon [Thu, 2 Feb 2023 10:43:47 +0000 (11:43 +0100)]
net: microchip: sparx5: add resource pools
Add resource pools and accessor functions. These pools can be queried by
the driver, whenever a finite resource is required. Some resources can
be reused, in which case an index and a reference count is used to keep
track of users.
Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Machon [Thu, 2 Feb 2023 10:43:46 +0000 (11:43 +0100)]
net: microchip: add registers needed for PSFP
Add registers needed for PSFP. This patch also renames a single
register, shortening its name (SYS_CLK_PER_100PS). Uses have been updated
accordingly.
Signed-off-by: Daniel Machon <daniel.machon@microchip.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
If an SQ is deactivated and reactivated again, some packets could be
sent after MLX5E_SQ_STATE_ENABLED is cleared, but before
netif_tx_stop_queue, meaning that NAPI might miss some completions. In
order to handle them, make sure to trigger NAPI after SQ activation in
all cases where it can be relevant. Regular SQs, XDP SQs and XSK SQs are
good. Missing cases added: after recovery, after activating HTB SQs and
after activating PTP SQs.
Raed Salem [Wed, 11 Jan 2023 12:58:19 +0000 (14:58 +0200)]
net/mlx5e: IPsec, support upper protocol selector field offload
Add support for offloading the policy/state upper protocol selector field.
This enables selecting traffic for IPsec operation based on the L4
protocol (TCP/UDP) with a specific source/destination port.
Signed-off-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Jack Morgenstein [Wed, 18 Jan 2023 17:57:04 +0000 (19:57 +0200)]
net/mlx5: Enhance debug print in page allocation failure
Provide more details to aid debugging.
Fixes: bf0bf77f6519 ("mlx5: Support communicating arbitrary host page size to firmware") Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com> Signed-off-by: Majd Dibbiny <majd@nvidia.com> Signed-off-by: Jack Morgenstein <jackm@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Rahul Rameshbabu [Tue, 22 Nov 2022 01:16:38 +0000 (17:16 -0800)]
net/mlx5: Add firmware support for MTUTC scaled_ppm frequency adjustments
When device is capable of handling scaled ppm values for adjusting
frequency, conversion to ppb will not be done by the driver. Instead, the
scaled ppm value will be passed directly to the device for the frequency
adjustment operation.
Rahul Rameshbabu [Wed, 12 Oct 2022 00:25:28 +0000 (17:25 -0700)]
net/mlx5: Separate mlx5 driver documentation into multiple pages
The mlx5 device driver documentation page has grown in size and should be
split into multiple subpages. This change also contains a table of contents
for these new subpages.
David S. Miller [Sat, 4 Feb 2023 09:48:19 +0000 (09:48 +0000)]
Merge branch 'net-smc-parallelism'
D. Wythe says:
====================
net/smc: optimize the parallelism of SMC-R connections
This patch set attempts to optimize the parallelism of SMC-R connections,
mainly to reduce unnecessary blocking on locks, and to fix exceptions that
occur after those optimizations.
According to the Off-CPU graph of the SMC workers, an ideal SMC-R connection
process should only block on the IO events
of the network, but it's quite clear that the SMC-R connection now is
queued on the lock most of the time.
The goal of this patchset is to achieve our ideal situation where
network IO events are blocked for the majority of the connection lifetime.
We can see that most of the waiting time is spent waiting for network IO
events. This also gives a certain performance improvement in our
short-lived connection wrk/nginx benchmark test.
The reason why the benefit is not as obvious after the number of connections
has increased is the workqueue. If we try to change the workqueue to UNBOUND,
we can obtain at least a 4-5 times performance improvement, reaching up to half
of TCP. However, this is not an elegant solution, and its optimization
will be much more complicated. But in any case, we will submit relevant
optimization patches as soon as possible.
Please note that the premise here is that the lock related problem
must be solved first, otherwise, no matter how we optimize the workqueue,
there won't be much improvement.
Because there are a lot of related changes to the code, if you have
any questions or suggestions, please let me know.
Thanks
D. Wythe
v1 -> v2:
1. Fix panic in SMC-D scenario
2. Fix lnkc related hashfn calculation exception, caused by operator
priority
3. Only wake up one connection if the lnk is not active
4. Delete obsolete unlock logic in smc_listen_work()
5. PATCH format, do Reverse Christmas tree
6. PATCH format, change all xxx_lnk_xxx function to xxx_link_xxx
7. PATCH format, add correct fix tag for the patches for fixes.
8. PATCH format, fix some spelling error
9. PATCH format, rename slow to do_slow
v2 -> v3:
1. add SMC-D support, remove the concept of link cluster since SMC-D has
no link at all. Replace it by lgr decision maker, who provides suggestions
to SMC-D and SMC-R on whether to create new link group.
2. Fix the corruption problem described by PATCH 'fix application
data exception' on SMC-D.
v3 -> v4:
1. Fix panic caused by uninitialization map.
v4 -> v5:
1. Make SMC-D buf creation be serial to avoid Potential error
2. Add a flag to synchronize the success of the first contact
with the ready of the link group, including SMC-D and SMC-R.
3. Fixed possible reference count leak in smc_llc_flow_start().
4. reorder the patch, make bugfix PATCH be ahead.
v5 -> v6:
1. Separate the bugfix patches to make it independent.
2. Merge patch 'fix SMC_CLC_DECL_ERR_REGRMB without smc_server_lgr_pending'
with patch 'remove locks smc_client_lgr_pending and smc_server_lgr_pending'
3. Format code styles, including alignment and reverse christmas tree
style.
4. Fix a possible memory leak in smc_llc_rmt_delete_rkey()
and smc_llc_rmt_conf_rkey().
v6 -> v7:
1. Discard patch attempting to remove global locks
2. Discard patch attempting make confirm/delete rkey process concurrently
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
D. Wythe [Thu, 2 Feb 2023 08:26:42 +0000 (16:26 +0800)]
net/smc: replace mutex rmbs_lock and sndbufs_lock with rw_semaphore
It's clear that rmbs_lock and sndbufs_lock aim to protect the
rmbs list and the sndbufs list, respectively.
During connection establishment, smc_buf_get_slot() will always
be invoked, and it only performs read semantics on the rmbs list and
the sndbufs list.
Based on the above considerations, we replace the mutexes with rw_semaphores.
Only smc_buf_get_slot() uses down_read(), to allow smc_buf_get_slot() to
run concurrently; other parts use down_write() to keep exclusive
semantics.
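A hedged illustration of the pattern (lock and function names follow the commit
text; the surrounding code is simplified and not the exact upstream diff):

    /* Read-mostly path: smc_buf_get_slot() only scans the buffer list. */
    down_read(&lgr->rmbs_lock);             /* was: mutex_lock() */
    list_for_each_entry(buf_slot, buf_list, list) {
            if (cmpxchg(&buf_slot->used, 0, 1) == 0) {
                    up_read(&lgr->rmbs_lock);
                    return buf_slot;        /* reuse an existing free slot */
            }
    }
    up_read(&lgr->rmbs_lock);
    return NULL;

    /* Paths that add or remove buffers keep exclusive semantics. */
    down_write(&lgr->rmbs_lock);
    list_add(&buf_desc->list, buf_list);
    up_write(&lgr->rmbs_lock);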
Signed-off-by: D. Wythe <alibuda@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
D. Wythe [Thu, 2 Feb 2023 08:26:41 +0000 (16:26 +0800)]
net/smc: reduce unnecessary blocking in smcr_lgr_reg_rmbs()
Unlike smc_buf_create() and smcr_buf_unuse(), smcr_lgr_reg_rmbs() is
exclusive when the assigned rmb_desc was not registered, although it can be
executed in parallel when the assigned rmb_desc was registered already
and only performs read semantics on it. Hence, we can not simply replace
it with a read semaphore.
The idea here is that if the assigned rmb_desc was registered already,
use the read semaphore to protect the critical section; if the assigned
rmb_desc was not registered, keep using the write semaphore
to keep its exclusivity.
Thanks to the reusable nature of rmb_desc, this allows us to execute
in parallel in most cases.
Signed-off-by: D. Wythe <alibuda@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
We can clearly see that during connection establishment,
the waiting time of connections is not on IO, but on llc_conf_mutex.
What is more important, the core critical area (smcr_buf_unuse() &
smc_buf_create()) only performs read semantics on links, so we can
easily replace it with a read semaphore.
Signed-off-by: D. Wythe <alibuda@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
D. Wythe [Thu, 2 Feb 2023 08:26:39 +0000 (16:26 +0800)]
net/smc: llc_conf_mutex refactor, replace it with rw_semaphore
llc_conf_mutex was used to protect links and link-related configurations
in the same link group, for example, adding or deleting links. However,
in most cases, the protected critical area has only read semantics and
no write semantics at all, such as obtaining a usable link or an
available rmb_desc.
This patch does simple code refactoring, replacing the mutex with an
rw_semaphore, mutex_lock with down_write and mutex_unlock with
up_write.
Theoretically, this replacement is equivalent, but after this patch,
we can distinguish lock granularity according to the different semantics
of critical areas.
Signed-off-by: D. Wythe <alibuda@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Sat, 4 Feb 2023 04:05:59 +0000 (20:05 -0800)]
Merge branch 'updates-to-enetc-txq-management'
Vladimir Oltean says:
====================
Updates to ENETC TXQ management
The set ensures that the number of TXQs given by enetc to the network
stack (mqprio or TX hashing) + the number of TXQs given to XDP never
exceeds the number of available TXQs.
These are the first 4 patches of series "[v5,net-next,00/17] ENETC
mqprio/taprio cleanup" from here:
https://patchwork.kernel.org/project/netdevbpf/cover/20230202003621.2679603-1-vladimir.oltean@nxp.com/
There is no change in this version compared to there. I split them off
because this contains a fix for net-next and it would be good if it
could go in quickly. I also did it to reduce the patch count of that
other series, if I need to respin it again.
====================
Vladimir Oltean [Fri, 3 Feb 2023 00:11:16 +0000 (02:11 +0200)]
net: enetc: ensure we always have a minimum number of TXQs for stack
Currently it can happen that an mqprio qdisc is installed with num_tc 8,
and this will reserve 8 (out of 8) TXQs for the network stack. Then we
can attach an XDP program, and this will crop 2 TXQs, leaving just 6 for
mqprio. That's not what the user requested, and we should fail it.
On the other hand, if mqprio isn't requested, we still give the 8 TXQs
to the network stack (with hashing among a single traffic class), but
then, cropping 2 TXQs for XDP is fine, because the user didn't
explicitly ask for any number of TXQs, so no expectations are violated.
Simply put, the logic that mqprio should impose a minimum number of TXQs
for the network never existed. Let's say (more or less arbitrarily) that
without mqprio, the driver expects a minimum number of TXQs equal to the
number of CPUs (on NXP LS1028A, that is either 1, or 2). And with mqprio,
mqprio gives the minimum required number of TXQs.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Vladimir Oltean [Fri, 3 Feb 2023 00:11:15 +0000 (02:11 +0200)]
net: enetc: recalculate num_real_tx_queues when XDP program attaches
Since the blamed net-next commit, enetc_setup_xdp_prog() no longer goes
through enetc_open(), and therefore, the function which was supposed to
detect whether a BPF program exists (in order to crop some TX queues
from network stack usage), enetc_num_stack_tx_queues(), no longer gets
called.
We can move the netif_set_real_num_rx_queues() call to enetc_alloc_msix()
(probe time), since it is a runtime invariant. We can do the same thing
with netif_set_real_num_tx_queues(), and let enetc_reconfigure_xdp_cb()
explicitly recalculate and change the number of stack TX queues.
Fixes: c33bfaf91c4c ("net: enetc: set up XDP program under enetc_reconfigure()") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Vladimir Oltean [Fri, 3 Feb 2023 00:11:14 +0000 (02:11 +0200)]
net: enetc: allow the enetc_reconfigure() callback to fail
enetc_reconfigure() was modified in commit c33bfaf91c4c ("net: enetc:
set up XDP program under enetc_reconfigure()") to take an optional
callback that runs while the netdev is down, but this callback currently
cannot fail.
Code up the error handling so that the interface is restarted with the
old resources if the callback fails.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Vladimir Oltean [Fri, 3 Feb 2023 00:11:13 +0000 (02:11 +0200)]
net: enetc: simplify enetc_num_stack_tx_queues()
We keep a pointer to the xdp_prog in the private netdev structure as
well; what's replicated per RX ring is done so just for more convenient
access from the NAPI poll procedure.
Simplify enetc_num_stack_tx_queues() by looking at priv->xdp_prog rather
than iterating through the information replicated per RX ring.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
devlink: Move devlink dev code to a separate file
This patchset is moving code from the file leftover.c to new file dev.c.
About 1.3K lines are moved by this patchset covering most of the devlink
dev object callbacks and functionality: reload, eswitch, info, flash and
selftest.
====================
Moshe Shemesh [Thu, 2 Feb 2023 14:47:03 +0000 (16:47 +0200)]
devlink: Move devlink dev info code to dev
Move devlink dev info callbacks, related drivers helpers functions and
other related code from leftover.c to dev.c. No functional change in
this patch.
Moshe Shemesh [Thu, 2 Feb 2023 14:47:00 +0000 (16:47 +0200)]
devlink: Split out dev get and dump code
Move devlink dev get and dump callbacks and related dev code to new file
dev.c. This file shall include all callbacks that are specific to the
devlink dev object.
Vladimir Oltean [Thu, 2 Feb 2023 14:03:54 +0000 (16:03 +0200)]
net: dsa: use NL_SET_ERR_MSG_WEAK_MOD() more consistently
Now that commit 028fb19c6ba7 ("netlink: provide an ability to set
default extack message") provides a weak function that doesn't override
an existing extack message provided by the driver, it makes sense to use
it also for LAG and HSR offloading, not just for bridge offloading.
Also consistently put the message string on a separate line, to reduce
line length from 92 to 84 characters.
David S. Miller [Fri, 3 Feb 2023 09:34:51 +0000 (09:34 +0000)]
Merge branch 'yt8531-support'
Frank Sae says:
====================
net: add dts for yt8521 and yt8531s, add driver for yt8531
Add dts for yt8521 and yt8531s, add driver for yt8531.
These patches have been verified on our AM335x platform (motherboard)
which has one integrated yt8521 and one RGMII interface.
It can connect to daughter boards like yt8531s or yt8531 board.
v5:
- change the compatible of yaml
- change the maintainers of yaml from "frank sae" to "Frank Sae"
v4:
- change default tx delay from 150ps to 1950ps
- add compatible for yaml
v3:
- change default rx delay from 1900ps to 1950ps
- moved ytphy_rgmii_clk_delay_config_with_lock from yt8521's patch to yt8531's patch
- removed unnecessary checks of phydev->attached_dev->dev_addr
v2:
- split BIT macro as one patch
- split "dts for yt8521/yt8531s ... " patch as two patches
- use standard rx-internal-delay-ps and tx-internal-delay-ps, removed motorcomm,sds-tx-amplitude
- removed ytphy_parse_dt, ytphy_probe_helper and ytphy_config_init_helper
- not store dts arg to yt8521_priv
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Frank Sae [Thu, 2 Feb 2023 03:00:37 +0000 (11:00 +0800)]
net: phy: Add driver for Motorcomm yt8531 gigabit ethernet phy
Add a driver for the motorcomm yt8531 gigabit ethernet phy. We have
verified the driver on AM335x platform with yt8531 board. On the
board, yt8531 gigabit ethernet phy works in utp mode, RGMII
interface, supports 1000M/100M/10M speeds, and WOL (magic packet).
Signed-off-by: Frank Sae <Frank.Sae@motor-comm.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Frank Sae [Thu, 2 Feb 2023 03:00:36 +0000 (11:00 +0800)]
net: phy: Add dts support for Motorcomm yt8531s gigabit ethernet phy
Add dts support for Motorcomm yt8531s gigabit ethernet phy.
Change yt8521_probe to support the clk config of yt8531s. Because
yt8521_probe already does what yt8531s needs, the yt8531s-specific
function has been removed.
This patch has been verified on AM335x platform with yt8531s board.
Signed-off-by: Frank Sae <Frank.Sae@motor-comm.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Frank Sae [Thu, 2 Feb 2023 03:00:35 +0000 (11:00 +0800)]
net: phy: Add dts support for Motorcomm yt8521 gigabit ethernet phy
Add dts support for Motorcomm yt8521 gigabit ethernet phy.
Add the ytphy_rgmii_clk_delay_config function to support dts config for
the delay of the rgmii clk. This function is common to yt8521, yt8531s
and yt8531.
This patch has been verified on AM335x platform.
Signed-off-by: Frank Sae <Frank.Sae@motor-comm.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Frank Sae [Thu, 2 Feb 2023 03:00:34 +0000 (11:00 +0800)]
net: phy: Add BIT macro for Motorcomm yt8521/yt8531 gigabit ethernet phy
Add BIT macro for Motorcomm yt8521/yt8531 gigabit ethernet phy.
This is a preparatory patch. Add BIT macro for 0xA012 reg, and
supplement for 0xA001 and 0xA003 reg. These will be used to support dts.
Signed-off-by: Frank Sae <Frank.Sae@motor-comm.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 3 Feb 2023 09:31:25 +0000 (09:31 +0000)]
Merge branch 'act_ct-UDP-NEW'
Vlad Buslov says:
====================
net: Allow offloading of UDP NEW connections via act_ct
Currently only bidirectional established connections can be offloaded
via act_ct. Such approach allows to hardcode a lot of assumptions into
act_ct, flow_table and flow_offload intermediate layer codes. In order
to enabled offloading of unidirectional UDP NEW connections start with
incrementally changing the following assumptions:
- Drivers assume that only established connections are offloaded and
don't support updating existing connections. Extract ctinfo from meta
action cookie and refuse offloading of new connections in the drivers.
- Fix flow_table offload fixup algorithm to calculate flow timeout
according to current connection state instead of hardcoded
"established" value.
- Add new flow_table flow flag that designates bidirectional connections
instead of assuming it and hardcoding hardware offload of every flow
in both directions.
- Add new flow_table flow flag that designates connections that are
offloaded to hardware as "established" instead of assuming it. This
allows some optimizations in act_ct and prevents spamming the
flow_table workqueue with redundant tasks.
With all the necessary infrastructure in place modify act_ct to offload
UDP NEW as unidirectional connection. Pass reply direction traffic to CT
and promote connection to bidirectional when UDP connection state
changes to "assured". Rely on refresh mechanism to propagate connection
state change to supporting drivers.
Note that the early drop algorithm, which is designed to free up some space in
the connection tracking table when it becomes full (by randomly deleting up
to 5% of non-established connections), currently ignores connections
marked as "offloaded". Now, with UDP NEW connections becoming
"offloaded", a malicious user could perform a DoS attack by
filling the table with non-droppable UDP NEW connections by sending just
one packet in a single direction. To prevent such a scenario, change the early
drop algorithm to also consider "offloaded" connections for deletion.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Wed, 1 Feb 2023 16:31:00 +0000 (17:31 +0100)]
netfilter: nf_conntrack: allow early drop of offloaded UDP conns
Both the synchronous early drop algorithm and the asynchronous gc worker
completely ignore connections with the IPS_OFFLOAD_BIT status bit set. With the
new functionality that enables UDP NEW connection offload in action CT, a
malicious user can flood the conntrack table with offloaded UDP connections
by just sending a single packet per 5-tuple, because such connections can no
longer be deleted by the early drop algorithm.
To mitigate the issue, allow both early drop and gc to consider offloaded
UDP connections for deletion.
Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Wed, 1 Feb 2023 16:30:59 +0000 (17:30 +0100)]
net/sched: act_ct: offload UDP NEW connections
Modify the offload algorithm of UDP connections to the following:
- Offload NEW connection as unidirectional.
- When the connection state changes to ESTABLISHED, also update the hardware
flow. However, in order to prevent act_ct from spamming the offload add wq, for
every packet coming in the reply direction in this state verify whether the
connection has already been updated to ESTABLISHED in the drivers. If that
is the case, then skip the flow_table and let conntrack handle such packets,
which will also allow conntrack to potentially promote the connection to
ASSURED.
- When the connection state changes to ASSURED, set the flow_table flow
NF_FLOW_HW_BIDIRECTIONAL flag, which will cause the refresh mechanism to offload
the reply direction.
All other protocols have their offload algorithm preserved and are always
offloaded as bidirectional.
Note that this change tries to minimize the load on the flow_table add
workqueue. First, it tracks the last ctinfo that was offloaded by using the new
'NF_FLOW_HW_ESTABLISHED' flow flag and doesn't schedule the refresh for
reply direction packets when the offloads have already been updated with the
current ctinfo. Second, when the 'add' task executes on the workqueue it always
updates the offload with the current flow state (by checking the 'bidirectional'
flow flag and obtaining the actual ctinfo/cookie through the meta action instead
of caching any of these from the moment of scheduling the 'add' work),
preventing the need to schedule more updates if the state changed
concurrently while the 'add' work was pending on the workqueue.
Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Wed, 1 Feb 2023 16:30:58 +0000 (17:30 +0100)]
net/sched: act_ct: set ctinfo in meta action depending on ct state
Currently the tcf_ct_flow_table_fill_actions() function assumes that only
established connections can be offloaded and always sets ctinfo to either
IP_CT_ESTABLISHED or IP_CT_ESTABLISHED_REPLY strictly based on direction
without checking actual connection state. To enable UDP NEW connection
offload set the ctinfo, metadata cookie and NF_FLOW_HW_ESTABLISHED
flow_offload flags bit based on ct->status value.
Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Wed, 1 Feb 2023 16:30:57 +0000 (17:30 +0100)]
netfilter: flowtable: cache info of last offload
Modify flow table offload to cache the last ct info status that was passed
to the driver offload callbacks by extending enum nf_flow_flags with new
"NF_FLOW_HW_ESTABLISHED" flag. Set the flag if ctinfo was 'established'
during last act_ct meta actions fill call. This infrastructure change is
necessary to optimize promoting of UDP connections from 'new' to
'established' in following patches in this series.
Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Wed, 1 Feb 2023 16:30:56 +0000 (17:30 +0100)]
netfilter: flowtable: allow unidirectional rules
Modify flow table offload to support unidirectional connections by
extending enum nf_flow_flags with new "NF_FLOW_HW_BIDIRECTIONAL" flag. Only
offload reply direction when the flag is set. This infrastructure change is
necessary to support offloading UDP NEW connections in original direction
in following patches in series.
Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Wed, 1 Feb 2023 16:30:55 +0000 (17:30 +0100)]
netfilter: flowtable: fixup UDP timeout depending on ct state
Currently the flow_offload_fixup_ct() function assumes that only replied UDP
connections can be offloaded and hardcodes the UDP_CT_REPLIED timeout value. To
enable UDP NEW connection offload in the following patches, extract the actual
connection state from ct->status and set the timeout according to it.
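The state-dependent timeout selection can be sketched roughly like this (hedged;
'tn' is assumed to be the per-netns UDP timeout table and the exact status bit
used upstream may differ):

    /* Pick the UDP timeout according to the current conntrack state instead
     * of hardcoding UDP_CT_REPLIED. */
    if (test_bit(IPS_SEEN_REPLY_BIT, &ct->status))
            timeout = tn->timeouts[UDP_CT_REPLIED];
    else
            timeout = tn->timeouts[UDP_CT_UNREPLIED];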
Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Wed, 1 Feb 2023 16:30:54 +0000 (17:30 +0100)]
net: flow_offload: provision conntrack info in ct_metadata
In order to offload connections in other states besides "established" the
driver offload callbacks need to have access to connection conntrack info.
Flow offload intermediate representation data structure already contains
that data encoded in 'cookie' field, so just reuse it in the drivers.
Reject offloading IP_CT_NEW connections for now by returning an error in
relevant driver callbacks based on value of ctinfo. Support for offloading
such connections will need to be added to the drivers afterwards.
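A hedged sketch of the driver-side check described here (generic; the call
sites and variable names in the individual drivers differ):

    /* The act_ct metadata cookie carries the conntrack pointer with the
     * ctinfo packed into its low bits; mask it out and reject NEW for now. */
    enum ip_conntrack_info ctinfo = meta_action->ct_metadata.cookie &
                                    NFCT_INFOMASK;

    if (ctinfo == IP_CT_NEW)
            return -EOPNOTSUPP;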
Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Horatiu Vultur [Thu, 2 Feb 2023 14:53:37 +0000 (15:53 +0100)]
net: lan966x: Add VCAP debugFS support
Enable debugfs for the VCAP for lan966x. This will allow printing all the
entries in the VCAP and also the port information regarding which keys
are configured.
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 3 Feb 2023 09:19:41 +0000 (09:19 +0000)]
Merge branch 'rswitch-SERDES-PHY-init'
Yoshihiro Shimoda says:
====================
net: renesas: rswitch: Modify initialization for SERDES and PHY
- My platform has the 88x2110.
- The MACTYPE setting of the strap pin on the platform is SXGMII.
- However, we realized that the SoC cannot communicate with the PHY using SXGMII
because of a mismatching hardware specification.
- We have a lot of boards which mismatch the MACTYPE setting.
So, I would like to change the MACTYPE to SGMII by software for the platform.
The patch [1/5] sets phydev->host_interfaces by phylink for Marvell PHY
driver (marvell10g) to initialize the MACTYPE.
- The patch [1/5] simplifies the rswitch driver.
- The patch [2/5] converts to phy_device from phylink.
- The patch [3/5] sets phydev->host_interfaces from this driver without
any new functions of phylib.
- The patch [4/5] adds phy_power_on() calling to initialize the Ethernet
SERDES PHY driver (r8a779f0-eth-serdes) for each channel.
- The patch [5/5] adds "max-speed" handling.
Changes from v4:
https://lore.kernel.org/all/20230127142621.1761278-1-yoshihiro.shimoda.uh@renesas.com/
- No modification of phylink API.
- Convert to phylib instead of phylink.
- Add "max-speed" handling.
Changes from v3:
https://lore.kernel.org/all/20230127014812.1656340-1-yoshihiro.shimoda.uh@renesas.com/
- Keep a pointer of "port" and more simplify the code.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The previous code set the speed based on the interface mode of the PHY.
Also, this hardware has a restriction that the speed cannot be changed
at runtime. To use another speed, add "max-speed" handling to set
each port's speed if needed.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Some Ethernet PHYs (like marvell10g) will decide the host interface
mode based on the media-side speed. So, the rswitch driver needs to
initialize one of the Ethernet SERDES (r8a779f0-eth-serdes) ports
after the Ethernet PHY is linked up. The r8a779f0-eth-serdes driver has
.init() for initializing all ports and .power_on() for initializing
each port. So, add phy_power_{on,off}() calls for it.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The intention was to set phy_device->host_interfaces by phylink in the future.
But it is difficult to implement phylink properly, especially
supporting the in-band mode in this driver, because extra initialization
is needed after the Ethernet PHY is linked up. So, convert to phy_device
from phylink.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Dmitry Torokhov [Wed, 1 Feb 2023 21:53:20 +0000 (13:53 -0800)]
net: fec: do not double-parse 'phy-reset-active-high' property
Conversion to gpiod API done in commit 468ba54bd616 ("fec: convert
to gpio descriptor") clashed with gpiolib applying the same quirk to the
reset GPIO polarity (introduced in commit b02c85c9458c). This results in
the reset line being left active/device being left in reset state when
reset line is "active low".
Remove handling of 'phy-reset-active-high' property from the driver and
rely on gpiolib to apply needed adjustments to avoid ending up with the
double inversion/flipped logic.
Dmitry Torokhov [Wed, 1 Feb 2023 21:53:19 +0000 (13:53 -0800)]
net: fec: restore handling of PHY reset line as optional
Conversion of the driver to the gpiod API done in 468ba54bd616 ("fec:
convert to gpio descriptor") incorrectly made the reset line mandatory and
resulted in aborting driver probe in cases where the reset line was not
specified (note: this way of specifying the PHY reset line is actually
deprecated).
Switch to using devm_gpiod_get_optional() and skip manipulating the reset
line if it cannot be located.
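A hedged sketch of the optional lookup described above (standard gpiod API;
the local variable names are illustrative):

    /* NULL (no reset line in the DT) is a valid outcome and simply skips the
     * reset sequence; only a real lookup error aborts the probe. */
    phy_reset = devm_gpiod_get_optional(&pdev->dev, "phy-reset",
                                        GPIOD_OUT_HIGH);
    if (IS_ERR(phy_reset))
            return PTR_ERR(phy_reset);
    if (!phy_reset)
            return 0;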
Fixes: 468ba54bd616 ("fec: convert to gpio descriptor") Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reported-by: Marc Kleine-Budde <mkl@pengutronix.de> Tested-by: Marc Kleine-Budde <mkl@pengutronix.de> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20230201215320.528319-1-dmitry.torokhov@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>