David S. Miller [Mon, 3 Jun 2019 20:42:56 +0000 (13:42 -0700)]
Merge tag 'mlx5-updates-2019-05-31' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2019-05-31
This series provides some updates to mlx5 core and netdevice driver.
1) use __netdev_tx_sent_queue() to improve performance under GSO workload
2) Allow matching only enc_key_id/enc_dst_port for decapsulation action
3) Geneve support:
This patchset adds support for GENEVE tunnel encap/decap flows offload:
encapsulating layer 2 Ethernet frames within layer 4 UDP datagrams.
The driver supports UDP destination port 6081, which is the default
IANA-assigned port.
Encap:
ConnectX-5 inserts the header (w/ or w/o Geneve TLV options) that is
provided by the mlx5 driver to the outgoing packet.
Decap:
Geneve header is matched and the packet is decapsulated.
Notes about decap flows with Geneve TLV Options:
- Support offloading of 32-bit options data only
- At any given time, only one combination of class/type parameters
can be offloaded, but the same class/type combination can have
many different flows offloaded with different 32-bit option data
- Options with value of 0 can't be offloaded
Managing Geneve TLV options:
Matching (on receive) is done by ConnectX-5 flex parser.
Geneve TLV options are managed using General Object of type
“Geneve TLV Options”.
When the first flow with a certain class/type values is requested
to be offloaded, the driver creates a FW object with FW command
(Geneve TLV Options general object) and starts counting the number
of flows using this object.
While this object exists, any request with different class/type values
will fail to be offloaded.
Once the refcount reaches 0, the driver destroys the TLV options
general object, and can now offload a flow with any class/type parameters.
Geneve TLV Options object is added to core device.
It is currently used to manage Geneve TLV options general
object allocation in FW and its reference counting only.
In the future it will also be used for managing geneve ports
by registering callbacks for ndo_udp_tunnel_add/del.
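A minimal sketch of the refcounted object management described above; the
structure and the FW-command helpers are illustrative stand-ins, not the
actual mlx5 core API:

#include <errno.h>
#include <stdint.h>

/* Illustrative stand-ins for the FW command that creates/destroys the
 * "Geneve TLV Options" general object.
 */
extern uint32_t create_fw_tlv_opt_object(uint16_t opt_class, uint8_t opt_type);
extern void destroy_fw_tlv_opt_object(uint32_t obj_id);

struct geneve_tlv_opt_obj {
        uint16_t opt_class;     /* class programmed into the FW object */
        uint8_t opt_type;       /* type programmed into the FW object */
        uint32_t obj_id;        /* FW general object id */
        int refcount;           /* number of offloaded flows using it */
};

/* Called when a decap flow matching on TLV option data is offloaded. */
static int geneve_tlv_opt_get(struct geneve_tlv_opt_obj *obj,
                              uint16_t opt_class, uint8_t opt_type)
{
        if (obj->refcount == 0) {
                /* first user: create the FW general object */
                obj->opt_class = opt_class;
                obj->opt_type = opt_type;
                obj->obj_id = create_fw_tlv_opt_object(opt_class, opt_type);
        } else if (obj->opt_class != opt_class || obj->opt_type != opt_type) {
                /* only one class/type combination can be offloaded at a time */
                return -EOPNOTSUPP;
        }
        obj->refcount++;
        return 0;
}

/* Called when such a flow is removed. */
static void geneve_tlv_opt_put(struct geneve_tlv_opt_obj *obj)
{
        if (--obj->refcount == 0)
                destroy_fw_tlv_opt_object(obj->obj_id);
}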
TC tunnel code refactoring:
As a preparation for Geneve code, the TC tunnel code in mlx5
was rearranged in a modular way, so that it would be easier
to add future tunnels:
- Define tc tunnel object with the fields and callbacks that
any tunnel must implement.
- Define tc UDP tunnel object for UDP tunnels, such as VXLAN
- Move each tunnel code (GRE, VXLAN) to its own separate file
- Rewrite tc tunnel implementation in a general way – using
only the objects and their callbacks.
4) Termination tables:
Actions in tables set with the termination flag are guaranteed to terminate
the action list. Thus, potentially looping functionality (e.g. hairpin) can safely be
executed without risk of loops.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 3 Jun 2019 20:30:38 +0000 (13:30 -0700)]
Merge branch 'ena-next'
Sameeh Jubran says:
====================
Extending the ena driver to support new features and enhance performance
This patchset introduces the following:
* add support for changing the inline header size (max_header_size) for applications
with overlay and nested headers
* enable automatic fallback to polling mode for admin queue when interrupt is not
available or missed
* add good checksum counter for Rx ethtool statistics
* update ena.txt
* some minor code clean-up
* some performance enhancements with doorbell calculations
Differences from V1:
* net: ena: add handling of llq max tx burst size (1/11):
* fixed christmas tree issue
* net: ena: ethtool: add extra properties retrieval via get_priv_flags (2/11):
* replaced snprintf with strlcpy
* dropped confusing error message
* added more details to the commit message
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Sameeh Jubran [Mon, 3 Jun 2019 14:43:28 +0000 (17:43 +0300)]
net: ena: add good checksum counter
Add a new statistic to ethtool to indicate whether the device calculated
and validated the Rx csum.
Signed-off-by: Evgeny Shmeilin <evgeny@annapurnaLabs.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sameeh Jubran [Mon, 3 Jun 2019 14:43:27 +0000 (17:43 +0300)]
net: ena: optimise calculations for CQ doorbell
This patch first checks whether a CQ doorbell
is needed before proceeding with the calculations.
Signed-off-by: Igor Chauskin <igorch@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sameeh Jubran [Mon, 3 Jun 2019 14:43:26 +0000 (17:43 +0300)]
net: ena: add support for changing max_header_size in LLQ mode
Up until now the driver always used a single setting for the sizes
of the different parts of the llq entry - 128 for entry size, 2 for
descriptors before header and 96 for maximum header size.
The current code makes sure that the parts of the llq entry are
compatible with each other and with the initial llq entry size given
by the device.
This commit changes this code to support any llq entry size.
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sameeh Jubran [Mon, 3 Jun 2019 14:43:25 +0000 (17:43 +0300)]
net: ena: allow automatic fallback to polling mode
Enable fallback to polling mode for the admin queue
when a command response arrives without an
accompanying MSI-X interrupt.
Signed-off-by: Igor Chauskin <igorch@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sameeh Jubran [Mon, 3 Jun 2019 14:43:23 +0000 (17:43 +0300)]
net: ena: add newline at the end of pr_err prints
Some pr_err prints lacked '\n' in the end. Added where missing.
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sameeh Jubran [Mon, 3 Jun 2019 14:43:22 +0000 (17:43 +0300)]
net: ena: arrange ena_probe() function variables in reverse christmas tree
Reverse christmas tree arrangement orders variable declaration lines from
longest to shortest. Most of our functions abide by this
arrangement, but this function does not.
In this commit we arrange the variables of ena_probe() in reverse christmas
tree order.
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sameeh Jubran [Mon, 3 Jun 2019 14:43:21 +0000 (17:43 +0300)]
net: ena: replace free_tx/rx_ids union with single free_ids field in ena_ring
struct ena_ring holds a union of free_rx_ids and free_tx_ids.
Both of the above fields mean the exact same thing and are used
exactly the same way.
Furthermore, these fields are always used with a prefix of the
type of ring. So for tx it will be tx_ring->free_tx_ids, and for
rx it will be rx_ring->free_rx_ids, which shows how redundant the
"_tx" and "_rx" parts are.
Furthermore, this may lead to confusing code such as
tx_ring->free_rx_ids, which works correctly but looks like a mess.
This commit removes the aforementioned redundancy by replacing the
free_rx/tx_ids union with a single free_ids field.
It also changes a single goto label name from err_free_tx_ids: to
err_tx_free_ids: for consistency with the above new notation.
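A before/after sketch of the struct change; the struct names and surrounding
layout here are illustrative, only the union-to-single-field change is the
point:

#include <stdint.h>

/* before: two names for the same buffer, selected by ring type */
struct ena_ring_before {
        union {
                uint16_t *free_tx_ids;  /* used by TX rings */
                uint16_t *free_rx_ids;  /* used by RX rings */
        };
        /* ... */
};

/* after: one field, used by both TX and RX rings */
struct ena_ring_after {
        uint16_t *free_ids;
        /* ... */
};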
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: ena: ethtool: add extra properties retrieval via get_priv_flags
This commit adds a mechanism for exposing different device
properties via ethtool's priv_flags. The strings are provided
by the device and copied to user space through the driver.
In this commit we:
* Add commands, structs and defines necessary for handling
  extra properties
* Add functions for:
  - Allocation/destruction of a buffer for extra properties strings.
  - Retrieval of extra properties strings and flags from the network device.
* Handle the allocation of a buffer for extra properties strings.
* Initialize the buffer with extra properties strings from the
  network device at driver startup.
* Use ethtool's get_priv_flags to expose extra properties of
  the ENA device (see the sketch below).
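A hedged sketch of how such device-provided strings can be exposed through the
standard ethtool_ops hooks (.get_sset_count, .get_strings, .get_priv_flags);
the ena_com helper and the adapter fields below are illustrative, not the
exact driver code:

static int ena_get_sset_count(struct net_device *netdev, int sset)
{
        struct ena_adapter *adapter = netdev_priv(netdev);

        if (sset == ETH_SS_PRIV_FLAGS)
                return adapter->extra_properties_count;         /* illustrative */
        return -EOPNOTSUPP;
}

static void ena_get_strings(struct net_device *netdev, u32 sset, u8 *data)
{
        struct ena_adapter *adapter = netdev_priv(netdev);
        int i;

        if (sset != ETH_SS_PRIV_FLAGS)
                return;

        /* strings were retrieved from the device at driver startup */
        for (i = 0; i < adapter->extra_properties_count; i++)
                strlcpy(data + i * ETH_GSTRING_LEN,
                        adapter->extra_properties_strings[i],   /* illustrative */
                        ETH_GSTRING_LEN);
}

static u32 ena_get_priv_flags(struct net_device *netdev)
{
        struct ena_adapter *adapter = netdev_priv(netdev);

        /* flags bitmap reported by the device; helper name is illustrative */
        return ena_com_get_extra_properties_flags(adapter->ena_dev);
}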
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sameeh Jubran [Mon, 3 Jun 2019 14:43:19 +0000 (17:43 +0300)]
net: ena: add handling of llq max tx burst size
There is a maximum TX burst size that the ENA device can handle.
It is exposed by the device to the driver and the driver
needs to comply with it to avoid bugs.
In this commit we:
1. Add ena_com_is_doorbell_needed(), which calculates the number of
llq entries that will be used to hold a packet, and returns
true if they exceed the number of allowed entries in a burst.
If the function returns true, a doorbell needs to be invoked
to send this packet in the next burst (see the sketch below).
2. Track the available entries in the current burst:
- Every doorbell begins a new burst
- With each write of an llq entry, the available entries in the
current burst are decreased by 1.
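A minimal, self-contained sketch of the burst accounting in points 1 and 2;
the struct, field and function names are illustrative rather than the actual
ena_com code:

#include <stdbool.h>
#include <stdint.h>

struct llq_burst {
        uint32_t entries_in_tx_burst_left;      /* decremented per LLQ entry */
        uint32_t max_entries_in_tx_burst;       /* exposed by the device */
};

/* Point 1: ring the doorbell now if this packet would exceed the burst. */
static bool is_doorbell_needed(const struct llq_burst *b,
                               uint32_t needed_llq_entries)
{
        return needed_llq_entries > b->entries_in_tx_burst_left;
}

/* Point 2: every doorbell begins a new burst... */
static void on_doorbell(struct llq_burst *b)
{
        b->entries_in_tx_burst_left = b->max_entries_in_tx_burst;
}

/* ...and each LLQ entry written consumes one entry from the burst. */
static void on_llq_entry_written(struct llq_burst *b)
{
        b->entries_in_tx_burst_left--;
}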
Signed-off-by: Arthur Kiyanovski <akiyano@amazon.com> Signed-off-by: Sameeh Jubran <sameehj@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: dsa: mv88e6xxx: make mv88e6xxx_g1_stats_wait static
mv88e6xxx_g1_stats_wait has no users outside global1.c, so make it
static.
Signed-off-by: Rasmus Villemoes <rasmus.villemoes@prevas.dk> Reviewed-by: Vivien Didelot <vivien.didelot@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: dsa: mv88e6xxx: fix comments and macro names in mv88e6390_g1_mgmt_rsvd2cpu
The macros have an extraneous '800' (after 0180C2 there should be just
six nibbles, with X representing one), while the comments have
interchanged c2 and 80 and an extra :00.
Signed-off-by: Rasmus Villemoes <rasmus.villemoes@prevas.dk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
r8169: replace several function pointers with direct calls
This series removes most function pointers from struct rtl8169_private
and uses direct calls instead. This simplifies the code and avoids
the penalty of indirect calls in times of retpoline.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
pci_device_to_OF_node(to_pci_dev(dev)) is the same as dev->of_node,
so we can simplify the code. In addition add an empty line before
the return statement.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 3 Jun 2019 01:08:47 +0000 (18:08 -0700)]
Merge branch 'ifa_list-RCU'
Florian Westphal says:
====================
net: add rcu annotations for ifa_list
v3: fix typo in patch1 commit message
All other patches are unchanged.
v2: remove ifa_list iteration in afs instead of conversion
Eric Dumazet reported following problem:
It looks like unless RTNL is held, accessing ifa_list needs proper RCU
protection. indev->ifa_list can be changed under us by another cpu
(which owns RTNL) [..]
A proper rcu_dereference() with happy sparse support would require
adding the __rcu attribute.
This patch series does that: add __rcu to the ifa_list pointers.
That makes sparse complain, so the series also adds the required
rcu_assign_pointer/dereference helpers where needed.
All patches except the last one are preparation work.
Two new macros are introduced for in_ifaddr walks.
Last patch adds the __rcu annotations and the assign_pointer/dereference
helper calls.
This patch is a bit large, but I found no better way -- other
approaches (annotate-first or add helpers-first) all result in
mid-series sparse warnings.
This series is submitted vs. net-next rather than net for several
reasons:
1. It's (mostly) compile-tested only
2. 3rd patch changes behaviour wrt. secondary addresses
(see changelog)
3. The problem has existed for a very long time (since 2004), so it doesn't
seem urgent to fix -- the rcu use to free ifa_list
predates the git era.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Fri, 31 May 2019 16:27:05 +0000 (18:27 +0200)]
devinet: use in_dev_for_each_ifa_rcu in more places
This also replaces spots that used for_primary_ifa().
for_primary_ifa() aborts the loop on the first secondary address seen.
Replace it with either the rcu or rtnl variant of in_dev_for_each_ifa(),
but two places will now also consider secondary addresses too:
inet_addr_onlink() and inet_ifa_byprefix().
I do not understand why they should ignore secondary addresses.
Why would a secondary address not be considered 'on link'?
When matching a prefix, why ignore a matching secondary address?
Other places get converted as well, but gain a "->flags & SECONDARY" check.
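A hedged sketch of the conversion pattern this describes, keeping the
secondary-address filter where the old behaviour is intentionally preserved
(the surrounding function and locking are omitted):

/* old: for_primary_ifa() skipped secondary addresses implicitly */
for_primary_ifa(in_dev) {
        if (inet_ifa_match(addr, ifa))
                return 1;
} endfor_ifa(in_dev);

/* new: rcu iterator plus an explicit check where needed */
in_dev_for_each_ifa_rcu(ifa, in_dev) {
        if (ifa->ifa_flags & IFA_F_SECONDARY)
                continue;       /* keep ignoring secondary addresses */
        if (inet_ifa_match(addr, ifa))
                return 1;
}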
Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Westphal [Fri, 31 May 2019 16:27:03 +0000 (18:27 +0200)]
afs: do not send list of client addresses
David Howells says:
I'm told that there's not really any point populating the list.
Current OpenAFS ignores it, as does AuriStor - and IBM AFS 3.6 will
do the right thing.
The list is actually useless as it's the client's view of the world,
not the server's, so if there's any NAT in the way its contents are
invalid. Further, it doesn't support IPv6 addresses.
On that basis, feel free to make it an empty list and remove all the
interface enumeration.
V1 of this patch reworked the function to use a new helper for the
ifa_list iteration to avoid sparse warnings once the proper __rcu
annotations get added in struct in_device later.
But, in light of the above, just remove afs_get_ipv4_interfaces.
Compile tested only.
Cc: David Howells <dhowells@redhat.com> Cc: linux-afs@lists.infradead.org Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: David Howells <dhowells@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Fri, 31 May 2019 13:27:38 +0000 (14:27 +0100)]
qed: remove redundant assignment to rc
The variable rc is assigned with a value that is never read and
it is re-assigned a new value later on. The assignment is redundant
and can be removed.
Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
When isdn4linux came up in the context of another patch series, I
remembered that we had discussed removing it a while ago.
It turns out that the suggestion from Karsten Keil was to remove I4L
in 2018 after the last public ISDN networks are shut down. This has
happened now (with a very small number of exceptions), so I guess it's
time to try again.
We currently have three ISDN stacks in the kernel: the original
isdn4linux (with the hisax driver), the newer CAPI (with four drivers),
and finally the mISDN stack (supporting roughly the same hardware as
hisax).
As far as I can tell, anyone using ISDN with mainline kernel drivers in
the past few years uses mISDN, and this is typically used for voice-only
PBX installations that don't require a public network.
The older stacks support additional features for data networks, but those
typically make no sense any more if there is no network to connect to.
My proposal for this time is to kill off isdn4linux entirely, as it seems
to have been unusable for quite a while. This code has been abandoned
for many years and it does cause problems for treewide maintenance as
it tends to do everything that we try to stop doing.
Birger Harzenetter mentioned that he is still using i4l in order to
make use of the 'divert' feature that is not part of mISDN, but has
otherwise moved on to mISDN for normal operation, like apparently
everyone else.
CAPI in turn is not quite as obsolete, but two of the drivers (avm
and hysdn) don't seem to be used at all, while another one (gigaset)
will stop being maintained as Paul Bolle is no longer able to
test it after the network gets shut down in September.
All three are now moved into drivers/staging to let others speak
up in case there are remaining users.
This leaves Bluetooth CMTP as the only remaining user of CAPI, but
Marcel Holtmann wishes to keep maintaining it.
For the discussion on version 1, see [2]
Unfortunately, Karsten Keil as the maintainer has not participated in
the discussion.
Horatiu Vultur [Fri, 31 May 2019 07:16:57 +0000 (09:16 +0200)]
net: mscc: ocelot: Hardware offload for tc flower filter
Hardware offload of port filtering is now supported via the tc command using
the flower filter. ACL rules are used to enable the hardware offload.
The following keys are supported:
vlan_id
vlan_prio
dst_mac/src_mac for non IP frames
dst_ip/src_ip
dst_port/src_port
The following actions are supported:
trap
drop
These filters are supported only on the ingress scheduler.
Add:
tc qdisc add dev eth3 ingress
tc filter add dev eth3 parent ffff: protocol ip flower \
ip_proto tcp dst_port 80 action drop
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
Netfilter/IPVS updates for net-next
The following patchset contains Netfilter/IPVS updates for net-next:
1) Add UDP tunnel support for ICMP errors in IPVS.
Julian Anastasov says:
This patchset is a followup to the commit that adds UDP/GUE tunnel:
"ipvs: allow tunneling with gue encapsulation".
What we do is to put tunnel real servers in hash table (patch 1),
add function to lookup tunnels (patch 2) and use it to strip the
embedded tunnel headers from ICMP errors (patch 3).
2) Extend xt_owner to match for supplementary groups, from
Lukasz Pawelczyk.
3) Remove unused oif field in flow_offload_tuple object, from
Taehee Yoo.
4) Release basechain counters from workqueue to skip synchronize_rcu()
call. From Florian Westphal.
5) Replace skb_make_writable() by skb_ensure_writable(). Patchset
from Florian Westphal.
6) Checksum support for gue encapsulation in IPVS, from Jacky Hu.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alan Maguire [Fri, 31 May 2019 17:47:14 +0000 (18:47 +0100)]
selftests/bpf: measure RTT from xdp using xdping
xdping allows us to get latency estimates from XDP. Output looks
like this:
./xdping -I eth4 192.168.55.8
Setting up XDP for eth4, please wait...
XDP setup disrupts network connectivity, hit Ctrl+C to quit
Normal ping RTT data
[Ignore final RTT; it is distorted by XDP using the reply]
PING 192.168.55.8 (192.168.55.8) from 192.168.55.7 eth4: 56(84) bytes of data.
64 bytes from 192.168.55.8: icmp_seq=1 ttl=64 time=0.302 ms
64 bytes from 192.168.55.8: icmp_seq=2 ttl=64 time=0.208 ms
64 bytes from 192.168.55.8: icmp_seq=3 ttl=64 time=0.163 ms
64 bytes from 192.168.55.8: icmp_seq=8 ttl=64 time=0.275 ms
4 packets transmitted, 4 received, 0% packet loss, time 3079ms
rtt min/avg/max/mdev = 0.163/0.237/0.302/0.054 ms
XDP RTT data:
64 bytes from 192.168.55.8: icmp_seq=5 ttl=64 time=0.02808 ms
64 bytes from 192.168.55.8: icmp_seq=6 ttl=64 time=0.02804 ms
64 bytes from 192.168.55.8: icmp_seq=7 ttl=64 time=0.02815 ms
64 bytes from 192.168.55.8: icmp_seq=8 ttl=64 time=0.02805 ms
The xdping program loads the associated xdping_kern.o BPF program
and attaches it to the specified interface. If run in client
mode (the default), it will add a map entry keyed by the
target IP address; this map will store RTT measurements, current
sequence number etc. Finally in client mode the ping command
is executed, and the xdping BPF program will use the last ICMP
reply, reformulate it as an ICMP request with the next sequence
number and XDP_TX it. After the reply to that request is received
we can measure RTT and repeat until the desired number of
measurements is made. This is why the sequence numbers in the
normal ping are 1, 2, 3 and 8. We XDP_TX a modified version
of ICMP reply 4 and keep doing this until we get the 4 replies
we need; hence the networking stack only sees reply 8, where
we have XDP_PASSed it upstream since we are done.
In server mode (-s), xdping simply takes ICMP requests and replies
to them in XDP rather than passing the request up to the networking
stack. No map entry is required.
xdping can be run in native XDP mode (the default, or specified
via -N) or in skb mode (-S).
A test program test_xdping.sh exercises some of these options.
Note that native XDP does not seem to XDP_TX for veths, hence -N
is not tested. Looking at the code, it looks like XDP_TX is
supported so I'm not sure if that's expected. Running xdping in
native mode for ixgbe as both client and server works fine.
Changes since v4
- close fds on cleanup (Song Liu)
Changes since v3
- fixed seq to be __be16 (Song Liu)
- fixed fd checks in xdping.c (Song Liu)
Changes since v2
- updated commit message to explain why seq number of last
ICMP reply is 8 not 4 (Song Liu)
- updated types of seq number, raddr and eliminated csum variable
in xdpclient/xdpserver functions as it was not needed (Song Liu)
- added XDPING_DEFAULT_COUNT definition and usage specification of
default/max counts (Song Liu)
Changes since v1
- moved from RFC to PATCH
- removed unused variable in ipv4_csum() (Song Liu)
- refactored ICMP checks into icmp_check() function called by client
and server programs and reworked client and server programs due
to lack of shared code (Song Liu)
- added checks to ensure that SKB and native mode are not requested
together (Song Liu)
Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
David S. Miller [Sat, 1 Jun 2019 00:13:19 +0000 (17:13 -0700)]
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
Intel Wired LAN Driver Updates 2019-05-31
This series contains updates to the iavf driver.
Nathan Chancellor converts the use of gnu_printf to printf.
Aleksandr modifies the driver to limit the number of RSS queues to the
number of online CPUs in order to avoid creating misconfigured RSS
queues.
Gustavo A. R. Silva converts a couple of instances where sizeof() can be
replaced with struct_size().
Alice makes the remaining changes to the iavf driver to cleanup all the
old "i40evf" references in the driver to iavf, including the file names
that still contained the old driver reference. There were no functional
changes made, just cosmetic changes to reduce any confusion going forward now
that the iavf driver is the virtual function driver for both i40e and
ice drivers.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiong Wang [Thu, 30 May 2019 20:23:18 +0000 (21:23 +0100)]
bpf: doc: update answer for 32-bit subregister question
There has been quite a lot of progress on the two steps mentioned in the
answer to the following question:
Q: BPF 32-bit subregister requirements
This patch updates the answer to reflect what has been done.
v2:
- Add missing full stop. (Song Liu)
- Minor tweak on one sentence. (Song Liu)
v1:
- Integrated rephrase from Quentin and Jakub
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
====================
During my work on memcg-based memory accounting for bpf maps
I've done some cleanups and refactorings of the existing
memlock rlimit-based code. It makes it more robust, unifies
size to pages conversion, size checks and corresponding error
codes. Also it adds coverage for cgroup local storage and
socket local storage maps.
It looks like some preliminary work on the mm side might be
required to start working on the memcg-based accounting,
so I'm sending these patches as a separate patchset.
====================
Roman Gushchin [Thu, 30 May 2019 01:03:58 +0000 (18:03 -0700)]
bpf: rework memlock-based memory accounting for maps
In order to unify the existing memlock charging code with the
memcg-based memory accounting, which will be added later, let's
rework the current scheme.
Currently the following design is used:
1) .alloc() callback optionally checks if the allocation will likely
succeed using bpf_map_precharge_memlock()
2) .alloc() performs actual allocations
3) .alloc() callback calculates map cost and sets map.memory.pages
4) map_create() calls bpf_map_init_memlock() which sets map.memory.user
and performs actual charging; in case of failure the map is
destroyed
<map is in use>
1) bpf_map_free_deferred() calls bpf_map_release_memlock(), which
performs uncharge and releases the user
2) .map_free() callback releases the memory
The scheme can be simplified and made more robust:
1) .alloc() calculates map cost and calls bpf_map_charge_init()
2) bpf_map_charge_init() sets map.memory.user and performs actual
charge
3) .alloc() performs actual allocations
<map is in use>
1) .map_free() callback releases the memory
2) bpf_map_charge_finish() performs uncharge and releases the user
The new scheme also allows reusing the bpf_map_charge_init()/finish()
functions for memcg-based accounting. Because charges are performed
before actual allocations and uncharges after freeing the memory,
no bogus memory pressure can be created.
In cases when the map structure is not available (e.g. it's not
created yet, or is already destroyed), on-stack bpf_map_memory
structure is used. The charge can be transferred with the
bpf_map_charge_move() function.
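A hedged sketch of the new order as seen from a map type's .alloc() callback;
struct example_map, the cost formula and the exact unit taken by
bpf_map_charge_init() (pages vs. bytes) are illustrative here:

static struct bpf_map *example_map_alloc(union bpf_attr *attr)
{
        struct bpf_map_memory mem;
        struct example_map *emap;
        u64 cost;
        int err;

        /* 1) calculate the map cost and charge it before any allocation */
        cost = sizeof(*emap) + (u64)attr->max_entries * attr->value_size;
        err = bpf_map_charge_init(&mem, cost);
        if (err)
                return ERR_PTR(err);

        /* 2) only then perform the actual allocations */
        emap = kzalloc(sizeof(*emap), GFP_USER);
        if (!emap) {
                /* uncharge on failure: no bogus memory pressure is left behind */
                bpf_map_charge_finish(&mem);
                return ERR_PTR(-ENOMEM);
        }

        /* 3) move the charge from the on-stack structure into the map */
        bpf_map_charge_move(&emap->map.memory, &mem);
        return &emap->map;
}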
Signed-off-by: Roman Gushchin <guro@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Roman Gushchin [Thu, 30 May 2019 01:03:57 +0000 (18:03 -0700)]
bpf: group memory related fields in struct bpf_map_memory
Group "user" and "pages" fields of bpf_map into the bpf_map_memory
structure. Later it can be extended with "memcg" and other related
information.
The main reason for such a change (besides cosmetics) is to pass
the bpf_map_memory structure to the charging functions before the actual
allocation of the bpf_map.
Signed-off-by: Roman Gushchin <guro@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
====================
This patchset adds support for propagating congestion notifications (cn)
to TCP from cgroup inet skb egress BPF programs.
Current cgroup skb BPF programs cannot trigger TCP congestion window
reductions, even when they drop a packet. This patch-set adds support
for cgroup skb BPF programs to send congestion notifications in the
return value when the packets are TCP packets. Rather than the
current 1 for keeping the packet and 0 for dropping it, they can
now return:
NET_XMIT_SUCCESS (0) - continue with packet output
NET_XMIT_DROP (1) - drop packet and do cn
NET_XMIT_CN (2) - continue with packet output and do cn
-EPERM - drop packet
Finally, HBM programs are modified to collect and return more
statistics.
There has been some discussion regarding the best place to manage
bandwidths. Some believe this should be done in the qdisc where it can
also be managed with a BPF program. We believe there are advantages
for doing it with a BPF program in the cgroup/skb callback. For example,
it reduces overheads in the cases where there is one primary workload and
one or more secondary workloads, where each workload is running on its
own cgroupv2. In this scenario, we only need to throttle the secondary
workloads and there is no overhead for the primary workload since there
will be no BPF program attached to its cgroup.
Regardless, we agree that this mechanism should not penalize those that
are not using it. We tested this by doing 1 byte req/reply RPCs over
loopback. Each test consists of 30 sec of back-to-back 1 byte RPCs.
Each test was repeated 50 times with a 1 minute delay between each set
of 10. We then calculated the average RPCs/sec over the 50 tests. We
compare upstream with upstream + patchset and no BPF program as well
as upstream + patchset and a BPF program that just returns ALLOW_PKT.
Here are the results:
upstream 80937 RPCs/sec
upstream + patches, no BPF program 80894 RPCs/sec
upstream + patches, BPF program 80634 RPCs/sec
These numbers indicate that there is no penalty for these patches.
The use of congestion notifications improves the performance of HBM when
using Cubic. Without congestion notifications, Cubic will not decrease its
cwnd and HBM will need to drop a large percentage of the packets.
The following results are obtained for rate limits of 1Gbps,
between two servers using netperf, and only one flow. We also show how
reducing the max delayed ACK timer can improve the performance when
using Cubic.
Command used was:
./do_hbm_test.sh -l -D --stats -N -r=<rate> [--no_cn] [dctcp] \
-s=<server running netserver>
where:
<rate> is 1000
--no_cn specifies no cwr notifications
dctcp uses dctcp
Notes:
--no_cn has no effect with DCTCP
Lim = rate limit
DA = maximum delay ack timer
cred = credit in packets
drops = % packets dropped
v1->v2: Ensures that only BPF_CGROUP_INET_EGRESS can return values 2 and 3
New egress values apply to all protocols, not just TCP
Cleaned up patch 4, Update BPF_CGROUP_RUN_PROG_INET_EGRESS callers
Removed changes to __tcp_transmit_skb (patch 5), no longer needed
Removed sample use of EDT
v2->v3: Removed the probe timer related changes
v3->v4: Replaced preempt_enable_no_resched() by preempt_enable()
in BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY() macro
====================
Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
brakmo [Tue, 28 May 2019 23:59:37 +0000 (16:59 -0700)]
bpf: Update __cgroup_bpf_run_filter_skb with cn
For egress packets, __cgroup_bpf_run_filter_skb() will now call
BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY() instead of PROG_CGROUP_RUN_ARRAY()
in order to propagate congestion notifications (cn) requests to TCP
callers.
For egress packets, this function can return:
NET_XMIT_SUCCESS (0) - continue with packet output
NET_XMIT_DROP (1) - drop packet and notify TCP to call cwr
NET_XMIT_CN (2) - continue with packet output and notify TCP
to call cwr
-EPERM - drop packet
For ingress packets, this function will return -EPERM if any attached
program was found and if it returned != 1 during execution. Otherwise 0
is returned.
Signed-off-by: Lawrence Brakmo <brakmo@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
brakmo [Tue, 28 May 2019 23:59:36 +0000 (16:59 -0700)]
bpf: cgroup inet skb programs can return 0 to 3
Allows cgroup inet skb programs to return values in the range [0, 3].
The second bit is used to determine if congestion occurred and the higher
level protocol should decrease its rate, e.g. TCP would call tcp_enter_cwr().
The bpf_prog must set expected_attach_type to BPF_CGROUP_INET_EGRESS
at load time if it uses the new return values (i.e. 2 or 3).
The expected_attach_type is currently not enforced for
BPF_PROG_TYPE_CGROUP_SKB, meaning that a bpf_prog with
expected_attach_type set to BPF_CGROUP_INET_EGRESS can currently attach to
BPF_CGROUP_INET_INGRESS. Blindly enforcing expected_attach_type will
break backward compatibility.
This patch adds an enforce_expected_attach_type bit to only
enforce the expected_attach_type when it uses the new
return value.
Signed-off-by: Lawrence Brakmo <brakmo@fb.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
brakmo [Tue, 28 May 2019 23:59:35 +0000 (16:59 -0700)]
bpf: Create BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY
Create new macro BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY() to be used by
__cgroup_bpf_run_filter_skb for EGRESS BPF progs so BPF programs can
request cwr for TCP packets.
Current cgroup skb programs can only return 0 or 1 (0 to drop the
packet). This macro changes the behavior so the low order bit
indicates whether the packet should be dropped (0) or not (1)
and the next bit is used for congestion notification (cn).
Hence, new allowed return values of CGROUP EGRESS BPF programs are:
0: drop packet
1: keep packet
2: drop packet and call cwr
3: keep packet and call cwr
This macro then converts the return value to one of the NET_XMIT values or
to -EPERM, which has the effect of dropping the packet with no cn:
0: NET_XMIT_SUCCESS skb should be transmitted (no cn)
1: NET_XMIT_DROP skb should be dropped and cwr called
2: NET_XMIT_CN skb should be transmitted and cwr called
3: -EPERM skb should be dropped (no cn)
Note that when more than one BPF program is called, the packet is
dropped if at least one of programs requests it be dropped, and
there is cn if at least one program returns cn.
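A hedged sketch of an egress cgroup/skb program using these return values; the
size-based policy is purely illustrative, and the program has to be loaded
with expected_attach_type = BPF_CGROUP_INET_EGRESS to be allowed to return
2 or 3:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("cgroup_skb/egress")
int egress_cn_example(struct __sk_buff *skb)
{
        /* toy policy: large packets are still transmitted, but TCP is
         * asked to enter CWR (congestion notification).
         */
        if (skb->len > 1500)
                return 3;       /* keep packet and call cwr */

        return 1;               /* keep packet, no cn */
}

char _license[] SEC("license") = "GPL";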
Signed-off-by: Lawrence Brakmo <brakmo@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Colin Ian King [Thu, 30 May 2019 19:04:38 +0000 (20:04 +0100)]
xen-netback: remove redundant assignment to err
The variable err is assigned with the value -ENOMEM that is never
read and it is re-assigned a new value later on. The assignment is
redundant and can be removed.
Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Thu, 30 May 2019 15:57:54 +0000 (16:57 +0100)]
nexthop: remove redundant assignment to err
The variable err is initialized with a value that is never read
and err is reassigned a few statements later. This initialization
is redundant and can be removed.
Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Erez Alfasi [Tue, 14 May 2019 10:55:22 +0000 (13:55 +0300)]
net/mlx5e: TX, Improve performance under GSO workload
__netdev_tx_sent_queue() was introduced by:
commit 3e59020abf0f ("net: bql: add __netdev_tx_sent_queue()")
BQL counters should be updated without flipping/caring about
BQL status, if the current skb has xmit_more set.
Using __netdev_tx_sent_queue() avoids messing with the BQL stop
flag, increases performance on GSO workloads by keeping
doorbells to the minimum required, and also spares atomic
operations.
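A hedged sketch of the TX-path pattern this enables; the surrounding function
and the doorbell helper are illustrative. __netdev_tx_sent_queue() updates the
BQL counters and returns true only when the doorbell actually needs to be rung:

static void example_tx_tail(struct netdev_queue *txq, struct sk_buff *skb,
                            bool xmit_more)
{
        /* BQL is updated either way; the stop flag is only touched when
         * this is the last skb of the batch.
         */
        if (__netdev_tx_sent_queue(txq, skb->len, xmit_more))
                example_ring_doorbell();        /* illustrative HW kick */
}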
Oz Shlomo [Thu, 18 Apr 2019 13:45:29 +0000 (16:45 +0300)]
net/mlx5e: Use termination table for VLAN push actions
HW does not support the push VLAN action in the RX direction (packets
arriving from the wire). The FW works around this limitation by hairpinning
the packet. The hairpin workaround applies only when the push VLAN action
is specified in a termination table, assuring that there are no actions
following the hairpin.
Instantiate termination table for push VLAN actions. Re-use identical
terminating tables for increased HW cache efficiency.
Signed-off-by: Oz Shlomo <ozsh@mellanox.com> Reviewed-by: Paul Blakey <paulb@mellanox.com> Reviewed-by: Eli Britstein <elibr@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
net/mlx5e: Geneve, Add support for encap/decap flows offload
Add HW offloading support for flows with Geneve encap/decap.
Notes about decap flows with Geneve TLV Options:
- Support offloading of 32-bit options data only
- At any given time, only one combination of class/type parameters
can be offloaded, but the same class/type combination can have
many different flows offloaded with different 32-bit option data
- Options with value of 0 can't be offloaded
net/mlx5e: Rearrange tc tunnel code in a modular way
Rearrange tc tunnel code so that it would be easy to add future tunnels:
- Define tc tunnel object with the fields and callbacks that any
tunnel must implement.
- Define tc UDP tunnel object for UDP tunnels, such as VXLAN
- Move each tunnel code (GRE, VXLAN) to its own separate file
- Rewrite tc tunnel implementation in a general way - using only
the objects and their callbacks.
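A hedged sketch of what such a tunnel object can look like; the field and
callback names are illustrative, not the exact mlx5 definitions:

struct tc_tunnel_sketch {
        int tunnel_type;        /* e.g. VXLAN, GRE, GENEVE */

        /* callbacks every tunnel type must implement */
        bool (*can_offload)(struct mlx5e_priv *priv);
        int (*calc_hlen)(struct mlx5e_encap_entry *e);
        int (*init_encap_attr)(struct net_device *tunnel_dev,
                               struct mlx5e_priv *priv,
                               struct mlx5e_encap_entry *e);
        int (*generate_tunnel_hdr)(char buf[], struct mlx5e_encap_entry *e);
        int (*parse_tunnel_match)(struct mlx5e_priv *priv, void *spec,
                                  void *flow_rule);
};

/* a UDP tunnel (VXLAN, Geneve) additionally carries its UDP port handling */
struct tc_udp_tunnel_sketch {
        struct tc_tunnel_sketch tunnel;
        uint16_t default_dst_port;      /* 4789 for VXLAN, 6081 for Geneve */
};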
net/mlx5e: Geneve, Keep tunnel info as pointer to the original struct
In the mlx5e encap entry structure, the IP tunnel info data structure is
copied by value. This approach worked till now, but it breaks when there are
encapsulation options, such as in case of Geneve.
These options are stored in the structure that is allocated adjacent to
the IP tunnel info struct, and not pointed at by any field in that struct.
Therefore, when copying the struct by value, we lose the address of the
original struct and can't get to the encapsulation options.
Fix the problem by storing the pointer to the tunnel info data instead.
Use Geneve TLV Options object to manage the flex parser matching
on the 32-bit options data.
When the first flow with a certain class/type values is requested to
be offloaded, create a FW object with FW command (Geneve TLV Options
general object) and start counting the number of flows using this object.
While this object exists, any request with different class/type values will
fail to be offloaded.
Once the refcount reaches 0, destroy the TLV options general object,
and can now offload a flow with any class/type parameters.
Geneve TLV Options object is added to core device.
It is currently used to manage Geneve TLV options general
object allocation in FW and its reference counting only.
In the future it will also be used for managing geneve ports
by registering callbacks for ndo_udp_tunnel_add/del.
net/mlx5e: Enable setting multiple match criteria for flow group
When filling in flow spec match criteria, to allow previous
modifications of the match criteria, use "|=" rather than "=".
Tunnel options are parsed before the match criteria of the offloaded
flow are set. If the flow that we're about to offload has
encapsulation options, the flow group might need to match on additional
criteria.
For Geneve, an additional flow group matching parameter should
be used - misc3. The appropriate bit in the match criteria is set
while parsing the tunnel options, so the criteria value shouldn't
be overwritten.
This is a pre-step for supporting Geneve TLV options offload.
Tonghao Zhang [Mon, 6 May 2019 18:28:37 +0000 (11:28 -0700)]
net/mlx5e: Allow matching only enc_key_id/enc_dst_port for decapsulation action
In some cases we don't care about enc_src_ip and enc_dst_ip, and
if we don't match on the enc_src_ip and enc_dst_ip fields, we can use
fewer flows in hardware when receiving the tunnel packets. For example,
when the tunnel packets may be sent from different hosts, we would otherwise
have to offload one rule for each host.
Saeed Mahameed [Fri, 31 May 2019 19:37:24 +0000 (12:37 -0700)]
Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
This series provides some low level updates for mlx5 driver needed for
both rdma and netdev trees.
1) Termination flow steering table bits and hardware definitions.
2) Introduce the core dump HW access registers definitions.
3) Refactor and clean up VF representor function handlers.
4) Rename host_params bits to function_changed bits and add
support for the eswitch functions change event in the eswitch general case
(for both legacy and switchdev modes).
5) Potential error pointer dereference in error handling
David S. Miller [Fri, 31 May 2019 19:37:46 +0000 (12:37 -0700)]
Merge branch 'phylink-sfp-updates'
Russell King says:
====================
phylink/sfp updates
This is a series of updates to phylink and sfp:
- Remove an unused net device argument from the phylink MII ioctl
emulation code.
- add support for using interrupts when using a GPIO for link status
tracking, rather than polling it at one second intervals. This
reduces the need to wakeup the CPU every second.
- add support to the MII ioctl API to read and write Clause 45 PHY
registers. I don't know how desirable this is for mainline, but I
have used this facility extensively to investigate the Marvell
88x3310 PHY. A recent illustration of use for this was debugging
the PHY-without-firmware problem recently reported.
- add mandatory attach/detach methods for the upstream side of sfp
bus code, which will allow us to remove the "netdev" structure from
the SFP layers.
- remove the "netdev" structure from the SFP upstream registration
calls, which simplifies PHY to SFP links.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Tue, 28 May 2019 09:57:39 +0000 (10:57 +0100)]
net: sfp: remove sfp-bus use of netdevs
The sfp-bus code now no longer has any use for the network device
structure, so remove its use.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Tue, 28 May 2019 09:57:34 +0000 (10:57 +0100)]
net: sfp: add mandatory attach/detach methods for sfp buses
Add attach and detach methods for SFP buses, which will allow us to get
rid of the netdev storage in sfp-bus.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Tue, 28 May 2019 09:57:29 +0000 (10:57 +0100)]
net: phy: allow Clause 45 access via mii ioctl
Allow userspace to generate Clause 45 MII access cycles via phylib.
This is useful for tools such as mii-diag to be able to inspect Clause
45 PHYs.
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Tue, 28 May 2019 09:57:23 +0000 (10:57 +0100)]
net: phylink: support for link gpio interrupt
Add support for using GPIO interrupts with a fixed-link GPIO rather than
polling the GPIO every second and invoking the phylink resolution. This
avoids unnecessary calls to mac_config().
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Tue, 28 May 2019 09:57:18 +0000 (10:57 +0100)]
net: phylink: remove netdev from phylink mii ioctl emulation
The netdev used in the phylink ioctl emulation is never used, so let's
remove it.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, for every representor type and for every single vport,
a copy of the representor function pointers is stored even though they don't
change from one vport to another.
Additionally, the priv data entry for the rep is not passed during
registration, but is copied. It is used (set and cleared) by the user
of the reps.
As we want to scale vports, to simplify and also to split constants
from data,
1. Rename mlx5_eswitch_rep_if to mlx5_eswitch_rep_ops as to match _ops
prefix with other standard netdev, ibdev ops.
2. Constify the IB and Ethernet rep ops structure.
3. Instead of storing copy of all rep function pointers, store copy
per eswitch rep type.
4. Split data and function pointers to mlx5_eswitch_rep_ops and
mlx5_eswitch_rep_data.
Eli Britstein [Wed, 29 May 2019 22:50:29 +0000 (22:50 +0000)]
net/mlx5: Introduce termination table bits
A termination table is a flow table with a termination flag. The flag
allows the firmware to assume that the specified actions are the last in the
action list. This assumption allows the FW to safely perform potentially
looping logic (e.g. hairpin). Introduce the bits for this attribute.
Signed-off-by: Eli Britstein <elibr@mellanox.com> Reviewed-by: Oz Shlomo <ozsh@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
The phylink conflict was between a bug fix by Russell King
to make sure we have a consistent PHY interface mode, and
a change in net-next to pull some code in phylink_resolve()
into the helper functions phylink_mac_link_{up,down}()
On the dp83867 side it's mostly overlapping changes, with
the 'net' side removing a condition that was supposed to
trigger for RGMII but because of how it was coded never
actually could trigger.
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes a few problems with CONFIG_IPV6=y and
CONFIG_NF_CONNTRACK_BRIDGE=m:
In file included from net/netfilter/utils.c:5:
include/linux/netfilter_ipv6.h: In function 'nf_ipv6_br_defrag':
include/linux/netfilter_ipv6.h:110:9: error: implicit declaration of function 'nf_ct_frag6_gather'; did you mean 'nf_ct_attach'? [-Werror=implicit-function-declaration]
And these too:
net/ipv6/netfilter.c:242:2: error: unknown field 'br_defrag' specified in initializer
net/ipv6/netfilter.c:243:2: error: unknown field 'br_fragment' specified in initializer
This patch includes an original chunk from wenxu.
Fixes: 764dd163ac92 ("netfilter: nf_conntrack_bridge: add support for IPv6") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Reported-by: Yuehaibing <yuehaibing@huawei.com> Reported-by: kbuild test robot <lkp@intel.com> Reported-by: wenxu <wenxu@ucloud.cn> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: wenxu <wenxu@ucloud.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
Jacky Hu [Thu, 30 May 2019 00:16:40 +0000 (08:16 +0800)]
ipvs: add checksum support for gue encapsulation
Add checksum support for gue encapsulation with the tun_flags parameter,
which could be one of the values below:
IP_VS_TUNNEL_ENCAP_FLAG_NOCSUM
IP_VS_TUNNEL_ENCAP_FLAG_CSUM
IP_VS_TUNNEL_ENCAP_FLAG_REMCSUM
Signed-off-by: Jacky Hu <hengqing.hu@gmail.com> Signed-off-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Thu, 23 May 2019 13:44:05 +0000 (15:44 +0200)]
netfilter: bridge: convert skb_make_writable to skb_ensure_writable
Back in the day, skb_ensure_writable did not exist. By now, both functions
have the same precondition:
I. skb_make_writable will test in this order:
1. wlen > skb->len -> error
2. if not cloned and wlen <= headlen -> OK
3. If cloned and wlen bytes of clone writeable -> OK
After those checks, skb is either not cloned but needs to pull from
nonlinear area, or writing to head would also alter data of another clone.
In both cases skb_make_writable will then call __pskb_pull_tail, which will
kmalloc a new memory area to use for skb->head.
IOW, after successful skb_make_writable call, the requested length is in
linear area and can be modified, even if skb was cloned.
II. skb_ensure_writable will do this instead:
1. call pskb_may_pull. This handles case 1 above.
After this, wlen is in linear area, but skb might be cloned.
2. return if skb is not cloned
3. return if wlen byte of clone are writeable.
4. fully copy the skb.
So post-conditions are the same:
wlen bytes are writeable in the linear area without altering any payload data
of a clone, all header pointers might have been changed.
Only differences are that skb_ensure_writable is in the core, whereas
skb_make_writable lives in netfilter core and the inverted return value.
skb_make_writable returns 0 on error, whereas skb_ensure_writable returns
negative value.
For the normal cases performance is similar:
A. skb is not cloned and in linear area:
pskb_may_pull is inline helper, so neither function copies.
B. skb is cloned, write is in linear area and clone is writeable:
both functions return at step 3.
This series removes skb_make_writable from the kernel.
While at it, pass the needed value instead; it's less confusing that way:
there is no special handling of a "0-length" argument in either
skb_make_writable or skb_ensure_writable.
The bridge already makes sure the ethernet header is in the linear area; the
only purpose of the make_writable() call is to copy skb->head in case of
cloned skbs (see the conversion sketch below).
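A hedged sketch of the mechanical conversion, showing the inverted return
value convention and the explicitly passed length; the verdict used here is
illustrative:

/* old: non-zero return meant success; the bridge code passed 0 as length */
if (!skb_make_writable(skb, 0))
        return NF_DROP;

/* new: 0 means success, negative means error, and the actually needed
 * length is passed explicitly
 */
if (skb_ensure_writable(skb, sizeof(struct ethhdr)))
        return NF_DROP;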
Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Add ip_vs_find_tunnel() to match tunnel headers
by family, address and optional port. Use it to
properly find the tunnel real server used in
received ICMP errors.
Signed-off-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
ipvs: allow rs_table to contain different real server types
Until now rs_table was used only for NAT real servers.
Change it to allow TUN real servers of different types,
possibly hashed with a different port key.
Signed-off-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
I tried to find any indication of whether the capi drivers are still in
use, and have not found anything from a long time ago.
With public ISDN networks almost completely shut down over the past 12
months, there is very little you can actually do with this hardware. The
main remaining use case would be to connect ISDN voice phones to an
in-house installation with Asterisk or LCR, but anyone trying this in
turn seems to be using either the mISDN driver stack, or out-of-tree
drivers from the hardware vendors.
I may of course have missed something, so I would suggest moving these
three drivers (avm, hysdn, gigaset) into drivers/staging/ just in case
someone still uses them.
If nobody complains, we can remove them entirely in six months, or
otherwise move the core code and any drivers that are still needed back
into drivers/isdn.
As Paul Bolle notes, he is still testing the gigaset driver as long as
he can, but the Dutch ISDN network will be shut down in September 2019,
which puts an end to that.
Marcel Holtmann still maintains the Bluetooth CMTP profile and wants to
keep that alive, so the actual CAPI subsystem code remains in place for
now, after all other drivers are gone, CMTP and CAPI can be merged into
a single driver directory.
With all isdn4linux hardware drivers gone, this is only a wrapper around
CAPI to support old user space. However, from looking at the mailing
list, it seems that the last time anyone asked about it was in 2014,
when the upgrade from a linux-2.4 installation failed, and mISDN was
suggested as a replacement.
The largest public ISDN network (Deutsche Telekom) was supposed to be
shut down 2018, which must have drastically reduced the number of legacy
installations.
When we last discussed removing i4l in 2016, Karsten Keil suggested
revisiting this in 2018. I guess this is overdue.
With the decline of ISDN, this seems to have become almost completely
obsolete, and even in the past years before that, almost all remaining
users appear to have used mISDN instead.
Birger Harzenetter noted that he is still using i4l/hisax to take
advantage of the 'divert' driver for call diversion, but otherwise uses
mISDN on the same hardware. This is a rare edge case as far as I
can tell, but we are still breaking an actively used work flow
(see https://xkcd.com/1172/).
We debated moving i4l/hisax to staging as an intermediate step, but as
he is not likely to change the setup, that would just delay breaking
this use case. The alternatives here are to stay on stable kernels
< 5.2, to create an external driver repository for isdn4linux, or to
add divert functionality to mISDN.