Sathya Perla [Mon, 28 Aug 2017 17:40:35 +0000 (13:40 -0400)]
bnxt_en: add code to query TC flower offload stats
This patch adds code to implement the TC_CLSFLOWER_STATS TC command and the
FW interface code required to query the flow stats from the HW.
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds the hwrm_cfa_flow_alloc/free() routines
that issue the FW cmds needed for TC flower offload.
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sathya Perla [Mon, 28 Aug 2017 17:40:33 +0000 (13:40 -0400)]
bnxt_en: add TC flower filter offload support
This patch adds support for offloading TC based flow
rules and actions for the 'flower' classifier in the bnxt_en driver.
It includes logic to parse flow rules and actions received from the
TC subsystem, store them and issue the corresponding
hwrm_cfa_flow_alloc/free FW cmds. L2/IPv4/IPv6 flows and drop,
redir, vlan push/pop actions are supported in this patch.
In this patch the hwrm_cfa_flow_xxx routines are just stubs.
The code for these routines is introduced in the next patch for easier
review. Also, the code to query the TC/flower action stats will
be introduced in a subsequent patch.
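As a rough sketch of how such a flower hook is typically dispatched (the bnxt_tc_add_flow/bnxt_tc_del_flow/bnxt_tc_get_flow_stats names below are placeholders for the driver's internal helpers, not confirmed symbols):

static int bnxt_tc_setup_flower(struct bnxt *bp, u16 src_fid,
				struct tc_cls_flower_offload *cls_flower)
{
	/* hand each TC flower command to the matching driver helper */
	switch (cls_flower->command) {
	case TC_CLSFLOWER_REPLACE:
		return bnxt_tc_add_flow(bp, src_fid, cls_flower);
	case TC_CLSFLOWER_DESTROY:
		return bnxt_tc_del_flow(bp, cls_flower);
	case TC_CLSFLOWER_STATS:
		return bnxt_tc_get_flow_stats(bp, cls_flower);
	default:
		return -EOPNOTSUPP;
	}
}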
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sathya Perla [Mon, 28 Aug 2017 17:40:32 +0000 (13:40 -0400)]
bnxt_en: fix clearing devlink ptr from bnxt struct
The routine bnxt_link_bp_to_dl() is used to set the devlink ptr
in bnxt struct (bp) and also to set the bnxt back ptr in
the devlink struct. If devlink_register() fails, bp->dl must
be cleared, which does not happen currently. This patch fixes
bnxt_link_bp_to_dl() so that bp->dl can be cleared by passing a NULL dl ptr.
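A minimal sketch of the reworked helper (struct layout as used by the driver; shown for illustration only), where passing a NULL dl now clears bp->dl:

static inline void bnxt_link_bp_to_dl(struct bnxt *bp, struct devlink *dl)
{
	bp->dl = dl;	/* a NULL dl clears the devlink ptr on failure */

	/* only set the back pointer when a devlink instance is given */
	if (dl) {
		struct bnxt_dl *bp_dl = devlink_priv(dl);

		bp_dl->bp = bp;
	}
}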
Fixes: 4ab0c6a8ffd7 ("bnxt_en: add support to enable VF-representors") Signed-off-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Mon, 28 Aug 2017 17:40:30 +0000 (13:40 -0400)]
bnxt_en: Improve -ENOMEM logic in NAPI poll loop.
If we cannot allocate RX buffers in the NAPI poll loop when processing
an RX event, the current code does not count that event towards the NAPI
budget. This can cause us to potentially loop forever in NAPI if we
consistently cannot allocate new buffers. Improve this by counting an
-ENOMEM event as 1 towards the NAPI budget.
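Schematically, inside the RX processing loop of the NAPI poll routine (the bnxt_rx_pkt() call is shown in simplified form), the change amounts to:

	rc = bnxt_rx_pkt(bp, bnapi, &raw_cons, &event);
	if (likely(rc >= 0))
		rx_pkts += rc;
	else if (rc == -ENOMEM)
		/* count the allocation failure as one unit of budget so
		 * the NAPI loop can eventually give up and reschedule
		 */
		rx_pkts++;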
Cc: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Reported-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Scott Branden [Mon, 28 Aug 2017 17:40:29 +0000 (13:40 -0400)]
bnxt: initialize board_info values with proper enums
Initialize the board_info values with the proper enums for defensive programming
purposes. This avoids errors where the declared enums do not line up with
the entries in the board_info array.
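An illustrative example of the pattern (the enum values and board strings are examples only, not the driver's full list); designated initializers keyed by the enum keep each string tied to its enum value even if entries are added or reordered:

enum board_idx {
	BCM57301,
	BCM57302,
	NETXTREME_C_VF,
};

static const struct {
	char *name;
} board_info[] = {
	[BCM57301] = { "Broadcom BCM57301 NetXtreme-C Ethernet" },
	[BCM57302] = { "Broadcom BCM57302 NetXtreme-C Ethernet" },
	[NETXTREME_C_VF] = { "Broadcom NetXtreme-C Ethernet Virtual Function" },
};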
Signed-off-by: Scott Branden <scott.branden@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ray Jui [Mon, 28 Aug 2017 17:40:28 +0000 (13:40 -0400)]
bnxt: Add PCIe device IDs for bcm58802/bcm58808
Add PCIe device IDs for bcm58802 and bcm58808. Also update the chip
number to declare bcm588xx as chip class phase 4 and later.
Signed-off-by: Ray Jui <ray.jui@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Mon, 28 Aug 2017 17:40:27 +0000 (13:40 -0400)]
bnxt_en: assign CPU affinity hints to bnxt_en IRQs
This patch provides hints to irqbalance to map bnxt_en device IRQs
to specific CPU cores. cpumask_local_spread() is used, which first
maps IRQs to near NUMA cores; when those cores are exhausted, IRQs
are mapped to far NUMA cores.
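A sketch of the idea when requesting the IRQs (irq->cpu_mask and irq->have_cpumask are assumed driver-internal bookkeeping fields): spread the vectors over near-NUMA CPUs first and hand the result to irqbalance as a hint.

	if (zalloc_cpumask_var(&irq->cpu_mask, GFP_KERNEL)) {
		int numa_node = dev_to_node(&bp->pdev->dev);

		irq->have_cpumask = 1;
		/* pick the i-th CPU, preferring the local NUMA node */
		cpumask_set_cpu(cpumask_local_spread(i, numa_node),
				irq->cpu_mask);
		rc = irq_set_affinity_hint(irq->vector, irq->cpu_mask);
		if (rc)
			netdev_warn(bp->dev,
				    "Set affinity failed, IRQ = %d\n",
				    irq->vector);
	}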
Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Mon, 28 Aug 2017 17:40:26 +0000 (13:40 -0400)]
bnxt_en: Improve tx ring reservation logic.
When the number of TX rings is changed (e.g. ethtool -L, enabling XDP TX
rings, etc), the current code tries to reserve the new number of TX rings
before closing and re-opening the NIC. If we are unable to reserve the
new TX rings, we abort the operation and keep the current TX rings.
The problem is that the firmware will disable the current TX rings even
when it cannot reserve the new set of TX rings. We fix it as follows:
1. Instead of reserving the new set of TX rings, just ask the firmware
to check if the new set of TX rings is available. There is a flag in
the firmware message to do that. If not available, abort and the
current TX rings will not be disabled.
2. Do the actual TX ring reservation in the path that opens the NIC.
We keep track of the number of TX rings currently reserved. If the
requested number of TX rings differs from the reserved TX rings, we call
the firmware and reserve again.
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 28 Aug 2017 23:49:49 +0000 (16:49 -0700)]
Merge branch 'NCSI-vlan-filtering'
Samuel Mendoza-Jonas says:
====================
NCSI VLAN Filtering Support
This series (mainly patch 2) adds VLAN filtering to the NCSI implementation.
A fair amount of code already exists in the NCSI stack for VLAN filtering but
none of it is actually hooked up. This goes the final mile and fixes a few
bugs in the existing code found along the way (patch 1).
Patch 3 adds the appropriate flag and callbacks to the ftgmac100 driver to
enable filtering as it's a large consumer of NCSI (and what I've been
testing on).
v3: - Add comment describing change to ncsi_find_filter()
- Catch NULL in clear_one_vid() from ncsi_get_filter()
- Simplify state changes when kicking updated channel
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
ftgmac100: Support NCSI VLAN filtering when available
Register the ndo_vlan_rx_{add,kill}_vid callbacks and set the
NETIF_F_HW_VLAN_CTAG_FILTER feature flag if NCSI is available.
This allows the VLAN core to notify the NCSI driver when changes occur
so that the remote NCSI channel can be properly configured to filter on
the set VLAN tags.
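Roughly, the wiring looks like this (priv->use_ncsi stands in for however the driver tracks NCSI mode); ncsi_vlan_rx_add_vid()/ncsi_vlan_rx_kill_vid() are the helpers exported by the NCSI stack in this series:

static const struct net_device_ops ftgmac100_netdev_ops = {
	/* ... existing ops ... */
	.ndo_vlan_rx_add_vid	= ncsi_vlan_rx_add_vid,
	.ndo_vlan_rx_kill_vid	= ncsi_vlan_rx_kill_vid,
};

	/* at probe time, only advertise HW VLAN filtering when NCSI-managed */
	if (priv->use_ncsi)
		netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;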
Signed-off-by: Samuel Mendoza-Jonas <sam@mendozajonas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Make use of the ndo_vlan_rx_{add,kill}_vid callbacks to have the NCSI
stack process new VLAN tags and configure the channel VLAN filter
appropriately.
Several VLAN tags can be set and a "Set VLAN Filter" packet must be sent
for each one, meaning the ncsi_dev_state_config_svf state must be
repeated. An internal list of VLAN tags is maintained, and compared
against the current channel's ncsi_channel_filter in order to keep track
within the state. VLAN filters are removed in a similar manner, with the
introduction of the ncsi_dev_state_config_clear_vids state. The maximum
number of VLAN tag filters is determined by the "Get Capabilities"
response from the channel.
Signed-off-by: Samuel Mendoza-Jonas <sam@mendozajonas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 28 Aug 2017 23:46:25 +0000 (16:46 -0700)]
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2017-08-27
This series contains updates to i40e and i40evf only.
Sudheer updates code comments and a state variable so that adminq_subtask
will have accurate information whenever it gets scheduled.
Mariusz stores information about FEC modes, to be used when printing link
state information, so that we do not need to call the admin queue when
reporting link status. He also adds VF support for controlling VLAN tag
stripping via ethtool.
Jake provides the majority of changes in this series, starting with
increasing the size of the prefix buffer so that it can hold enough
characters for every possible input, which prevents snprintf truncation.
Fixed other string truncation errors/warnings produced by GCC 7.x.
Removed an unnecessary workaround for resetting XPS. Fixed an issue
where the affinity mask value was mismatched by initializing the value
to cpu_possible_mask, and inverted the logic for checking incorrect CPU vs
IRQ affinity so that the exceptional case is handled at the check.
Removed ULTRA latency mode due to several issues found and will be
looking at better solution for small packet workloads.
Akeem fixes an issue where the incorrect flag was being used to set
promiscuous mode for unicast, which was enabling promiscuous mode only
for multicast instead of unicast.
Carolyn fixes an issue where an error return value is set, but this
value can be overwritten before we actually exit the function. So
remove the error code assignment and add code comments to explain
why we do not need to set and return the error.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Aviad Krawczyk [Sun, 27 Aug 2017 17:35:30 +0000 (01:35 +0800)]
net-next/hinic: fix comparison of a uint16_t type with -1
Remove the search for index of constant buffer size
Signed-off-by: Aviad Krawczyk <aviad.krawczyk@huawei.com> Signed-off-by: Zhao Chen <zhaochen6@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Aviad Krawczyk [Sun, 27 Aug 2017 17:20:26 +0000 (01:20 +0800)]
net-next/hinic: Fix MTU limitation
Fix the hw MTU limitation by setting max_mtu
Signed-off-by: Aviad Krawczyk <aviad.krawczyk@huawei.com> Signed-off-by: Zhao Chen <zhaochen6@huawei.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 28 Aug 2017 23:42:57 +0000 (16:42 -0700)]
Merge branch 'irda-move-to-staging'
Greg Kroah-Hartman says:
====================
irda: move it to drivers/staging so we can delete it
The IRDA code has long been obsolete and broken. So, to keep people
from trying to use it, and to prevent people from having to maintain it,
let's move it to drivers/staging/ so that we can delete it entirely from
the kernel in a few releases.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
irda: move include/net/irda into staging subdirectory
And finally, move the irda include files into
drivers/staging/irda/include/net/irda. Yes, it's a long path, but it
makes it easy for us to just add a Makefile directory path addition and
all of the net and drivers code "just works".
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
It's time to get rid of IRDA. It's long been broken, and no one seems
to use it anymore. So move it to staging and after a while, we can
delete it from there.
To start, move the network irda core from net/irda to
drivers/staging/irda/net/
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 28 Aug 2017 23:41:01 +0000 (16:41 -0700)]
Merge branch 'dpaa_eth-rss'
Madalin Bucur says:
====================
Add RSS to DPAA 1.x Ethernet driver
This patch set introduces Receive Side Scaling for the DPAA Ethernet
driver. Documentation is updated with details related to the new
feature and limitations that apply.
Added also a small fix.
v2: removed a C++ style comment
v3: move struct fman to header file to avoid exporting a function
v4: addressed compilation issues introduced in v3
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Madalin Bucur [Sun, 27 Aug 2017 13:13:40 +0000 (16:13 +0300)]
dpaa_eth: enable Rx hashing control
Allow ethtool control of the Rx flow hashing. RSS is enabled by
default; this allows it to be turned off by bypassing the FMan Keygen
block and sending all traffic on the default Rx frame queue.
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Madalin Bucur [Sun, 27 Aug 2017 13:13:39 +0000 (16:13 +0300)]
dpaa_eth: use multiple Rx frame queues
Add a block of 128 Rx frame queues per port. The FMan hardware will
send traffic on one of these queues based on the FMan port Parse
Classify Distribute setup. The hash computed by the FMan Keygen
block will select the Rx FQ.
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the FMan Keygen with a hardcoded scheme to spread
incoming traffic on a FQ range based on source and destination IPs
and ports.
Signed-off-by: Iordache Florinel <florinel.iordache@nxp.com> Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Gao Feng [Sat, 26 Aug 2017 14:58:58 +0000 (22:58 +0800)]
sched: sfq: drop packets after root qdisc lock is released
Commit 520ac30f4551 ("net_sched: drop packets after root qdisc lock
is released") made a big change to tc for performance. But some paths in
the SFQ enqueue operation were not converted:
1. failing to find the SFQ hash slot;
2. when the queue is full.
Now use qdisc_drop() instead of freeing the skb directly.
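For reference, the deferral pattern from 520ac30f4551 that these SFQ paths are converted to looks roughly like this (schematic enqueue path; queue_is_full() is a placeholder check, not SFQ code):

static int example_enqueue(struct sk_buff *skb, struct Qdisc *sch,
			   struct sk_buff **to_free)
{
	if (unlikely(queue_is_full(sch)))
		/* queue the skb on to_free; it is actually freed only
		 * after the root qdisc lock has been released
		 */
		return qdisc_drop(skb, sch, to_free);

	/* ... normal enqueue work ... */
	return NET_XMIT_SUCCESS;
}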
Signed-off-by: Gao Feng <gfree.wind@vip.163.com> Signed-off-by: David S. Miller <davem@davemloft.net>
During the neighbor traversal the neighbors from different families
should be ignored.
Fixes: c58035a74aba ("mlxsw: spectrum_dpipe: Add support for IPv4 host table dump") Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Sat, 26 Aug 2017 06:35:38 +0000 (08:35 +0200)]
mlxsw: spectrum: compile-in dpipe support only if devlink is enabled
It makes no sense to have dpipe compiled in when devlink is not enabled,
because the devlink dpipe registration is a noop function. So don't compile
it in. This also fixes missing extern struct errors.
Reported-by: kbuild test robot <fengguang.wu@intel.com> Fixes: a86f030915f2 ("mlxsw: spectrum_dpipe: Add support for IPv4 host table dump") Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Dexuan Cui [Sat, 26 Aug 2017 04:52:43 +0000 (04:52 +0000)]
hv_sock: implements Hyper-V transport for Virtual Sockets (AF_VSOCK)
Hyper-V Sockets (hv_sock) supplies a byte-stream based communication
mechanism between the host and the guest. It uses the VMBus ringbuffer as the
transport layer.
With hv_sock, applications between the host (Windows 10, Windows Server
2016 or newer) and the guest can talk with each other using the traditional
socket APIs.
More info about Hyper-V Sockets is available here:
"Make your own integration services":
https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/make-integration-service
The patch implements the necessary support in Linux guest by introducing a new
vsock transport for AF_VSOCK.
Signed-off-by: Dexuan Cui <decui@microsoft.com> Cc: K. Y. Srinivasan <kys@microsoft.com> Cc: Haiyang Zhang <haiyangz@microsoft.com> Cc: Stephen Hemminger <sthemmin@microsoft.com> Cc: Andy King <acking@vmware.com> Cc: Dmitry Torokhov <dtor@vmware.com> Cc: George Zhang <georgezhang@vmware.com> Cc: Jorgen Hansen <jhansen@vmware.com> Cc: Reilly Grant <grantr@vmware.com> Cc: Asias He <asias@redhat.com> Cc: Stefan Hajnoczi <stefanha@redhat.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: Cathy Avery <cavery@redhat.com> Cc: Rolf Neugebauer <rolf.neugebauer@docker.com> Cc: Marcelo Cerri <marcelo.cerri@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 25 Aug 2017 21:39:57 +0000 (14:39 -0700)]
selftests/bpf: check the instruction dumps are populated
Add a basic test for checking whether kernel is populating
the jited and xlated BPF images. It was used to confirm
the behaviour change from commit d777b2ddbecf ("bpf: don't
zero out the info struct in bpf_obj_get_info_by_fd()"),
which made bpf_obj_get_info_by_fd() usable for retrieving
the image dumps.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Carpenter [Fri, 25 Aug 2017 20:27:14 +0000 (23:27 +0300)]
bpf: fix oops on allocation failure
"err" is set to zero if bpf_map_area_alloc() fails so it means we return
ERR_PTR(0) which is NULL. The caller, find_and_alloc_map(), is not
expecting NULL returns and will oops.
Fixes: 174a79ff9515 ("bpf: sockmap with sk redirect support") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Mon, 28 Aug 2017 22:14:20 +0000 (15:14 -0700)]
net: Add comment that early_demux can change via sysctl
Patches trying to constify inet{6}_protocol have been reverted twice:
39294c3df2a8 ("Revert "ipv6: constify inet6_protocol structures"") to
revert 3a3a4e3054137 and then 03157937fe0b5 ("Revert "ipv4: make
net_protocol const"") to revert aa8db499ea67.
Add a comment that the structures can not be const because the
early_demux field can change based on a sysctl.
Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Willem de Bruijn [Fri, 25 Aug 2017 17:10:43 +0000 (13:10 -0400)]
xen-netback: update ubuf_info initialization to anonymous union
The xen driver initializes struct ubuf_info fields using designated
initializers. I recently moved these fields inside a nested anonymous
struct inside an anonymous union. I had missed this use case.
This breaks compilation of xen-netback with older compilers.
From kbuild bot with gcc-4.4.7:
drivers/net//xen-netback/interface.c: In function
'xenvif_init_queue':
>> drivers/net//xen-netback/interface.c:554: error: unknown field 'ctx' specified in initializer
>> drivers/net//xen-netback/interface.c:554: warning: missing braces around initializer
drivers/net//xen-netback/interface.c:554: warning: (near initialization for '(anonymous).<anonymous>')
>> drivers/net//xen-netback/interface.c:554: warning: initialization makes integer from pointer without a cast
>> drivers/net//xen-netback/interface.c:555: error: unknown field 'desc' specified in initializer
Add double braces around the designated initializers to match their
nested position in the struct. After this, compilation succeeds again.
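A standalone illustration of the pattern, using a simplified stand-in for struct ubuf_info (the fields below only mirror the shape of the real structure): with the anonymous union/struct nesting, older GCC needs one level of braces per anonymous aggregate before the designators can be resolved.

#include <stddef.h>

struct example_ubuf_info {
	void (*callback)(struct example_ubuf_info *ubuf, int success);
	union {
		struct {
			unsigned long desc;
			void *ctx;
		};
		unsigned long long raw;
	};
};

/* one brace pair for the union, one for the nested anonymous struct */
static struct example_ubuf_info ubuf = {
	.callback = NULL,
	{ { .desc = 1, .ctx = NULL } },
};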
Fixes: 4ab6c99d99bb ("sock: MSG_ZEROCOPY notification coalescing") Reported-by: kbuild bot <lpk@intel.com> Signed-off-by: Willem de Bruijn <willemb@google.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
gre: add collect_md mode for ERSPAN tunnel
This patch series provides collect_md mode for the ERSPAN tunnel. The first patch
refactors the existing gre_fb_xmit function by extracting the route cache
portion into a new function called prepare_fb_xmit. The second patch
introduces the collect_md mode for ERSPAN tunnel, by calling the
prepare_fb_xmit function and adding ERSPAN specific logic. The final patch
adds the test case using bpf_skb_{set,get}_tunnel_{key,opt}.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
William Tu [Fri, 25 Aug 2017 16:21:28 +0000 (09:21 -0700)]
gre: add collect_md mode to ERSPAN tunnel
Similar to gre, vxlan, geneve, ipip tunnels, allow ERSPAN tunnels to
operate in 'collect metadata' mode. bpf_skb_[gs]et_tunnel_key() helpers
can make use of it right away. OVS can use it as well in the future.
Signed-off-by: William Tu <u9012063@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Bhumika Goyal [Fri, 25 Aug 2017 14:21:45 +0000 (19:51 +0530)]
RDS: make rhashtable_params const
Make this const as it is either used during a copy operation or passed
to a const argument of the function rhltable_init.
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 28 Aug 2017 18:13:22 +0000 (11:13 -0700)]
Merge branch 'sockmap-uapi-updates-and-fixes'
John Fastabend says:
====================
sockmap UAPI updates and fixes
This series updates sockmap UAPI, adds additional test cases and
provides a couple fixes.
First the UAPI changes. The original API added two sockmap specific
API artifacts: (a) a new map_flags field with a sockmap specific update
command and (b) a new sockmap specific attach field in the attach data
structure. After this series instead of attaching programs with a
single command now two commands are used to attach programs to maps
individually. This allows us to add new programs easily in the future
and avoids any specific sockmap data structure additions. The
map_flags field is also removed and instead we allow socks to be
added to multiple maps that may or may not have programs attached.
This allows users to decide if a sock should run a SK_SKB program type
on receive based on the map it is attached to. This is a nice
improvement. See patches for specific details.
More test cases were added to test above changes and also stress test
the interface.
Finally two fixes/improvements were made. First a missing rcu
section was added. Second now sockmap can build without KCM being
used to trigger 'y' on CONFIG_STREAM_PARSER by selecting a new
BPF config option.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Mon, 28 Aug 2017 14:12:41 +0000 (07:12 -0700)]
bpf: test_maps add sockmap stress test
Sockmap is a bit different than normal stress tests that can run
in parallel as is. We need to reuse the same socket pool and map
pool to get good stress test cases.
Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
SOCKMAP uses strparser code (compiled with Kconfig option
CONFIG_STREAM_PARSER) to run the parser BPF program. Without this
config option set sockmap wont be compiled. However, at the moment
the only way to pull in the strparser code is to enable KCM.
To resolve this create a BPF specific config option to pull
only the strparser piece in that sockmap needs. This also
allows folks who want to use BPF/syscall/maps but don't need
sockmap to easily opt out.
Signed-off-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Mon, 28 Aug 2017 14:12:01 +0000 (07:12 -0700)]
bpf: sockmap indicate sock events to listeners
After userspace pushes sockets into a sockmap it may not be receiving
data (assuming stream_{parser|verdict} programs are attached). But, it
may still want to manage the socks. A common pattern is to poll/select
for a POLLRDHUP event so we can close the sock.
This patch adds the logic to wake up these listeners.
Also add TCP_SYN_SENT to the list of events to handle. We don't want
to break the connection just because we happen to be in this state.
Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Mon, 28 Aug 2017 14:11:43 +0000 (07:11 -0700)]
bpf: harden sockmap program attach to ensure correct map type
When attaching a program to sockmap we need to check map type
is correct.
Fixes: 174a79ff9515 ("bpf: sockmap with sk redirect support") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Mon, 28 Aug 2017 14:11:24 +0000 (07:11 -0700)]
bpf: more SK_SKB selftests
Tests packet read/writes and additional skb fields.
Signed-off-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Mon, 28 Aug 2017 14:11:05 +0000 (07:11 -0700)]
bpf: additional sockmap self tests
Add some more sockmap tests to cover,
- forwarding to NULL entries
- more than two maps to test list ops
- forwarding to different map
Signed-off-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Mon, 28 Aug 2017 14:10:45 +0000 (07:10 -0700)]
bpf: sockmap add missing rcu_read_(un)lock in smap_data_ready
References to psock must be done inside RCU critical section.
Fixes: 174a79ff9515 ("bpf: sockmap with sk redirect support") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Mon, 28 Aug 2017 14:10:25 +0000 (07:10 -0700)]
bpf: sockmap, remove STRPARSER map_flags and add multi-map support
The addition of map_flags BPF_SOCKMAP_STRPARSER flags was to handle a
specific use case where we want to have BPF parse program disabled on
an entry in a sockmap.
However, Alexei found the API a bit cumbersome and I agreed. Let's
remove the STRPARSER flag and support the use case by allowing socks
to be in multiple maps. This allows users to create two maps one with
programs attached and one without. When socks are added to maps they
now inherit any programs attached to the map. This is a nice
generalization and IMO improves the API.
The API rules are less ambiguous and do not need a flag:
- When a sock is added to a sockmap we have two cases,
i. The sock map does not have any attached programs so
we can add sock to map without inheriting bpf programs.
The sock may exist in 0 or more other maps.
ii. The sock map has an attached BPF program. To avoid duplicate
bpf programs we only add the sock entry if it does not have
an existing strparser/verdict attached, returning -EBUSY if
a program is already attached. Otherwise attach the program
and inherit strparser/verdict programs from the sock map.
This allows for socks to be in a multiple maps for redirects and
inherit a BPF program from a single map.
Also this patch simplifies the logic around BPF_{EXIST|NOEXIST|ANY}
flags. In the original patch I tried to be extra clever and only
update map entries when necessary. Now I've decided the complexity
is not worth it. If users constantly update an entry with the same
sock for no reason (i.e. update an entry without actually changing
any parameters on map or sock) we still do an alloc/release. Using
this and allowing multiple entries of a sock to exist in a map the
logic becomes much simpler.
Note: Now that multiple maps are supported the "maps" pointer called
when a socket is closed becomes a list of maps to remove the sock from.
To keep the map up to date when a sock is added to the sockmap we must
add the map/elem in the list. Likewise when it is removed we must
remove it from the list. This results in searching the per psock list
on delete operation. On TCP_CLOSE events we walk the list and remove
the psock from all map/entry locations. I don't see any perf
implications in this because at most I have a psock in two maps. If
a psock were to be in many maps it's possible this might be noticeable
on delete, but I can't think of a reason to dup a psock in many maps.
The sk_callback_lock is used to protect read/writes to the list. This
was convenient because in all locations we were taking the lock
anyways just after working on the list. Also the lock is per sock so
in normal cases we shouldn't see any contention.
Suggested-by: Alexei Starovoitov <ast@kernel.org> Fixes: 174a79ff9515 ("bpf: sockmap with sk redirect support") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend [Mon, 28 Aug 2017 14:10:04 +0000 (07:10 -0700)]
bpf: convert sockmap field attach_bpf_fd2 to type
In the initial sockmap API we provided strparser and verdict programs
using a single attach command by extending the attach API with the
attach_bpf_fd2 field.
However, if we add other programs in the future we will be adding a
field for every new possible type, attach_bpf_fd(3,4,..). This
seems a bit clumsy for an API. So let's push the programs using two
new type fields.
This has the advantage of having a readable name and can easily be
extended in the future.
Updates to samples and sockmap included here also generalize tests
slightly to support upcoming patch for multiple map support.
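From user space the new flow looks roughly like this (a sketch using the bpf_prog_attach() helper from tools/lib/bpf; error handling trimmed): each program is attached to the sockmap with its own attach type rather than through an extra fd field.

#include "bpf/bpf.h"	/* tools/lib/bpf helper declaring bpf_prog_attach() */

int attach_sockmap_progs(int map_fd, int parser_fd, int verdict_fd)
{
	int err;

	/* attach the strparser program to the map */
	err = bpf_prog_attach(parser_fd, map_fd,
			      BPF_SK_SKB_STREAM_PARSER, 0);
	if (err)
		return err;

	/* attach the verdict program to the same map */
	return bpf_prog_attach(verdict_fd, map_fd,
			       BPF_SK_SKB_STREAM_VERDICT, 0);
}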
Signed-off-by: John Fastabend <john.fastabend@gmail.com> Fixes: 174a79ff9515 ("bpf: sockmap with sk redirect support") Suggested-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David Wu [Tue, 22 Aug 2017 09:24:25 +0000 (17:24 +0800)]
ARM: dts: rk3228-evb: Fix the compiling error
This patch solves the following error:
arch/arm/boot/dts/rk3228-evb.dtb: ERROR (phandle_references): Reference to non-existent node or label "phy0"
Fixes: db40f15b53e4 ("ARM: dts: rk3228-evb: Enable the integrated PHY for gmac") Signed-off-by: David Wu <david.wu@rock-chips.com> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
Jacob Keller [Fri, 14 Jul 2017 13:10:13 +0000 (09:10 -0400)]
i40e/i40evf: avoid dynamic ITR updates when polling or low packet rate
The dynamic ITR algorithm depends on a calculation of usecs which
assumes that the interrupts have been firing constantly at the interrupt
throttle rate. This is not guaranteed because we could have a low packet
rate, or have been polling in software.
We'll estimate whether this is the case by using jiffies to determine if
too much time has passed. If the time difference in jiffies is large, we are
guaranteed to have an incorrect calculation. If the time difference in
jiffies is small, we might have been polling some, but the difference
shouldn't affect the calculation too much.
This ensures that we don't get stuck in BULK latency during certain rare
situations where we receive bursts of packets that force us into NAPI
polling.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Fri, 14 Jul 2017 13:10:12 +0000 (09:10 -0400)]
i40e/i40evf: remove ULTRA latency mode
Since commit c56625d59726 ("i40e/i40evf: change dynamic interrupt
thresholds") a new higher latency ITR setting called I40E_ULTRA_LATENCY
was added with a cryptic comment about how it was meant for adjusting Rx
more aggressively when streaming small packets.
This mode was attempting to calculate packets per second and then kick
in when we have a huge number of small packets.
Unfortunately, the ULTRA setting was kicking in for workloads it wasn't
intended for, including single-thread UDP_STREAM workloads.
This wasn't caught for a variety of reasons. First, the ip_defrag
routines were improved somewhat which makes the UDP_STREAM test still
reasonable at 10GbE, even when dropped down to 8k interrupts a second.
Additionally, some other obvious workloads appear to work fine, such
as TCP_STREAM.
The 40k packets-per-second threshold doesn't make sense for a number of reasons. First, we
absolutely can do more than 40k packets per second. Second, we calculate
the value inline in an integer, which sometimes can overflow resulting
in using incorrect values.
If we fix this overflow it makes it even more likely that we'll enter
ULTRA mode which is the opposite of what we want.
The ULTRA mode was added originally as a way to reduce CPU utilization
during a small packet workload where we weren't keeping up anyways. It
should never have been kicking in during these other workloads.
Given the issues outlined above, let's remove the ULTRA latency mode. If
necessary, a better solution to the CPU utilization issue for small
packet workloads will be added in a future patch.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Fri, 14 Jul 2017 13:10:11 +0000 (09:10 -0400)]
i40e: invert logic for checking incorrect cpu vs irq affinity
In commit 96db776a3682 ("i40e/vf: fix interrupt affinity bug")
we added some code to force exit of polling in case we did
not have the correct CPU. This is important since it was possible for
the IRQ affinity to be changed while the CPU is pegged at 100%. This can
result in the polling routine being stuck on the wrong CPU until
traffic finally stops.
Unfortunately, the implementation, "if the CPU is correct, exit as
normal, otherwise, fall-through to the end-polling exit" is incredibly
confusing to reason about. In this case, the normal flow looks like the
exception, while the exception actually occurs far away from the if
statement and comment.
We recently discovered and fixed a bug in this code because we were
incorrectly initializing the affinity mask.
Re-write the code so that the exceptional case is handled at the check,
rather than having the logic be spread through the regular exit flow.
This does end up with minor code duplication, but the resulting code is
much easier to reason about.
The new logic is identical, but inverted. If we are running on a CPU not
in our affinity mask, we'll exit polling. However, the code flow is much
easier to understand.
Note that we don't actually have to check for MSI-X, because in the MSI
case we'll only have one q_vector, but its default affinity mask should
be correct as it includes all CPUs when it's initialized. Further, we
could at some point add code to set up the notifier for the non-MSI-X
case and enable this workaround for that case too, if desired, though
there isn't much gain since it's unlikely to be the common case.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Fri, 14 Jul 2017 13:10:10 +0000 (09:10 -0400)]
i40e: initialize our affinity_mask based on cpu_possible_mask
On older kernels a call to irq_set_affinity_hint does not guarantee that
the IRQ affinity will be set. If nothing else on the system sets the IRQ
affinity this can result in a bug in the i40e_napi_poll() routine where
we notice that our interrupt fired on the "wrong" CPU according to our
internal affinity_mask variable.
This results in a bug where we continuously tell NAPI to stop polling to
move the interrupt to a new CPU, but the CPU never changes because our
affinity mask does not match the actual mask setup for the IRQ.
The root problem is a mismatched affinity mask value. So lets initialize
the value to cpu_possible_mask instead. This ensures that prior to the
first time we get an IRQ affinity notification we'll have the mask set
to include every possible CPU.
We use cpu_possible_mask instead of cpu_online_mask since the former is
almost certainly never going to change, while the latter might change
after we've made a copy.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Fri, 14 Jul 2017 13:10:09 +0000 (09:10 -0400)]
i40e: move enabling icr0 into i40e_update_enable_itr
If we don't have MSI-X enabled, we handle interrupts on all icr0. This
is a special case, so let's move the conditional into
i40e_update_enable_itr() in order to make i40e_napi_poll easier to
read.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Fri, 14 Jul 2017 13:10:08 +0000 (09:10 -0400)]
i40e: remove workaround for resetting XPS
Since commit 3ffa037d7f78 ("i40e: Set XPS bit mask to zero in DCB mode")
we've tried to reset the XPS settings by building a custom
empty CPU mask.
This workaround is not necessary because we're not really removing the
XPS setting, but simply setting it so that no CPU is valid.
Second, we shorten the code further by using zalloc_cpumask_var instead
of a separate call to bitmap_zero().
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e: Fix for unused value issue found by static analysis
This patch fixes an issue where an error return value is
set, but without an immediate exit, the value can be overwritten
by the following code execution. The condition at this point
is not fatal, so remove the error assignment and add a comment explaining the
intent for future code maintainers.
Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e/i40evf: support for VF VLAN tag stripping control
This patch gives the VF the capability to control VLAN tag stripping via
ethtool. As rx-vlan-offload was fixed before, now the VF is able to
change it using "ethtool --offload <IF> rxvlan on/off" settings.
Signed-off-by: Mariusz Stachura <mariusz.stachura@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Wed, 12 Jul 2017 09:46:12 +0000 (05:46 -0400)]
i40e: force VMDQ device name truncation
In new versions of GCC since 7.x a new warning exists which warns when
a string is truncated before all of the format can be completed.
When we setup VMDQ netdev names we are copying a pre-existing interface
name which could be up to 15 characters in length. Since we also add
4 bytes, v, the literal %, the d and a \0 null, we would overrun the
available size unless snprintf truncated for us.
The snprintf call will of course truncate on the end, so let's instead
modify the code to force truncation of the copied netdev name by
4 characters, to create enough space for the 4 bytes we're adding.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Wed, 12 Jul 2017 09:46:11 +0000 (05:46 -0400)]
i40evf: fix possible snprintf truncation of q_vector->name
The q_vector names are based on the interface name with a driver prefix,
the type of q_vector setup, and the queue number. We previously set the
size of this variable to IFNAMSIZ + 9, which is incorrect, because we
actually include a minimum of 14 characters extra beyond the interface
name size.
New versions of GCC since 7 include a new warning that detects this
possible truncation and complains. We can fix this by increasing the
size in case our interface name is too large, to avoid truncation. We
don't need to go beyond 14 because the compiler is smart enough to
realize our values can never exceed a size of 1 digit. We do go up to 15 here
because possible future changes may increase the number of queues beyond
one digit.
While we are here, also change some variables to be unsigned (since they
are never negative) and stop using an extra unnecessary %s format
specifier.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e: Use correct flag to enable egress traffic for unicast promisc
Although we usually set true promiscuous mode for both multicast and
unicast at the same time, it is possible to set them
individually, so using the allmulti flag, which is only for all-multicast,
might cause unwanted behavior when mirroring egress traffic promiscuously
for unicast in a VF.
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Wed, 12 Jul 2017 09:46:09 +0000 (05:46 -0400)]
i40e: prevent snprintf format specifier truncation
Increase the size of the prefix buffer so that it can hold enough
characters for every possible input. Although 20 is enough for all
expected inputs, it is possible for the values to be larger than
expected, resulting in a possibly truncated string. Additionally, let's
use sizeof(prefix) in order to ensure we use the correct size if we need
to change the array length in the future.
New versions of GCC starting at 7 now include warnings to prevent
truncation unless you handle the return code. At most 27 bytes can be
written here, so let's just increase the buffer size even though for all
expected hw->bus.* values we only need 20.
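The shape of the fix, as a sketch (the format string is illustrative; hw->bus.* are the values mentioned above):

	char prefix[27];	/* sized for the worst-case expansion */

	snprintf(prefix, sizeof(prefix), "i40e %02x:%02x.%x: \t\t",
		 hw->bus.bus_id, hw->bus.device, hw->bus.func);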
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Store information about the FEC modes that were requested. It will be used
in the function that prints link status information, so there is no
need to call the admin queue there.
Signed-off-by: Mariusz Stachura <mariusz.stachura@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
During NVM update, state machine gets into unrecoverable state because
i40e_clean_adminq_subtask can get scheduled after the admin queue
command but before other state variables are updated. This causes
incorrect input to i40e_nvmupd_check_wait_event and state transitions
don't happen.
This fix updates the state variables so that adminq_subtask will have
accurate information whenever it gets scheduled.
Signed-off-by: Sudheer Mogilappagari <sudheer.mogilappagari@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Antoine Ténart [Fri, 25 Aug 2017 13:24:46 +0000 (15:24 +0200)]
net: mvpp2: fix the packet size configuration for 10G
The MVPP22_XLG_CTRL1_FRAMESIZELIMIT define is used as an offset, but is
defined as BIT(0). Update its name to contain "OFFS" as in offset and
fix its value to the offset value, 0.
Reported-by: Stefan Chulski <stefanc@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com> Fixes: 76eb1b1de5b6 ("net: mvpp2: set maximum packet size for 10G ports") Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 26 Aug 2017 02:39:58 +0000 (19:39 -0700)]
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
40GbE Intel Wired LAN Driver Updates 2017-08-25
This series contains updates to i40e and i40evf only.
Mitch adjusts the max packet size to account for two VLAN tags.
Sudheer provides a fix to ensure that the watchdog timer is scheduled
immediately after admin queue operations are scheduled in i40evf_down().
Fixes an issue by adding locking around the admin queue command and
update of state variables so that adminq_subtask will have accurate
information whenever it gets scheduled.
Anjali fixes a bug where the PF flag setup should happen before the VMDq
RSS queue count is initialized for VMDq VSI to get the right number of
queues for RSS in the case of x722 devices. Fixed a problem with the
hardware ATR eviction feature where the NVM setting was incorrect.
Jake separates the flags into two types, hw_features and flags. The
hw_features flags contain a set of features which are enabled at init
time and will not contain feature flags that can be toggled. Everything
else will remain in the flags variable, and can be modified anytime
during run time. We should not be directly copying a cpumask_t, since
it is a bitmap and might not be copied correctly, so use cpumask_copy()
instead.
Stefan Assmann makes vf_offload_flags more "generic" by renaming it to
vf_cap_flags, which allows other capabilities besides offloading to be
added.
Alan makes it such that if adaptive-rx/tx is enabled, the user cannot
make any manual adjustments to interrupt moderation. Also makes it so
that if ITR is disabled but adaptive-rx/tx is then enabled, ITR will be
re-enabled.
v2: Dropped patches #1 & #8 from the original patch series submission,
while Jesse and Jake re-work their patches based on feedback from
David Miller. Also removed the duplicate patch 3 that was
accidentally sent out twice in the previous submission.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 26 Aug 2017 02:24:59 +0000 (19:24 -0700)]
Merge branch 'nfp-SR-IOV-ndos-support'
Jakub Kicinski says:
====================
nfp: SR-IOV ndos support
This set adds basic SR-IOV including setting/getting VF MAC addresses,
VLANs, link state and spoofcheck settings. It is wired up for both
vNICs and representors (note: ip link will not report VF settings on
VF/PF representors because they are not linked to the PF PCI device).
Pablo and team add the basic implementation, Simon and Dirk follow
up with the representor plumbing.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Simon Horman [Fri, 25 Aug 2017 04:31:50 +0000 (21:31 -0700)]
nfp: add basic SR-IOV ndo functions to representors
Add basic ndo_set/get_vf to support SR-IOV on all types
of port representors.
Signed-off-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Pablo Cascón [Fri, 25 Aug 2017 04:31:49 +0000 (21:31 -0700)]
nfp: add basic SR-IOV ndo functions
Add basic ndo_set/get_vf to support SR-IOV.
VF to egress phy static mapping for now.
Use vfcfg ABI version 2 to write the info to the FW and collect
the return value from the mailbox.
Signed-off-by: Pablo Cascón <pablo.cascon@netronome.com> Signed-off-by: Jimmy Kizito <jimmy.kizito@netronome.com> Signed-off-by: Rami Tomer <rami.tomer@netronome.com> Signed-off-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes: 306b13eb3cf9 ("proto_ops: Add locked held versions of sendmsg and sendpage") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Dmitry Vyukov <dvyukov@google.com> Cc: Tom Herbert <tom@quantonium.net> Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 24 Aug 2017 23:51:30 +0000 (16:51 -0700)]
net_sched: kill u32_node pointer in Qdisc
It is ugly to hide a u32-filter-specific pointer inside Qdisc;
this breaks the TC layers:
1. Qdisc is a generic representation, should not have any specific
data of any type
2. Qdisc layer is above filter layer, should only save filters in
the list of struct tcf_proto.
This pointer is used as the head of the chain of u32 hash tables,
that is struct tc_u_hnode, because the u32 filter is very special:
it allows creating multiple hash tables within one qdisc and
across multiple u32 filters.
Instead of using this ugly pointer, we can just save it in a global
hash table key'ed by (dev ifindex, qdisc handle), therefore we can
still treat it as a per qdisc basis data structure conceptually.
Of course, because of network namespaces, this key is not unique
at all, but it is fine as we already have a pointer to the Qdisc in
struct tc_u_common, so we can just compare the pointers on collision.
And this only affects slow paths and has no impact on the fast path,
thanks to the pointer ->tp_c.
Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: Jiri Pirko <jiri@resnulli.us> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 24 Aug 2017 23:51:29 +0000 (16:51 -0700)]
net_sched: remove tc class reference counting
For TC classes, their ->get() and ->put() are always paired, and the
reference counting is completely useless, because:
1) For class modification and dumping paths, we already hold RTNL lock,
so all of these ->get(),->change(),->put() are atomic.
2) For filter binding/unbinding, we use another reference counter than
this one, and they should hold the RTNL lock too.
3) For ->qlen_notify(), it is special because it is called on ->enqueue()
path, but we already hold qdisc tree lock there, and we hold this
tree lock when grafting or deleting the class too, so it should not be gone
or changed until we release the tree lock.
Therefore, this patch removes ->get() and ->put(), but:
1) Adds a new ->find() to find the pointer to a class by classid, no
refcnt.
2) Move the original class destroy upon the last refcnt into ->delete(),
right after releasing tree lock. This is fine because the class is
already removed from hash when holding the lock.
For those who also use ->put() as ->unbind(), just rename them to reflect
this change.
Cc: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 24 Aug 2017 23:51:28 +0000 (16:51 -0700)]
net_sched: introduce tclass_del_notify()
Like for TC actions, ->delete() is a special case,
we have to prepare and fill the notification before delete
otherwise would get use-after-free after we remove the
reference count.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Thu, 24 Aug 2017 23:51:27 +0000 (16:51 -0700)]
net_sched: get rid of more forward declarations
This is not needed if we move them up properly.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Carpenter [Fri, 25 Aug 2017 08:24:28 +0000 (11:24 +0300)]
hinic: skb_pad() frees on error
The skb_pad() function frees the skb on error, so this code has a double
free.
Fixes: 00e57a6d4ad3 ("net-next/hinic: Add Tx operation") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 26 Aug 2017 00:10:24 +0000 (17:10 -0700)]
Merge branch 'ipv6-sr-updates'
David Lebrun says:
====================
net: updates for IPv6 Segment Routing
v2: seg6_lwt_headroom() is not relevant for lwtunnel_input_redirect()
use cases, and L2ENCAP only uses this redirection. Fix incoherence
between arbitrary MAC header size support and fixed headroom
computation by setting only LWTUNNEL_STATE_INPUT_REDIRECT for L2ENCAP
mode.
This patch series provides several updates for the SRv6 implementation. The
first patch leverages the existing infrastructure to support encapsulation
of IPv4 packets. The second patch implements the T.Encaps.L2 SR function,
enabling to encapsulate an L2 Ethernet frame within an IPv6+SRH packet.
The last three patches update the seg6local lightweight tunnel, and mainly
implement four new actions: End.T, End.DX2, End.DX4 and End.DT6.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Lebrun [Fri, 25 Aug 2017 07:56:47 +0000 (09:56 +0200)]
ipv6: sr: add helper functions for seg6local
This patch adds three helper functions to be used with the seg6local packet
processing actions.
The decap_and_validate() function will be used by the End.D* actions, that
decapsulate an SR-enabled packet.
The advance_nextseg() function applies the fundamental operations to update
an SRH for the next segment.
The lookup_nexthop() function helps select the next-hop for the processed
SR packets. It supports an optional next-hop address to route the packet
specifically through it, and an optional routing table to use.
Signed-off-by: David Lebrun <david.lebrun@uclouvain.be> Signed-off-by: David S. Miller <davem@davemloft.net>
David Lebrun [Fri, 25 Aug 2017 07:56:45 +0000 (09:56 +0200)]
ipv6: sr: add support for encapsulation of L2 frames
This patch implements the L2 frame encapsulation mechanism, referred to
as T.Encaps.L2 in the SRv6 specifications [1].
A new type of SRv6 tunnel mode is added (SEG6_IPTUN_MODE_L2ENCAP). It only
accepts packets with an existing MAC header (i.e., it will not work for
locally generated packets). The resulting packet looks like IPv6 -> SRH ->
Ethernet -> original L3 payload. The next header field of the SRH is set to
NEXTHDR_NONE.
David Lebrun [Fri, 25 Aug 2017 07:56:44 +0000 (09:56 +0200)]
ipv6: sr: add support for ip4ip6 encapsulation
This patch enables the SRv6 encapsulation mode to carry an IPv4 payload.
All the infrastructure was already present, I just had to add a parameter
to seg6_do_srh_encap() to specify the inner packet protocol, and perform
some additional checks.
Usage example:
ip route add 1.2.3.4 encap seg6 mode encap segs fc00::1,fc00::2 dev eth0
Signed-off-by: David Lebrun <david.lebrun@uclouvain.be> Signed-off-by: David S. Miller <davem@davemloft.net>
i40e: synchronize nvmupdate command and adminq subtask
During NVM update, state machine gets into unrecoverable state because
i40e_clean_adminq_subtask can get scheduled after the admin queue
command but before other state variables are updated. This causes
incorrect input to i40e_nvmupd_check_wait_event and state transitions
don't happen.
This issue existed before but surfaced after commit 373149fc99a0
("i40e: Decrease the scope of rtnl lock")
This fix adds locking around admin queue command and update of
state variables so that adminq_subtask will have accurate information
whenever it gets scheduled.
Signed-off-by: Sudheer Mogilappagari <sudheer.mogilappagari@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alan Brady [Wed, 12 Jul 2017 09:46:06 +0000 (05:46 -0400)]
i40e: prevent changing ITR if adaptive-rx/tx enabled
Currently the driver allows the user to change (or even disable)
interrupt moderation if adaptive-rx/tx is enabled when this should
not be the case.
Adaptive RX/TX will not respect the user's ITR settings so
allowing the user to change it is weird. This bug would also
allow the user to disable interrupt moderation with adaptive-rx/tx
enabled which doesn't make much sense either.
This patch makes it such that if adaptive-rx/tx is enabled, the user
cannot make any manual adjustments to interrupt moderation. It also
makes it so that if ITR is disabled but adaptive-rx/tx is then
enabled, ITR will be re-enabled.
Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Wed, 12 Jul 2017 09:46:05 +0000 (05:46 -0400)]
i40e: use cpumask_copy instead of direct assignment
According to the header file cpumask.h, we shouldn't be directly copying
a cpumask_t, since it's a bitmap and might not be copied correctly. Let's
use the provided cpumask_copy() function instead.
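In short, the copy is now done through the accessor (shown here for a q_vector affinity mask; the field name is illustrative):

	/* copy the bitmap via the accessor instead of struct assignment */
	cpumask_copy(&q_vector->affinity_mask, cpu_possible_mask);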
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Alan Brady [Wed, 12 Jul 2017 09:46:04 +0000 (05:46 -0400)]
i40evf: use netdev variable in reset task
If we're going to bother initializing a variable to reference it we might
as well use it.
Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Stefan Assmann [Thu, 29 Jun 2017 13:12:24 +0000 (15:12 +0200)]
i40e/i40evf: rename vf_offload_flags to vf_cap_flags in struct virtchnl_vf_resource
The current name of vf_offload_flags indicates that the bitmap is
limited to offload related features. Make this more generic by renaming
it to vf_cap_flags, which allows for other capabilities besides
offloading to be added.
Signed-off-by: Stefan Assmann <sassmann@kpanic.de> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Fri, 23 Jun 2017 08:24:51 +0000 (04:24 -0400)]
i40e: move check for avoiding VID=0 filters into i40e_vsi_add_vlan
In i40e_vsi_add_vlan we treat attempting to add VID=0 as an error,
because it does not do what the caller might expect. We already special
case VID=0 in i40e_vlan_rx_add_vid so that we avoid this error when
adding the VLAN.
This special casing is necessary so that we do not add the VLAN=0 filter
since we don't want to stop receiving untagged traffic. Unfortunately,
not all callers of i40e_vsi_add_vlan are aware of this, including when
we add VLANs from a VF device.
Rather than special casing every single caller of i40e_vsi_add_vlan,
let's just move this check internally. This makes the code simpler
because the caller does not need to be aware of how VLAN=0 is special,
and we don't forget to add this check in new places.
This fixes a harmless error message displaying when adding a VLAN from
within a VF. The message was meaningless but there is no reason to
confuse end users and system administrators, and this is now avoided.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Fri, 23 Jun 2017 08:24:50 +0000 (04:24 -0400)]
i40e/i40evf: use cmpxchg64 when updating private flags in ethtool
When a user gives an invalid command to change a private flag which is
not supported, either because it is read-only, or the device is not
capable of the feature, we simply ignore the request.
A naive solution would simply be to report error codes when one of the
flags was not supported. However, this causes problems because it makes
the operation not atomic. If a user requests multiple private flags
together at once we could end up changing one before failing at the
second flag.
We can do a bit better if we instead update a temporary copy of the
flags variable in the loop, and then copy it into place after. If we
aren't careful this has the pitfall of potentially silently overwriting
any changes caused by other threads.
Avoid this by using cmpxchg64 which will compare and swap the flags
variable only if it currently matched the old value. We'll report
-EAGAIN in the (hopefully rare!) case where the cmpxchg64 fails.
This ensures that we can properly report when flags are not supported in
an atomic fashion without the risk of overwriting other threads' changes.
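A schematic of the update path, assuming the private flags live in a 64-bit pf->flags field (names simplified):

	u64 orig_flags, new_flags;

	orig_flags = READ_ONCE(pf->flags);
	new_flags = orig_flags;

	/* ... toggle the requested private flags in new_flags, rejecting
	 * any read-only or unsupported bits ...
	 */

	/* publish atomically; bail out if another thread changed the flags */
	if (cmpxchg64(&pf->flags, orig_flags, new_flags) != orig_flags)
		return -EAGAIN;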
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>