git.proxmox.com Git - mirror_ubuntu-eoan-kernel.git/log
qed: hw_init() to receive parameter-struct
Mintz, Yuval [Tue, 28 Mar 2017 12:12:51 +0000 (15:12 +0300)]
qed: hw_init() to receive parameter-struct

We'll soon need additional information, so start by changing
the infrastructure to receive the initializing variables
via a parameter struct.
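
For illustration, a hedged sketch of what such a parameter struct and call
site could look like (the field names and the QED_INT_MODE_MSIX value below
are assumptions for this example, not taken verbatim from the patch):

struct qed_hw_init_params {
        struct qed_tunn_start_params *p_tunn;  /* tunnelling config, if any */
        bool b_hw_start;                       /* start HW after init */
        enum qed_int_mode int_mode;            /* interrupt mode to use */
        bool allow_npar_tx_switch;
        const u8 *bin_fw_data;                 /* binary firmware image */
};

/* callers fill the struct once and pass a single pointer */
struct qed_hw_init_params params = {
        .p_tunn                 = p_tunn,
        .b_hw_start             = true,
        .int_mode               = QED_INT_MODE_MSIX,
        .allow_npar_tx_switch   = true,
        .bin_fw_data            = fw_data,
};

rc = qed_hw_init(cdev, &params);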

Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
qed: Correct HW stop flow
Tomer Tayar [Tue, 28 Mar 2017 12:12:50 +0000 (15:12 +0300)]
qed: Correct HW stop flow

Management firmware is used as arbiter between different PFs
which are loading/unloading, but in order to use the synchronization
it offers the contending configurations need to be applied either
between their LOAD_REQ <-> LOAD_DONE or UNLOAD_REQ <-> UNLOAD_DONE
management firmware commands.

The existing HW stop flow utilizes two different functions, qed_hw_stop()
and qed_hw_reset(), which don't abide by this requirement; most of the
closure is done outside the scope of the unload request.

This patch removes qed_hw_reset() and places the relevant stop
functionality underneath the management firmware protection.

Signed-off-by: Tomer Tayar <Tomer.Tayar@cavium.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'tipc-subscription-refcount-simplifications'
David S. Miller [Wed, 29 Mar 2017 01:03:33 +0000 (18:03 -0700)]
Merge branch 'tipc-subscription-refcount-simplifications'

Parthasarathy Bhuvaragan says:

====================
tipc: subscription refcount simplifications

The first patch makes the subscription refcount cleanup lockless and
the second updates the subscription refcount policy.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
tipc: adjust the policy of holding subscription kref
Ying Xue [Tue, 28 Mar 2017 10:28:28 +0000 (12:28 +0200)]
tipc: adjust the policy of holding subscription kref

When a new subscription object is inserted into name_seq->subscriptions
list, it's under name_seq->lock protection; when a subscription is
deleted from the list, it's also under the same lock protection;
similarly, when accessing a subscription by going through subscriptions
list, the entire process is also protected by the name_seq->lock.

Therefore, if the subscription refcount is increased before it's inserted
into the subscriptions list, and decreased after it's deleted from the
list, it is unnecessary to hold a refcount at all when accessing a
subscription object obtained by walking the subscriptions list under
name_seq->lock protection.
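
A minimal sketch of the resulting policy (assuming the existing
tipc_subscrp_get()/tipc_subscrp_put() helpers and the
nameseq_list/subscriptions members; shown purely for illustration):

/* take the list's reference before insertion, under name_seq->lock */
tipc_subscrp_get(sub);
spin_lock_bh(&nseq->lock);
list_add(&sub->nameseq_list, &nseq->subscriptions);
spin_unlock_bh(&nseq->lock);

/* readers walk nseq->subscriptions under nseq->lock, no extra get/put */

/* drop the list's reference only after removal, again under the lock */
spin_lock_bh(&nseq->lock);
list_del_init(&sub->nameseq_list);
spin_unlock_bh(&nseq->lock);
tipc_subscrp_put(sub);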

Signed-off-by: Ying Xue <ying.xue@windriver.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
tipc: advance the time of deleting subscription from subscriber->subscrp_list
Ying Xue [Tue, 28 Mar 2017 10:28:27 +0000 (12:28 +0200)]
tipc: advance the time of deleting subscription from subscriber->subscrp_list

After a subscription object is created, it's inserted into its
subscriber's subscrp_list under subscriber lock protection; similarly,
before it's destroyed, it should first be removed from its
subscriber->subscrp_list. Since the subscription list is accessed under
the subscriber lock, all the subscriptions are valid for the duration of
the lock. Hence in tipc_subscrb_subscrp_delete() we remove the
subscription get/put and the extra subscriber unlock/lock.

After this change, the subscription refcount cleanup is very simple and
does not take any lock.

Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
stmmac: use netif_set_real_num_{rx,tx}_queues
Arnd Bergmann [Tue, 28 Mar 2017 09:48:21 +0000 (11:48 +0200)]
stmmac: use netif_set_real_num_{rx,tx}_queues

A driver must not access the two fields directly but should instead use
the helper functions to set the values and keep a consistent internal
state:

ethernet/stmicro/stmmac/stmmac_main.c: In function 'stmmac_dvr_probe':
ethernet/stmicro/stmmac/stmmac_main.c:4083:8: error: 'struct net_device' has no member named 'real_num_rx_queues'; did you mean 'real_num_tx_queues'?
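
A hedged sketch of the intended fix in stmmac_dvr_probe() (the
plat->rx_queues_to_use/tx_queues_to_use fields are assumed from the
queue-configuration series):

/* use the helpers so the stack's internal queue state stays consistent */
netif_set_real_num_rx_queues(ndev, priv->plat->rx_queues_to_use);
netif_set_real_num_tx_queues(ndev, priv->plat->tx_queues_to_use);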

Fixes: a8f5102af2a7 ("net: stmmac: TX and RX queue priority configuration")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
soc: qcom: smd-rpm: Add msm8996 compatibility
Bjorn Andersson [Tue, 28 Mar 2017 05:26:35 +0000 (22:26 -0700)]
soc: qcom: smd-rpm: Add msm8996 compatibility

With the RPM driver transitioned to RPMSG we can reuse the SMD-RPM
driver on top of GLINK for 8996, without any modifications.

Acked-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
soc: qcom: smd: Remove standalone driver
Bjorn Andersson [Tue, 28 Mar 2017 05:26:34 +0000 (22:26 -0700)]
soc: qcom: smd: Remove standalone driver

Remove the standalone SMD implementation as we have transitioned the
client drivers to use the RPMSG based one.

Also remove all dependencies on QCOM_SMD from Kconfig files, in order to
keep them selectable in the absence of the removed symbol.

Acked-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
soc: qcom: smd: Transition client drivers from smd to rpmsg
Bjorn Andersson [Tue, 28 Mar 2017 05:26:33 +0000 (22:26 -0700)]
soc: qcom: smd: Transition client drivers from smd to rpmsg

By moving these client drivers to use RPMSG instead of the direct SMD
API we can reuse them on top of the newly added GLINK wire-protocol
support found in the 820 and 835 Qualcomm platforms.

As the new (RPMSG-based) and old SMD implementations are mutually
exclusive we have to change all client drivers in one commit, to make
sure we have a working system before and after this transition.

Acked-by: Andy Gross <andy.gross@linaro.org>
Acked-by: Kalle Valo <kvalo@codeaurora.org>
Acked-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
vxlan: don't age NTF_EXT_LEARNED fdb entries
Roopa Prabhu [Mon, 27 Mar 2017 22:46:41 +0000 (15:46 -0700)]
vxlan: don't age NTF_EXT_LEARNED fdb entries

The vxlan driver already implicitly supports installing external fdb
entries with NTF_EXT_LEARNED. This patch just makes sure these entries
are not aged out by the vxlan driver; an external entity managing these
entries will age them out. This is consistent with the use of
NTF_EXT_LEARNED in the bridge driver.
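
A minimal sketch of the aging change (assuming the fdb entry's flags field
carries NTF_EXT_LEARNED, as it does in the bridge driver):

/* in vxlan_cleanup(): externally learned entries are never aged here */
if (f->flags & NTF_EXT_LEARNED)
        continue;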

Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'net-dpipe'
David S. Miller [Wed, 29 Mar 2017 00:11:56 +0000 (17:11 -0700)]
Merge branch 'net-dpipe'

Jiri Pirko says:

====================
Add support for pipeline debug (dpipe)

Arkadi says:

During the hardware offloading process much of the hardware's specifics
cannot be presented. An example of this is the routing LPM algorithm,
whose hardware implementation differs from the kernel's software
implementation. The only information the user receives is whether a
specific route is offloaded or not; he cannot really understand the
underlying implementation nor get the specific statistics related to
that process.

Another example is ACL offload using TC which is commonly implemented
using TCAM memory. Currently there is no capability to gain visibility
into the TCAM structure and to debug suboptimal resource allocation.

This patchset introduces the capability to export the ASIC's pipeline
abstraction via the devlink infrastructure, which should serve as a
complementary tool. This infrastructure allows the user to get visibility
into the ASIC by modeling it as a set of match/action tables.

The main objects defined:
Table - abstraction for a single pipeline stage. Contains the
        available match/actions and counter availability.
Entry - entry in a specific table with specific matches/actions
        values and dedicated counter.
Header/field - tuples which describe the table's behavior.

As an example, one of the ASIC's L3 blocks will be modeled. The egress
rif (router interface) table is the final step in the L3 pipeline
processing; it matches on the internal rif index, which was determined
earlier by the routing logic. The erif table determines whether to
forward or drop the packet and updates the corresponding rif L3
statistics.

To expose these internal resources a special metadata header will
be introduced that describes the internal information gathered by
the ASIC's pipeline and contains the following fields: rif_port_index,
forward and drop.

Some internal hardware resources have a direct mapping to kernel
objects. For example the rif_port_index is mapped to the net-device's
ifindex. By providing this mapping the user gains visibility into
the offloading process.

Follow-up work will include exporting more L3 tables which will give
visibility into the routing process.

First stage is adding support for dpipe in devlink. Next add support
in spectrum driver. Finally implement egress router interface
(erif) table for spectrum ASIC as an example.

---
v1->v2: Please see individual patches
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: spectrum: Add Support for erif table entries access
Arkadi Sharshevsky [Tue, 28 Mar 2017 15:24:17 +0000 (17:24 +0200)]
mlxsw: spectrum: Add Support for erif table entries access

Implement dpipe's table ops for the erif table, which provide:
1. Getting the entries in the table with the associated values.
- match on "mlxsw_meta:erif_index"
- action on "mlxsw_meta:forwared_out"
2. Synchronizing the hardware in case of enabling/disabling counters,
   which means removing erif counters from all interfaces.

Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: spectrum_router: Add rif helper functions
Arkadi Sharshevsky [Tue, 28 Mar 2017 15:24:16 +0000 (17:24 +0200)]
mlxsw: spectrum_router: Add rif helper functions

Add rif helper functions to access the rif index and the rif device's
ifindex. These functions will be used by dpipe in order to dump the rif
table.

Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: spectrum: Support for counters on router interfaces
Arkadi Sharshevsky [Tue, 28 Mar 2017 15:24:15 +0000 (17:24 +0200)]
mlxsw: spectrum: Support for counters on router interfaces

Add support for counter allocation on router interfaces. The allocation
depends on the counter state of the relevant table. In case counting is
disabled or no counters are left, the counter index will be set as
invalid.

Also a counter pool for router allocation is added.

Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: reg: Add Router Interface Counter Register
Arkadi Sharshevsky [Tue, 28 Mar 2017 15:24:14 +0000 (17:24 +0200)]
mlxsw: reg: Add Router Interface Counter Register

The RICNT register retrieves per-port performance counters. It will be
used to query the router interface statistics.

Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: spectrum: Add definition for egress rif table
Arkadi Sharshevsky [Tue, 28 Mar 2017 15:24:13 +0000 (17:24 +0200)]
mlxsw: spectrum: Add definition for egress rif table

Add a definition for the egress router interface table. This table
describes the final part of the routing pipeline. It matches the egress
interface index (rif index, which is set by the previous stages and
determines the out port) and makes the decision of forwarding the packet
towards the L2 logic or dropping it.

The metadata header is added to represent this internal information.
The rif index field is logically mapped to the netdevice ifindex.

Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: spectrum: Add placeholder for dpipe
Arkadi Sharshevsky [Tue, 28 Mar 2017 15:24:12 +0000 (17:24 +0200)]
mlxsw: spectrum: Add placeholder for dpipe

Add placeholder for dpipe. Support for specific tables and headers will
be introduced in following patches. The headers are shared between all
mlxsw_sp instances.

Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: reg: Add counter fields to RITR register
Arkadi Sharshevsky [Tue, 28 Mar 2017 15:24:11 +0000 (17:24 +0200)]
mlxsw: reg: Add counter fields to RITR register

Update RITR for counter support. This allows adding counters for the
ASIC's router ports.

Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
devlink: Support for pipeline debug (dpipe)
Arkadi Sharshevsky [Tue, 28 Mar 2017 15:24:10 +0000 (17:24 +0200)]
devlink: Support for pipeline debug (dpipe)

The pipeline debug is used to export the pipeline abstractions for the
main objects - tables, headers and entries. The only set operation
supported is changing the counter parameter on a specific table.

The basic structures:

Header - can represent a real protocol header information or internal
         metadata. Generic protocol headers like IPv4 can be shared
         between drivers. Each driver can add local headers.

Field - part of a header. Can represent a protocol field or a specific
        ASIC metadata field. Special hardware metadata fields can be
        mapped to different resources; for example, switch ASIC ports can
        have an internal number which, from the system's point of view,
        is mapped to a netdevice ifindex.

Match - represents a specific match rule. Can describe a match on a
        specific field or header. The header index should be specified as
        well in order to support several header instances of the same
        type (tunneling).

Action - represents a specific action rule. Actions can describe
         operations on specific field values, for example set, increment,
         etc., as well as header operations like add and delete.

Value - represents a value which can be associated with a specific match
        or action.

Table - represents a hardware block which can be described with match/
        action behavior. The match/action can be done on the packet's
        data or on the internal metadata gathered along the packet's
        traversal through the pipeline, which is vendor specific and
        should be exported in order to provide an understanding of the
        ASIC's behavior.

Entry - represents single record in a specific table. The entry is
        identified by specific combination of values for match/action.

Prior to accessing the tables/entries the drivers provide the header/
field database, which is exported by the driver to user-space. The
database is split between shared headers and driver-unique headers.
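
As a rough illustration of how a driver could describe such a metadata
header to devlink (the field IDs, names and bit widths below are
illustrative assumptions, not taken from a specific driver):

static struct devlink_dpipe_field my_meta_fields[] = {
        {
                .name = "erif_index",
                .id = 0,
                .bitwidth = 32,
                .mapping_type = DEVLINK_DPIPE_FIELD_MAPPING_TYPE_IFINDEX,
        },
};

static struct devlink_dpipe_header my_meta_header = {
        .name = "my_meta",
        .id = 0,
        .fields = my_meta_fields,
        .fields_count = ARRAY_SIZE(my_meta_fields),
        .global = false,        /* driver-specific, not a shared header */
};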

Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'mlx5e-failsafe' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed...
David S. Miller [Tue, 28 Mar 2017 04:16:03 +0000 (21:16 -0700)]
Merge tag 'mlx5e-failsafe' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5e-failsafe 27-03-2017

This series provides a fail-safe mechanism to allow safely re-configuring
the mlx5e netdevice and provides resiliency against sporadic
configuration failures.

To enable this we do some refactoring and code reorganizing to allow
breaking the driver's open/close flows into stages:
      open -> activate -> deactivate -> close.

In addition we need to allow creating fresh HW ring resources
(mlx5e_channels) with their own "new" set of parameters, while keeping
the current ones running and active until the new channels are
successfully created with the new configuration; only then can we
safely replace (switch) the old channels with the new ones.

For that we introduce mlx5e_channels object and an API to manage it:
 - channels = open_channels(new_params):
   open fresh TX/RX channels
 - activate_channels(channels):
   redirect traffic to them and attach them to the netdev
 - deactivate_channels(channels)
   stop traffic and detach from netdev
 - close(channels)
   Free the TX/RX HW resources of those channels

With the above strategy it is straightforward to achieve the desired
behavior of fail-safe configuration.  In pseudo code:

make_new_config(new_params)
{
        old_channels = current_active_channels;
        new_channels = create_channels(new_params);
        if (!new_channels)
                return "Failed, but current channels are still active :)"

        deactivate_channels(old_channels); /* Can't fail */
        set_hw_new_state();                /* If needed  */
        activate_channels(new_channels);   /* Can't fail */
        close_channels(old_channels);
        current_active_channels = new_channels;

        return "SUCCESS";
}

At the top of this series, we change the following flows to be fail-safe:
ethtool:
   - ring parameters
   - coalesce parameters
   - tx copy break parameters
   - cqe compressing/moderation mode setting (priv flags)
ndos:
   - tc setup
   - set features: LRO
   - change mtu
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'bond-link-status-fixes'
David S. Miller [Tue, 28 Mar 2017 04:11:50 +0000 (21:11 -0700)]
Merge branch 'bond-link-status-fixes'

Mahesh Bandewar says:

====================
link-status fixes for mii-monitoring

The mii monitoring is divided into two phases - inspect and commit. The
inspect phase technically should not make any changes to the state and
should defer them to the commit phase. However, we detected link-state
inconsistencies on several machines and discovered that they were the
result of inconsistent updates to link state and the assumption that you
*always* get the rtnl-mutex. In reality, when trylock() fails to acquire
the rtnl-mutex, the commit phase is postponed until the next mii-mon run.
At the next round, because of the state change performed in the previous
inspect run, no change is detected and the commit phase is skipped. This
results in an inconsistent state until the next link event happens
(if it ever happens).

During the commit phase, it's assumed that the speed and duplex fetch is
always successful, but that's not always the case. The slave state is
marked UP irrespective of the speed / duplex fetch operation. If the
speed / duplex fetch results in insane values for either of these two
fields, then keeping the internal link state UP is not going to provide
fruitful results either.

Please see the individual patches for more details.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
bonding: avoid printing while holding a spinlock
Mahesh Bandewar [Mon, 27 Mar 2017 18:37:40 +0000 (11:37 -0700)]
bonding: avoid printing while holding a spinlock

Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
bonding: correctly update link status during mii-commit phase
Mahesh Bandewar [Mon, 27 Mar 2017 18:37:37 +0000 (11:37 -0700)]
bonding: correctly update link status during mii-commit phase

bond_miimon_commit() marks the link UP after attempting to get the speed
and duplex settings for the link. There is a possibility that
bond_update_speed_duplex() could fail. This is another place where it
could result in an inconsistent bonding link state.

With this patch the link will be marked UP only if the retrieved speed
and duplex values are sane, and only then processed further.
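
A minimal sketch of the guard in the commit phase (its exact placement
inside bond_miimon_commit()'s BOND_LINK_UP case is an assumption):

case BOND_LINK_UP:
        /* don't mark the slave up if speed/duplex can't be retrieved;
         * keep it down and let the next mii-mon round retry
         */
        if (bond_update_speed_duplex(slave)) {
                slave->link = BOND_LINK_DOWN;
                continue;
        }
        bond_set_slave_link_state(slave, BOND_LINK_UP,
                                  BOND_SLAVE_NOTIFY_NOW);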

Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
bonding: make speed, duplex setting consistent with link state
Mahesh Bandewar [Mon, 27 Mar 2017 18:37:35 +0000 (11:37 -0700)]
bonding: make speed, duplex setting consistent with link state

bond_update_speed_duplex() retrieves speed and duplex settings. There
is a possibility of failure in retrieving these values, but the caller
has to assume it's always successful. This leads to inconsistent slave
link settings. If these (speed, duplex) values cannot be
retrieved, then keeping the link UP causes problems.

The updated bond_update_speed_duplex() returns 0 on success if it
retrieves sane values for speed and duplex. On failure it returns 1
and marks the link down.

Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
bonding: improve link-status update in mii-monitoring
Mahesh Bandewar [Mon, 27 Mar 2017 18:37:33 +0000 (11:37 -0700)]
bonding: improve link-status update in mii-monitoring

The primary issue is that mii-inspect phase updates link-state and
expects changes to be committed during the mii-commit phase. After
the inspect phase if it fails to acquire rtnl-mutex, the commit
phase (bond_mii_commit) doesn't get to run. This partially updated
state stays and makes the internal-state inconsistent.

e.g. setup bond0 => slaves: eth1, eth2
eth1 goes DOWN -> UP
   mii_monitor()
mii-inspect()
    bond_set_slave_link_state(eth1, UP, DontNotify)
rtnl_trylock() <- fails!

Next mii-monitor round
eth1: No change
   mii_monitor()
mii-inspect()
    eth1->link == current-status (ethtool_ops->get_link)
    no-change-detected

End result:
    eth1:
      Link = BOND_LINK_UP
      Speed = 0xfffff  [SpeedUnknown]
      Duplex = 0xff    [DuplexUnknown]

This doesn't always happen, but for some unlucky machines in a large set
of machines it creates problems.

The fix for this is to avoid making changes during the inspect phase and
postpone them until the rtnl-mutex is acquired and the commit phase runs.

Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
bonding: split bond_set_slave_link_state into two parts
Mahesh Bandewar [Mon, 27 Mar 2017 18:37:30 +0000 (11:37 -0700)]
bonding: split bond_set_slave_link_state into two parts

Split the function into two parts, (a) propose and (b) commit, without
changing the semantics of the original API.
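
A hedged sketch of the resulting two-step usage (the
bond_propose_link_state()/bond_commit_link_state() helper names reflect how
the split is exposed in the bonding header; shown here for illustration):

/* inspect phase: only record the proposed state, no side effects */
bond_propose_link_state(slave, BOND_LINK_UP);

/* commit phase, with rtnl held: apply the state and send notifications */
bond_commit_link_state(slave, BOND_SLAVE_NOTIFY_NOW);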

Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next...
David S. Miller [Tue, 28 Mar 2017 00:06:12 +0000 (17:06 -0700)]
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue

Jeff Kirsher says:

====================
40GbE Intel Wired LAN Driver Updates 2017-03-27

This series contains updates to i40e and i40evf only.

Alex updates the driver code so that we can do bulk updates of the page
reference count instead of just incrementing it by one reference at a
time.  Fixed an issue where we were not resetting skb back to NULL when
we have freed it.  Cleaned up the i40e_process_skb_fields() to align with
other Intel drivers.  Removed FCoE code, since it is not supported in any
of the Fortville/Fortpark hardware, so there is not much point in carrying
the code around, especially if it is broken and untested.

Harshitha fixes a bug in the driver where the calculation of the RSS size
was not taking into account the number of traffic classes enabled.

Robert fixes potential race conditions during VF reset: he eliminates
IOMMU DMAR faults caused by VF hardware, and blocks the case where the OS
initiates a VF reset and the VF's settings are modified before the reset
is finished.

Bimmy removes a delay that is no longer needed, since it was only needed
for preproduction hardware.

Colin King fixes a null pointer dereference, where the VSI was being
dereferenced before the VSI NULL check.

Jake fixes an issue with the recent addition of the "client code" to the
driver, where we attempt to use an uninitialized variable, so correctly
initialize the params variable by calling i40e_client_get_params().

v2: dropped patch 5 of the original series from Carolyn since we need
    more documentation and a rationale for the added delay, so Carolyn is
    taking the time to update the patch before we re-submit it for
    kernel inclusion.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
i40e: initialize params before notifying of l2_param_changes
Jacob Keller [Mon, 20 Mar 2017 23:45:35 +0000 (16:45 -0700)]
i40e: initialize params before notifying of l2_param_changes

Fix a bug, probably introduced by a mis-merge, associated with commits
d7ce6422d6e6 ("i40e: don't check params until after checking for client
instance", 2017-02-09) and 3140aa9a78c9 ("i40e: KISS the client
interface", 2017-03-14).

The first commit tried to move the initialization of the params
structure so that we didn't bother doing this if we didn't have a client
interface. You can already see that it looks fishy because of the
indentation. The second commit refactors a bunch of the interface, and
incorrectly drops the params initialization.

I believe what occurred is that internally the two patches were
re-ordered, and the merge conflicts as a result were performed
incorrectly.

Fix the use of an uninitialized variable by correctly initializing the
params variable via i40e_client_get_params().

Reported-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40evf: dereference VSI after VSI has been null checked
Colin Ian King [Mon, 20 Mar 2017 12:03:03 +0000 (12:03 +0000)]
i40evf: dereference VSI after VSI has been null checked

VSI is being dereferenced before the VSI null check; if VSI is
null we end up with a null pointer dereference.  Fix this by
performing the VSI dereference after the VSI null check.  Also remove
the need for using adapter by using vsi->back->cinst.
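
A minimal sketch of the reordering, using i40evf_notify_client_message()
as an illustrative example (the exact function touched is an assumption):

static void i40evf_notify_client_message(struct i40e_vsi *vsi,
                                         u8 *msg, u16 len)
{
        struct i40e_client_instance *cinst;

        /* check the pointer before touching vsi->back */
        if (!vsi)
                return;

        cinst = vsi->back->cinst;
        /* ... the rest of the function only uses cinst ... */
}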

Detected by CoverityScan, CID#1419696, CID#1419697
("Dereference before null check")

Fixes: ed0e894de7c133 ("i40evf: add client interface")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e: Drop FCoE code that always evaluates to false or 0
Alexander Duyck [Tue, 21 Feb 2017 23:55:48 +0000 (15:55 -0800)]
i40e: Drop FCoE code that always evaluates to false or 0

Since FCoE isn't supported by the i40e products there isn't much point in
carrying around code that will always evaluate to false. This patch goes
through and strips out the code in several spots so that we don't go around
carrying variables and/or code that is always going to evaluate to false or
0.

Change-ID: I39d1d779c66c638b75525839db2b6208fdc809d7
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e: Drop FCoE code from core driver files
Alexander Duyck [Tue, 21 Feb 2017 23:55:47 +0000 (15:55 -0800)]
i40e: Drop FCoE code from core driver files

Looking over the code for FCoE it looks like the Rx path has been broken at
least since the last major Rx refactor almost a year ago.  It seems like
FCoE isn't supported for any of the Fortville/Fortpark hardware so there
isn't much point in carrying the code around, especially if it is broken
and untested.

Change-ID: I892de8fa551cb129ce2361e738ff82ce55fa229e
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e/i40evf: Clean-up process_skb_fields
Alexander Duyck [Tue, 21 Feb 2017 23:55:46 +0000 (15:55 -0800)]
i40e/i40evf: Clean-up process_skb_fields

This is a minor clean-up to make the i40e/i40evf process_skb_fields
function look a little more like what we have in igb.  The Rx checksum
function called out a need for skb->protocol but I can't see where it
actually needs it.  I am assuming this is something that was likely
refactored out some time ago as the Rx checksum code has gone through a few
rewrites.

Change-ID: I0b4668a34d90b61b66ded7c7c26e19a3e2d06251
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e: removed no longer needed delays
Bimmy Pujari [Tue, 21 Feb 2017 23:55:45 +0000 (15:55 -0800)]
i40e: removed no longer needed delays

Removed delays that are no longer needed.  They were needed at the
preproduction stage, but are not needed anymore.

Signed-off-by: Bimmy Pujari <bimmy.pujari@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e: Fixed race conditions in VF reset
Robert Konklewski [Tue, 21 Feb 2017 23:55:42 +0000 (15:55 -0800)]
i40e: Fixed race conditions in VF reset

First, this patch eliminates IOMMU DMAR Faults caused by VF hardware.
This is done by enabling VF hardware only after VSI resources are
freed. Otherwise, hardware could DMA into memory that is (or just has
been) being freed.

Then, the VF driver is activated only after VSI resources have been
reallocated. That's because the VF driver can request resources
immediately after it's activated. So they need to be ready at that
point.

The second race condition happens when the OS initiates a VF reset,
and then before it's finished modifies VF's settings by changing its
MAC, VLAN ID, bandwidth allocation, anti-spoof checking, etc. These
functions needed to be blocked while VF is undergoing reset. Otherwise,
they could operate on data structures that had just been freed or not
yet fully initialized.

Change-ID: I43ba5a7ae2c9a1cce3911611ffc4598ae33ae3ff
Signed-off-by: Robert Konklewski <robertx.konklewski@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e/i40evf: Fix use after free in Rx cleanup path
Alexander Duyck [Tue, 21 Feb 2017 23:55:41 +0000 (15:55 -0800)]
i40e/i40evf: Fix use after free in Rx cleanup path

We need to reset skb back to NULL when we have freed it in the Rx cleanup
path.  I found one spot where this wasn't occurring so this patch fixes it.
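
A hedged sketch of the fix in the Rx clean loop (assuming the cleanup
helper frees the skb when it returns true):

if (i40e_cleanup_headers(rx_ring, skb)) {
        /* the helper consumed/freed the skb; clear our reference so the
         * next loop iteration doesn't touch freed memory
         */
        skb = NULL;
        continue;
}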

Change-ID: Iaca68934200732cd4a63eb0bd83b539c95f8c4dd
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e: fix configuration of RSS table with DCB
Harshitha Ramamurthy [Tue, 21 Feb 2017 23:55:40 +0000 (15:55 -0800)]
i40e: fix configuration of RSS table with DCB

There exists a bug in the driver where the calculation of the
RSS size was not taking into account the number of traffic classes
enabled. This patch factors in the traffic classes both in
the initial configuration of the table as well as reconfiguration.

Change-ID: I34dcd345ce52faf1d6b9614bea28d450cfd5f621
Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
i40e/i40evf: Update code to better handle incrementing page count
Alexander Duyck [Tue, 21 Feb 2017 23:55:39 +0000 (15:55 -0800)]
i40e/i40evf: Update code to better handle incrementing page count

Update the driver code so that we do bulk updates of the page reference
count instead of just incrementing it by one reference at a time.  The
advantage to doing this is that we cut down on atomic operations and
this in turn should give us a slight improvement in cycles per packet.
In addition if we eventually move this over to using build_skb the gains
will be more noticeable.
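
Roughly, the idea is to track a local per-buffer "bias" and only touch the
atomic page refcount in large batches; a simplified, illustrative sketch
(the field and constant choices are assumptions, not the exact driver code):

struct i40e_rx_buffer {
        struct page *page;
        __u16 pagecnt_bias;     /* references we still "own" locally */
};

/* hand one reference to the stack without an atomic operation */
rx_buffer->pagecnt_bias--;

/* refill the local budget in bulk only when it runs out */
if (unlikely(!rx_buffer->pagecnt_bias)) {
        page_ref_add(rx_buffer->page, USHRT_MAX);
        rx_buffer->pagecnt_bias = USHRT_MAX;
}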

I also found and fixed a store forwarding stall from where we were
assigning "*new_buff = *old_buff".  By breaking it up into individual
copies we can avoid this and as a result the performance is slightly
improved.

Change-ID: I1d3880dece4133eca3c32423b04a5467321ccc52
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ipv6: sr: select DST_CACHE by default
David Lebrun [Mon, 27 Mar 2017 09:43:59 +0000 (11:43 +0200)]
ipv6: sr: select DST_CACHE by default

When CONFIG_IPV6_SEG6_LWTUNNEL is selected, automatically select DST_CACHE.
This allows removing multiple ifdefs.

Signed-off-by: David Lebrun <david.lebrun@uclouvain.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: ibmvnic: Remove unused net_stats member from struct ibmvnic_adapter
Tobias Klauser [Mon, 27 Mar 2017 06:56:59 +0000 (08:56 +0200)]
net: ibmvnic: Remove unused net_stats member from struct ibmvnic_adapter

The ibmvnic driver keeps its statistics in net_device->stats, so the
net_stats member in struct ibmvnic_adapter is unused. Remove it.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: ibmveth: Remove unused stats member from struct ibmveth_adapter
Tobias Klauser [Mon, 27 Mar 2017 06:56:15 +0000 (08:56 +0200)]
net: ibmveth: Remove unused stats member from struct ibmveth_adapter

The ibmveth driver keeps its statistics in net_device->stats, so the
stats member in struct ibmveth_adapter is unused. Remove it.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: bfin_mac: Remove unused stats member from struct bfin_mac_local
Tobias Klauser [Mon, 27 Mar 2017 06:55:11 +0000 (08:55 +0200)]
net: bfin_mac: Remove unused stats member from struct bfin_mac_local

The bfin_mac driver keeps its statistics in net_device->stats, so the
stats member in struct bfin_mac_local is unused. Remove it, as well as
the accompanying comment.

Cc: adi-buildroot-devel@lists.sourceforge.net
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
netvsc: fix dereference before null check errors
Colin Ian King [Sat, 25 Mar 2017 14:26:39 +0000 (14:26 +0000)]
netvsc: fix dereference before null check errors

ndev is being checked to see if it is a null pointer; however, before
the null check ndev is being dereferenced, hence there is a potential
null pointer dereference bug that needs fixing. Fix this by only
dereferencing ndev after the null check.

Detected by CoverityScan, CID#1420760, CID#140761 ("Dereference
before null check")

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: tehuti: use new api ethtool_{get|set}_link_ksettings
Philippe Reynes [Sun, 26 Mar 2017 20:03:13 +0000 (22:03 +0200)]
net: tehuti: use new api ethtool_{get|set}_link_ksettings

The ethtool api {get|set}_settings is deprecated.
We move this driver to the new api {get|set}_link_ksettings.

As I don't have the hardware, I'd be very pleased if
someone could test this patch.
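
For reference, the general shape of the new callback (the bdx_* names and
the advertised link modes are illustrative assumptions, not the actual
tehuti code):

static int bdx_get_link_ksettings(struct net_device *ndev,
                                  struct ethtool_link_ksettings *ecmd)
{
        ethtool_link_ksettings_zero_link_mode(ecmd, supported);
        ethtool_link_ksettings_add_link_mode(ecmd, supported,
                                             10000baseT_Full);
        ethtool_link_ksettings_zero_link_mode(ecmd, advertising);
        ethtool_link_ksettings_add_link_mode(ecmd, advertising,
                                             10000baseT_Full);

        ecmd->base.speed = SPEED_10000;
        ecmd->base.duplex = DUPLEX_FULL;
        ecmd->base.port = PORT_FIBRE;
        ecmd->base.autoneg = AUTONEG_DISABLE;
        return 0;
}

static const struct ethtool_ops bdx_ethtool_ops = {
        .get_link_ksettings = bdx_get_link_ksettings,
};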

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: cris: eth_v10: use new api ethtool_{get|set}_link_ksettings
Philippe Reynes [Sat, 25 Mar 2017 18:39:05 +0000 (19:39 +0100)]
net: cris: eth_v10: use new api ethtool_{get|set}_link_ksettings

The ethtool api {get|set}_settings is deprecated.
We move this driver to the new api {get|set}_link_ksettings.

As I don't have the hardware, I'd be very pleased if
someone could test this patch.

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'mpls-multipath-route-cleanups'
David S. Miller [Mon, 27 Mar 2017 21:09:50 +0000 (14:09 -0700)]
Merge branch 'mpls-multipath-route-cleanups'

David Ahern says:

====================
net: mpls: multipath route cleanups

When a device associated with a nexthop is deleted, the nexthop in
the route is effectively removed, so remove it from the route dump.

Further, when all nexthops have been deleted the route is effectively
dead, so remove the route.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
net: mpls: Delete route when all nexthops have been deleted
David Ahern [Fri, 24 Mar 2017 22:21:57 +0000 (15:21 -0700)]
net: mpls: Delete route when all nexthops have been deleted

When all devices for all nexthops in a route have been deleted, the
route is effectively dead, so remove it.

Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Acked-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: mpls: Don't show nexthop if device has been deleted
David Ahern [Fri, 24 Mar 2017 22:21:56 +0000 (15:21 -0700)]
net: mpls: Don't show nexthop if device has been deleted

If the device for a nexthop in a multipath route is deleted, the nexthop
is effectively removed from the route. Currently, a route dump still
returns the nexthop, though without the device set:

$ ip -f mpls ro ls
100
nexthopvia inet 10.11.1.2  dev br0
nexthopvia inet 10.100.3.1  dev eth3
$ ip li del br0
$ ip -f mpls ro ls
100
nexthopvia inet 10.11.1.2  dev * dead linkdown
nexthopvia inet 10.100.3.1  dev eth3

Since the nexthop is effectively deleted, drop the hop from the route
dump.
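
A minimal sketch of the dump-side check (placement inside the nexthop loop
of mpls_dump_route() is an assumption; dump_one_nexthop() is a hypothetical
helper standing in for the nested-attribute code):

for_nexthops(rt) {
        /* a nexthop whose device is gone is effectively deleted */
        if (!nh->nh_dev)
                continue;

        dump_one_nexthop(skb, nh);      /* hypothetical helper */
} endfor_nexthops(rt);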

Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Acked-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/mlx5e: Fail safe mtu and lro setting
Saeed Mahameed [Sun, 12 Feb 2017 23:19:14 +0000 (01:19 +0200)]
net/mlx5e: Fail safe mtu and lro setting

Use the new fail-safe channels switch mechanism to set new
netdev mtu and lro settings.

MTU and LRO settings demand some HW configuration changes after new
channels are created and ready for action. In order to unify the
switch-channels routine for LRO and MTU changes, and maybe future
configuration features, we now pass it a modify-HW function pointer to be
invoked directly after the old channels are de-activated and before the
new channels are activated.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Fail safe tc setup
Saeed Mahameed [Sun, 12 Feb 2017 23:25:36 +0000 (01:25 +0200)]
net/mlx5e: Fail safe tc setup

Use the new fail-safe channels switch mechanism to set up new
tc parameters.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Fail safe cqe compressing/moderation mode setting
Saeed Mahameed [Sun, 12 Feb 2017 22:42:54 +0000 (00:42 +0200)]
net/mlx5e: Fail safe cqe compressing/moderation mode setting

Use the new fail-safe channels switch mechanism to set new
CQE compressing and CQE moderation mode settings.

We also move the RX CQE compression modify function out of the en_rx file
to a more appropriate place.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Fail safe ethtool settings
Saeed Mahameed [Sun, 12 Feb 2017 21:21:08 +0000 (23:21 +0200)]
net/mlx5e: Fail safe ethtool settings

Use the new fail-safe channels switch mechanism to set new ethtool
settings:
 - ring parameters
 - coalesce parameters
 - tx copy break parameters

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Introduce switch channels
Saeed Mahameed [Tue, 27 Dec 2016 12:57:03 +0000 (14:57 +0200)]
net/mlx5e: Introduce switch channels

A fail-safe helper function that allows switching to new channels on the
fly. In simple words:

make_new_config(new_params)
{
    new_channels = open_channels(new_params);
    if (!new_channels)
         return "Failed, but current channels are still active :)"

    switch_channels(new_channels);

    return "SUCCESS";
}

Demonstrate mlx5e_switch_priv_channels usage in the set-channels ethtool
callback and make it fail-safe using the new switch-channels mechanism.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Minimize mlx5e_{open/close}_locked
Saeed Mahameed [Tue, 7 Feb 2017 14:35:49 +0000 (16:35 +0200)]
net/mlx5e: Minimize mlx5e_{open/close}_locked

mlx5e_redirect_rqts_to_{channels,drop}, mlx5e_{add,del}_sqs_fwd_rules and
setting the real number of tx/rx queues all belong in
mlx5e_{activate,deactivate}_priv_channels, so we move those operations
there and minimize the mlx5e_open/close flows.

This will be needed in downstream patches to replace old channels with new
ones without the need to call mlx5e_close/open.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: CQ and RQ don't need priv pointer
Saeed Mahameed [Tue, 14 Mar 2017 17:43:52 +0000 (19:43 +0200)]
net/mlx5e: CQ and RQ don't need priv pointer

Remove the mlx5e_priv pointer from the CQ and RQ structs;
it was needed only to access the mdev pointer from the priv pointer.

Instead we now pass mdev where needed.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Isolate open_channels from priv->params
Saeed Mahameed [Wed, 21 Dec 2016 15:24:35 +0000 (17:24 +0200)]
net/mlx5e: Isolate open_channels from priv->params

In order to have a clean separation between channel resource creation
flows and the currently active mlx5e netdev parameters, make sure each
resource creation function does not access priv->params and only works
with a fresh set of parameters.

For this we add a "new" mlx5e_params field to the mlx5e_channels structure
and use it down the road in mlx5e_open_{cq,rq,sq} and so on.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Split open/close channels to stages
Saeed Mahameed [Tue, 20 Dec 2016 20:48:19 +0000 (22:48 +0200)]
net/mlx5e: Split open/close channels to stages

As a foundation for a safe config flow, introduce a simple, clear API
(Open then Activate), where "Open" handles the heavy, unsafe creation
operation and "Activate" is fast and fail-safe, enabling the newly
created channels.

For this we split the RQ/TXQ SQ and channel open/close flows into
open => activate, deactivate => close.

This will simplify the ability to have fail safe configuration changes
in downstream patches as follows:

make_new_config(new_params)
{
     old_channels = current_active_channels;
     new_channels = create_channels(new_params);
     if (!new_channels)
              return "Failed, but current channels still active :)"
     deactivate_channels(old_channels); /* Can't fail */
     activate_channels(new_channels); /* Can't fail */
     close_channels(old_channels);
     current_active_channels = new_channels;

     return "SUCCESS";
}

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Refactor refresh TIRs
Saeed Mahameed [Tue, 20 Dec 2016 15:30:20 +0000 (17:30 +0200)]
net/mlx5e: Refactor refresh TIRs

Rename mlx5e_refresh_tirs_self_loopback to mlx5e_refresh_tirs,
as it will be used in downstream (Safe config flow) patches, and make it
fail safe on mlx5e_open.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Redirect RQT refactoring
Saeed Mahameed [Mon, 19 Dec 2016 21:20:17 +0000 (23:20 +0200)]
net/mlx5e: Redirect RQT refactoring

RQ Tables are always created once (on netdev creation) pointing to the
drop RQ; at that stage, RQ tables (indirection tables) are always directed
to the drop RQ.

We don't need to use mlx5e_fill_{direct,indir}_rqt_rqns to fill the drop
RQ in the create RQT procedure.

Instead of having separate flows to redirect direct and indirect RQ Tables
to the current active channels Receive Queues (RQs), we unify the two
flows by introducing mlx5e_redirect_rqt function and redirect_rqt_param
struct. Combined, they provide one generic logic to fill the RQ table RQ
numbers regardless of the RQ table purpose (direct/indirect).

Demonstrated the usage with mlx5e_redirect_rqts_to_channels which will
be called on mlx5e_open and with mlx5e_redirect_rqts_to_drop which will
be called on mlx5e_close.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Introduce mlx5e_channels
Saeed Mahameed [Mon, 6 Feb 2017 11:14:34 +0000 (13:14 +0200)]
net/mlx5e: Introduce mlx5e_channels

Have a dedicated "channels" handler that will serve as channels
(RQs/SQs/etc..) holder to help with separating channels/parameters
operations, for the downstream fail-safe configuration flow, where we will
create a new instance of mlx5e_channels with the new requested parameters
and switch to the new channels on the fly.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Set netdev->rx_cpu_rmap on netdev creation
Saeed Mahameed [Tue, 7 Feb 2017 14:30:52 +0000 (16:30 +0200)]
net/mlx5e: Set netdev->rx_cpu_rmap on netdev creation

To simplify the mlx5e_open_locked flow we set netdev->rx_cpu_rmap on
netdev creation rather than on netdev open; it is redundant to set it
every time in mlx5e_open_locked.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
net/mlx5e: Set SQ max rate on mlx5e_open_txqsq rather on open_channel
Saeed Mahameed [Mon, 14 Nov 2016 11:42:02 +0000 (13:42 +0200)]
net/mlx5e: Set SQ max rate on mlx5e_open_txqsq rather on open_channel

Instead of iterating over the channel SQs to set their max rate, do it
on SQ creation per TXQ SQ.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Merge branch 'netvsc-next'
David S. Miller [Sun, 26 Mar 2017 03:15:56 +0000 (20:15 -0700)]
Merge branch 'netvsc-next'

K. Y. Srinivasan says:

====================
netvsc: Fix miscellaneous issues

Fix miscellaneous issues.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
netvsc: Properly initialize the return value
K. Y. Srinivasan [Sat, 25 Mar 2017 03:54:37 +0000 (20:54 -0700)]
netvsc: Properly initialize the return value

Initialize the return value correctly.

Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
netvsc: Fix a bug in sub-channel handling
K. Y. Srinivasan [Sat, 25 Mar 2017 03:54:36 +0000 (20:54 -0700)]
netvsc: Fix a bug in sub-channel handling

All netvsc channels are handled via NAPI. Setup the "read mode" correctly
for the netvsc sub-channels.

Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'gtp-sgsn-side-tunnel'
David S. Miller [Sun, 26 Mar 2017 03:11:19 +0000 (20:11 -0700)]
Merge branch 'gtp-sgsn-side-tunnel'

Jonas Bonn says:

====================
GTP SGSN-side tunnel

Changes since v4:

* Respin the series on top of net-next; the conflicts were trivial,
  amounting to just code having been shifted about
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
gtp: support SGSN-side tunnels
Jonas Bonn [Fri, 24 Mar 2017 22:23:21 +0000 (23:23 +0100)]
gtp: support SGSN-side tunnels

The GTP-tunnel driver is explicitly GGSN-side as it searches for PDP
contexts based on the incoming packet's _destination_ address.  If we
want to place ourselves on the SGSN side of the tunnel, then we want
to be identifying PDP contexts based on the _source_ address.

Let it be noted that in a "real" configuration this module would never
be used: the SGSN normally does not see IP packets as input.  The
justification for this functionality is for PGW load-testing applications
where the input to the SGSN is locally generated IP traffic.

This patch adds a "role" argument at GTP-link creation time to specify
whether we are on the GGSN or SGSN side of the tunnel; this flag is then
used to determine which part of the IP packet to use in determining
the PDP context.
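
A minimal sketch of how the role flag could steer the PDP-context lookup
(the helper name and struct members are assumptions for illustration;
GTP_ROLE_SGSN is the role value for the SGSN side):

static bool gtp_check_ms_ipv4(struct sk_buff *skb, struct pdp_ctx *pctx,
                              unsigned int hdrlen, unsigned int role)
{
        struct iphdr *iph;

        if (!pskb_may_pull(skb, hdrlen + sizeof(struct iphdr)))
                return false;

        iph = (struct iphdr *)(skb->data + hdrlen);

        /* SGSN side: the mobile station is the packet's source */
        if (role == GTP_ROLE_SGSN)
                return iph->saddr == pctx->ms_addr_ip4.s_addr;

        /* GGSN side (default): the mobile station is the destination */
        return iph->daddr == pctx->ms_addr_ip4.s_addr;
}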

Signed-off-by: Jonas Bonn <jonas@southpole.se>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Acked-by: Harald Welte <laforge@gnumonks.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
gtp: rename SGSN netlink attribute
Jonas Bonn [Fri, 24 Mar 2017 22:23:20 +0000 (23:23 +0100)]
gtp: rename SGSN netlink attribute

This is a mostly cosmetic rename of the SGSN netlink attribute to
the GTP link.  The justification for this is that we will be making
the module support decapsulation of "downstream" SGSN packets, in
which case the netlink parameter actually refers to the upstream GGSN
peer.  Renaming the parameter makes the relationship clearer.

The legacy name is maintained as a define in the header file in order
to not break existing code.
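
Concretely, the backwards compatibility can be kept with a define next to
the renamed attribute in the UAPI header, roughly along these lines
(GTPA_PEER_ADDRESS as the new name is an assumption here):

GTPA_PEER_ADDRESS,      /* remote GSN peer, either SGSN or GGSN */
#define GTPA_SGSN_ADDRESS GTPA_PEER_ADDRESS     /* maintain legacy attr name */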

Signed-off-by: Jonas Bonn <jonas@southpole.se>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Acked-by: Harald Welte <laforge@gnumonks.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'qmap-mux'
David S. Miller [Sun, 26 Mar 2017 03:03:35 +0000 (20:03 -0700)]
Merge branch 'qmap-mux'

Daniele Palmas says:

====================
net: usb: qmi_wwan: add qmap mux protocol support

This patch adds support for the qmap mux protocol available in recent
Qualcomm-based modems.

The qmap mux protocol can be used for multiplexing data packets in
order to have multiple ip streams through the same physical device.

Two new sysfs files are added for adding/removing the qmap mux based
interfaces (named qmimux):

/sys/class/net/<iface>/qmi/add_mux
/sys/class/net/<iface>/qmi/del_mux

Main patch author is Bjørn Mork <bjorn@mork.no>

A userspace implementation of the qmi requests needed to support
multiple ip streams is already available (namely libqmi since
version 1.18.0).

The qmap mux feature has recently been implemented in the Codeaurora
gobinet out-of-kernel driver, which was the inspiration for this
development.

Tests have been performed with Telit LE922A6 (PID 0x1040)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Documentation: ABI: testing: sysfs-class-net-qmi: add new qmap mux files description
Daniele Palmas [Fri, 24 Mar 2017 13:22:46 +0000 (14:22 +0100)]
Documentation: ABI: testing: sysfs-class-net-qmi: add new qmap mux files description

This patch updates the documentation related to the new files added for
qmap mux support.

Signed-off-by: Daniele Palmas <dnlplm@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: usb: qmi_wwan: add qmap mux protocol support
Daniele Palmas [Fri, 24 Mar 2017 13:22:45 +0000 (14:22 +0100)]
net: usb: qmi_wwan: add qmap mux protocol support

This patch adds support for the qmap mux protocol available in recent
Qualcomm-based modems.

The qmap mux protocol can be used for multiplexing data packets in
order to have multiple ip streams through the same physical device.

Two new sysfs files are added for adding/removing the qmap mux based
interfaces (named qmimux):

- /sys/class/net/<iface>/qmi/add_mux
- /sys/class/net/<iface>/qmi/del_mux

Main patch author is Bjørn Mork <bjorn@mork.no>

Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: Daniele Palmas <dnlplm@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: spectrum_kvdl: Cosmetic kvdl allocator API change
Arkadi Sharshevsky [Sat, 25 Mar 2017 07:28:22 +0000 (08:28 +0100)]
mlxsw: spectrum_kvdl: Cosmetic kvdl allocator API change

Currently the returned allocated index and the error value are
multiplexed. This patch changes the API to decouple the return value from
the allocated index.
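
A hedged sketch of what such an API change typically looks like (the exact
mlxsw_sp_kvdl_alloc() signature is an assumption):

/* before: a negative errno or the allocated index shared one return value */
int mlxsw_sp_kvdl_alloc(struct mlxsw_sp *mlxsw_sp, unsigned int entry_count);

/* after: the return value carries only the error code; the index comes
 * back through an output parameter
 */
int mlxsw_sp_kvdl_alloc(struct mlxsw_sp *mlxsw_sp, unsigned int entry_count,
                        u32 *p_entry_index);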

Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'epoll-busypoll'
David S. Miller [Sat, 25 Mar 2017 03:49:31 +0000 (20:49 -0700)]
Merge branch 'epoll-busypoll'

Alexander Duyck says:

====================
Add busy poll support for epoll

This patch set adds support for using busy polling with epoll. The main
idea behind this is that we record the NAPI ID for the last event that is
moved onto the ready list for the epoll context and then when we no longer
have any events on the ready list we begin polling with that ID. If the
busy polling does not yield any events then we will reset the NAPI ID to 0
and wait until a new event is added to the ready list with a valid NAPI ID
before we will resume busy polling.

Most of the changes in this set authored by me are meant to be cleanup or
fixes for various things. For example, I am trying to make it so that we
don't perform hash look-ups for the NAPI instance when we are only working
with sender_cpu and the like.

At the heart of this set is the last 3 patches which enable epoll support
and add support for obtaining the NAPI ID of a given socket. With these it
becomes possible for an application to make use of epoll and get optimal
busy poll utilization by stacking multiple sockets with the same NAPI ID on
the same epoll context.

v1: The first version of this series only allowed epoll to busy poll if all
    of the sockets with a NAPI ID shared the same NAPI ID. I feel we were
    too strict with this requirement, so I changed the behavior for v2.
v2: The second version was pretty much a full rewrite of the first set. The
    main changes consisted of pulling apart several patches to better
    address the need to clean up a few items and to make the code easier to
    review. In the set however I went a bit overboard and was trying to fix
    an issue that would only occur with 500+ years of uptime, and in the
    process limited the range for busy_poll/busy_read unnecessarily.
v3: Split off the code for limiting busy_poll and busy_read into a separate
    patch for net.
    Updated patch that changed busy loop time tracking so that it uses
    "local_clock() >> 10" as we originally did.
    Tweaked "Change return type.." patch by moving declaration of "work"
    inside the loop where it was accessed and always reset to 0.
    Added "Acked-by" for patches that received acks.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
net: Introduce SO_INCOMING_NAPI_ID
Sridhar Samudrala [Fri, 24 Mar 2017 17:08:36 +0000 (10:08 -0700)]
net: Introduce SO_INCOMING_NAPI_ID

This socket option returns the NAPI ID associated with the queue on which
the last frame is received. This information can be used by the apps to
split the incoming flows among the threads based on the Rx queue on which
they are received.

If the NAPI ID actually represents a sender_cpu then the value is ignored
and 0 is returned.
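
A small userspace sketch of reading the option (the fallback value 56
matches asm-generic/socket.h and is only a build-time convenience here):

#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56          /* asm-generic value */
#endif

static unsigned int get_napi_id(int fd)
{
        unsigned int napi_id = 0;
        socklen_t len = sizeof(napi_id);

        /* returns 0 when the id actually represents a sender_cpu */
        if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
                       &napi_id, &len) < 0)
                perror("getsockopt(SO_INCOMING_NAPI_ID)");
        return napi_id;
}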

Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
epoll: Add busy poll support to epoll with socket fds.
Sridhar Samudrala [Fri, 24 Mar 2017 17:08:30 +0000 (10:08 -0700)]
epoll: Add busy poll support to epoll with socket fds.

This patch adds busy poll support to epoll. The implementation is meant to
be opportunistic in that it will take the NAPI ID from the last socket
that is added to the ready list that contains a valid NAPI ID and it will
use that for busy polling until the ready list goes empty.  Once the ready
list goes empty the NAPI ID is reset and busy polling is disabled until a
new socket is added to the ready list.

In addition when we insert a new socket into the epoll we record the NAPI
ID and assume we are going to receive events on it.  If that doesn't occur
it will be evicted as the active NAPI ID and we will resume normal
behavior.

An application can use the SO_INCOMING_CPU or SO_REUSEPORT_ATTACH_C/EBPF
socket options to spread incoming connections across specific worker threads
based on the incoming queue. The epoll instance of each worker thread then
contains only sockets that receive packets from a single queue, so when the
application calls epoll_wait() and there are no events available to report,
busy polling is done on the associated queue to pull the packets.
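
As a rough userspace sketch of that per-worker flow (listener setup, socket
steering and error handling omitted; names are illustrative, and busy
polling itself is assumed to be enabled via the net.core.busy_poll sysctl):

  #include <sys/epoll.h>

  #define MAX_EVENTS 64

  /* Every socket added to this worker's epoll instance is expected to be
   * served by the same RX queue, so epoll_wait() can busy poll that queue's
   * NAPI context whenever no events are ready.
   */
  static void add_socket(int worker_epfd, int fd)
  {
          struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };

          epoll_ctl(worker_epfd, EPOLL_CTL_ADD, fd, &ev);
  }

  static void worker_loop(int worker_epfd)
  {
          struct epoll_event events[MAX_EVENTS];

          for (;;) {
                  int n = epoll_wait(worker_epfd, events, MAX_EVENTS, -1);

                  for (int i = 0; i < n; i++) {
                          /* read and process events[i].data.fd here */
                  }
          }
  }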

Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: Commonize busy polling code to focus on napi_id instead of socket
Sridhar Samudrala [Fri, 24 Mar 2017 17:08:24 +0000 (10:08 -0700)]
net: Commonize busy polling code to focus on napi_id instead of socket

Move the core functionality in sk_busy_loop() to napi_busy_loop() and
make it independent of sk.

This enables re-using this function in the epoll busy loop implementation.
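
A rough sketch of the resulting shape on the socket side (simplified; the
exact in-tree signature of napi_busy_loop() and its end-condition callback
may differ):

  /* The socket path becomes a thin wrapper: it only supplies the napi_id
   * and its own "stop polling" condition, while napi_busy_loop() owns the
   * actual polling loop.
   */
  static inline void sk_busy_loop(struct sock *sk, int nonblock)
  {
          unsigned int napi_id = READ_ONCE(sk->sk_napi_id);

          if (napi_id >= MIN_NAPI_ID)
                  napi_busy_loop(napi_id,
                                 nonblock ? NULL : sk_busy_loop_end, sk);
  }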

Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: Track start of busy loop instead of when it should end
Alexander Duyck [Fri, 24 Mar 2017 17:08:18 +0000 (10:08 -0700)]
net: Track start of busy loop instead of when it should end

This patch flips the logic we were using to determine if the busy polling
has timed out. The main motivation for this is that we will need to support
two different possible timeout values in the future, and by recording the
start time rather than the time we would want to end, we can make the end
time specific to the task, be it epoll- or socket-based polling.
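
The flipped check looks roughly like this (illustrative only; helper names,
units and the budget handling are approximations):

  /* Old scheme: compute the deadline up front and loop while the current
   * time has not passed it.  New scheme: record when polling started and
   * let each caller (socket or epoll) decide how long it may run.
   */
  static inline unsigned long busy_loop_start_time(void)
  {
          /* roughly microseconds, as in the "local_clock() >> 10" note above */
          return (unsigned long)(local_clock() >> 10);
  }

  static inline bool busy_loop_timed_out(unsigned long start_time,
                                         unsigned long budget_us)
  {
          return time_after(busy_loop_start_time(), start_time + budget_us);
  }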

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: Change return type of sk_busy_loop from bool to void
Alexander Duyck [Fri, 24 Mar 2017 17:08:12 +0000 (10:08 -0700)]
net: Change return type of sk_busy_loop from bool to void

There is little value in checking the return value of sk_busy_loop. As
there are only a few consumers of that data, and the data being checked for
can be replaced with a check for !skb_queue_empty(), we might as well just
pull the code out of sk_busy_loop and place it in the spots that actually
need it.
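
The resulting call-site pattern looks roughly like this (sketch only; the
retry label is assumed from the surrounding receive function):

  /* Busy poll no longer reports whether it found anything ... */
  sk_busy_loop(sk, flags & MSG_DONTWAIT);

  /* ... instead the caller checks the condition it actually cares about. */
  if (!skb_queue_empty(&sk->sk_receive_queue))
          goto try_again;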

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: Only define skb_mark_napi_id in one spot instead of two
Alexander Duyck [Fri, 24 Mar 2017 17:08:06 +0000 (10:08 -0700)]
net: Only define skb_mark_napi_id in one spot instead of two

Instead of defining two versions of skb_mark_napi_id, it is more readable
to match the format of the sk_mark_napi_id functions and simply wrap the
contents of the function in an ifdef. This way we save a few lines of code,
since we only need 2 ifdef/endif lines instead of the 5 needed for the extra
function declaration.
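
That is, a single definition whose body is guarded, roughly:

  static inline void skb_mark_napi_id(struct sk_buff *skb,
                                      struct napi_struct *napi)
  {
  #ifdef CONFIG_NET_RX_BUSY_POLL
          skb->napi_id = napi->napi_id;
  #endif
  }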

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agotcp: Record Rx hash and NAPI ID in tcp_child_process
Alexander Duyck [Fri, 24 Mar 2017 17:08:00 +0000 (10:08 -0700)]
tcp: Record Rx hash and NAPI ID in tcp_child_process

While working on some recent busy poll changes we found that child sockets
were being instantiated without NAPI ID being set.  In our first attempt to
fix it, it was suggested that we should just pull programming the NAPI ID
into the function itself since all callers will need to have it set.

In addition to the NAPI ID change I have dropped the code that was
populating the Rx hash since it was actually being populated in
tcp_get_cookie_sock.
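
A sketch of the resulting shape (the real function does much more; this
only shows where the call now lives):

  #include <net/busy_poll.h>
  #include <net/tcp.h>

  static int tcp_child_process_sketch(struct sock *parent, struct sock *child,
                                      struct sk_buff *skb)
  {
          /* Record the NAPI ID of the queue that delivered the skb once
           * here, so every caller that hands a freshly created child socket
           * to tcp_child_process() gets it set automatically.
           */
          sk_mark_napi_id(child, skb);

          /* ... normal child processing would follow here ... */
          return 0;
  }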

Reported-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: Busy polling should ignore sender CPUs
Alexander Duyck [Fri, 24 Mar 2017 17:07:53 +0000 (10:07 -0700)]
net: Busy polling should ignore sender CPUs

This patch is a cleanup/fix for NAPI IDs following the changes that made it
so that sender_cpu and napi_id were doing a better job of sharing the same
location in the sk_buff.

One issue I found is that we weren't validating the napi_id before we
started trying to set up busy polling. This change corrects that by using
the MIN_NAPI_ID value that is now used both when allocating the NAPI IDs and
when validating them.
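
The distinction relies on the ID ranges: sender_cpu values are small CPU
numbers, while real NAPI IDs are allocated at or above MIN_NAPI_ID. A sketch
of the check (illustrative helper, not the in-tree code):

  #include <net/busy_poll.h>

  /* Valid NAPI IDs start at MIN_NAPI_ID; anything below that range is a
   * sender_cpu (or simply unset) and must not be used for busy polling.
   */
  static inline bool napi_id_is_valid(unsigned int napi_id)
  {
          return napi_id >= MIN_NAPI_ID;
  }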

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'mlx5-xdp-perf-optimizations'
David S. Miller [Sat, 25 Mar 2017 02:11:47 +0000 (19:11 -0700)]
Merge branch 'mlx5-xdp-perf-optimizations'

Saeed Mahameed says:

====================
Mellanox mlx5e XDP performance optimization

This series provides some performance optimizations for the mlx5e
driver, especially for XDP TX flows.

The 1st patch is a simple change from rmb to dma_rmb in the CQE fetch
routine, which shows a huge gain for both RX and TX packet rates.

The 2nd patch removes the write combining logic from the driver TX handler
and simplifies the TX logic while improving TX CPU utilization.

All other patches combined provide some refactoring to the driver TX
flows to allow some significant XDP TX improvements.

Per-patch details and performance numbers (each measured against the
preceding patch) can be found in the individual commit messages.

Overall performance improvements:
  System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

Test case                   Baseline      Now      improvement
---------------------------------------------------------------
TX packets (24 threads)     45Mpps        54Mpps      20%
TC stack Drop (1 core)      3.45Mpps      3.6Mpps     5%
XDP Drop      (1 core)      14Mpps        16.9Mpps    20%
XDP TX        (1 core)      10.4Mpps      13.7Mpps    31%
====================

Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Different SQ types
Saeed Mahameed [Fri, 24 Mar 2017 21:52:14 +0000 (00:52 +0300)]
net/mlx5e: Different SQ types

The different SQ types (tx, xdp, ico) are growing apart, so we separate
them and remove the unwanted parts from each one, to simplify the data path
and make better use of the data cache.

Remove the DB union from the SQ structures, since it is no longer needed
now that each SQ type has its own data type.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Generalize SQ create/modify/destroy functions
Saeed Mahameed [Fri, 24 Mar 2017 21:52:13 +0000 (00:52 +0300)]
net/mlx5e: Generalize SQ create/modify/destroy functions

In the next patches we will introduce different SQ types and will want to
reuse these functions, so in this patch we make them agnostic to the SQ type
(txq, xdp, ico).

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Proper names for SQ/RQ/CQ functions
Saeed Mahameed [Fri, 24 Mar 2017 21:52:12 +0000 (00:52 +0300)]
net/mlx5e: Proper names for SQ/RQ/CQ functions

Rename mlx5e_{create,destroy}_{sq,rq,cq} to
mlx5e_{alloc,free}_{sq,rq,cq}.

Rename mlx5e_{enable,disable}_{sq,rq,cq} to
mlx5e_{create,destroy}_{sq,rq,cq}.

mlx5e_{enable,disable}_{sq,rq,cq} used to actually create/destroy the SQ
in FW, so we rename them to align the function names with the FW semantics.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Generalize tx helper functions for different SQ types
Saeed Mahameed [Fri, 24 Mar 2017 21:52:11 +0000 (00:52 +0300)]
net/mlx5e: Generalize tx helper functions for different SQ types

In the next patches we will introduce different SQ types. In preparation,
generalize some TX helper functions to work with more basic SQ parameters,
so that they can be re-used by the different SQ types.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Optimize XDP frame xmit
Saeed Mahameed [Fri, 24 Mar 2017 21:52:10 +0000 (00:52 +0300)]
net/mlx5e: Optimize XDP frame xmit

The XDP SQ has a fixed-size WQE (MLX5E_XDP_TX_WQEBBS = 1) and only posts
one kind of WQE (MLX5_OPCODE_SEND). In addition, we now initialize the
static fields of the SQ descriptors once in open_xdpsq, rather than every
time on the critical path.

Optimize the code in light of these facts and add a prefetch of the TX
descriptor as the first thing in the XDP xmit function.

Performance improvement:
System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

Test case              Before     Now        improvement
---------------------------------------------------------------
XDP TX   (1 core)      13Mpps    13.7Mpps       5%

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Poll XDP TX CQ before RX CQ
Saeed Mahameed [Fri, 24 Mar 2017 21:52:09 +0000 (00:52 +0300)]
net/mlx5e: Poll XDP TX CQ before RX CQ

Handle XDP TX completions before handling RX packets, so that more free
slots are available on the XDP TX ring just before the RX packets that may
need them are processed.
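
A sketch of the resulting ordering in the channel NAPI poll routine
(function and field names approximate; the real poll routine does more
work):

  /* Reap XDP TX completions first so the XDP SQ has free slots ... */
  busy |= mlx5e_poll_xdpsq_cq(&rq->xdpsq.cq);

  /* ... then run RX processing, which may immediately post XDP_TX frames. */
  work_done = mlx5e_poll_rx_cq(&rq->cq, budget);
  busy |= work_done == budget;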

Performance improvement:
System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

Test case              Before     Now      improvement
---------------------------------------------------------------
XDP Drop (1 core)      16.9Mpps  16.9Mpps    No change
XDP TX   (1 core)      12Mpps    13Mpps      8%

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Move XDP SQ instance into RQ
Saeed Mahameed [Fri, 24 Mar 2017 21:52:08 +0000 (00:52 +0300)]
net/mlx5e: Move XDP SQ instance into RQ

Move the XDP SQ instance into the RQ to save many rq->channel->sq
dereferences in the fast path, and rename it to xdpsq.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Move mlx5e_rq struct declaration
Saeed Mahameed [Fri, 24 Mar 2017 21:52:07 +0000 (00:52 +0300)]
net/mlx5e: Move mlx5e_rq struct declaration

Move struct mlx5e_rq and friends to appear after the mlx5e_sq declaration
in en.h.

We will need this in the next patch, which moves the mlx5e_sq instance into
the mlx5e_rq struct for XDP SQs.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Move XDP completion functions to rx file
Saeed Mahameed [Fri, 24 Mar 2017 21:52:06 +0000 (00:52 +0300)]
net/mlx5e: Move XDP completion functions to rx file

XDP code belongs to RX path, move mlx5e_poll_xdp_tx_cq and
mlx5e_free_xdp_tx_descs to en_rx.c.

Rename them to mlx5e_poll_xdpsq_cq and mlx5e_free_xdpsq_descs.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Single bfreg (UAR) for all mlx5e SQs and netdevs
Saeed Mahameed [Fri, 24 Mar 2017 21:52:05 +0000 (00:52 +0300)]
net/mlx5e: Single bfreg (UAR) for all mlx5e SQs and netdevs

One is sufficient since Blue Flame is not supported anymore.
This will also come in handy for switchdev mode to save resources, since
VF representors will use the same single UAR for their own SQs as well.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Xmit, no write combining
Saeed Mahameed [Fri, 24 Mar 2017 21:52:04 +0000 (00:52 +0300)]
net/mlx5e: Xmit, no write combining

mlx5e netdev Blue Flame (write combining) support demands a lot of
overhead for a small latency gain in some special cases, and this overhead
hurts the common case.

Here we remove xmit Blue Flame support by creating all bfregs without
write combining for all SQs, and we remove a lot of BF logic and conditions
from the xmit data path.

Simplify mlx5e_tx_notify_hw (the doorbell function) by removing the
BF-related code and one memory barrier that was only needed for WC-mapped
SQ doorbell buffers, which no longer exist.

Performance improvement:
System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

Test case                   Before      Now      improvement
---------------------------------------------------------------
TX packets (24 threads)     50Mpps      54Mpps    8%

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet/mlx5e: Use dma_rmb rather than rmb in CQE fetch routine
Saeed Mahameed [Fri, 24 Mar 2017 21:52:03 +0000 (00:52 +0300)]
net/mlx5e: Use dma_rmb rather than rmb in CQE fetch routine

Use dma_rmb in mlx5e_get_cqe rather than the more aggressive rmb (at least
on some architectures); this should help improve performance on CPU
architectures where dma_rmb is cheaper.
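
The pattern in the CQE fetch path looks roughly like this (sketch only; the
ownership-check helper is hypothetical and other names are approximate):

  static struct mlx5_cqe64 *get_cqe_sketch(struct mlx5e_cq *cq)
  {
          u32 ci = mlx5_cqwq_get_ci(&cq->wq);
          struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(&cq->wq, ci);

          if (!cqe_owned_by_sw(cqe, cq))  /* hypothetical ownership check */
                  return NULL;

          /* Order the ownership check against reads of the CQE payload.
           * dma_rmb() is sufficient because the CQE was written by the
           * device via DMA into coherent memory, and it is cheaper than a
           * full rmb() on several architectures.
           */
          dma_rmb();

          return cqe;
  }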

Performance improvement:
System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz

Test case                   Baseline      Now      improvement
---------------------------------------------------------------
TX packets (24 threads)     45Mpps        50Mpps      11%
TC stack Drop (1 core)      3.45Mpps      3.6Mpps     5%
XDP Drop      (1 core)      14Mpps        16.9Mpps    20%
XDP TX        (1 core)      10.4Mpps      12Mpps      15%

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet: dsa: bcm_sf2: Add missing OF_MDIO dependency
Florian Fainelli [Fri, 24 Mar 2017 21:57:22 +0000 (14:57 -0700)]
net: dsa: bcm_sf2: Add missing OF_MDIO dependency

bcm_sf2 requires the MDIO_BCM_UNIMAC driver, which is now dependent on
OF_MDIO, and it also internally uses routines provided by of_mdio.c which
are guarded by OF_MDIO.

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Fixes: 90eff9096c01 ("net: phy: Allow splitting MDIO bus/device support from PHYs")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'ipv6-sr-perf-improvements'
David S. Miller [Fri, 24 Mar 2017 21:47:32 +0000 (14:47 -0700)]
Merge branch 'ipv6-sr-perf-improvements'

David Lebrun says:

====================
Performances improvement for IPv6 Segment Routing

This patch series improves the performance of IPv6 SR by optimizing skb
head reallocation and extending the use of dst_cache. Overall performance
improves by 35%.

Before patch series (SRH encap):
Result: OK: 7348320(c7347271+d1048) usec, 5000000 (1000byte,0frags)
  680427pps 5443Mb/sec (5443416000bps) errors: 0

After patch series (SRH encap):
Result: OK: 4774543(c4774084+d459) usec, 5000000 (1000byte,0frags)
  1047220pps 8377Mb/sec (8377760000bps) errors: 0

Baseline for plain IPv6 forwarding:
Result: OK: 4244144(c4243722+d422) usec, 5000000 (1000byte,0frags)
  1178093pps 9424Mb/sec (9424744000bps) errors: 0
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoipv6: sr: use dst_cache in seg6_input
David Lebrun [Fri, 24 Mar 2017 09:46:27 +0000 (10:46 +0100)]
ipv6: sr: use dst_cache in seg6_input

We already use dst_cache in seg6_output when handling locally generated
packets. Extend it to seg6_input, so that forwarded packets are also handled
and unnecessary FIB lookups are avoided.
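
A rough sketch of the pattern (simplified; 'slwt' stands for the per-route
SR lwtunnel state, and error handling is omitted):

  struct dst_entry *dst;

  /* try the per-CPU cached route first */
  preempt_disable();
  dst = dst_cache_get(&slwt->cache);
  preempt_enable();

  if (!dst) {
          /* miss: fall back to a regular FIB lookup ... */
          ip6_route_input(skb);
          dst = skb_dst(skb);

          /* ... and remember the result for subsequent packets */
          if (!dst->error) {
                  preempt_disable();
                  dst_cache_set_ip6(&slwt->cache, dst,
                                    &ipv6_hdr(skb)->saddr);
                  preempt_enable();
          }
  } else {
          skb_dst_drop(skb);
          skb_dst_set(skb, dst);
  }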

Performance for SRH encapsulation before the patch:
Result: OK: 5656067(c5655678+d388) usec, 5000000 (1000byte,0frags)
  884006pps 7072Mb/sec (7072048000bps) errors: 0

Performance after the patch:
Result: OK: 4774543(c4774084+d459) usec, 5000000 (1000byte,0frags)
  1047220pps 8377Mb/sec (8377760000bps) errors: 0

Signed-off-by: David Lebrun <david.lebrun@uclouvain.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoipv6: sr: expand skb head only if necessary
David Lebrun [Fri, 24 Mar 2017 09:46:26 +0000 (10:46 +0100)]
ipv6: sr: expand skb head only if necessary

To insert or encapsulate a packet with an SRH, we need a large enough skb
headroom. Currently, we use pskb_expand_head to unconditionally increase the
size of the headroom by the amount needed by the SRH (and IPv6 header).
If this reallocation is performed by a CPU other than the one that initially
allocated the skb, then when the initial CPU kfrees the skb it will enter
the __slab_free slowpath, hurting performance.

This patch replaces pskb_expand_head with skb_cow_head, which reallocates
the skb head only if the headroom is not large enough.
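
The change boils down to this pattern ('hdrlen' is an illustrative name for
the SRH plus outer IPv6 header length):

  /* before: always reallocate the head, even when there is enough room */
  err = pskb_expand_head(skb, hdrlen, 0, GFP_ATOMIC);

  /* after: reallocate only when the existing headroom is too small */
  err = skb_cow_head(skb, hdrlen + skb->mac_len);
  if (unlikely(err))
          return err;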

Performance for SRH encapsulation before the patch:
Result: OK: 7348320(c7347271+d1048) usec, 5000000 (1000byte,0frags)
  680427pps 5443Mb/sec (5443416000bps) errors: 0

Performance after the patch:
Result: OK: 5656067(c5655678+d388) usec, 5000000 (1000byte,0frags)
  884006pps 7072Mb/sec (7072048000bps) errors: 0

Signed-off-by: David Lebrun <david.lebrun@uclouvain.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agonet_sched: use setup_deferrable_timer
Geliang Tang [Fri, 24 Mar 2017 14:14:36 +0000 (22:14 +0800)]
net_sched: use setup_deferrable_timer

Use setup_deferrable_timer() instead of init_timer_deferrable() to
simplify the code.
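
That is, collapse the three-step open coding into a single call; a sketch
of the pattern (the timer field, handler and cookie names are illustrative):

  /* before */
  init_timer_deferrable(&q->adapt_timer);
  q->adapt_timer.function = my_timer_fn;
  q->adapt_timer.data = (unsigned long)sch;

  /* after */
  setup_deferrable_timer(&q->adapt_timer, my_timer_fn, (unsigned long)sch);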

Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agoMerge branch 'mlxsw-query-resources'
David S. Miller [Fri, 24 Mar 2017 20:53:29 +0000 (13:53 -0700)]
Merge branch 'mlxsw-query-resources'

Jiri Pirko says:

====================
mlxsw: Query resources from firmware

Ido says:

Some parts of the driver already use the resource query mechanism, but
in other parts we still rely on hard coded values that may change over
time.

This patchset removes most of these remaining values and queries them
from the firmware instead.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
7 years agomlxsw: spectrum: Query cell size from firmware
Ido Schimmel [Fri, 24 Mar 2017 07:02:51 +0000 (08:02 +0100)]
mlxsw: spectrum: Query cell size from firmware

As explained in the previous patch, the cell size may change in future
devices, so query it from the firmware instead of hard coding it.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>