Since our TSN map is capable of holding at most a 4K chunk gap,
there is no way that during this gap, a stream sequence number
(unsigned short) can wrap such that the new number is smaller
than the next expected one. If such a case is encountered,
it is a protocol violation.
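A userspace-style sketch of the check described above (the helper name is illustrative, not the kernel's):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: with the TSN map bounded to a 4K gap, a 16-bit stream
     * sequence number cannot legitimately wrap within that window, so an
     * SSN that compares "before" the next expected one is treated as a
     * protocol violation. */
    static bool ssn_is_protocol_violation(uint16_t ssn, uint16_t expected)
    {
            /* Serial-number comparison: true when ssn precedes expected. */
            return (uint16_t)(ssn - expected) > 0x7fff;
    }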
sctp: Sysctl configuration for IPv4 Address Scoping
This patch introduces a new sysctl option to make IPv4 Address Scoping
configurable <draft-stewart-tsvwg-sctp-ipv4-00.txt>.
In networking environments where DNAT rules in iptables prerouting
chains convert destination IPs to link-local/private IP addresses,
SCTP connections fail to establish as the INIT chunk is dropped by the
kernel due to address scope match failure.
For example, to support overlapping IP addresses (the same IP address
with different VLAN IDs), a Layer-5 application listens on link-local
IPs, and a DNAT rule maps the destination IP to a link-local IP. Such
applications never get the SCTP INIT if the address-scoping
draft is strictly followed.
This sysctl configuration allows SCTP to function in such
unconventional networking environments (a rough sketch of the policy
mapping follows the option list below).
Sysctl options:
0 - Disable IPv4 address scoping draft altogether
1 - Enable IPv4 address scoping (default, current behavior)
2 - Enable address scoping but allow IPv4 private addresses in init/init-ack
3 - Enable address scoping but allow IPv4 link local address in init/init-ack
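A rough sketch of how the four values above could map onto an address check (the enum and function names here are illustrative assumptions, not the actual kernel symbols):

    #include <stdbool.h>

    enum addr_scope_policy {
            ADDR_SCOPE_DISABLE       = 0,   /* no IPv4 scoping at all */
            ADDR_SCOPE_ENABLE        = 1,   /* default, current behavior */
            ADDR_SCOPE_ALLOW_PRIVATE = 2,   /* also accept private addresses */
            ADDR_SCOPE_ALLOW_LINK    = 3,   /* also accept link-local addresses */
    };

    enum addr_scope { SCOPE_GLOBAL, SCOPE_PRIVATE, SCOPE_LINK };

    /* Sketch: may an address of this scope appear in INIT/INIT-ACK? */
    static bool scope_usable_in_init(enum addr_scope_policy policy,
                                     enum addr_scope scope)
    {
            switch (policy) {
            case ADDR_SCOPE_DISABLE:
                    return true;                    /* scoping switched off */
            case ADDR_SCOPE_ALLOW_PRIVATE:
                    return scope != SCOPE_LINK;     /* global + private OK */
            case ADDR_SCOPE_ALLOW_LINK:
                    return scope != SCOPE_PRIVATE;  /* global + link-local OK */
            case ADDR_SCOPE_ENABLE:
            default:
                    return scope == SCOPE_GLOBAL;   /* strict draft behavior */
            }
    }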
sctp: Get rid of an extra routing lookup when adding a transport.
We used to perform two routing lookups for a new transport: one
just for path MTU detection, and one to actually route to the destination
and update the path MTU when sending a packet. There is no point in doing
both of them, especially since the first one, done only for path MTU, doesn't
take the source address into account and sometimes gives the wrong route,
causing path MTU updates anyway.
We now do a single call that both routes to the destination and picks up
path MTU updates.
We currently track if AUTH has been bundled using the 'auth'
pointer to the chunk. However, AUTH is disallowed after DATA
is already in the packet, so we need to instead use the
'has_auth' field.
sctp: fix to reset packet information after packet transmit
The packet information is not reset after the packet is transmitted. This
may cause problems, such as a following DATA chunk being sent without an
AUTH chunk, even if authentication of DATA chunks has been
requested by the peer.
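A loose userspace-style sketch of the two points above, i.e. tracking AUTH with a flag rather than the chunk pointer and clearing per-packet state after transmit (the struct and function names are illustrative):

    #include <stdbool.h>
    #include <stddef.h>

    struct pkt {
            bool   has_auth;  /* an AUTH chunk is already bundled */
            bool   has_data;  /* a DATA chunk is already bundled */
            void  *auth;      /* convenience pointer, may go stale */
            size_t size;
    };

    /* Sketch: once the packet has been handed to the IP layer, reset the
     * bookkeeping so the next packet starts clean and a following DATA
     * chunk is not sent without its AUTH chunk. */
    static void pkt_reset(struct pkt *p)
    {
            p->has_auth = false;
            p->has_data = false;
            p->auth = NULL;
            p->size = 0;
    }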
sctp: Failover transmitted list on transport delete
The Add-IP feature allows users to delete an active transport. If that
transport has chunks in flight, those chunks need to be moved to another
transport, or the association may get into an unrecoverable state.
Reported-by: Rafael Laufer <rlaufer@cisco.com> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
sctp: Fix SCTP_MAXSEG socket option to comply to spec.
We had a bug where we never stored the user-defined value for
MAXSEG when setting the value on an association. Thus future
PMTU events ended up rewriting the frag point and increasing
it past the user limit. Additionally, when setting the option on
the socket/endpoint, we affected all current associations, which
is against the spec.
Now, we store the user 'maxseg' value along with the computed
'frag_point'. We inherit 'maxseg' from the socket at association
creation and use it as an upper limit for 'frag_point' when it is
set.
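A minimal sketch of the relationship described above (field and function names are assumptions, not the exact kernel code):

    #include <stdint.h>

    struct assoc {
            uint32_t user_frag;   /* SCTP_MAXSEG set by the user, 0 = unset */
            uint32_t frag_point;  /* fragmentation point actually in use */
    };

    /* Sketch: PMTU events recompute the frag point, but the user-supplied
     * maxseg is kept separately and acts as an upper bound. */
    static void update_frag_point(struct assoc *a, uint32_t pmtu_frag)
    {
            uint32_t frag = pmtu_frag;

            if (a->user_frag && a->user_frag < frag)
                    frag = a->user_frag;    /* never exceed the user's limit */

            a->frag_point = frag;
    }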
sctp: Don't do NAGLE delay on large writes that were fragmented small
SCTP will delay the last part of a large write due to Nagle if that
part is smaller than the MTU. Since we are doing large writes, we might
as well send the last portion now instead of waiting until the next
large write happens. The small portion will be sent as is regardless,
so it's better not to delay it.
This is a result of much discussions with Wei Yongjun <yjwei@cn.fujitsu.com>
and Doug Graham <dgraham@nortel.com>. Many thanks go out to them.
The decision to delay due to Nagle should be based on the path MTU
and the future packet size. We currently incorrectly base it on
'frag_point', which is the SCTP DATA segment size, and we also do
not count DATA chunk header overhead in the computation. This
actually allows situations where a user can set a low 'frag_point'
and then send small messages without delay.
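A hedged sketch of the intended test: compare the pending payload plus DATA chunk overhead against the path MTU rather than against 'frag_point' (the constant and the names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define DATA_CHUNK_OVERHEAD 16u   /* DATA chunk header, roughly */

    /* Sketch: delay for Nagle only when data is outstanding, the message
     * was not the fragmented tail of a large write, and the would-be
     * packet (payload plus DATA overhead) is smaller than the path MTU. */
    static bool nagle_should_delay(uint32_t payload, uint32_t pmtu,
                                   bool inflight, bool was_fragmented)
    {
            if (!inflight)
                    return false;   /* nothing outstanding, send now */
            if (was_fragmented)
                    return false;   /* tail of a large write, send now */
            return payload + DATA_CHUNK_OVERHEAD < pmtu;
    }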
sctp: Try not to change a_rwnd when faking a SACK from SHUTDOWN.
We currently set a_rwnd to 0 when faking a SACK from SHUTDOWN.
This results in a hung association if the remote only uses
SHUTDOWNs (which it is allowed to do) to acknowledge DATA when
closing. The reason for that is that we simply honor the a_rwnd
from the SACK, but since we faked it to be 0, we enter 0-window
probing. The fix is to use the peer's old rwnd and add our flight
size to it.
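A tiny sketch of the fix (names are illustrative):

    #include <stdint.h>

    /* Sketch: when faking a SACK from SHUTDOWN, advertise the peer's last
     * known rwnd plus our current flight size instead of 0, so we don't
     * fall into 0-window probing. */
    static uint32_t fake_sack_a_rwnd(uint32_t peer_old_rwnd, uint32_t flight_size)
    {
            return peer_old_rwnd + flight_size;
    }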
sctp: drop a_rwnd to 0 when receive buffer overflows.
SCTP has a problem that, when small chunks are used, it is possible
to exhaust the receiver buffer without fully closing the receive window.
This happens due to all the overhead that we have to account for with small
messages. To fix this, when the receive buffer is exceeded, we'll drop
the window to 0 and save the 'drop' portion. When the application starts
reading data and freeing up receive buffer space, we'll wait until
we've reached the 'drop' window and then add back this 'drop' one
MTU at a time. This worked well in testing and under stress produced
a rather even recovery.
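A rough sketch of the bookkeeping described above (field names are illustrative, not the kernel's):

    #include <stdint.h>

    struct rwnd_state {
            uint32_t rwnd;        /* advertised receive window */
            uint32_t rwnd_press;  /* 'drop' portion hidden under pressure */
            uint32_t pmtu;        /* path MTU of the association */
    };

    /* Sketch: on receive-buffer overflow, advertise a 0 window and
     * remember how much was hidden. */
    static void rwnd_overflow(struct rwnd_state *s)
    {
            s->rwnd_press += s->rwnd;
            s->rwnd = 0;
    }

    /* Sketch: as the reader frees buffer space, grow the window; once it
     * has reached the hidden amount, return the hidden portion one MTU
     * at a time. */
    static void rwnd_increase(struct rwnd_state *s, uint32_t freed)
    {
            s->rwnd += freed;
            if (s->rwnd_press && s->rwnd >= s->rwnd_press) {
                    uint32_t back = s->rwnd_press < s->pmtu ? s->rwnd_press : s->pmtu;

                    s->rwnd += back;
                    s->rwnd_press -= back;
            }
    }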
Vlad Yasevich [Wed, 26 Aug 2009 13:36:25 +0000 (09:36 -0400)]
sctp: Fix error count increments that were results of HEARTBEATS
SCTP RFC 4960 states that unacknowledged HEARTBEATs count as
errors against a given transport or endpoint. As such, we
should increment the error counts only for unacknowledged
HBs, otherwise we detect failure too soon. This goes for both
the overall error count and the path error count.
Now, there is a difference in how the detection is done
between the two. The path error detection is done after
the increment, so to detect it properly, we actually need
to exceed the path threshold. The overall error detection
is done _BEFORE_ the increment. Thus to detect the failure,
it's enough for the error count to match the threshold.
This is why all the state functions use '>=' to detect failure,
while path detection uses '>'.
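A small sketch contrasting the two checks (names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* Overall (association/endpoint) failure is checked BEFORE the error
     * count is incremented, so matching the threshold is enough. */
    static bool overall_failed(uint32_t error_count, uint32_t threshold)
    {
            return error_count >= threshold;
    }

    /* Path failure is checked AFTER the increment, so the count must
     * actually exceed the threshold. */
    static bool path_failed(uint32_t error_count, uint32_t threshold)
    {
            return error_count > threshold;
    }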
Thanks goes to Chunbo Luo <chunbo.luo@windriver.com> who first
proposed patches to fix this issue and made me re-read the spec
and the code to figure out how this cruft really works.
Wei Yongjun [Sat, 22 Aug 2009 03:27:37 +0000 (11:27 +0800)]
sctp: fix the chunk length check for received HEARTBEAT-ACK chunks
The receiver of the HEARTBEAT should respond with a HEARTBEAT ACK
that contains the Heartbeat Information field copied from the
received HEARTBEAT chunk. So the received HEARTBEAT-ACK chunk
must have a length of:
sizeof(sctp_chunkhdr_t) + sizeof(sctp_sender_hb_info_t)
With a badly formatted HB-ACK chunk, it is possible that we access
invalid memory. We should really make sure that the chunk format
is what we expect before attempting to touch the data.
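A userspace-style sketch of the length check (the structs below are stand-ins for sctp_chunkhdr_t and sctp_sender_hb_info_t, not their real layouts):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Stand-in types: only their sizes matter for the illustration. */
    struct chunkhdr       { uint8_t type, flags; uint16_t length; };
    struct sender_hb_info { uint16_t param_type, param_len; uint64_t nonce; };

    /* Sketch: refuse to touch the heartbeat data unless the chunk is at
     * least big enough to hold the header plus the HB info parameter. */
    static bool hb_ack_length_valid(size_t chunk_len)
    {
            return chunk_len >= sizeof(struct chunkhdr) +
                                sizeof(struct sender_hb_info);
    }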
Vlad Yasevich [Mon, 10 Aug 2009 17:51:03 +0000 (13:51 -0400)]
sctp: Send user messages to the lower layer as one
Currently, SCTP breaks up user messages into fragments and
sends each fragment to the lower layer by itself. This means
that for each fragment we go all the way down the stack
and back up. This also discourages bundling of multiple
fragments when they can fit into a single packet (e.g., due
to the user setting a low fragmentation threshold).
We introduce a new command, SCTP_CMD_SND_MSG, and hand the
whole message down to the state machine. The state machine and
the side-effect parser will cork the queue, add all chunks
from the message to the queue, and then uncork the queue,
thus causing the chunks to get transmitted.
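A toy sketch of that flow, heavily simplified and with made-up helpers (the real outqueue API is not shown here):

    #include <stdio.h>

    struct outq { int corked; int queued; };

    static void outq_cork(struct outq *q)  { q->corked = 1; }
    static void outq_tail(struct outq *q)  { q->queued++; }

    static void outq_uncork(struct outq *q)   /* flush everything queued */
    {
            q->corked = 0;
            printf("flushing %d chunk(s) together\n", q->queued);
            q->queued = 0;
    }

    /* Sketch: queue every fragment of one user message between a cork and
     * an uncork, so the fragments can be bundled into as few packets as
     * possible when the queue is flushed. */
    static void send_whole_msg(struct outq *q, int nfrags)
    {
            outq_cork(q);
            while (nfrags--)
                    outq_tail(q);
            outq_uncork(q);
    }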
Vlad Yasevich [Fri, 7 Aug 2009 17:23:28 +0000 (13:23 -0400)]
sctp: Try to encourage SACK bundling with DATA.
If the association has a SACK timer pending and DATA is now queued
to be sent, we'll try to bundle the SACK with the next application send.
As such, try to encourage bundling by accounting for the SACK in the size
of the first chunk fragment.
Vlad Yasevich [Fri, 7 Aug 2009 14:43:07 +0000 (10:43 -0400)]
sctp: Generate SACKs when actually sending outbound DATA
We are now trying to bundle SACKs when we have outbound
DATA to send. However, there are situations where this
outbound DATA will not be sent (due to congestion or the
available window). In such cases it's OK to wait for the
timer to expire. This patch refactors the sending code
so that before attempting to bundle the SACK we check
to see if the DATA will actually be transmitted.
Based on earlier work by Doug Graham <dgraham@nortel.com> and
Wei Yongjun <yjwei@cn.fujitsu.com>.
Since an application may specify the maximum SCTP fragment size
that all data should be fragmented to, we need to fix how
we do segmentation. Right now, if a user specifies a small
fragment size, the segment size can go negative in the presence
of AUTH or COOKIE_ECHO bundling.
What we need to do is track the largest possible DATA chunk that
can fit into the MTU. Then, if the fragment size specified is
bigger than this maximum length, we'll shrink it down. Otherwise,
we just use the smaller segment size without changing it further.
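A sketch of the clamping described above (the overhead handling is simplified and the names are illustrative):

    #include <stdint.h>

    /* Sketch: compute the largest DATA payload that fits in the MTU after
     * all headers (including possible AUTH/COOKIE_ECHO bundling), then
     * clamp the user-requested fragment size to it so the segment size
     * can never go negative. */
    static uint32_t effective_frag_size(uint32_t mtu, uint32_t overhead,
                                        uint32_t user_frag)
    {
            uint32_t max_data = mtu > overhead ? mtu - overhead : 0;

            if (!user_frag || user_frag > max_data)
                    return max_data;
            return user_frag;
    }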
If a socket has a lot of associations that are in the process
of being closed/aborted, it is possible for a remote to establish
new associations during the time period that the old ones are shutting
down. If this was a result of a close() call, there will be no socket,
and this will cause a memory leak. We'll prevent this by setting the
socket state to CLOSING and disallowing new associations when in this state.
Doug Graham [Wed, 29 Jul 2009 16:05:57 +0000 (12:05 -0400)]
sctp: Fix piggybacked ACKs
This patch corrects the conditions under which a SACK will be piggybacked
on a DATA packet. The previous condition was incorrect due to a
misinterpretation of RFC 4960 and/or RFC 2960. Specifically, the
following paragraph from section 6.2 had not been implemented correctly:
Before an endpoint transmits a DATA chunk, if any received DATA
chunks have not been acknowledged (e.g., due to delayed ack), the
sender should create a SACK and bundle it with the outbound DATA
chunk, as long as the size of the final SCTP packet does not exceed
the current MTU. See Section 6.2.
When about to send a DATA chunk, the code now checks to see if the SACK
timer is running. If it is, we know we have a SACK to send to the
peer, so we append the SACK (assuming available space in the packet)
and turn off the timer. For a simple request-response scenario, this
will result in the SACK being bundled with the response, meaning that
the SACK is received quickly by the client, and also meaning that no
separate SACK packet needs to be sent by the server to acknowledge the
request. Prior to this patch, a separate SACK packet would have been
sent by the server SCTP only after its delayed-ACK timer had expired
(usually 200ms). This is wasteful of bandwidth, and can also have a
major negative impact on performance due to the interaction of delayed ACKs
with the Nagle algorithm.
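A rough sketch of the new bundling decision (names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: when appending a DATA chunk, bundle a SACK iff the delayed-
     * SACK timer is running and the SACK still fits in the packet; the
     * timer is then stopped because the acknowledgement rides along. */
    static bool should_bundle_sack(bool sack_timer_running,
                                   uint32_t pkt_size, uint32_t sack_size,
                                   uint32_t pmtu)
    {
            return sack_timer_running && pkt_size + sack_size <= pmtu;
    }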
Signed-off-by: Doug Graham <dgraham@nortel.com> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Vlad Yasevich [Tue, 23 Jun 2009 15:28:05 +0000 (11:28 -0400)]
sctp: release cached route when the transport goes down.
When the sctp transport is marked down, we can release the
cached route and force a new lookup when attempting to use
this transport for anything. This way, if a better route
or source address is available, we'll try to use it.
Wei Yongjun [Tue, 16 Jun 2009 02:07:23 +0000 (10:07 +0800)]
sctp: update the route for non-active transports after addresses are added
Update the route and saddr entries for the non-active transports as some
of the added addresses can be used as better source addresses, or
maybe there is a better route.
This patch adds support for legacy SJA1000 CAN controllers on the ISA
or PC-104 bus. The I/O port or memory address and the IRQ number must
be specified via module parameters:
Here is a full list of the supported module parameters:
port:I/O port number (array of ulong)
mem:I/O memory address (array of ulong)
indirect:Indirect access via address and data port (array of byte)
irq:IRQ number (array of int)
clk:External oscillator clock frequency (default=16000000 [16 MHz])
(array of int)
cdr:Clock divider register (default=0x48 [CDR_CBP | CDR_CLK_OFF])
(array of byte)
ocr:Output clock register (default=0x18 [OCR_TX0_PUSHPULL])
(array of byte)
Note: for clk, cdr, ocr, the first argument re-defines the default
for all other devices, e.g.:
Signed-off-by: Wolfgang Grandegger <wg@grandegger.com> Tested-by: Oliver Hartkopp <oliver@hartkopp.net> Signed-off-by: David S. Miller <davem@davemloft.net>
The member "tx_bytes" of "struct net_device_stats" should be
incremented when the interrupt is done and an "arbitration
lost error" is a TX error and the statistics should be updated
accordingly.
Signed-off-by: Wolfgang Grandegger <wg@grandegger.com> Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Fix tcp_v6_send_response(): it didn't set skb transport header
Here is a patch which fixes an issue observed when using TCP over IPv6
and AH from IPsec.
When a connection gets closed via the 4-way method and the last ACK from
the server gets dropped, the subsequent FINs from the client do not
get ACKed because tcp_v6_send_response does not set the transport
header pointer. This causes ah6_output to try to allocate a lot of
memory, which typically fails, so the ACKs never make it out of the
stack.
I have reproduced the problem on kernel 2.6.7, but after looking at
the latest kernel it seems the problem is still there.
Signed-off-by: Cosmin Ratiu <cratiu@ixiacom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Scott Feldman [Thu, 3 Sep 2009 17:02:45 +0000 (17:02 +0000)]
enic: organize device initialization/deinit into separate functions
To unclutter probe() a little bit, put all device initialization code
in one spot and device deinit code in another spot. Also remove unused
rq->buf_index variable/func.
Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Scott Feldman [Thu, 3 Sep 2009 17:01:58 +0000 (17:01 +0000)]
enic: workaround A0 erratum
A0 revision ASIC has an erratum on the RQ desc cache on chip where the
cache can become corrupted causing pkt buf writes to wrong locations. The s/w
workaround is to post a dummy RQ desc in the ring every 32 descs, causing a
flush of the cache. A0 parts are not production parts, but there are enough of
them in the wild in test setups to warrant including the workaround. A1
revision ASIC parts fix the erratum.
Signed-off-by: Scott Feldman <scofeldm@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 3 Sep 2009 00:39:16 +0000 (00:39 +0000)]
vlan: adds drops accounting
It's hard to tell if vlans are dropping frames, since
every frame given to the vlan_???_start_xmit() functions
is accounted as fully transmitted by the lower device.
We can test dev_queue_xmit() return values to
properly account for dropped frames.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 3 Sep 2009 00:11:45 +0000 (00:11 +0000)]
macvlan: add multiqueue capability
macvlan devices are currently not multi-queue capable.
We can do that by defining the rtnl_link_ops method
get_tx_queues(), called from rtnl_create_link().
This new method gets num_tx_queues/real_num_tx_queues
from the lower device.
macvlan_get_tx_queues() is a copy of vlan_get_tx_queues().
Because macvlan_start_xmit() has to update netdev_queue
stats only (and not dev->stats), I chose to change
tx_errors/tx_aborted_errors accounting to tx_dropped,
since the netdev_queue structure doesn't define tx_errors /
tx_aborted_errors.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
While perusing the vendor driver, I saw that it did not enable the Vaux
power unless the device was able to wake from LAN for D3cold.
This might help with Rene's power issue.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Fix a perpetual while() loop when unwinding a partially
mapped tx skb on DMA mapping failure.
Reported-by: "Juha Leppanen" <juha_motorsportcom@luukku.com> Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Allocate 12k skbuffs so that the firmware can aggregate more
packets into one buffer. This doesn't raise memory
consumption since 9k skbs use the 16k slab cache anyway.
Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yi Zou [Thu, 3 Sep 2009 14:56:31 +0000 (14:56 +0000)]
ixgbe: Add support for using FCoE DDP in 82599 as FCoE targets
The FCoE DDP in 82599 can be used for both FCoE initiator as well as FCoE
target, depending on the indication of the exchange being the responder or
originator in the F_CTL (frame control) field in the encapsulated Fiber
Channel frame header (T10 Spec., FC-FS). For the initiator, OX_ID is used
for FCoE DDP, whereas for the target RX_ID is used for FCoE DDP.
Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yi Zou [Thu, 3 Sep 2009 14:56:10 +0000 (14:56 +0000)]
ixgbe: Distribute transmission of FCoE traffic in 82599
This adds a simple selection of a FCoE tx queue based on the current cpu id to
distribute transmission of FCoE traffic evenly among multiple FCoE transmit
queues.
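A minimal sketch of the CPU-based spreading (not the driver's actual code; names are illustrative):

    #include <stdint.h>

    /* Sketch: map the current CPU id onto the range of FCoE transmit
     * queues so FCoE traffic is spread evenly across them. */
    static uint16_t fcoe_select_txq(unsigned int cpu_id,
                                    uint16_t fcoe_base, uint16_t fcoe_count)
    {
            return fcoe_base + (uint16_t)(cpu_id % fcoe_count);
    }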
Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yi Zou [Thu, 3 Sep 2009 14:55:50 +0000 (14:55 +0000)]
ixgbe: Add support for multiple Tx queues for FCoE in 82599
This patch adds support for multiple transmit queues to the Fiber Channel
over Ethernet (FCoE) feature found in 82599. Currently, FCoE has multiple
Rx queues available, along with a redirection table, that helps distribute
the I/O load across multiple CPUs based on the FC exchange ID. To make
this the most effective, we need to provide the same layout of transmit
queues to match receive.
Particularly, when Data Center Bridging (DCB) is enabled, the designated
traffic class for FCoE can have dedicated queues for just FCoE traffic,
while not affecting any other type of traffic flow.
Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 3 Sep 2009 14:49:33 +0000 (14:49 +0000)]
igb: set vf rlpml wasn't taking vlan tag into account
This patch updates things so that vlan tags are taken into account when
setting the receive large packet maximum length. This allows the VF driver
to correctly receive full sized frames when vlans are enabled.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 3 Sep 2009 14:49:15 +0000 (14:49 +0000)]
igb: only disable/enable interrupt bits for igb physical function
The igb_irq_disable/enable calls were causing virtual functions associated
with the igb physical function to have their interrupts disabled. In order
to prevent this from occurring we should only clear/set the bits related to
the physical function.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 3 Sep 2009 14:48:56 +0000 (14:48 +0000)]
igb: add support for set_rx_mode netdevice operation
This patch adds support for the set_rx_mode netdevice operation so that igb
can better support multiple unicast addresses.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 3 Sep 2009 09:19:58 +0000 (02:19 -0700)]
vlan: enable multiqueue xmits
vlan_dev_hard_start_xmit() & vlan_dev_hwaccel_hard_start_xmit()
select txqueue number 0, instead of using the index provided by
skb_get_queue_mapping().
This is not correct after commit 2e59af3dcbdf11635c03f
[vlan: multiqueue vlan device], because
txq->tx_packets & txq->tx_bytes updates are performed on
a single location, without the right locking.
The fix is to take the appropriate struct netdev_queue pointer.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Karl Hiramoto [Thu, 3 Sep 2009 06:26:39 +0000 (23:26 -0700)]
atm/br2684: netif_stop_queue() when atm device busy and netif_wake_queue() when we can send packets again.
This patch removes the call to dev_kfree_skb() when the atm device is busy.
Calling dev_kfree_skb() causes heavy packet loss when the device is under
heavy load; the more correct behavior is to stop the upper layers and
then, when the lower device can queue packets again, wake the upper layers.
Signed-off-by: Karl Hiramoto <karl@hiramoto.org> Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil> Signed-off-by: David S. Miller <davem@davemloft.net>
fec_enet_mii, fec_enet_rx and fec_enet_tx are all only called by
fec_enet_interrupt in interrupt context, so they must not use
spin_lock_irq/spin_unlock_irq.
This fixes:
WARNING: at kernel/lockdep.c:2140 trace_hardirqs_on_caller+0x130/0x194()
...
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Cc: Greg Ungerer <gerg@uclinux.org> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Patrick McHardy <kaber@trash.net> Cc: Sascha Hauer <s.hauer@pengutronix.de> Cc: Matt Waddel <Matt.Waddel@freescale.com> Cc: netdev@vger.kernel.org Cc: Tim Sander <tim01@vlsi.informatik.tu-darmstadt.de> Acked-by: Greg Ungerer <gerg@uclinux.org> Signed-off-by: David S. Miller <davem@davemloft.net>
mii_discover_phy is only called by fec_enet_mii (via mip->mii_func). So
&fep->mii_lock is already held and mii_discover_phy must not call
mii_queue which locks &fep->mii_lock, too.
This was noticed by lockdep:
=============================================
[ INFO: possible recursive locking detected ] 2.6.31-rc8-00038-g37d0892 #109
---------------------------------------------
swapper/1 is trying to acquire lock:
(&fep->mii_lock){-.....}, at: [<c01569f8>] mii_queue+0x2c/0xcc
but task is already holding lock:
(&fep->mii_lock){-.....}, at: [<c0156328>] fec_enet_interrupt+0x78/0x460
other info that might help us debug this:
2 locks held by swapper/1:
#0: (rtnl_mutex){+.+.+.}, at: [<c0183534>] rtnl_lock+0x18/0x20
#1: (&fep->mii_lock){-.....}, at: [<c0156328>] fec_enet_interrupt+0x78/0x460
The bpq ether driver is modifying the data part of the skb by first
dropping the KISS byte (a command byte for the radio) and then prepending
the length + 4 of the remaining AX.25 packet to be transmitted as a
little-endian 16-bit number. If the high byte of the length has a different
value than the dropped KISS byte, users of clones of the skb may observe
this as corruption. This was observed by running listen(8) -a, which
uses a packet socket that clones transmit packets. The corruption will
then typically be displayed as a KISS "TX Delay" command for AX.25
packets in the range of 252..508 bytes, or as any other KISS command for
yet larger packets.
Fixed by using skb_cow to create a private copy should the skb be cloned.
Using skb_cow also allows us to clean up the old logic that ensures sufficient
headroom in the skb.
While at it, replace a return of 0 from bpq_xmit with the proper constant
NETDEV_TX_OK which is now being used everywhere else in this function.
Affected: all 2.2, 2.4 and 2.6 kernels.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Reported-by: Jann Traschewski <jann@gmx.de> Signed-off-by: David S. Miller <davem@davemloft.net>
David raised a concern that if the allocation fails in tcp_send_fin(), and it's
GFP_ATOMIC, we are going to yield() (which sleeps) and loop endlessly waiting
for the allocation to succeed.
But the fact is, the original GFP_KERNEL also sleeps. GFP_ATOMIC+yield() looks
weird, but it is no worse than the implicit sleep inside GFP_KERNEL. Both could
loop endlessly under memory pressure.
CC: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> CC: David S. Miller <davem@davemloft.net> CC: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net/ethtool: Add support for the ethtool feature to flash firmware image from a specified file.
This patch adds support to flash a firmware image to a device using ethtool.
The driver gets the filename of the firmware image and flashes the image
using the request firmware path.
The region "on the chip" to be flashed can be specified by an option.
It is up to the device driver to map the region number passed by ethtool
to the region to be flashed.
The default behavior is to flash all the regions on the chip.
Signed-off-by: Ajit Khaparde <ajitk@serverengines.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 31 Aug 2009 06:34:50 +0000 (06:34 +0000)]
drivers: Kill now superfluous ->last_rx stores
The generic packet receive code takes care of setting
netdev->last_rx when necessary, for the sake of the
bonding ARP monitor.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 3 Sep 2009 01:05:33 +0000 (18:05 -0700)]
ip: Report qdisc packet drops
Christoph Lameter pointed out that packet drops at the qdisc level were not
accounted in SNMP counters. Only if the application sets IP_RECVERR are drops
reported to the user (-ENOBUFS errors) and SNMP counters updated.
IP_RECVERR is used to enable extended reliable error message passing,
but these are not needed to update system wide SNMP stats.
This patch changes things a bit to allow SNMP counters to be updated,
regardless of IP_RECVERR being set or not on the socket.
Example after a UDP tx flood:
# netstat -s
...
IP: 1487048 outgoing packets dropped
...
Udp:
...
SndbufErrors: 1487048
send() syscalls do, however, still return an OK status, so as not to
break applications. For reference, send(2) documents ENOBUFS as:
"The output queue for a network interface was full.
This generally indicates that the interface has stopped sending,
but may be caused by transient congestion.
(Normally, this does not occur in Linux. Packets are just silently
dropped when a device queue overflows.) "
This is not true for IP_RECVERR enabled sockets: a send() syscall
that hits a qdisc drop returns an ENOBUFS error.
Many thanks to Christoph, David, and last but not least, Alexey !
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Neil Horman [Wed, 2 Sep 2009 21:37:45 +0000 (14:37 -0700)]
net: drop_monitor: make last_rx timestamp private
It was recently pointed out to me that the last_rx field of the
net_device structure wasn't updated regularly. In fact only the
bonding driver really uses it currently. Since the drop_monitor code
relies on the last_rx field to detect drops on recevie in hardware, We
need to find a more reliable way to rate limit our drop checks (so
that we don't check for drops on every frame recevied, which would be
inefficient. This patch makes a last_rx timestamp that is private to
the drop monitor code and is updated for every device that we track.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Bob Copeland [Tue, 1 Sep 2009 22:12:11 +0000 (18:12 -0400)]
cfg80211: fix looping soft lockup in find_ie()
The find_ie() function uses a size_t for the len parameter, and
directly uses len as a loop variable. If any received packets
are malformed, it is possible for the decrease of len to wrap around,
and since the result is unsigned, the loop will not terminate.
Change it to a signed int so the loop conditional works for
negative values.
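A hedged sketch of the pattern (the real find_ie() in cfg80211 may differ in detail):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch: walk TLV-style information elements. With a signed length,
     * a malformed/truncated element drives the remaining length negative
     * and the loop condition fails, instead of wrapping an unsigned
     * counter and spinning forever. */
    static const uint8_t *find_ie_sketch(uint8_t eid, const uint8_t *ies, int len)
    {
            while (len > 2 && ies[0] != eid) {
                    len -= ies[1] + 2;      /* may go negative on bad input */
                    ies += ies[1] + 2;
            }
            if (len < 2)
                    return NULL;
            if (len < 2 + ies[1])
                    return NULL;
            return ies;
    }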
wireless: remove mac80211 rate selection extra menu
We can just display this upon enabling mac80211 with an
'if MAC80211 != n' check.
Cc: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
Change it to a menuconfig to give it some documentation and to
refer users to our wireless wiki for extra resources and
documentation. It seems our wiki is still obscure to some.
Signed-off-by: Luis R. Rodriguez <lrodriguez@atheros.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
Matt Carlson [Tue, 1 Sep 2009 13:21:36 +0000 (13:21 +0000)]
tg3: Add MDIO bus address assignments
The 5717 is a dual port chip that has a shared MDIO bus design. While
it is impossible for one function to interface with the wrong phy, that
function still needs to know which MDIO bus address to use when
interfacing with its own phy. This patch adds code to determine which
MDIO bus address to use.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Benjamin Li <benli@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Matt Carlson [Tue, 1 Sep 2009 13:19:05 +0000 (13:19 +0000)]
tg3: Assign rx ret producer indexes by vector
When RSS is enabled, the status block format changes slightly. The
"rx_jumbo_consumer", "reserved", and "rx_mini_consumer" members get
mapped to the other three rx return ring producer indexes. This patch
introduces a new per-interrupt member which identifies which location
in the status block a particular vector should look for return ring
updates.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Benjamin Li <benli@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Matt Carlson [Tue, 1 Sep 2009 13:16:33 +0000 (13:16 +0000)]
tg3: Adjust RSS ring allocation strategies
When multivector RSS is enabled, the first interrupt vector is only used
to report link interrupts and error conditions. This patch changes the
code so that rx and tx ring resources are not allocated for this vector.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Benjamin Li <benli@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Matt Carlson [Tue, 1 Sep 2009 12:58:41 +0000 (12:58 +0000)]
tg3: Add mailbox assignments
The 5717 assigns mailbox locations to interrupt vectors in a rather
non-intuitive way. (Much of the complexity stems from legacy
compatibility issues.) This patch implements the assignment scheme.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Benjamin Li <benli@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>