/*
 * INET		An implementation of the TCP/IP protocol suite for the LINUX
 *		operating system. INET is implemented using the BSD Socket
 *		interface as the means of communication with the user level.
 *
 *		Implementation of the Transmission Control Protocol(TCP).
 *
 * Version:	$Id: tcp.c,v 1.216 2002/02/01 22:01:04 davem Exp $
 *
 *		Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
 *		Mark Evans, <evansmp@uhura.aston.ac.uk>
 *		Corey Minyard <wf-rch!minyard@relay.EU.net>
 *		Florian La Roche, <flla@stud.uni-sb.de>
 *		Charles Hedrick, <hedrick@klinzhai.rutgers.edu>
 *		Linus Torvalds, <torvalds@cs.helsinki.fi>
 *		Alan Cox, <gw4pts@gw4pts.ampr.org>
 *		Matthew Dillon, <dillon@apollo.west.oic.com>
 *		Arnt Gulbrandsen, <agulbra@nvg.unit.no>
 *		Jorge Cwik, <jorge@laser.satlink.net>
 *
 *		Alan Cox	:	Numerous verify_area() calls
 *		Alan Cox	:	Set the ACK bit on a reset
 *		Alan Cox	:	Stopped it crashing if it closed while
 *					sk->inuse=1 and was trying to connect
 *		Alan Cox	:	All icmp error handling was broken
 *					pointers passed where wrong and the
 *					socket was looked up backwards. Nobody
 *					tested any icmp error code obviously.
 *		Alan Cox	:	tcp_err() now handled properly. It
 *					wakes people on errors. poll
 *					behaves and the icmp error race
 *					has gone by moving it into sock.c
 *		Alan Cox	:	tcp_send_reset() fixed to work for
 *					everything not just packets for
 *		Alan Cox	:	tcp option processing.
 *		Alan Cox	:	Reset tweaked (still not 100%) [Had
 *		Herp Rosmanith	:	More reset fixes
 *		Alan Cox	:	No longer acks invalid rst frames.
 *					Acking any kind of RST is right out.
 *		Alan Cox	:	Sets an ignore me flag on an rst
 *					receive otherwise odd bits of prattle
 *		Alan Cox	:	Fixed another acking RST frame bug.
 *					Should stop LAN workplace lockups.
 *		Alan Cox	:	Some tidyups using the new skb list
 *		Alan Cox	:	sk->keepopen now seems to work
 *		Alan Cox	:	Pulls options out correctly on accepts
 *		Alan Cox	:	Fixed assorted sk->rqueue->next errors
 *		Alan Cox	:	PSH doesn't end a TCP read. Switched a
 *		Alan Cox	:	Tidied tcp_data to avoid a potential
 *		Alan Cox	:	Added some better commenting, as the
 *					tcp is hard to follow
 *		Alan Cox	:	Removed incorrect check for 20 * psh
 *	Michael O'Reilly	:	ack < copied bug fix.
 *	Johannes Stille		:	Misc tcp fixes (not all in yet).
 *		Alan Cox	:	FIN with no memory -> CRASH
 *		Alan Cox	:	Added socket option proto entries.
 *					Also added awareness of them to accept.
 *		Alan Cox	:	Added TCP options (SOL_TCP)
 *		Alan Cox	:	Switched wakeup calls to callbacks,
 *					so the kernel can layer network
 *		Alan Cox	:	Use ip_tos/ip_ttl settings.
 *		Alan Cox	:	Handle FIN (more) properly (we hope).
 *		Alan Cox	:	RST frames sent on unsynchronised
 *		Alan Cox	:	Put in missing check for SYN bit.
 *		Alan Cox	:	Added tcp_select_window() aka NET2E
 *					window non shrink trick.
 *		Alan Cox	:	Added a couple of small NET2E timer
 *	Charles Hedrick		:	TCP fixes
 *	Toomas Tamm		:	TCP window fixes
 *		Alan Cox	:	Small URG fix to rlogin ^C ack fight
 *	Charles Hedrick		:	Rewrote most of it to actually work
 *	Linus			:	Rewrote tcp_read() and URG handling
 *	Gerhard Koerting	:	Fixed some missing timer handling
 *	Matthew Dillon		:	Reworked TCP machine states as per RFC
 *	Gerhard Koerting	:	PC/TCP workarounds
 *	Adam Caldwell		:	Assorted timer/timing errors
 *	Matthew Dillon		:	Fixed another RST bug
 *		Alan Cox	:	Move to kernel side addressing changes.
 *		Alan Cox	:	Beginning work on TCP fastpathing
 *	Arnt Gulbrandsen	:	Turbocharged tcp_check() routine.
 *		Alan Cox	:	TCP fast path debugging
 *		Alan Cox	:	Window clamping
 *	Michael Riepe		:	Bug in tcp_check()
 *	Matt Dillon		:	More TCP improvements and RST bug fixes
 *	Matt Dillon		:	Yet more small nasties remove from the
 *					TCP code (Be very nice to this man if
 *					tcp finally works 100%) 8)
 *		Alan Cox	:	BSD accept semantics.
 *		Alan Cox	:	Reset on closedown bug.
 *	Peter De Schrijver	:	ENOTCONN check missing in tcp_sendto().
 *	Michael Pall		:	Handle poll() after URG properly in
 *	Michael Pall		:	Undo the last fix in tcp_read_urg()
 *					(multi URG PUSH broke rlogin).
 *	Michael Pall		:	Fix the multi URG PUSH problem in
 *					tcp_readable(), poll() after URG
 *	Michael Pall		:	recv(...,MSG_OOB) never blocks in the
 *		Alan Cox	:	Changed the semantics of sk->socket to
 *					fix a race and a signal problem with
 *					accept() and async I/O.
 *		Alan Cox	:	Relaxed the rules on tcp_sendto().
 *	Yury Shevchuk		:	Really fixed accept() blocking problem.
 *	Craig I. Hagan		:	Allow for BSD compatible TIME_WAIT for
 *					clients/servers which listen in on
 *		Alan Cox	:	Cleaned the above up and shrank it to
 *					a sensible code size.
 *		Alan Cox	:	Self connect lockup fix.
 *		Alan Cox	:	No connect to multicast.
 *	Ross Biro		:	Close unaccepted children on master
 *		Alan Cox	:	Reset tracing code.
 *		Alan Cox	:	Spurious resets on shutdown.
 *		Alan Cox	:	Giant 15 minute/60 second timer error
 *		Alan Cox	:	Small whoops in polling before an
 *		Alan Cox	:	Kept the state trace facility since
 *					it's handy for debugging.
 *		Alan Cox	:	More reset handler fixes.
 *		Alan Cox	:	Started rewriting the code based on
 *					the RFC's for other useful protocol
 *					references see: Comer, KA9Q NOS, and
 *					for a reference on the difference
 *					between specifications and how BSD
 *					works see the 4.4lite source.
 *	A.N.Kuznetsov		:	Don't time wait on completion of tidy
 *	Linus Torvalds		:	Fin/Shutdown & copied_seq changes.
 *	Linus Torvalds		:	Fixed BSD port reuse to work first syn
 *		Alan Cox	:	Reimplemented timers as per the RFC
 *					and using multiple timers for sanity.
 *		Alan Cox	:	Small bug fixes, and a lot of new
 *		Alan Cox	:	Fixed dual reader crash by locking
 *					the buffers (much like datagram.c)
 *		Alan Cox	:	Fixed stuck sockets in probe. A probe
 *					now gets fed up of retrying without
 *					(even a no space) answer.
 *		Alan Cox	:	Extracted closing code better
 *		Alan Cox	:	Fixed the closing state machine to
 *		Alan Cox	:	More 'per spec' fixes.
 *	Jorge Cwik		:	Even faster checksumming.
 *		Alan Cox	:	tcp_data() doesn't ack illegal PSH
 *					only frames. At least one pc tcp stack
 *		Alan Cox	:	Cache last socket.
 *		Alan Cox	:	Per route irtt.
 *	Matt Day		:	poll()->select() match BSD precisely on error
 *		Alan Cox	:	New buffers
 *	Marc Tamsky		:	Various sk->prot->retransmits and
 *					sk->retransmits misupdating fixed.
 *					Fixed tcp_write_timeout: stuck close,
 *					and TCP syn retries gets used now.
 *	Mark Yarvis		:	In tcp_read_wakeup(), don't send an
 *					ack if state is TCP_CLOSED.
 *		Alan Cox	:	Look up device on a retransmit - routes may
 *					change. Doesn't yet cope with MSS shrink right
 *	Marc Tamsky		:	Closing in closing fixes.
 *	Mike Shaver		:	RFC1122 verifications.
 *		Alan Cox	:	rcv_saddr errors.
 *		Alan Cox	:	Block double connect().
 *		Alan Cox	:	Small hooks for enSKIP.
 *	Alexey Kuznetsov	:	Path MTU discovery.
 *		Alan Cox	:	Support soft errors.
 *		Alan Cox	:	Fix MTU discovery pathological case
 *					when the remote claims no mtu!
 *	Marc Tamsky		:	TCP_CLOSE fix.
 *	Colin (G3TNE)		:	Send a reset on syn ack replies in
 *					window but wrong (fixes NT lpd problems)
 *	Pedro Roque		:	Better TCP window handling, delayed ack.
 *	Joerg Reuter		:	No modification of locked buffers in
 *					tcp_do_retransmit()
 *	Eric Schenk		:	Changed receiver side silly window
 *					avoidance algorithm to BSD style
 *					algorithm. This doubles throughput
 *					against machines running Solaris,
 *					and seems to result in general
 *	Stefan Magdalinski	:	adjusted tcp_readable() to fix FIONREAD
 *	Willy Konynenberg	:	Transparent proxying support.
 *	Mike McLagan		:	Routing by source
 *	Keith Owens		:	Do proper merging with partial SKB's in
 *					tcp_do_sendmsg to avoid burstiness.
 *	Eric Schenk		:	Fix fast close down bug with
 *					shutdown() followed by close().
 *	Andi Kleen		:	Make poll agree with SIGIO
 *	Salvatore Sanfilippo	:	Support SO_LINGER with linger == 1 and
 *					lingertime == 0 (RFC 793 ABORT Call)
 *	Hirokazu Takahashi	:	Use copy_from_user() instead of
 *					csum_and_copy_from_user() if possible.
 *
 *		This program is free software; you can redistribute it and/or
 *		modify it under the terms of the GNU General Public License
 *		as published by the Free Software Foundation; either version
 *		2 of the License, or(at your option) any later version.
 *
 * Description of States:
 *
 *	TCP_SYN_SENT		sent a connection request, waiting for ack
 *
 *	TCP_SYN_RECV		received a connection request, sent ack,
 *				waiting for final ack in three-way handshake.
 *
 *	TCP_ESTABLISHED		connection established
 *
 *	TCP_FIN_WAIT1		our side has shutdown, waiting to complete
 *				transmission of remaining buffered data
 *
 *	TCP_FIN_WAIT2		all buffered data sent, waiting for remote
 *				to shutdown
 *
 *	TCP_CLOSING		both sides have shutdown but we still have
 *				data we have to finish sending
 *
 *	TCP_TIME_WAIT		timeout to catch resent junk before entering
 *				closed, can only be entered from FIN_WAIT2
 *				or CLOSING. Required because the other end
 *				may not have gotten our last ACK causing it
 *				to retransmit the data packet (which we ignore)
 *
 *	TCP_CLOSE_WAIT		remote side has shutdown and is waiting for
 *				us to finish writing our data and to shutdown
 *				(we have to close() to move on to LAST_ACK)
 *
 *	TCP_LAST_ACK		our side has shutdown after remote has
 *				shutdown. There may still be data in our
 *				buffer that we have to finish sending
 *
 *	TCP_CLOSE		socket is finished
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/fcntl.h>
#include <linux/poll.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/skbuff.h>
#include <linux/splice.h>
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/random.h>
#include <linux/bootmem.h>
#include <linux/cache.h>
#include <linux/err.h>
#include <linux/crypto.h>

#include <net/icmp.h>
#include <net/xfrm.h>
#include <net/netdma.h>
#include <net/sock.h>

#include <asm/uaccess.h>
#include <asm/ioctls.h>
int sysctl_tcp_fin_timeout __read_mostly = TCP_FIN_TIMEOUT;

DEFINE_SNMP_STAT(struct tcp_mib, tcp_statistics) __read_mostly;

atomic_t tcp_orphan_count = ATOMIC_INIT(0);

EXPORT_SYMBOL_GPL(tcp_orphan_count);

int sysctl_tcp_mem[3] __read_mostly;
int sysctl_tcp_wmem[3] __read_mostly;
int sysctl_tcp_rmem[3] __read_mostly;

EXPORT_SYMBOL(sysctl_tcp_mem);
EXPORT_SYMBOL(sysctl_tcp_rmem);
EXPORT_SYMBOL(sysctl_tcp_wmem);
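
/*
 * The tcp_mem, tcp_rmem and tcp_wmem triplets above are exposed under
 * /proc/sys/net/ipv4: tcp_mem is accounted in pages (low, pressure, high),
 * while tcp_rmem and tcp_wmem are per-socket byte limits (min, default, max).
 */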
atomic_t tcp_memory_allocated;	/* Current allocated memory. */
atomic_t tcp_sockets_allocated;	/* Current number of TCP sockets. */

EXPORT_SYMBOL(tcp_memory_allocated);
EXPORT_SYMBOL(tcp_sockets_allocated);
struct tcp_splice_state {
	struct pipe_inode_info *pipe;
	size_t len;
	unsigned int flags;
};

/*
 * Pressure flag: try to collapse.
 * Technical note: it is used by multiple contexts non atomically.
 * All the __sk_mem_schedule() is of this nature: accounting
 * is strict, actions are advisory and have some latency.
 */
int tcp_memory_pressure __read_mostly;

EXPORT_SYMBOL(tcp_memory_pressure);

void tcp_enter_memory_pressure(void)
{
	if (!tcp_memory_pressure) {
		NET_INC_STATS(LINUX_MIB_TCPMEMORYPRESSURES);
		tcp_memory_pressure = 1;
	}
}

EXPORT_SYMBOL(tcp_enter_memory_pressure);
/*
 *	Wait for a TCP event.
 *
 *	Note that we don't need to lock the socket, as the upper poll layers
 *	take care of normal races (between the test and the event) and we don't
 *	go look at any of the socket buffers directly.
 */
unsigned int tcp_poll(struct file *file, struct socket *sock, poll_table *wait)
{
	struct sock *sk = sock->sk;
	struct tcp_sock *tp = tcp_sk(sk);

	poll_wait(file, sk->sk_sleep, wait);
	if (sk->sk_state == TCP_LISTEN)
		return inet_csk_listen_poll(sk);

	/* Socket is not locked. We are protected from async events
	 * by poll logic and correct handling of state changes
	 * made by another threads is impossible in any case.
	 */

	/*
	 * POLLHUP is certainly not done right. But poll() doesn't
	 * have a notion of HUP in just one direction, and for a
	 * socket the read side is more interesting.
	 *
	 * Some poll() documentation says that POLLHUP is incompatible
	 * with the POLLOUT/POLLWR flags, so somebody should check this
	 * all. But careful, it tends to be safer to return too many
	 * bits than too few, and you can easily break real applications
	 * if you don't tell them that something has hung up!
	 *
	 * Check number 1. POLLHUP is _UNMASKABLE_ event (see UNIX98 and
	 * our fs/select.c). It means that after we received EOF,
	 * poll always returns immediately, making impossible poll() on write()
	 * in state CLOSE_WAIT. One solution is evident --- to set POLLHUP
	 * if and only if shutdown has been made in both directions.
	 * Actually, it is interesting to look how Solaris and DUX
	 * solve this dilemma. I would prefer, if POLLHUP were maskable,
	 * then we could set it on SND_SHUTDOWN. BTW examples given
	 * in Stevens' books assume exactly this behaviour, it explains
	 * why POLLHUP is incompatible with POLLOUT.	--ANK
	 *
	 * NOTE. Check for TCP_CLOSE is added. The goal is to prevent
	 * blocking on fresh not-connected or disconnected socket. --ANK
	 */
	if (sk->sk_shutdown == SHUTDOWN_MASK || sk->sk_state == TCP_CLOSE)
		mask |= POLLHUP;
	if (sk->sk_shutdown & RCV_SHUTDOWN)
		mask |= POLLIN | POLLRDNORM | POLLRDHUP;

	if ((1 << sk->sk_state) & ~(TCPF_SYN_SENT | TCPF_SYN_RECV)) {
		/* Potential race condition. If read of tp below will
		 * escape above sk->sk_state, we can be illegally awaken
		 * in SYN_* states. */
		if ((tp->rcv_nxt != tp->copied_seq) &&
		    (tp->urg_seq != tp->copied_seq ||
		     tp->rcv_nxt != tp->copied_seq + 1 ||
		     sock_flag(sk, SOCK_URGINLINE) || !tp->urg_data))
			mask |= POLLIN | POLLRDNORM;

		if (!(sk->sk_shutdown & SEND_SHUTDOWN)) {
			if (sk_stream_wspace(sk) >= sk_stream_min_wspace(sk)) {
				mask |= POLLOUT | POLLWRNORM;
			} else {  /* send SIGIO later */
				set_bit(SOCK_ASYNC_NOSPACE,
					&sk->sk_socket->flags);
				set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);

				/* Race breaker. If space is freed after
				 * wspace test but before the flags are set,
				 * IO signal will be lost.
				 */
				if (sk_stream_wspace(sk) >= sk_stream_min_wspace(sk))
					mask |= POLLOUT | POLLWRNORM;

		if (tp->urg_data & TCP_URG_VALID)
			mask |= POLLPRI;
int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg)
{
	struct tcp_sock *tp = tcp_sk(sk);

		if (sk->sk_state == TCP_LISTEN)

		if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV))

		else if (sock_flag(sk, SOCK_URGINLINE) ||
			 before(tp->urg_seq, tp->copied_seq) ||
			 !before(tp->urg_seq, tp->rcv_nxt)) {
			answ = tp->rcv_nxt - tp->copied_seq;

			/* Subtract 1, if FIN is in queue. */
			if (answ && !skb_queue_empty(&sk->sk_receive_queue))
				tcp_hdr((struct sk_buff *)sk->sk_receive_queue.prev)->fin;

			answ = tp->urg_seq - tp->copied_seq;

		answ = tp->urg_data && tp->urg_seq == tp->copied_seq;

		if (sk->sk_state == TCP_LISTEN)

		if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV))

			answ = tp->write_seq - tp->snd_una;

	return put_user(answ, (int __user *)arg);
}
static inline void tcp_mark_push(struct tcp_sock *tp, struct sk_buff *skb)
{
	TCP_SKB_CB(skb)->flags |= TCPCB_FLAG_PSH;
	tp->pushed_seq = tp->write_seq;
}
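
/*
 * Note on the heuristic below: forced_push() reports whether more than half
 * of the largest window the peer has ever advertised (tp->max_window) has
 * been written since the last segment we marked with PSH; its callers then
 * push the pending data out instead of letting it sit in the write queue.
 */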
static inline int forced_push(struct tcp_sock *tp)
{
	return after(tp->write_seq, tp->pushed_seq + (tp->max_window >> 1));
}
static inline void skb_entail(struct sock *sk, struct sk_buff *skb)
{
	struct tcp_sock *tp = tcp_sk(sk);
	struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);

	tcb->seq     = tcb->end_seq = tp->write_seq;
	tcb->flags   = TCPCB_FLAG_ACK;

	skb_header_release(skb);
	tcp_add_write_queue_tail(sk, skb);
	sk->sk_wmem_queued += skb->truesize;
	sk_mem_charge(sk, skb->truesize);
	if (tp->nonagle & TCP_NAGLE_PUSH)
		tp->nonagle &= ~TCP_NAGLE_PUSH;
}
static inline void tcp_mark_urg(struct tcp_sock *tp, int flags,
				struct sk_buff *skb)
{
	if (flags & MSG_OOB) {
		tp->snd_up = tp->write_seq;
		TCP_SKB_CB(skb)->sacked |= TCPCB_URG;
	}
}
static inline void tcp_push(struct sock *sk, int flags, int mss_now,
			    int nonagle)
{
	struct tcp_sock *tp = tcp_sk(sk);

	if (tcp_send_head(sk)) {
		struct sk_buff *skb = tcp_write_queue_tail(sk);
		if (!(flags & MSG_MORE) || forced_push(tp))
			tcp_mark_push(tp, skb);
		tcp_mark_urg(tp, flags, skb);
		__tcp_push_pending_frames(sk, mss_now,
					  (flags & MSG_MORE) ? TCP_NAGLE_CORK : nonagle);
	}
}
static int tcp_splice_data_recv(read_descriptor_t *rd_desc, struct sk_buff *skb,
				unsigned int offset, size_t len)
{
	struct tcp_splice_state *tss = rd_desc->arg.data;

	return skb_splice_bits(skb, offset, tss->pipe, tss->len, tss->flags);
}
static int __tcp_splice_read(struct sock *sk, struct tcp_splice_state *tss)
{
	/* Store TCP splice context information in read_descriptor_t. */
	read_descriptor_t rd_desc = {
		.arg.data = tss,
	};

	return tcp_read_sock(sk, &rd_desc, tcp_splice_data_recv);
}
/**
 *  tcp_splice_read - splice data from TCP socket to a pipe
 * @sock:	socket to splice from
 * @ppos:	position (not valid)
 * @pipe:	pipe to splice to
 * @len:	number of bytes to splice
 * @flags:	splice modifier flags
 *
 *    Will read pages from given socket and fill them into a pipe.
 */
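/*
 * Example (user space, illustrative only): tcp_splice_read() is normally
 * reached through splice(2) on a connected TCP socket, e.g. assuming a
 * connected socket fd tcp_fd and a freshly created pipe:
 *
 *	int pfd[2];
 *	pipe(pfd);
 *	ssize_t n = splice(tcp_fd, NULL, pfd[1], NULL, 4096,
 *			   SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
 */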
ssize_t tcp_splice_read(struct socket *sock, loff_t *ppos,
			struct pipe_inode_info *pipe, size_t len,
			unsigned int flags)
{
	struct sock *sk = sock->sk;
	struct tcp_splice_state tss = {
		.pipe = pipe,
		.len = len,
		.flags = flags,
	};

	/*
	 * We can't seek on a socket input
	 */

	timeo = sock_rcvtimeo(sk, flags & SPLICE_F_NONBLOCK);

		ret = __tcp_splice_read(sk, &tss);

			if (flags & SPLICE_F_NONBLOCK) {

			if (sock_flag(sk, SOCK_DONE))

				ret = sock_error(sk);

			if (sk->sk_shutdown & RCV_SHUTDOWN)

			if (sk->sk_state == TCP_CLOSE) {
				/*
				 * This occurs when user tries to read
				 * from never connected socket.
				 */
				if (!sock_flag(sk, SOCK_DONE))

			sk_wait_data(sk, &timeo);
			if (signal_pending(current)) {
				ret = sock_intr_errno(timeo);

		if (sk->sk_err || sk->sk_state == TCP_CLOSE ||
		    (sk->sk_shutdown & RCV_SHUTDOWN) || !timeo ||
		    signal_pending(current))
struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp)
{
	/* The TCP header must be at least 32-bit aligned. */
	size = ALIGN(size, 4);

	skb = alloc_skb_fclone(size + sk->sk_prot->max_header, gfp);

	if (sk_wmem_schedule(sk, skb->truesize)) {
		/*
		 * Make sure that we have exactly size bytes
		 * available to the caller, no more, no less.
		 */
		skb_reserve(skb, skb_tailroom(skb) - size);

	sk->sk_prot->enter_memory_pressure();
	sk_stream_moderate_sndbuf(sk);
static ssize_t do_tcp_sendpages(struct sock *sk, struct page **pages, int poffset,
				size_t psize, int flags)
{
	struct tcp_sock *tp = tcp_sk(sk);
	int mss_now, size_goal;
	long timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT);

	/* Wait for a connection to finish. */
	if ((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT))
		if ((err = sk_stream_wait_connect(sk, &timeo)) != 0)

	clear_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags);

	mss_now = tcp_current_mss(sk, !(flags&MSG_OOB));
	size_goal = tp->xmit_size_goal;

		if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN))

		struct sk_buff *skb = tcp_write_queue_tail(sk);
		struct page *page = pages[poffset / PAGE_SIZE];
		int copy, i, can_coalesce;
		int offset = poffset % PAGE_SIZE;
		int size = min_t(size_t, psize, PAGE_SIZE - offset);

		if (!tcp_send_head(sk) || (copy = size_goal - skb->len) <= 0) {

			if (!sk_stream_memory_free(sk))
				goto wait_for_sndbuf;

			skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation);
				goto wait_for_memory;

		i = skb_shinfo(skb)->nr_frags;
		can_coalesce = skb_can_coalesce(skb, i, page, offset);
		if (!can_coalesce && i >= MAX_SKB_FRAGS) {
			tcp_mark_push(tp, skb);

		if (!sk_wmem_schedule(sk, copy))
			goto wait_for_memory;

			skb_shinfo(skb)->frags[i - 1].size += copy;

			skb_fill_page_desc(skb, i, page, offset, copy);

		skb->data_len += copy;
		skb->truesize += copy;
		sk->sk_wmem_queued += copy;
		sk_mem_charge(sk, copy);
		skb->ip_summed = CHECKSUM_PARTIAL;
		tp->write_seq += copy;
		TCP_SKB_CB(skb)->end_seq += copy;
		skb_shinfo(skb)->gso_segs = 0;

			TCP_SKB_CB(skb)->flags &= ~TCPCB_FLAG_PSH;

		if (!(psize -= copy))

		if (skb->len < mss_now || (flags & MSG_OOB))

		if (forced_push(tp)) {
			tcp_mark_push(tp, skb);
			__tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_PUSH);
		} else if (skb == tcp_send_head(sk))
			tcp_push_one(sk, mss_now);

		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
			tcp_push(sk, flags & ~MSG_MORE, mss_now, TCP_NAGLE_PUSH);

		if ((err = sk_stream_wait_memory(sk, &timeo)) != 0)

		mss_now = tcp_current_mss(sk, !(flags&MSG_OOB));
		size_goal = tp->xmit_size_goal;

	tcp_push(sk, flags, mss_now, tp->nonagle);

	return sk_stream_error(sk, flags, err);
ssize_t tcp_sendpage(struct socket *sock, struct page *page, int offset,
		     size_t size, int flags)
{
	struct sock *sk = sock->sk;

	if (!(sk->sk_route_caps & NETIF_F_SG) ||
	    !(sk->sk_route_caps & NETIF_F_ALL_CSUM))
		return sock_no_sendpage(sock, page, offset, size, flags);

	res = do_tcp_sendpages(sk, &page, offset, size, flags);
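
/*
 * TCP_PAGE()/TCP_OFF() below cache, per socket, the partially filled page
 * (and the offset into it) left over from a previous send so that the next
 * tcp_sendmsg() call can keep appending to that page instead of allocating
 * a fresh one for every small write.
 */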
#define TCP_PAGE(sk)	(sk->sk_sndmsg_page)
#define TCP_OFF(sk)	(sk->sk_sndmsg_off)
static inline int select_size(struct sock *sk)
{
	struct tcp_sock *tp = tcp_sk(sk);
	int tmp = tp->mss_cache;

	if (sk->sk_route_caps & NETIF_F_SG) {
			int pgbreak = SKB_MAX_HEAD(MAX_TCP_HEADER);

			if (tmp >= pgbreak &&
			    tmp <= pgbreak + (MAX_SKB_FRAGS - 1) * PAGE_SIZE)
815 int tcp_sendmsg(struct kiocb
*iocb
, struct socket
*sock
, struct msghdr
*msg
,
818 struct sock
*sk
= sock
->sk
;
820 struct tcp_sock
*tp
= tcp_sk(sk
);
823 int mss_now
, size_goal
;
830 flags
= msg
->msg_flags
;
831 timeo
= sock_sndtimeo(sk
, flags
& MSG_DONTWAIT
);
833 /* Wait for a connection to finish. */
834 if ((1 << sk
->sk_state
) & ~(TCPF_ESTABLISHED
| TCPF_CLOSE_WAIT
))
835 if ((err
= sk_stream_wait_connect(sk
, &timeo
)) != 0)
838 /* This should be in poll */
839 clear_bit(SOCK_ASYNC_NOSPACE
, &sk
->sk_socket
->flags
);
841 mss_now
= tcp_current_mss(sk
, !(flags
&MSG_OOB
));
842 size_goal
= tp
->xmit_size_goal
;
844 /* Ok commence sending. */
845 iovlen
= msg
->msg_iovlen
;
850 if (sk
->sk_err
|| (sk
->sk_shutdown
& SEND_SHUTDOWN
))
853 while (--iovlen
>= 0) {
854 int seglen
= iov
->iov_len
;
855 unsigned char __user
*from
= iov
->iov_base
;
862 skb
= tcp_write_queue_tail(sk
);
864 if (!tcp_send_head(sk
) ||
865 (copy
= size_goal
- skb
->len
) <= 0) {
868 /* Allocate new segment. If the interface is SG,
869 * allocate skb fitting to single page.
871 if (!sk_stream_memory_free(sk
))
872 goto wait_for_sndbuf
;
874 skb
= sk_stream_alloc_skb(sk
, select_size(sk
),
877 goto wait_for_memory
;
880 * Check whether we can use HW checksum.
882 if (sk
->sk_route_caps
& NETIF_F_ALL_CSUM
)
883 skb
->ip_summed
= CHECKSUM_PARTIAL
;
889 /* Try to append data to the end of skb. */
893 /* Where to copy to? */
894 if (skb_tailroom(skb
) > 0) {
895 /* We have some space in skb head. Superb! */
896 if (copy
> skb_tailroom(skb
))
897 copy
= skb_tailroom(skb
);
898 if ((err
= skb_add_data(skb
, from
, copy
)) != 0)
902 int i
= skb_shinfo(skb
)->nr_frags
;
903 struct page
*page
= TCP_PAGE(sk
);
904 int off
= TCP_OFF(sk
);
906 if (skb_can_coalesce(skb
, i
, page
, off
) &&
908 /* We can extend the last page
911 } else if (i
== MAX_SKB_FRAGS
||
913 !(sk
->sk_route_caps
& NETIF_F_SG
))) {
914 /* Need to add new fragment and cannot
915 * do this because interface is non-SG,
916 * or because all the page slots are
918 tcp_mark_push(tp
, skb
);
921 if (off
== PAGE_SIZE
) {
923 TCP_PAGE(sk
) = page
= NULL
;
929 if (copy
> PAGE_SIZE
- off
)
930 copy
= PAGE_SIZE
- off
;
932 if (!sk_wmem_schedule(sk
, copy
))
933 goto wait_for_memory
;
936 /* Allocate new cache page. */
937 if (!(page
= sk_stream_alloc_page(sk
)))
938 goto wait_for_memory
;
941 /* Time to copy data. We are close to
943 err
= skb_copy_to_page(sk
, from
, skb
, page
,
946 /* If this page was new, give it to the
947 * socket so it does not get leaked.
956 /* Update the skb. */
958 skb_shinfo(skb
)->frags
[i
- 1].size
+=
961 skb_fill_page_desc(skb
, i
, page
, off
, copy
);
964 } else if (off
+ copy
< PAGE_SIZE
) {
970 TCP_OFF(sk
) = off
+ copy
;
974 TCP_SKB_CB(skb
)->flags
&= ~TCPCB_FLAG_PSH
;
976 tp
->write_seq
+= copy
;
977 TCP_SKB_CB(skb
)->end_seq
+= copy
;
978 skb_shinfo(skb
)->gso_segs
= 0;
982 if ((seglen
-= copy
) == 0 && iovlen
== 0)
985 if (skb
->len
< mss_now
|| (flags
& MSG_OOB
))
988 if (forced_push(tp
)) {
989 tcp_mark_push(tp
, skb
);
990 __tcp_push_pending_frames(sk
, mss_now
, TCP_NAGLE_PUSH
);
991 } else if (skb
== tcp_send_head(sk
))
992 tcp_push_one(sk
, mss_now
);
996 set_bit(SOCK_NOSPACE
, &sk
->sk_socket
->flags
);
999 tcp_push(sk
, flags
& ~MSG_MORE
, mss_now
, TCP_NAGLE_PUSH
);
1001 if ((err
= sk_stream_wait_memory(sk
, &timeo
)) != 0)
1004 mss_now
= tcp_current_mss(sk
, !(flags
&MSG_OOB
));
1005 size_goal
= tp
->xmit_size_goal
;
1011 tcp_push(sk
, flags
, mss_now
, tp
->nonagle
);
1012 TCP_CHECK_TIMER(sk
);
1018 tcp_unlink_write_queue(skb
, sk
);
1019 /* It is the one place in all of TCP, except connection
1020 * reset, where we can be unlinking the send_head.
1022 tcp_check_send_head(sk
, skb
);
1023 sk_wmem_free_skb(sk
, skb
);
1030 err
= sk_stream_error(sk
, flags
, err
);
1031 TCP_CHECK_TIMER(sk
);
/*
 *	Handle reading urgent data. BSD has very simple semantics for
 *	this, no blocking and very strange errors 8)
 */

static int tcp_recv_urg(struct sock *sk, long timeo,
			struct msghdr *msg, int len, int flags,
			int *addr_len)
{
	struct tcp_sock *tp = tcp_sk(sk);

	/* No URG data to read. */
	if (sock_flag(sk, SOCK_URGINLINE) || !tp->urg_data ||
	    tp->urg_data == TCP_URG_READ)
		return -EINVAL;	/* Yes this is right ! */

	if (sk->sk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DONE))

	if (tp->urg_data & TCP_URG_VALID) {
		char c = tp->urg_data;

		if (!(flags & MSG_PEEK))
			tp->urg_data = TCP_URG_READ;

		/* Read urgent data. */
		msg->msg_flags |= MSG_OOB;

			if (!(flags & MSG_TRUNC))
				err = memcpy_toiovec(msg->msg_iov, &c, 1);

			msg->msg_flags |= MSG_TRUNC;

		return err ? -EFAULT : len;
	}

	if (sk->sk_state == TCP_CLOSE || (sk->sk_shutdown & RCV_SHUTDOWN))

	/* Fixed the recv(..., MSG_OOB) behaviour. BSD docs and
	 * the available implementations agree in this case:
	 * this call should never block, independent of the
	 * blocking state of the socket.
	 * Mike <pall@rz.uni-karlsruhe.de>
	 */
/* Clean up the receive buffer for full frames taken by the user,
 * then send an ACK if necessary. COPIED is the number of bytes
 * tcp_recvmsg has given to the user so far, it speeds up the
 * calculation of whether or not we must ACK for the sake of
 */
void tcp_cleanup_rbuf(struct sock *sk, int copied)
{
	struct tcp_sock *tp = tcp_sk(sk);
	int time_to_ack = 0;

	struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);

	BUG_TRAP(!skb || before(tp->copied_seq, TCP_SKB_CB(skb)->end_seq));

	if (inet_csk_ack_scheduled(sk)) {
		const struct inet_connection_sock *icsk = inet_csk(sk);
		   /* Delayed ACKs frequently hit locked sockets during bulk
		    */
		if (icsk->icsk_ack.blocked ||
		    /* Once-per-two-segments ACK was not sent by tcp_input.c */
		    tp->rcv_nxt - tp->rcv_wup > icsk->icsk_ack.rcv_mss ||
		    /*
		     * If this read emptied read buffer, we send ACK, if
		     * connection is not bidirectional, user drained
		     * receive buffer and there was a small segment
		     */
		    ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED2) ||
		     ((icsk->icsk_ack.pending & ICSK_ACK_PUSHED) &&
		      !icsk->icsk_ack.pingpong)) &&
		      !atomic_read(&sk->sk_rmem_alloc)))

	/* We send an ACK if we can now advertise a non-zero window
	 * which has been raised "significantly".
	 *
	 * Even if window raised up to infinity, do not send window open ACK
	 * in states, where we will not receive more. It is useless.
	 */
	if (copied > 0 && !time_to_ack && !(sk->sk_shutdown & RCV_SHUTDOWN)) {
		__u32 rcv_window_now = tcp_receive_window(tp);

		/* Optimize, __tcp_select_window() is not cheap. */
		if (2*rcv_window_now <= tp->window_clamp) {
			__u32 new_window = __tcp_select_window(sk);

			/* Send ACK now, if this read freed lots of space
			 * in our buffer. Certainly, new_window is new window.
			 * We can advertise it now, if it is not less than current one.
			 * "Lots" means "at least twice" here.
			 */
			if (new_window && new_window >= 2 * rcv_window_now)
static void tcp_prequeue_process(struct sock *sk)
{
	struct sk_buff *skb;
	struct tcp_sock *tp = tcp_sk(sk);

	NET_INC_STATS_USER(LINUX_MIB_TCPPREQUEUED);

	/* RX process wants to run with disabled BHs, though it is not
	 */
	while ((skb = __skb_dequeue(&tp->ucopy.prequeue)) != NULL)
		sk->sk_backlog_rcv(sk, skb);

	/* Clear memory counter. */
	tp->ucopy.memory = 0;
}
static inline struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off)
{
	struct sk_buff *skb;

	skb_queue_walk(&sk->sk_receive_queue, skb) {
		offset = seq - TCP_SKB_CB(skb)->seq;
		if (tcp_hdr(skb)->syn)

		if (offset < skb->len || tcp_hdr(skb)->fin) {
/*
 *	This routine provides an alternative to tcp_recvmsg() for routines
 *	that would like to handle copying from skbuffs directly in 'sendfile'
 *	- It is assumed that the socket was locked by the caller.
 *	- The routine does not block.
 *	- At present, there is no support for reading OOB data
 *	  or for 'peeking' the socket using this routine
 *	  (although both would be easy to implement).
 */
int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
		  sk_read_actor_t recv_actor)
{
	struct sk_buff *skb;
	struct tcp_sock *tp = tcp_sk(sk);
	u32 seq = tp->copied_seq;

	if (sk->sk_state == TCP_LISTEN)

	while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
		if (offset < skb->len) {

			len = skb->len - offset;
			/* Stop reading if we hit a patch of urgent data */
				u32 urg_offset = tp->urg_seq - seq;
				if (urg_offset < len)

			used = recv_actor(desc, skb, offset, len);

			} else if (used <= len) {

			if (offset != skb->len)

		if (tcp_hdr(skb)->fin) {
			sk_eat_skb(sk, skb, 0);

		sk_eat_skb(sk, skb, 0);

	tp->copied_seq = seq;

	tcp_rcv_space_adjust(sk);

	/* Clean up data we have read: This will do ACK frames. */
		tcp_cleanup_rbuf(sk, copied);
1254 * This routine copies from a sock struct into the user buffer.
1256 * Technical note: in 2.3 we work on _locked_ socket, so that
1257 * tricks with *seq access order and skb->users are not required.
1258 * Probably, code can be easily improved even more.
1261 int tcp_recvmsg(struct kiocb
*iocb
, struct sock
*sk
, struct msghdr
*msg
,
1262 size_t len
, int nonblock
, int flags
, int *addr_len
)
1264 struct tcp_sock
*tp
= tcp_sk(sk
);
1270 int target
; /* Read at least this many bytes */
1272 struct task_struct
*user_recv
= NULL
;
1273 int copied_early
= 0;
1274 struct sk_buff
*skb
;
1278 TCP_CHECK_TIMER(sk
);
1281 if (sk
->sk_state
== TCP_LISTEN
)
1284 timeo
= sock_rcvtimeo(sk
, nonblock
);
1286 /* Urgent data needs to be handled specially. */
1287 if (flags
& MSG_OOB
)
1290 seq
= &tp
->copied_seq
;
1291 if (flags
& MSG_PEEK
) {
1292 peek_seq
= tp
->copied_seq
;
1296 target
= sock_rcvlowat(sk
, flags
& MSG_WAITALL
, len
);
1298 #ifdef CONFIG_NET_DMA
1299 tp
->ucopy
.dma_chan
= NULL
;
1301 skb
= skb_peek_tail(&sk
->sk_receive_queue
);
1306 available
= TCP_SKB_CB(skb
)->seq
+ skb
->len
- (*seq
);
1307 if ((available
< target
) &&
1308 (len
> sysctl_tcp_dma_copybreak
) && !(flags
& MSG_PEEK
) &&
1309 !sysctl_tcp_low_latency
&&
1310 __get_cpu_var(softnet_data
).net_dma
) {
1311 preempt_enable_no_resched();
1312 tp
->ucopy
.pinned_list
=
1313 dma_pin_iovec_pages(msg
->msg_iov
, len
);
1315 preempt_enable_no_resched();
1323 /* Are we at urgent data? Stop if we have read anything or have SIGURG pending. */
1324 if (tp
->urg_data
&& tp
->urg_seq
== *seq
) {
1327 if (signal_pending(current
)) {
1328 copied
= timeo
? sock_intr_errno(timeo
) : -EAGAIN
;
1333 /* Next get a buffer. */
1335 skb
= skb_peek(&sk
->sk_receive_queue
);
1340 /* Now that we have two receive queues this
1343 if (before(*seq
, TCP_SKB_CB(skb
)->seq
)) {
1344 printk(KERN_INFO
"recvmsg bug: copied %X "
1345 "seq %X\n", *seq
, TCP_SKB_CB(skb
)->seq
);
1348 offset
= *seq
- TCP_SKB_CB(skb
)->seq
;
1349 if (tcp_hdr(skb
)->syn
)
1351 if (offset
< skb
->len
)
1353 if (tcp_hdr(skb
)->fin
)
1355 BUG_TRAP(flags
& MSG_PEEK
);
1357 } while (skb
!= (struct sk_buff
*)&sk
->sk_receive_queue
);
1359 /* Well, if we have backlog, try to process it now yet. */
1361 if (copied
>= target
&& !sk
->sk_backlog
.tail
)
1366 sk
->sk_state
== TCP_CLOSE
||
1367 (sk
->sk_shutdown
& RCV_SHUTDOWN
) ||
1369 signal_pending(current
) ||
1373 if (sock_flag(sk
, SOCK_DONE
))
1377 copied
= sock_error(sk
);
1381 if (sk
->sk_shutdown
& RCV_SHUTDOWN
)
1384 if (sk
->sk_state
== TCP_CLOSE
) {
1385 if (!sock_flag(sk
, SOCK_DONE
)) {
1386 /* This occurs when user tries to read
1387 * from never connected socket.
1400 if (signal_pending(current
)) {
1401 copied
= sock_intr_errno(timeo
);
1406 tcp_cleanup_rbuf(sk
, copied
);
1408 if (!sysctl_tcp_low_latency
&& tp
->ucopy
.task
== user_recv
) {
1409 /* Install new reader */
1410 if (!user_recv
&& !(flags
& (MSG_TRUNC
| MSG_PEEK
))) {
1411 user_recv
= current
;
1412 tp
->ucopy
.task
= user_recv
;
1413 tp
->ucopy
.iov
= msg
->msg_iov
;
1416 tp
->ucopy
.len
= len
;
1418 BUG_TRAP(tp
->copied_seq
== tp
->rcv_nxt
||
1419 (flags
& (MSG_PEEK
| MSG_TRUNC
)));
1421 /* Ugly... If prequeue is not empty, we have to
1422 * process it before releasing socket, otherwise
1423 * order will be broken at second iteration.
1424 * More elegant solution is required!!!
1426 * Look: we have the following (pseudo)queues:
1428 * 1. packets in flight
1433 * Each queue can be processed only if the next ones
1434 * are empty. At this point we have empty receive_queue.
1435 * But prequeue _can_ be not empty after 2nd iteration,
1436 * when we jumped to start of loop because backlog
1437 * processing added something to receive_queue.
1438 * We cannot release_sock(), because backlog contains
1439 * packets arrived _after_ prequeued ones.
1441 * Shortly, algorithm is clear --- to process all
1442 * the queues in order. We could make it more directly,
1443 * requeueing packets from backlog to prequeue, if
1444 * is not empty. It is more elegant, but eats cycles,
1447 if (!skb_queue_empty(&tp
->ucopy
.prequeue
))
1450 /* __ Set realtime policy in scheduler __ */
1453 if (copied
>= target
) {
1454 /* Do not sleep, just process backlog. */
1458 sk_wait_data(sk
, &timeo
);
1460 #ifdef CONFIG_NET_DMA
1461 tp
->ucopy
.wakeup
= 0;
1467 /* __ Restore normal policy in scheduler __ */
1469 if ((chunk
= len
- tp
->ucopy
.len
) != 0) {
1470 NET_ADD_STATS_USER(LINUX_MIB_TCPDIRECTCOPYFROMBACKLOG
, chunk
);
1475 if (tp
->rcv_nxt
== tp
->copied_seq
&&
1476 !skb_queue_empty(&tp
->ucopy
.prequeue
)) {
1478 tcp_prequeue_process(sk
);
1480 if ((chunk
= len
- tp
->ucopy
.len
) != 0) {
1481 NET_ADD_STATS_USER(LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE
, chunk
);
1487 if ((flags
& MSG_PEEK
) && peek_seq
!= tp
->copied_seq
) {
1488 if (net_ratelimit())
1489 printk(KERN_DEBUG
"TCP(%s:%d): Application bug, race in MSG_PEEK.\n",
1490 current
->comm
, task_pid_nr(current
));
1491 peek_seq
= tp
->copied_seq
;
1496 /* Ok so how much can we use? */
1497 used
= skb
->len
- offset
;
1501 /* Do we have urgent data here? */
1503 u32 urg_offset
= tp
->urg_seq
- *seq
;
1504 if (urg_offset
< used
) {
1506 if (!sock_flag(sk
, SOCK_URGINLINE
)) {
1518 if (!(flags
& MSG_TRUNC
)) {
1519 #ifdef CONFIG_NET_DMA
1520 if (!tp
->ucopy
.dma_chan
&& tp
->ucopy
.pinned_list
)
1521 tp
->ucopy
.dma_chan
= get_softnet_dma();
1523 if (tp
->ucopy
.dma_chan
) {
1524 tp
->ucopy
.dma_cookie
= dma_skb_copy_datagram_iovec(
1525 tp
->ucopy
.dma_chan
, skb
, offset
,
1527 tp
->ucopy
.pinned_list
);
1529 if (tp
->ucopy
.dma_cookie
< 0) {
1531 printk(KERN_ALERT
"dma_cookie < 0\n");
1533 /* Exception. Bailout! */
1538 if ((offset
+ used
) == skb
->len
)
1544 err
= skb_copy_datagram_iovec(skb
, offset
,
1545 msg
->msg_iov
, used
);
1547 /* Exception. Bailout! */
1559 tcp_rcv_space_adjust(sk
);
1562 if (tp
->urg_data
&& after(tp
->copied_seq
, tp
->urg_seq
)) {
1564 tcp_fast_path_check(sk
);
1566 if (used
+ offset
< skb
->len
)
1569 if (tcp_hdr(skb
)->fin
)
1571 if (!(flags
& MSG_PEEK
)) {
1572 sk_eat_skb(sk
, skb
, copied_early
);
1578 /* Process the FIN. */
1580 if (!(flags
& MSG_PEEK
)) {
1581 sk_eat_skb(sk
, skb
, copied_early
);
1588 if (!skb_queue_empty(&tp
->ucopy
.prequeue
)) {
1591 tp
->ucopy
.len
= copied
> 0 ? len
: 0;
1593 tcp_prequeue_process(sk
);
1595 if (copied
> 0 && (chunk
= len
- tp
->ucopy
.len
) != 0) {
1596 NET_ADD_STATS_USER(LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE
, chunk
);
1602 tp
->ucopy
.task
= NULL
;
1606 #ifdef CONFIG_NET_DMA
1607 if (tp
->ucopy
.dma_chan
) {
1608 dma_cookie_t done
, used
;
1610 dma_async_memcpy_issue_pending(tp
->ucopy
.dma_chan
);
1612 while (dma_async_memcpy_complete(tp
->ucopy
.dma_chan
,
1613 tp
->ucopy
.dma_cookie
, &done
,
1614 &used
) == DMA_IN_PROGRESS
) {
1615 /* do partial cleanup of sk_async_wait_queue */
1616 while ((skb
= skb_peek(&sk
->sk_async_wait_queue
)) &&
1617 (dma_async_is_complete(skb
->dma_cookie
, done
,
1618 used
) == DMA_SUCCESS
)) {
1619 __skb_dequeue(&sk
->sk_async_wait_queue
);
1624 /* Safe to free early-copied skbs now */
1625 __skb_queue_purge(&sk
->sk_async_wait_queue
);
1626 dma_chan_put(tp
->ucopy
.dma_chan
);
1627 tp
->ucopy
.dma_chan
= NULL
;
1629 if (tp
->ucopy
.pinned_list
) {
1630 dma_unpin_iovec_pages(tp
->ucopy
.pinned_list
);
1631 tp
->ucopy
.pinned_list
= NULL
;
1635 /* According to UNIX98, msg_name/msg_namelen are ignored
1636 * on connected socket. I was just happy when found this 8) --ANK
1639 /* Clean up data we have read: This will do ACK frames. */
1640 tcp_cleanup_rbuf(sk
, copied
);
1642 TCP_CHECK_TIMER(sk
);
1647 TCP_CHECK_TIMER(sk
);
1652 err
= tcp_recv_urg(sk
, timeo
, msg
, len
, flags
, addr_len
);
/*
 *	State processing on a close. This implements the state shift for
 *	sending our FIN frame. Note that we only send a FIN for some
 *	states. A shutdown() may have already sent the FIN, or we may be
 */

static const unsigned char new_state[16] = {
  /* current state:        new state:      action:	*/
  /* (Invalid)		*/ TCP_CLOSE,
  /* TCP_ESTABLISHED	*/ TCP_FIN_WAIT1 | TCP_ACTION_FIN,
  /* TCP_SYN_SENT	*/ TCP_CLOSE,
  /* TCP_SYN_RECV	*/ TCP_FIN_WAIT1 | TCP_ACTION_FIN,
  /* TCP_FIN_WAIT1	*/ TCP_FIN_WAIT1,
  /* TCP_FIN_WAIT2	*/ TCP_FIN_WAIT2,
  /* TCP_TIME_WAIT	*/ TCP_CLOSE,
  /* TCP_CLOSE		*/ TCP_CLOSE,
  /* TCP_CLOSE_WAIT	*/ TCP_LAST_ACK  | TCP_ACTION_FIN,
  /* TCP_LAST_ACK	*/ TCP_LAST_ACK,
  /* TCP_LISTEN		*/ TCP_CLOSE,
  /* TCP_CLOSING	*/ TCP_CLOSING,
};

static int tcp_close_state(struct sock *sk)
{
	int next = (int)new_state[sk->sk_state];
	int ns = next & TCP_STATE_MASK;

	tcp_set_state(sk, ns);

	return next & TCP_ACTION_FIN;
}
/*
 *	Shutdown the sending side of a connection. Much like close except
 *	that we don't receive shut down or set_sock_flag(sk, SOCK_DEAD).
 */

void tcp_shutdown(struct sock *sk, int how)
{
	/*	We need to grab some memory, and put together a FIN,
	 *	and then put it into the queue to be sent.
	 *		Tim MacKenzie(tym@dibbler.cs.monash.edu.au) 4 Dec '92.
	 */
	if (!(how & SEND_SHUTDOWN))
		return;

	/* If we've already sent a FIN, or it's a closed state, skip this. */
	if ((1 << sk->sk_state) &
	    (TCPF_ESTABLISHED | TCPF_SYN_SENT |
	     TCPF_SYN_RECV | TCPF_CLOSE_WAIT)) {
		/* Clear out any half completed packets. FIN if needed. */
		if (tcp_close_state(sk))
			tcp_send_fin(sk);
	}
}
1713 void tcp_close(struct sock
*sk
, long timeout
)
1715 struct sk_buff
*skb
;
1716 int data_was_unread
= 0;
1720 sk
->sk_shutdown
= SHUTDOWN_MASK
;
1722 if (sk
->sk_state
== TCP_LISTEN
) {
1723 tcp_set_state(sk
, TCP_CLOSE
);
1726 inet_csk_listen_stop(sk
);
1728 goto adjudge_to_death
;
1731 /* We need to flush the recv. buffs. We do this only on the
1732 * descriptor close, not protocol-sourced closes, because the
1733 * reader process may not have drained the data yet!
1735 while ((skb
= __skb_dequeue(&sk
->sk_receive_queue
)) != NULL
) {
1736 u32 len
= TCP_SKB_CB(skb
)->end_seq
- TCP_SKB_CB(skb
)->seq
-
1738 data_was_unread
+= len
;
1744 /* As outlined in RFC 2525, section 2.17, we send a RST here because
1745 * data was lost. To witness the awful effects of the old behavior of
1746 * always doing a FIN, run an older 2.1.x kernel or 2.0.x, start a bulk
1747 * GET in an FTP client, suspend the process, wait for the client to
1748 * advertise a zero window, then kill -9 the FTP client, wheee...
1749 * Note: timeout is always zero in such a case.
1751 if (data_was_unread
) {
1752 /* Unread data was tossed, zap the connection. */
1753 NET_INC_STATS_USER(LINUX_MIB_TCPABORTONCLOSE
);
1754 tcp_set_state(sk
, TCP_CLOSE
);
1755 tcp_send_active_reset(sk
, GFP_KERNEL
);
1756 } else if (sock_flag(sk
, SOCK_LINGER
) && !sk
->sk_lingertime
) {
1757 /* Check zero linger _after_ checking for unread data. */
1758 sk
->sk_prot
->disconnect(sk
, 0);
1759 NET_INC_STATS_USER(LINUX_MIB_TCPABORTONDATA
);
1760 } else if (tcp_close_state(sk
)) {
1761 /* We FIN if the application ate all the data before
1762 * zapping the connection.
1765 /* RED-PEN. Formally speaking, we have broken TCP state
1766 * machine. State transitions:
1768 * TCP_ESTABLISHED -> TCP_FIN_WAIT1
1769 * TCP_SYN_RECV -> TCP_FIN_WAIT1 (forget it, it's impossible)
1770 * TCP_CLOSE_WAIT -> TCP_LAST_ACK
1772 * are legal only when FIN has been sent (i.e. in window),
1773 * rather than queued out of window. Purists blame.
1775 * F.e. "RFC state" is ESTABLISHED,
1776 * if Linux state is FIN-WAIT-1, but FIN is still not sent.
1778 * The visible declinations are that sometimes
1779 * we enter time-wait state, when it is not required really
1780 * (harmless), do not send active resets, when they are
1781 * required by specs (TCP_ESTABLISHED, TCP_CLOSE_WAIT, when
1782 * they look as CLOSING or LAST_ACK for Linux)
1783 * Probably, I missed some more holelets.
1789 sk_stream_wait_close(sk
, timeout
);
1792 state
= sk
->sk_state
;
1795 atomic_inc(sk
->sk_prot
->orphan_count
);
1797 /* It is the last release_sock in its life. It will remove backlog. */
1801 /* Now socket is owned by kernel and we acquire BH lock
1802 to finish close. No need to check for user refs.
1806 BUG_TRAP(!sock_owned_by_user(sk
));
1808 /* Have we already been destroyed by a softirq or backlog? */
1809 if (state
!= TCP_CLOSE
&& sk
->sk_state
== TCP_CLOSE
)
1812 /* This is a (useful) BSD violating of the RFC. There is a
1813 * problem with TCP as specified in that the other end could
1814 * keep a socket open forever with no application left this end.
1815 * We use a 3 minute timeout (about the same as BSD) then kill
1816 * our end. If they send after that then tough - BUT: long enough
1817 * that we won't make the old 4*rto = almost no time - whoops
1820 * Nope, it was not mistake. It is really desired behaviour
1821 * f.e. on http servers, when such sockets are useless, but
1822 * consume significant resources. Let's do it with special
1823 * linger2 option. --ANK
1826 if (sk
->sk_state
== TCP_FIN_WAIT2
) {
1827 struct tcp_sock
*tp
= tcp_sk(sk
);
1828 if (tp
->linger2
< 0) {
1829 tcp_set_state(sk
, TCP_CLOSE
);
1830 tcp_send_active_reset(sk
, GFP_ATOMIC
);
1831 NET_INC_STATS_BH(LINUX_MIB_TCPABORTONLINGER
);
1833 const int tmo
= tcp_fin_time(sk
);
1835 if (tmo
> TCP_TIMEWAIT_LEN
) {
1836 inet_csk_reset_keepalive_timer(sk
,
1837 tmo
- TCP_TIMEWAIT_LEN
);
1839 tcp_time_wait(sk
, TCP_FIN_WAIT2
, tmo
);
1844 if (sk
->sk_state
!= TCP_CLOSE
) {
1846 if (tcp_too_many_orphans(sk
,
1847 atomic_read(sk
->sk_prot
->orphan_count
))) {
1848 if (net_ratelimit())
1849 printk(KERN_INFO
"TCP: too many of orphaned "
1851 tcp_set_state(sk
, TCP_CLOSE
);
1852 tcp_send_active_reset(sk
, GFP_ATOMIC
);
1853 NET_INC_STATS_BH(LINUX_MIB_TCPABORTONMEMORY
);
1857 if (sk
->sk_state
== TCP_CLOSE
)
1858 inet_csk_destroy_sock(sk
);
1859 /* Otherwise, socket is reprieved until protocol close. */
/* These states need RST on ABORT according to RFC793 */

static inline int tcp_need_reset(int state)
{
	return (1 << state) &
	       (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT | TCPF_FIN_WAIT1 |
		TCPF_FIN_WAIT2 | TCPF_SYN_RECV);
}
int tcp_disconnect(struct sock *sk, int flags)
{
	struct inet_sock *inet = inet_sk(sk);
	struct inet_connection_sock *icsk = inet_csk(sk);
	struct tcp_sock *tp = tcp_sk(sk);
	int old_state = sk->sk_state;

	if (old_state != TCP_CLOSE)
		tcp_set_state(sk, TCP_CLOSE);

	/* ABORT function of RFC793 */
	if (old_state == TCP_LISTEN) {
		inet_csk_listen_stop(sk);
	} else if (tcp_need_reset(old_state) ||
		   (tp->snd_nxt != tp->write_seq &&
		    (1 << old_state) & (TCPF_CLOSING | TCPF_LAST_ACK))) {
		/* The last check adjusts for discrepancy of Linux wrt. RFC
		 */
		tcp_send_active_reset(sk, gfp_any());
		sk->sk_err = ECONNRESET;
	} else if (old_state == TCP_SYN_SENT)
		sk->sk_err = ECONNRESET;

	tcp_clear_xmit_timers(sk);
	__skb_queue_purge(&sk->sk_receive_queue);
	tcp_write_queue_purge(sk);
	__skb_queue_purge(&tp->out_of_order_queue);
#ifdef CONFIG_NET_DMA
	__skb_queue_purge(&sk->sk_async_wait_queue);
#endif

	if (!(sk->sk_userlocks & SOCK_BINDADDR_LOCK))
		inet_reset_saddr(sk);

	sk->sk_shutdown = 0;
	sock_reset_flag(sk, SOCK_DONE);

	if ((tp->write_seq += tp->max_window + 2) == 0)

	icsk->icsk_backoff = 0;
	icsk->icsk_probes_out = 0;
	tp->packets_out = 0;
	tp->snd_ssthresh = 0x7fffffff;
	tp->snd_cwnd_cnt = 0;
	tp->bytes_acked = 0;
	tcp_set_ca_state(sk, TCP_CA_Open);
	tcp_clear_retrans(tp);
	inet_csk_delack_init(sk);
	tcp_init_send_head(sk);
	memset(&tp->rx_opt, 0, sizeof(tp->rx_opt));

	BUG_TRAP(!inet->num || icsk->icsk_bind_hash);

	sk->sk_error_report(sk);
1940 * Socket option code for TCP.
1942 static int do_tcp_setsockopt(struct sock
*sk
, int level
,
1943 int optname
, char __user
*optval
, int optlen
)
1945 struct tcp_sock
*tp
= tcp_sk(sk
);
1946 struct inet_connection_sock
*icsk
= inet_csk(sk
);
1950 /* This is a string value all the others are int's */
1951 if (optname
== TCP_CONGESTION
) {
1952 char name
[TCP_CA_NAME_MAX
];
1957 val
= strncpy_from_user(name
, optval
,
1958 min(TCP_CA_NAME_MAX
-1, optlen
));
1964 err
= tcp_set_congestion_control(sk
, name
);
1969 if (optlen
< sizeof(int))
1972 if (get_user(val
, (int __user
*)optval
))
1979 /* Values greater than interface MTU won't take effect. However
1980 * at the point when this call is done we typically don't yet
1981 * know which interface is going to be used */
1982 if (val
< 8 || val
> MAX_TCP_WINDOW
) {
1986 tp
->rx_opt
.user_mss
= val
;
1991 /* TCP_NODELAY is weaker than TCP_CORK, so that
1992 * this option on corked socket is remembered, but
1993 * it is not activated until cork is cleared.
1995 * However, when TCP_NODELAY is set we make
1996 * an explicit push, which overrides even TCP_CORK
1997 * for currently queued segments.
1999 tp
->nonagle
|= TCP_NAGLE_OFF
|TCP_NAGLE_PUSH
;
2000 tcp_push_pending_frames(sk
);
2002 tp
->nonagle
&= ~TCP_NAGLE_OFF
;
2007 /* When set indicates to always queue non-full frames.
2008 * Later the user clears this option and we transmit
2009 * any pending partial frames in the queue. This is
2010 * meant to be used alongside sendfile() to get properly
2011 * filled frames when the user (for example) must write
2012 * out headers with a write() call first and then use
2013 * sendfile to send out the data parts.
2015 * TCP_CORK can be set together with TCP_NODELAY and it is
2016 * stronger than TCP_NODELAY.
2019 tp
->nonagle
|= TCP_NAGLE_CORK
;
2021 tp
->nonagle
&= ~TCP_NAGLE_CORK
;
2022 if (tp
->nonagle
&TCP_NAGLE_OFF
)
2023 tp
->nonagle
|= TCP_NAGLE_PUSH
;
2024 tcp_push_pending_frames(sk
);
2029 if (val
< 1 || val
> MAX_TCP_KEEPIDLE
)
2032 tp
->keepalive_time
= val
* HZ
;
2033 if (sock_flag(sk
, SOCK_KEEPOPEN
) &&
2034 !((1 << sk
->sk_state
) &
2035 (TCPF_CLOSE
| TCPF_LISTEN
))) {
2036 __u32 elapsed
= tcp_time_stamp
- tp
->rcv_tstamp
;
2037 if (tp
->keepalive_time
> elapsed
)
2038 elapsed
= tp
->keepalive_time
- elapsed
;
2041 inet_csk_reset_keepalive_timer(sk
, elapsed
);
2046 if (val
< 1 || val
> MAX_TCP_KEEPINTVL
)
2049 tp
->keepalive_intvl
= val
* HZ
;
2052 if (val
< 1 || val
> MAX_TCP_KEEPCNT
)
2055 tp
->keepalive_probes
= val
;
2058 if (val
< 1 || val
> MAX_TCP_SYNCNT
)
2061 icsk
->icsk_syn_retries
= val
;
2067 else if (val
> sysctl_tcp_fin_timeout
/ HZ
)
2070 tp
->linger2
= val
* HZ
;
2073 case TCP_DEFER_ACCEPT
:
2074 icsk
->icsk_accept_queue
.rskq_defer_accept
= 0;
2076 /* Translate value in seconds to number of
2078 while (icsk
->icsk_accept_queue
.rskq_defer_accept
< 32 &&
2079 val
> ((TCP_TIMEOUT_INIT
/ HZ
) <<
2080 icsk
->icsk_accept_queue
.rskq_defer_accept
))
2081 icsk
->icsk_accept_queue
.rskq_defer_accept
++;
2082 icsk
->icsk_accept_queue
.rskq_defer_accept
++;
2086 case TCP_WINDOW_CLAMP
:
2088 if (sk
->sk_state
!= TCP_CLOSE
) {
2092 tp
->window_clamp
= 0;
2094 tp
->window_clamp
= val
< SOCK_MIN_RCVBUF
/ 2 ?
2095 SOCK_MIN_RCVBUF
/ 2 : val
;
2100 icsk
->icsk_ack
.pingpong
= 1;
2102 icsk
->icsk_ack
.pingpong
= 0;
2103 if ((1 << sk
->sk_state
) &
2104 (TCPF_ESTABLISHED
| TCPF_CLOSE_WAIT
) &&
2105 inet_csk_ack_scheduled(sk
)) {
2106 icsk
->icsk_ack
.pending
|= ICSK_ACK_PUSHED
;
2107 tcp_cleanup_rbuf(sk
, 1);
2109 icsk
->icsk_ack
.pingpong
= 1;
2114 #ifdef CONFIG_TCP_MD5SIG
2116 /* Read the IP->Key mappings from userspace */
2117 err
= tp
->af_specific
->md5_parse(sk
, optval
, optlen
);
2130 int tcp_setsockopt(struct sock
*sk
, int level
, int optname
, char __user
*optval
,
2133 struct inet_connection_sock
*icsk
= inet_csk(sk
);
2135 if (level
!= SOL_TCP
)
2136 return icsk
->icsk_af_ops
->setsockopt(sk
, level
, optname
,
2138 return do_tcp_setsockopt(sk
, level
, optname
, optval
, optlen
);
2141 #ifdef CONFIG_COMPAT
2142 int compat_tcp_setsockopt(struct sock
*sk
, int level
, int optname
,
2143 char __user
*optval
, int optlen
)
2145 if (level
!= SOL_TCP
)
2146 return inet_csk_compat_setsockopt(sk
, level
, optname
,
2148 return do_tcp_setsockopt(sk
, level
, optname
, optval
, optlen
);
2151 EXPORT_SYMBOL(compat_tcp_setsockopt
);
2154 /* Return information about state of tcp endpoint in API format. */
2155 void tcp_get_info(struct sock
*sk
, struct tcp_info
*info
)
2157 struct tcp_sock
*tp
= tcp_sk(sk
);
2158 const struct inet_connection_sock
*icsk
= inet_csk(sk
);
2159 u32 now
= tcp_time_stamp
;
2161 memset(info
, 0, sizeof(*info
));
2163 info
->tcpi_state
= sk
->sk_state
;
2164 info
->tcpi_ca_state
= icsk
->icsk_ca_state
;
2165 info
->tcpi_retransmits
= icsk
->icsk_retransmits
;
2166 info
->tcpi_probes
= icsk
->icsk_probes_out
;
2167 info
->tcpi_backoff
= icsk
->icsk_backoff
;
2169 if (tp
->rx_opt
.tstamp_ok
)
2170 info
->tcpi_options
|= TCPI_OPT_TIMESTAMPS
;
2171 if (tcp_is_sack(tp
))
2172 info
->tcpi_options
|= TCPI_OPT_SACK
;
2173 if (tp
->rx_opt
.wscale_ok
) {
2174 info
->tcpi_options
|= TCPI_OPT_WSCALE
;
2175 info
->tcpi_snd_wscale
= tp
->rx_opt
.snd_wscale
;
2176 info
->tcpi_rcv_wscale
= tp
->rx_opt
.rcv_wscale
;
2179 if (tp
->ecn_flags
&TCP_ECN_OK
)
2180 info
->tcpi_options
|= TCPI_OPT_ECN
;
2182 info
->tcpi_rto
= jiffies_to_usecs(icsk
->icsk_rto
);
2183 info
->tcpi_ato
= jiffies_to_usecs(icsk
->icsk_ack
.ato
);
2184 info
->tcpi_snd_mss
= tp
->mss_cache
;
2185 info
->tcpi_rcv_mss
= icsk
->icsk_ack
.rcv_mss
;
2187 if (sk
->sk_state
== TCP_LISTEN
) {
2188 info
->tcpi_unacked
= sk
->sk_ack_backlog
;
2189 info
->tcpi_sacked
= sk
->sk_max_ack_backlog
;
2191 info
->tcpi_unacked
= tp
->packets_out
;
2192 info
->tcpi_sacked
= tp
->sacked_out
;
2194 info
->tcpi_lost
= tp
->lost_out
;
2195 info
->tcpi_retrans
= tp
->retrans_out
;
2196 info
->tcpi_fackets
= tp
->fackets_out
;
2198 info
->tcpi_last_data_sent
= jiffies_to_msecs(now
- tp
->lsndtime
);
2199 info
->tcpi_last_data_recv
= jiffies_to_msecs(now
- icsk
->icsk_ack
.lrcvtime
);
2200 info
->tcpi_last_ack_recv
= jiffies_to_msecs(now
- tp
->rcv_tstamp
);
2202 info
->tcpi_pmtu
= icsk
->icsk_pmtu_cookie
;
2203 info
->tcpi_rcv_ssthresh
= tp
->rcv_ssthresh
;
2204 info
->tcpi_rtt
= jiffies_to_usecs(tp
->srtt
)>>3;
2205 info
->tcpi_rttvar
= jiffies_to_usecs(tp
->mdev
)>>2;
2206 info
->tcpi_snd_ssthresh
= tp
->snd_ssthresh
;
2207 info
->tcpi_snd_cwnd
= tp
->snd_cwnd
;
2208 info
->tcpi_advmss
= tp
->advmss
;
2209 info
->tcpi_reordering
= tp
->reordering
;
2211 info
->tcpi_rcv_rtt
= jiffies_to_usecs(tp
->rcv_rtt_est
.rtt
)>>3;
2212 info
->tcpi_rcv_space
= tp
->rcvq_space
.space
;
2214 info
->tcpi_total_retrans
= tp
->total_retrans
;
2217 EXPORT_SYMBOL_GPL(tcp_get_info
);
2219 static int do_tcp_getsockopt(struct sock
*sk
, int level
,
2220 int optname
, char __user
*optval
, int __user
*optlen
)
2222 struct inet_connection_sock
*icsk
= inet_csk(sk
);
2223 struct tcp_sock
*tp
= tcp_sk(sk
);
2226 if (get_user(len
, optlen
))
2229 len
= min_t(unsigned int, len
, sizeof(int));
2236 val
= tp
->mss_cache
;
2237 if (!val
&& ((1 << sk
->sk_state
) & (TCPF_CLOSE
| TCPF_LISTEN
)))
2238 val
= tp
->rx_opt
.user_mss
;
2241 val
= !!(tp
->nonagle
&TCP_NAGLE_OFF
);
2244 val
= !!(tp
->nonagle
&TCP_NAGLE_CORK
);
2247 val
= (tp
->keepalive_time
? : sysctl_tcp_keepalive_time
) / HZ
;
2250 val
= (tp
->keepalive_intvl
? : sysctl_tcp_keepalive_intvl
) / HZ
;
2253 val
= tp
->keepalive_probes
? : sysctl_tcp_keepalive_probes
;
2256 val
= icsk
->icsk_syn_retries
? : sysctl_tcp_syn_retries
;
2261 val
= (val
? : sysctl_tcp_fin_timeout
) / HZ
;
2263 case TCP_DEFER_ACCEPT
:
2264 val
= !icsk
->icsk_accept_queue
.rskq_defer_accept
? 0 :
2265 ((TCP_TIMEOUT_INIT
/ HZ
) << (icsk
->icsk_accept_queue
.rskq_defer_accept
- 1));
2267 case TCP_WINDOW_CLAMP
:
2268 val
= tp
->window_clamp
;
2271 struct tcp_info info
;
2273 if (get_user(len
, optlen
))
2276 tcp_get_info(sk
, &info
);
2278 len
= min_t(unsigned int, len
, sizeof(info
));
2279 if (put_user(len
, optlen
))
2281 if (copy_to_user(optval
, &info
, len
))
2286 val
= !icsk
->icsk_ack
.pingpong
;
2289 case TCP_CONGESTION
:
2290 if (get_user(len
, optlen
))
2292 len
= min_t(unsigned int, len
, TCP_CA_NAME_MAX
);
2293 if (put_user(len
, optlen
))
2295 if (copy_to_user(optval
, icsk
->icsk_ca_ops
->name
, len
))
2299 return -ENOPROTOOPT
;
2302 if (put_user(len
, optlen
))
2304 if (copy_to_user(optval
, &val
, len
))
2309 int tcp_getsockopt(struct sock
*sk
, int level
, int optname
, char __user
*optval
,
2312 struct inet_connection_sock
*icsk
= inet_csk(sk
);
2314 if (level
!= SOL_TCP
)
2315 return icsk
->icsk_af_ops
->getsockopt(sk
, level
, optname
,
2317 return do_tcp_getsockopt(sk
, level
, optname
, optval
, optlen
);
2320 #ifdef CONFIG_COMPAT
2321 int compat_tcp_getsockopt(struct sock
*sk
, int level
, int optname
,
2322 char __user
*optval
, int __user
*optlen
)
2324 if (level
!= SOL_TCP
)
2325 return inet_csk_compat_getsockopt(sk
, level
, optname
,
2327 return do_tcp_getsockopt(sk
, level
, optname
, optval
, optlen
);
2330 EXPORT_SYMBOL(compat_tcp_getsockopt
);
struct sk_buff *tcp_tso_segment(struct sk_buff *skb, int features)
{
	struct sk_buff *segs = ERR_PTR(-EINVAL);
	struct tcphdr *th;
	unsigned thlen;
	unsigned int seq;
	__be32 delta;
	unsigned int oldlen;
	unsigned int len;

	if (!pskb_may_pull(skb, sizeof(*th)))
		goto out;

	th = tcp_hdr(skb);
	thlen = th->doff * 4;
	if (thlen < sizeof(*th))
		goto out;

	if (!pskb_may_pull(skb, thlen))
		goto out;

	oldlen = (u16)~skb->len;
	__skb_pull(skb, thlen);

	if (skb_gso_ok(skb, features | NETIF_F_GSO_ROBUST)) {
		/* Packet is from an untrusted source, reset gso_segs. */
		int type = skb_shinfo(skb)->gso_type;
		int mss;

		if (unlikely(type &
			     ~(SKB_GSO_TCPV4 |
			       SKB_GSO_DODGY |
			       SKB_GSO_TCP_ECN |
			       SKB_GSO_TCPV6 |
			       0) ||
			     !(type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))))
			goto out;

		mss = skb_shinfo(skb)->gso_size;
		skb_shinfo(skb)->gso_segs = DIV_ROUND_UP(skb->len, mss);

		segs = NULL;
		goto out;
	}

	segs = skb_segment(skb, features);
	if (IS_ERR(segs))
		goto out;

	len = skb_shinfo(skb)->gso_size;
	delta = htonl(oldlen + (thlen + len));

	skb = segs;
	th = tcp_hdr(skb);
	seq = ntohl(th->seq);

	do {
		th->fin = th->psh = 0;

		th->check = ~csum_fold((__force __wsum)((__force u32)th->check +
				       (__force u32)delta));
		if (skb->ip_summed != CHECKSUM_PARTIAL)
			th->check =
			     csum_fold(csum_partial(skb_transport_header(skb),
						    thlen, skb->csum));

		seq += len;
		skb = skb->next;
		th = tcp_hdr(skb);

		th->seq = htonl(seq);
		th->cwr = 0;
	} while (skb->next);

	delta = htonl(oldlen + (skb->tail - skb->transport_header) +
		      skb->csum);
	th->check = ~csum_fold((__force __wsum)((__force u32)th->check +
				(__force u32)delta));
	if (skb->ip_summed != CHECKSUM_PARTIAL)
		th->check = csum_fold(csum_partial(skb_transport_header(skb),
						   thlen, skb->csum));

out:
	return segs;
}

EXPORT_SYMBOL(tcp_tso_segment);
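/* TCP MD5 signatures (RFC 2385, used mainly to protect BGP sessions) need
 * an MD5 transform per CPU so that softirq processing on different CPUs
 * never contends on a single crypto context.  The pool below is allocated
 * on first use and reference-counted through tcp_md5sig_users.
 */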
#ifdef CONFIG_TCP_MD5SIG
static unsigned long tcp_md5sig_users;
static struct tcp_md5sig_pool **tcp_md5sig_pool;
static DEFINE_SPINLOCK(tcp_md5sig_pool_lock);

static void __tcp_free_md5sig_pool(struct tcp_md5sig_pool **pool)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct tcp_md5sig_pool *p = *per_cpu_ptr(pool, cpu);
		if (p) {
			if (p->md5_desc.tfm)
				crypto_free_hash(p->md5_desc.tfm);
			kfree(p);
		}
	}
	free_percpu(pool);
}
void tcp_free_md5sig_pool(void)
{
	struct tcp_md5sig_pool **pool = NULL;

	spin_lock_bh(&tcp_md5sig_pool_lock);
	if (--tcp_md5sig_users == 0) {
		pool = tcp_md5sig_pool;
		tcp_md5sig_pool = NULL;
	}
	spin_unlock_bh(&tcp_md5sig_pool_lock);
	if (pool)
		__tcp_free_md5sig_pool(pool);
}

EXPORT_SYMBOL(tcp_free_md5sig_pool);
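/* Allocate one tcp_md5sig_pool per possible CPU, each with its own "md5"
 * hash transform.  On any failure the partially built pool is torn down
 * again via __tcp_free_md5sig_pool().
 */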
static struct tcp_md5sig_pool **__tcp_alloc_md5sig_pool(void)
{
	int cpu;
	struct tcp_md5sig_pool **pool;

	pool = alloc_percpu(struct tcp_md5sig_pool *);
	if (!pool)
		return NULL;

	for_each_possible_cpu(cpu) {
		struct tcp_md5sig_pool *p;
		struct crypto_hash *hash;

		p = kzalloc(sizeof(*p), GFP_KERNEL);
		if (!p)
			goto out_free;
		*per_cpu_ptr(pool, cpu) = p;

		hash = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC);
		if (!hash || IS_ERR(hash))
			goto out_free;

		p->md5_desc.tfm = hash;
	}
	return pool;
out_free:
	__tcp_free_md5sig_pool(pool);
	return NULL;
}
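/* Take a reference on the global pool, building it on first use.  The
 * allocation itself may sleep, so the spinlock is dropped around
 * __tcp_alloc_md5sig_pool() and the global pointer is re-checked afterwards
 * in case another caller installed a pool in the meantime.
 */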
struct tcp_md5sig_pool **tcp_alloc_md5sig_pool(void)
{
	struct tcp_md5sig_pool **pool;
	int alloc = 0;

retry:
	spin_lock_bh(&tcp_md5sig_pool_lock);
	pool = tcp_md5sig_pool;
	if (tcp_md5sig_users++ == 0) {
		alloc = 1;
		spin_unlock_bh(&tcp_md5sig_pool_lock);
	} else if (!pool) {
		tcp_md5sig_users--;
		spin_unlock_bh(&tcp_md5sig_pool_lock);
		cpu_relax();
		goto retry;
	} else
		spin_unlock_bh(&tcp_md5sig_pool_lock);

	if (alloc) {
		/* we cannot hold spinlock here because this may sleep. */
		struct tcp_md5sig_pool **p = __tcp_alloc_md5sig_pool();
		spin_lock_bh(&tcp_md5sig_pool_lock);
		if (!p) {
			tcp_md5sig_users--;
			spin_unlock_bh(&tcp_md5sig_pool_lock);
			return NULL;
		}
		pool = tcp_md5sig_pool;
		if (pool) {
			/* oops, it has already been assigned. */
			spin_unlock_bh(&tcp_md5sig_pool_lock);
			__tcp_free_md5sig_pool(p);
		} else {
			tcp_md5sig_pool = pool = p;
			spin_unlock_bh(&tcp_md5sig_pool_lock);
		}
	}
	return pool;
}

EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
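/* Per-CPU accessors used on the packet paths: __tcp_get_md5sig_pool()
 * returns this CPU's pool entry (taking a user reference), and
 * __tcp_put_md5sig_pool() drops that reference again.
 */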
struct tcp_md5sig_pool *__tcp_get_md5sig_pool(int cpu)
{
	struct tcp_md5sig_pool **p;

	spin_lock_bh(&tcp_md5sig_pool_lock);
	p = tcp_md5sig_pool;
	if (p)
		tcp_md5sig_users++;
	spin_unlock_bh(&tcp_md5sig_pool_lock);
	return (p ? *per_cpu_ptr(p, cpu) : NULL);
}

EXPORT_SYMBOL(__tcp_get_md5sig_pool);

void __tcp_put_md5sig_pool(void)
{
	tcp_free_md5sig_pool();
}

EXPORT_SYMBOL(__tcp_put_md5sig_pool);
#endif
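/* tcp_done() is the common "this connection is finished" path: it counts a
 * failed connection attempt if we never left the SYN states, moves the
 * socket to TCP_CLOSE, stops the retransmit/keepalive timers and either
 * wakes the owning process or, for orphaned sockets, destroys the socket
 * immediately.
 */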
void tcp_done(struct sock *sk)
{
	if (sk->sk_state == TCP_SYN_SENT || sk->sk_state == TCP_SYN_RECV)
		TCP_INC_STATS_BH(TCP_MIB_ATTEMPTFAILS);

	tcp_set_state(sk, TCP_CLOSE);
	tcp_clear_xmit_timers(sk);

	sk->sk_shutdown = SHUTDOWN_MASK;

	if (!sock_flag(sk, SOCK_DEAD))
		sk->sk_state_change(sk);
	else
		inet_csk_destroy_sock(sk);
}
EXPORT_SYMBOL_GPL(tcp_done);
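/* "thash_entries=" is a boot-time parameter overriding the size of the
 * established-connection hash table allocated in tcp_init() below.  For
 * example, booting with thash_entries=131072 would request on the order of
 * 128K hash entries (illustrative value only; the final size is rounded by
 * alloc_large_system_hash()).
 */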
extern struct tcp_congestion_ops tcp_reno;

static __initdata unsigned long thash_entries;
static int __init set_thash_entries(char *str)
{
	if (!str)
		return 0;
	thash_entries = simple_strtoul(str, &str, 0);
	return 1;
}
__setup("thash_entries=", set_thash_entries);
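/* tcp_init() runs once at boot: it creates the bind-bucket slab cache,
 * sizes and allocates the established and bind hash tables (scaled by
 * available memory, or by thash_entries= if given), derives the global and
 * per-socket memory limits, and registers Reno as the fallback congestion
 * control.
 */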
void __init tcp_init(void)
{
	struct sk_buff *skb = NULL;
	unsigned long limit;
	int order, i, max_share;

	BUILD_BUG_ON(sizeof(struct tcp_skb_cb) > sizeof(skb->cb));

	tcp_hashinfo.bind_bucket_cachep =
		kmem_cache_create("tcp_bind_bucket",
				  sizeof(struct inet_bind_bucket), 0,
				  SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);

	/* Size and allocate the main established and bind bucket
	 * hash tables.
	 *
	 * The methodology is similar to that of the buffer cache.
	 */
	tcp_hashinfo.ehash =
		alloc_large_system_hash("TCP established",
					sizeof(struct inet_ehash_bucket),
					thash_entries,
					(num_physpages >= 128 * 1024) ?
					13 : 15,
					0,
					&tcp_hashinfo.ehash_size,
					NULL,
					thash_entries ? 0 : 512 * 1024);
	tcp_hashinfo.ehash_size = 1 << tcp_hashinfo.ehash_size;
	for (i = 0; i < tcp_hashinfo.ehash_size; i++) {
		INIT_HLIST_HEAD(&tcp_hashinfo.ehash[i].chain);
		INIT_HLIST_HEAD(&tcp_hashinfo.ehash[i].twchain);
	}
	if (inet_ehash_locks_alloc(&tcp_hashinfo))
		panic("TCP: failed to alloc ehash_locks");
	tcp_hashinfo.bhash =
		alloc_large_system_hash("TCP bind",
					sizeof(struct inet_bind_hashbucket),
					tcp_hashinfo.ehash_size,
					(num_physpages >= 128 * 1024) ?
					13 : 15,
					0,
					&tcp_hashinfo.bhash_size,
					NULL,
					64 * 1024);
	tcp_hashinfo.bhash_size = 1 << tcp_hashinfo.bhash_size;
	for (i = 0; i < tcp_hashinfo.bhash_size; i++) {
		spin_lock_init(&tcp_hashinfo.bhash[i].lock);
		INIT_HLIST_HEAD(&tcp_hashinfo.bhash[i].chain);
	}

	/* Try to be a bit smarter and adjust defaults depending
	 * on available memory.
	 */
	for (order = 0; ((1 << order) << PAGE_SHIFT) <
			(tcp_hashinfo.bhash_size * sizeof(struct inet_bind_hashbucket));
			order++)
		;
	if (order >= 4) {
		tcp_death_row.sysctl_max_tw_buckets = 180000;
		sysctl_tcp_max_orphans = 4096 << (order - 4);
		sysctl_max_syn_backlog = 1024;
	} else if (order < 3) {
		tcp_death_row.sysctl_max_tw_buckets >>= (3 - order);
		sysctl_tcp_max_orphans >>= (3 - order);
		sysctl_max_syn_backlog = 128;
	}

	/* Set the pressure threshold to be a fraction of global memory that
	 * is up to 1/2 at 256 MB, decreasing toward zero with the amount of
	 * memory, with a floor of 128 pages.
	 */
	limit = min(nr_all_pages, 1UL<<(28-PAGE_SHIFT)) >> (20-PAGE_SHIFT);
	limit = (limit * (nr_all_pages >> (20-PAGE_SHIFT))) >> (PAGE_SHIFT-11);
	limit = max(limit, 128UL);
	sysctl_tcp_mem[0] = limit / 4 * 3;
	sysctl_tcp_mem[1] = limit;
	sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 2;

	/* Set per-socket limits to no more than 1/128 the pressure threshold */
	limit = ((unsigned long)sysctl_tcp_mem[1]) << (PAGE_SHIFT - 7);
	max_share = min(4UL*1024*1024, limit);

	sysctl_tcp_wmem[0] = SK_MEM_QUANTUM;
	sysctl_tcp_wmem[1] = 16*1024;
	sysctl_tcp_wmem[2] = max(64*1024, max_share);

	sysctl_tcp_rmem[0] = SK_MEM_QUANTUM;
	sysctl_tcp_rmem[1] = 87380;
	sysctl_tcp_rmem[2] = max(87380, max_share);

	printk(KERN_INFO "TCP: Hash tables configured "
	       "(established %d bind %d)\n",
	       tcp_hashinfo.ehash_size, tcp_hashinfo.bhash_size);

	tcp_register_congestion_control(&tcp_reno);
}
EXPORT_SYMBOL(tcp_close);
EXPORT_SYMBOL(tcp_disconnect);
EXPORT_SYMBOL(tcp_getsockopt);
EXPORT_SYMBOL(tcp_ioctl);
EXPORT_SYMBOL(tcp_poll);
EXPORT_SYMBOL(tcp_read_sock);
EXPORT_SYMBOL(tcp_recvmsg);
EXPORT_SYMBOL(tcp_sendmsg);
EXPORT_SYMBOL(tcp_splice_read);
EXPORT_SYMBOL(tcp_sendpage);
EXPORT_SYMBOL(tcp_setsockopt);
EXPORT_SYMBOL(tcp_shutdown);
EXPORT_SYMBOL(tcp_statistics);