
/kernel-2.6/337-net-recvmsg-MSG_PEEK.patch

http://wl500g.googlecode.com/
Possible License(s): GPL-2.0
Subject: [PATCH] tcp: Fix recvmsg MSG_PEEK influence of blocking behavior.
From 518a09ef11f8454f4676125d47c3e775b300c6a5
From: David S. Miller <davem@davemloft.net>
Date: Wed, 5 Nov 2008 03:36:01 -0800

Vito Caputo noticed that tcp_recvmsg() returns immediately from
partial reads when MSG_PEEK is used.  In particular, this means that
SO_RCVLOWAT is not respected.

Simply remove the test.  And this matches the behavior of several
other systems, including BSD.

Signed-off-by: David S. Miller <davem@davemloft.net>
---
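
[Note: the following is an illustrative userspace sketch, not part of the patch. It shows the behavior the change above is about: with the MSG_PEEK short-circuit removed, a blocking recv() with MSG_PEEK honours SO_RCVLOWAT and waits for the low-water mark instead of returning a shorter partial read. The helper name peek_at_least() and the socket fd are assumed for illustration.]

/* Illustrative only, not kernel code.  'fd' is assumed to be a
 * connected, blocking TCP socket set up elsewhere. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

static ssize_t peek_at_least(int fd, char *buf, size_t len, int lowat)
{
	/* Ask not to be woken up for fewer than 'lowat' bytes. */
	if (setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat)) < 0)
		perror("setsockopt(SO_RCVLOWAT)");

	/* Peek without consuming; with the patch applied this blocks
	 * until at least 'lowat' bytes are queued, matching BSD. */
	return recv(fd, buf, len, MSG_PEEK);
}
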
Subject: [PATCH] tcp: fix MSG_PEEK race check
From 775273131810caa41dfc7f9e552ea5d8508caf40
From: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Date: Sun, 10 May 2009 20:32:34 +0000

Commit 518a09ef11 (tcp: Fix recvmsg MSG_PEEK influence of
blocking behavior) lets the loop run longer than the race check
did previously expect, so we need to be more careful with this
check and consider the work we have been doing.

I tried my best to deal with urg hole madness too which happens
here:
	if (!sock_flag(sk, SOCK_URGINLINE)) {
		++*seq;
		...
by using additional offset by one but I certainly have very
little interest in testing that part.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Tested-by: Frans Pop <elendil@planet.nl>
Tested-by: Ian Zimmermann <itz@buug.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1321,6 +1321,7 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 	long timeo;
 	struct task_struct *user_recv = NULL;
 	int copied_early = 0;
+	u32 urg_hole = 0;
 
 	lock_sock(sk);
 
@@ -1374,8 +1374,7 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
 			    sk->sk_state == TCP_CLOSE ||
 			    (sk->sk_shutdown & RCV_SHUTDOWN) ||
 			    !timeo ||
-			    signal_pending(current) ||
-			    (flags & MSG_PEEK))
+			    signal_pending(current))
 				break;
 		} else {
 			if (sock_flag(sk, SOCK_DONE))
@@ -1532,7 +1533,8 @@ do_prequeue:
 				}
 			}
 		}
-		if ((flags & MSG_PEEK) && peek_seq != tp->copied_seq) {
+		if ((flags & MSG_PEEK) &&
+		    (peek_seq - copied - urg_hole != tp->copied_seq)) {
 			if (net_ratelimit())
 				printk(KERN_DEBUG "TCP(%s:%d): Application bug, race in MSG_PEEK.\n",
 				       current->comm, current->pid);
@@ -1553,6 +1555,7 @@ do_prequeue:
 				if (!urg_offset) {
 					if (!sock_flag(sk, SOCK_URGINLINE)) {
 						++*seq;
+						urg_hole++;
 						offset++;
 						used--;
 						if (!used)
-- 
1.7.1
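
[Note: a standalone sketch of the adjusted race check, not kernel source; the names mirror the variables in the hunks above. While peeking, peek_seq advances for every byte copied in this call and for every urgent byte skipped when SOCK_URGINLINE is off (counted in urg_hole), whereas tp->copied_seq only moves if some other reader consumes data concurrently. Subtracting this call's own progress from peek_seq should therefore land back on copied_seq; if it does not, another reader raced with the peek and the kernel prints the "Application bug" warning.]

#include <stdbool.h>
#include <stdint.h>

/* Sketch of the check introduced by the second patch; not kernel code. */
static bool peek_raced_with_other_reader(uint32_t peek_seq, uint32_t copied,
					 uint32_t urg_hole, uint32_t copied_seq)
{
	/* Unsigned 32-bit arithmetic wraps the same way TCP sequence
	 * numbers do, so the subtraction stays valid across wraparound. */
	return (peek_seq - copied - urg_hole) != copied_seq;
}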