  1. .\" Copyright (c) 1983, 1986, 1993
  2. .\" The Regents of the University of California. All rights reserved.
  3. .\"
  4. .\" Redistribution and use in source and binary forms, with or without
  5. .\" modification, are permitted provided that the following conditions
  6. .\" are met:
  7. .\" 1. Redistributions of source code must retain the above copyright
  8. .\" notice, this list of conditions and the following disclaimer.
  9. .\" 2. Redistributions in binary form must reproduce the above copyright
  10. .\" notice, this list of conditions and the following disclaimer in the
  11. .\" documentation and/or other materials provided with the distribution.
  12. .\" 3. All advertising materials mentioning features or use of this software
  13. .\" must display the following acknowledgement:
  14. .\" This product includes software developed by the University of
  15. .\" California, Berkeley and its contributors.
  16. .\" 4. Neither the name of the University nor the names of its contributors
  17. .\" may be used to endorse or promote products derived from this software
  18. .\" without specific prior written permission.
  19. .\"
  20. .\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
  21. .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  22. .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
  23. .\" ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
  24. .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
  25. .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
  26. .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
  27. .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
  28. .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
  29. .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
  30. .\" SUCH DAMAGE.
  31. .\"
  32. .\" @(#)c.t 8.1 (Berkeley) 6/8/93
  33. .\"
.nr H2 1
.\".ds RH "Buffering and congestion control
.br
.ne 2i
.NH
\s+2Buffering and congestion control\s0
.PP
One of the major factors in the performance of a protocol is
the buffering policy used.  Lack of a proper buffering policy
can force packets to be dropped, cause falsified windowing
information to be emitted by protocols, fragment host memory,
degrade the overall host performance, etc.  Due to problems
such as these, most systems allocate a fixed pool of memory
to the networking system and impose
a policy optimized for ``normal'' network operation.
.PP
The networking system developed for UNIX is little different in this
respect.  At boot time a fixed amount of memory is allocated by
the networking system.  At later times more system memory
may be requested as the need arises, but at no time is
memory ever returned to the system.  It is possible to
garbage collect memory from the network, but difficult.  In
order to perform this garbage collection properly, some
portion of the network will have to be ``turned off'' as
data structures are updated.  The interval over which this
occurs must be kept small compared to the average inter-packet
arrival time, or too much traffic may
be lost, impacting other hosts on the network, as well as
increasing load on the interconnecting media.  In our
environment we have not experienced a need for such compaction,
and thus have left the problem unresolved.
.PP
The mbuf structure was introduced in chapter 5.  In this
section a brief description will be given of the allocation
mechanisms and policies used by the protocols in performing
connection level buffering.
.NH 2
Memory management
.PP
The basic memory allocation routines manage a private page map,
the size of which determines the maximum amount of memory
that may be allocated by the network.
A small amount of memory is allocated at boot time
to initialize the mbuf and mbuf page cluster free lists.
When the free lists are exhausted, more memory is requested
from the system memory allocator if space remains in the map.
If memory cannot be allocated,
callers may block awaiting free memory,
or the failure may be reflected to the caller immediately.
The allocator will not block awaiting free map entries, however,
as exhaustion of the page map usually indicates that buffers have been lost
due to a ``leak.''
The private page table is used by the network buffer management
routines in remapping pages to
be logically contiguous as the need arises.  In addition, an
array of reference counts parallels the page table and is used
when multiple references to a page are present.
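.PP
The block-or-fail choice can be shown with a short sketch in C.
The names below are modeled on the style of the BSD sources, but the
routine is only an illustration of the policy just described, not the
kernel's actual allocator; \fIm_expand\fP and \fIwait_for_free\fP are
hypothetical helpers standing in for kernel services.
.DS
#include <stddef.h>

#define M_DONTWAIT 0   /* reflect failure to the caller at once */
#define M_WAIT     1   /* block awaiting free memory */

struct mbuf { struct mbuf *m_next; };

static struct mbuf *mfree;        /* head of the mbuf free list */

/* Hypothetical helpers standing in for kernel services. */
extern int m_expand(void);        /* grow pool if the page map has room */
extern void wait_for_free(void);  /* sleep until memory is freed */

struct mbuf *
m_get(int canwait)
{
        struct mbuf *m;

        for (;;) {
                if ((m = mfree) != NULL) {  /* free list non-empty */
                        mfree = m->m_next;
                        return (m);
                }
                if (m_expand())             /* got more pages; retry */
                        continue;
                if (canwait == M_DONTWAIT)
                        return (NULL);      /* immediate failure */
                wait_for_free();            /* block for free memory */
        }
}
.DE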
.PP
Mbufs are 128-byte structures, 8 fitting in a 1-Kbyte
page of memory.  When data is placed in mbufs,
it is copied or remapped into logically contiguous pages of
memory from the network page pool if possible.
Data smaller than half the size
of a page is copied into one or more 112-byte mbuf data areas.
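.PP
The arithmetic above (128-byte mbufs with 112-byte data areas) implies
a 16-byte header.  The sketch below shows such a layout; the field
names are modeled on the BSD sources, and the exact header contents
here are assumptions made for illustration.
.DS
#include <stdio.h>

#define MSIZE 128               /* total size of an mbuf */
#define MLEN  (MSIZE - 16)      /* 112-byte data area */

struct mbuf {
        struct mbuf *m_next;    /* next buffer in chain */
        int   m_off;            /* offset of data from start of mbuf */
        short m_len;            /* amount of data in this mbuf */
        short m_type;           /* type of data */
        char  m_dat[MLEN];      /* the data itself */
};

int
main(void)
{
        /* 8 mbufs fit in a 1-Kbyte page: 1024 / 128 = 8. */
        printf("mbuf size %zu, %d per 1-Kbyte page\n",
            sizeof(struct mbuf), 1024 / MSIZE);
        return (0);
}
.DE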
.NH 2
Protocol buffering policies
.PP
Protocols reserve fixed amounts of
buffering for send and receive queues at socket creation time.  These
amounts define the high and low water marks used by the socket routines
in deciding when to block and unblock a process.  The reservation
of space does not currently
result in any action by the memory management
routines.
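.PP
A minimal sketch of that bookkeeping follows.  The field and routine
names (\fIsb_hiwat\fP, \fIsbreserve\fP) are modeled on the BSD sources;
the half-reservation low water mark is an assumed policy, shown only
to make the blocking rule concrete.
.DS
struct sockbuf {
        unsigned long sb_cc;     /* bytes currently queued */
        unsigned long sb_hiwat;  /* high water mark: the reservation */
        unsigned long sb_lowat;  /* low water mark */
};

/* Record the reservation made at socket creation time;
 * note that no memory is actually set aside. */
void
sbreserve(struct sockbuf *sb, unsigned long cc)
{
        sb->sb_hiwat = cc;
        sb->sb_lowat = cc / 2;   /* assumed policy */
}

/* A writing process blocks when the send queue is full ... */
int
writer_must_block(struct sockbuf *snd)
{
        return (snd->sb_cc >= snd->sb_hiwat);
}

/* ... and is unblocked once it drains below the low water mark. */
int
writer_may_wake(struct sockbuf *snd)
{
        return (snd->sb_cc < snd->sb_lowat);
}
.DE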
.PP
Protocols which provide connection level flow control do this
based on the amount of space in the associated socket queues.  That
is, send windows are calculated based on the amount of free space
in the socket's receive queue, while receive windows are adjusted
based on the amount of data awaiting transmission in the send queue.
Care has been taken to avoid the ``silly window syndrome'' described
in [Clark82] at both the sending and receiving ends.
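.PP
A sketch of the window calculation follows, using the same assumed
\fIsockbuf\fP fields as above; the one-segment threshold used to fend
off the silly window syndrome is an illustrative assumption rather
than the system's actual test.
.DS
struct sockbuf { unsigned long sb_cc, sb_hiwat; };

/* The window offered to a sender is the free space
 * remaining in the receiver's socket queue. */
unsigned long
offered_window(const struct sockbuf *rcv)
{
        if (rcv->sb_cc >= rcv->sb_hiwat)
                return (0);
        return (rcv->sb_hiwat - rcv->sb_cc);
}

/* Advertise a larger window only when it can advance by at
 * least one maximum-sized segment, avoiding a stream of tiny
 * window updates (the silly window syndrome). */
int
should_advertise(unsigned long window, unsigned long advertised,
    unsigned long segsize)
{
        return (window > advertised &&
            window - advertised >= segsize);
}
.DE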
.NH 2
Queue limiting
.PP
Incoming packets from the network are always received unless
memory allocation fails.  However, each Level 1 protocol
input queue
has an upper bound on the queue's length, and any packets
exceeding that bound are discarded.  It is possible for a host to be
overwhelmed by excessive network traffic (for instance, a host
acting as a gateway from a high-bandwidth network to a low-bandwidth
network).  As a ``defensive'' mechanism the queue limits may be
adjusted to throttle network traffic load on a host.
Consider a host willing to devote some percentage of
its machine to handling network traffic.
If the cost of handling an
incoming packet can be calculated so that an acceptable
``packet handling rate''
can be determined, then input queue lengths may be dynamically
adjusted based on a host's network load and the number of packets
awaiting processing.  Obviously, discarding packets is
not a satisfactory solution to a problem such as this
(simply dropping packets is likely to increase the load on a network);
the queue lengths were incorporated mainly as a safeguard mechanism.
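.PP
The sketch below shows such a bounded input queue.  The structure and
names are modeled on the interface queues in the BSD sources; the
particular bound would be the per-host tunable described above.
.DS
#include <stddef.h>

struct packet { struct packet *next; };

struct ifqueue {
        struct packet *head, *tail;
        int ifq_len;     /* current queue length */
        int ifq_maxlen;  /* upper bound, adjustable per host */
        int ifq_drops;   /* packets discarded for want of room */
};

/* Append a packet, or discard it if the bound is reached.
 * Returns 0 on success, -1 if the packet was dropped. */
int
if_enqueue(struct ifqueue *q, struct packet *p)
{
        if (q->ifq_len >= q->ifq_maxlen) {
                q->ifq_drops++;  /* over the bound: discard */
                return (-1);
        }
        p->next = NULL;
        if (q->tail != NULL)
                q->tail->next = p;
        else
                q->head = p;
        q->tail = p;
        q->ifq_len++;
        return (0);
}
.DE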
.NH 2
Packet forwarding
.PP
When packets cannot be forwarded because of memory limitations,
the system attempts to generate a ``source quench'' message.  In addition,
any other problems encountered during packet forwarding are also
reflected back to the sender in the form of ICMP packets.  This
helps hosts avoid unneeded retransmissions.
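.PP
The reporting path can be summarized with a short sketch.  The error
names below are illustrative stand-ins for the corresponding ICMP
message types, and \fIicmp_reflect_error\fP is a hypothetical helper,
not a routine from the sources.
.DS
struct packet;                  /* opaque for this sketch */

enum fwd_error {
        FWD_NOBUFS,             /* memory limit: send source quench */
        FWD_NOROUTE,            /* send destination unreachable */
        FWD_TTL_EXPIRED         /* send time exceeded */
};

/* Hypothetical helper: build and send the ICMP message
 * back to the originator of the packet. */
extern void icmp_reflect_error(struct packet *p, enum fwd_error why);

/* Every forwarding failure is reflected to the sender, so
 * that it may slow down or reroute instead of blindly
 * retransmitting. */
void
forward_failed(struct packet *p, enum fwd_error why)
{
        icmp_reflect_error(p, why);
}
.DE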
.PP
Broadcast packets are never forwarded due to possible dire
consequences.  In an early stage of network development, broadcast
packets were forwarded and a ``routing loop'' resulted in network
saturation and the crash of every host on the network.