/share/man/man9/mbpool.9


.\" Copyright (c) 2003
.\" Fraunhofer Institute for Open Communication Systems (FhG Fokus).
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" Author: Hartmut Brandt <harti@FreeBSD.org>
.\"
.\" $FreeBSD$
.\"
.Dd July 15, 2003
.Dt MBPOOL 9
.Os
.Sh NAME
.Nm mbpool
.Nd "buffer pools for network interfaces"
.Sh SYNOPSIS
.In sys/types.h
.In machine/bus.h
.In sys/mbpool.h
.Vt struct mbpool ;
.Ft int
.Fo mbp_create
.Fa "struct mbpool **mbp" "const char *name" "bus_dma_tag_t dmat"
.Fa "u_int max_pages" "size_t page_size" "size_t chunk_size"
.Fc
.Ft void
.Fn mbp_destroy "struct mbpool *mbp"
.Ft "void *"
.Fn mbp_alloc "struct mbpool *mbp" "bus_addr_t *pa" "uint32_t *hp"
.Ft void
.Fn mbp_free "struct mbpool *mbp" "void *p"
.Ft void
.Fn mbp_ext_free "void *" "void *"
.Ft void
.Fn mbp_card_free "struct mbpool *mbp"
.Ft void
.Fn mbp_count "struct mbpool *mbp" "u_int *used" "u_int *card" "u_int *free"
.Ft "void *"
.Fn mbp_get "struct mbpool *mbp" "uint32_t h"
.Ft "void *"
.Fn mbp_get_keep "struct mbpool *mbp" "uint32_t h"
.Ft void
.Fo mbp_sync
.Fa "struct mbpool *mbp" "uint32_t h" "bus_addr_t off" "bus_size_t len"
.Fa "u_int op"
.Fc
.Pp
.Fn MODULE_DEPEND "your_module" "libmbpool" 1 1 1
.Pp
.Cd "options LIBMBPOOL"
.Sh DESCRIPTION
Mbuf pools are intended to help drivers for interface cards that need huge
amounts of receive buffers, and additionally provide a mapping between these
buffers and 32-bit handles.
.Pp
An example of these cards are the Fore/Marconi ForeRunnerHE cards.
These employ up to 8 receive groups, each with two buffer pools, each of
which can contain up to 8192 buffers.
This gives a total maximum number of more than 100000 buffers.
Even with a more moderate configuration the card eats several thousand
buffers.
Each of these buffers must be mapped for DMA.
While for machines without an IOMMU and with less than 4 GByte of memory
this is not a problem, for other machines this may quickly eat up all
available IOMMU address space and/or bounce buffers.
On sparc64, the default I/O page size is 16k, so mapping a single mbuf
wastes 31/32 of the address space.
.Pp
Another problem with most of these cards is that they support putting a
32-bit handle into the buffer descriptor together with the physical address.
This handle is reflected back to the driver when the buffer is filled, and
assists the driver in finding the buffer in host memory.
For 32-bit machines, the virtual address of the buffer is usually used as
the handle.
This does not work for 64-bit machines for obvious reasons, so a mapping is
needed between these handles and the buffers.
This mapping should be possible without searching lists and the like.
.Pp
An mbuf pool overcomes both problems by allocating DMA-able memory pagewise
with a per-pool configurable page size.
Each page is divided into a number of equally-sized chunks, the last
.Dv MBPOOL_TRAILER_SIZE
bytes of which (4 bytes) are used by the pool code.
The rest of each chunk is usable as a buffer.
There is a per-pool limit on the number of pages that will be allocated.
.Pp
Additionally, the code manages two flags for each buffer:
.Dq on-card
and
.Dq used .
A buffer may be in one of three states:
.Bl -tag -width "on-card"
.It free
None of the flags is set.
.It on-card
Both flags are set.
The buffer is assumed to be handed over to the card and waiting to be
filled.
.It used
The buffer was returned by the card and is now travelling through the
system.
.El
.Pp
A pool is created with
.Fn mbp_create .
This call specifies a DMA tag
.Fa dmat
to be used to create and map the memory pages via
.Xr bus_dmamem_alloc 9 .
The
.Fa chunk_size
includes the pool overhead.
This means that to get buffers for 5 ATM cells (240 bytes), a chunk size of
256 should be specified.
This leaves 12 unused bytes between the buffer and the four-byte pool
overhead.
The total maximum number of buffers in a pool is
.Fa max_pages
*
.Fa ( page_size
/
.Fa chunk_size ) .
The maximum value of
.Fa max_pages
is 2^14-1 (16383) and the maximum of
.Fa page_size
/
.Fa chunk_size
is 2^9 (512).
If the call is successful, a pointer to a newly allocated
.Vt "struct mbpool"
is stored in the variable pointed to by
.Fa mbp .
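.Pp
For example, a driver could create such a pool of 256-byte chunks on 16k
pages like this (a sketch; the tag
.Va sc_dmat
is assumed to have been created beforehand with
.Xr bus_dma_tag_create 9 ) :
.Bd -literal -offset indent
struct mbpool *pool;
int error;

/* up to 16383 pages of 16k, each divided into 256-byte chunks */
error = mbp_create(&pool, "rxbufs", sc_dmat, 16383, 16384, 256);
if (error != 0)
	return (error);
.Ed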
.Pp
A pool is destroyed with
.Fn mbp_destroy .
This frees all pages and the pool structure itself.
If compiled with
.Dv DIAGNOSTIC ,
the code checks that all buffers are free.
If not, a warning message is issued to the console.
.Pp
A buffer is allocated with
.Fn mbp_alloc .
This returns the virtual address of the buffer and stores the physical
address into the variable pointed to by
.Fa pa .
The handle is stored into the variable pointed to by
.Fa hp .
The two most significant bits and the 7 least significant bits of the
handle are unused by the pool code and may be used by the caller.
These are automatically stripped when passing a handle to one of the other
functions.
If a buffer cannot be allocated (either because the maximum number of pages
is reached, no memory is available, or the memory cannot be mapped),
.Dv NULL
is returned.
If a buffer could be allocated, it is in the
.Dq on-card
state.
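.Pp
A receive-ring fill loop could then look like this (a sketch; the
descriptor layout and the names
.Va desc
and
.Va pool
are hypothetical and card-specific):
.Bd -literal -offset indent
bus_addr_t pa;
uint32_t handle;
void *buf;

buf = mbp_alloc(pool, &pa, &handle);
if (buf == NULL)
	return;			/* pool exhausted */
/* hand the physical address and the handle to the card */
desc->addr = pa;
desc->handle = handle;
.Ed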
.Pp
When the buffer is returned by the card, the driver calls
.Fn mbp_get
with the handle.
This function returns the virtual address of the buffer and clears the
.Dq on-card
bit.
The buffer is now in the
.Dq used
state.
The function
.Fn mbp_get_keep
differs from
.Fn mbp_get
in that it does not clear the
.Dq on-card
bit.
This can be used for buffers that are returned
.Dq partially
by the card.
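.Pp
In the receive interrupt handler, the handle reported by the card is mapped
back to the buffer (a sketch; where
.Va handle
comes from is card-specific):
.Bd -literal -offset indent
void *buf;

/* the handle was reflected back in the card's status entry */
buf = mbp_get(pool, handle);	/* clears the on-card bit */
.Ed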
.Pp
A buffer is freed by calling
.Fn mbp_free
with the virtual address of the buffer.
This clears the
.Dq used
bit, and puts the buffer on the free list of the pool.
Note that free buffers are NOT returned to the system.
The function
.Fn mbp_ext_free
can be given to
.Fn m_extadd
as the free function.
The user argument must be the pointer to the pool.
.Pp
Before using the contents of a buffer returned by the card, the driver
must call
.Fn mbp_sync
with the appropriate parameters.
This results in a call to
.Xr bus_dmamap_sync 9
for the buffer.
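.Pp
For example, to make received data visible to the CPU before inspecting it
(a sketch;
.Va len
is the number of bytes the card reported as filled):
.Bd -literal -offset indent
mbp_sync(pool, handle, 0, len, BUS_DMASYNC_POSTREAD);
.Ed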
.Pp
All buffers in the pool that are currently in the
.Dq on-card
state can be freed with a call to
.Fn mbp_card_free .
This may be called by the driver when it stops the interface.
Buffers in the
.Dq used
state are not freed by this call.
.Pp
For debugging it is possible to call
.Fn mbp_count .
This returns the number of buffers in the
.Dq used
and
.Dq on-card
states and the number of buffers on the free list.
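.Pp
For example, a driver could report pool usage from its watchdog routine or
a sysctl handler (a sketch):
.Bd -literal -offset indent
u_int used, card, free;

mbp_count(pool, &used, &card, &free);
printf("mbpool: %u used, %u on-card, %u free\en", used, card, free);
.Ed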
.Sh SEE ALSO
.Xr mbuf 9
.Sh AUTHORS
.An Harti Brandt Aq harti@FreeBSD.org
.Sh CAVEATS
The function
.Fn mbp_sync
is currently a no-op because
.Xr bus_dmamap_sync 9
is missing the offset and length parameters.