/share/doc/smm/05.fastfs/3.t


.\" Copyright (c) 1986, 1993
.\" The Regents of the University of California. All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in the
.\" documentation and/or other materials provided with the distribution.
.\" 3. All advertising materials mentioning features or use of this software
.\" must display the following acknowledgement:
.\" This product includes software developed by the University of
.\" California, Berkeley and its contributors.
.\" 4. Neither the name of the University nor the names of its contributors
.\" may be used to endorse or promote products derived from this software
.\" without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" @(#)3.t 8.1 (Berkeley) 6/8/93
.\"
.\" $FreeBSD$
.\"
.ds RH New file system
.NH
New file system organization
.PP
In the new file system organization (as in the
old file system organization),
each disk drive contains one or more file systems.
A file system is described by its super-block,
located at the beginning of the file system's disk partition.
Because the super-block contains critical data,
it is replicated to protect against catastrophic loss.
This is done when the file system is created;
since the super-block data does not change,
the copies need not be referenced unless a head crash
or other hard disk error causes the default super-block
to be unusable.
.PP
To ensure that it is possible to create files as large as
.if n 2 ** 32
.if t $2 sup 32$
bytes with only two levels of indirection,
the minimum size of a file system block is 4096 bytes.
The size of file system blocks can be any power of two
greater than or equal to 4096.
The block size of a file system is recorded in the
file system's super-block,
so it is possible for file systems with different block sizes
to be simultaneously accessible on the same system.
The block size must be decided at the time that
the file system is created;
it cannot be subsequently changed without rebuilding the file system.
.PP
The new file system organization divides a disk partition
into one or more areas called
.I "cylinder groups".
A cylinder group is composed of one or more consecutive
cylinders on a disk.
Associated with each cylinder group is some bookkeeping information
that includes a redundant copy of the super-block,
space for inodes,
a bit map describing available blocks in the cylinder group,
and summary information describing the usage of data blocks
within the cylinder group.
The bit map of available blocks in the cylinder group replaces
the traditional file system's free list.
For each cylinder group a static number of inodes
is allocated at file system creation time.
The default policy is to allocate one inode for each 2048
bytes of space in the cylinder group, expecting this
to be far more than will ever be needed.
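The default inode density amounts to simple arithmetic; the sketch below illustrates it, with the cylinder group size a hypothetical example rather than a value fixed by the design:

```python
def inodes_per_group(group_bytes, bytes_per_inode=2048):
    """Default policy: one inode for each 2048 bytes of cylinder group space."""
    return group_bytes // bytes_per_inode

# A hypothetical 4 megabyte cylinder group gets 2048 inodes.
print(inodes_per_group(4 * 1024 * 1024))  # 2048
```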
.PP
All the cylinder group bookkeeping information could be
placed at the beginning of each cylinder group.
However, if this approach were used,
all the redundant information would be on the top platter.
A single hardware failure that destroyed the top platter
could cause the loss of all redundant copies of the super-block.
Thus the cylinder group bookkeeping information
begins at a varying offset from the beginning of the cylinder group.
The offset for each successive cylinder group is calculated to be
about one track further from the beginning of the cylinder group
than the preceding cylinder group.
In this way the redundant
information spirals down into the pack so that any single track, cylinder,
or platter can be lost without losing all copies of the super-block.
Except for the first cylinder group,
the space between the beginning of the cylinder group
and the beginning of the cylinder group information
is used for data blocks.\(dg
.FS
\(dg While it appears that the first cylinder group could be laid
out with its super-block at the ``known'' location,
this would not work for file systems
with block sizes of 16 kilobytes or greater.
This is because of a requirement that the first 8 kilobytes of the disk
be reserved for a bootstrap program and a separate requirement that
the cylinder group information begin on a file system block boundary.
To start the cylinder group on a file system block boundary,
file systems with block sizes larger than 8 kilobytes
would have to leave an empty space between the end of
the boot block and the beginning of the cylinder group.
Without knowing the size of the file system blocks,
the system would not know what roundup function to use
to find the beginning of the first cylinder group.
.FE
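The spiraling placement can be sketched as a modular offset; the track and group sizes below are illustrative assumptions, not values fixed by the design:

```python
def cg_info_offset(cg_index, track_bytes, cg_bytes):
    """Offset of the bookkeeping information within cylinder group
    cg_index: about one track further in for each successive group,
    wrapping around within the group (illustrative sketch)."""
    return (cg_index * track_bytes) % cg_bytes

# With hypothetical 8 KB tracks and 256 KB groups, the offsets step
# down one track per group: 0, 8192, 16384, 24576, ...
print([cg_info_offset(i, 8192, 262144) for i in range(4)])
```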
.NH 2
Optimizing storage utilization
.PP
Data is laid out so that larger blocks can be transferred
in a single disk transaction, greatly increasing file system throughput.
As an example, consider a file in the new file system
composed of 4096 byte data blocks.
In the old file system this file would be composed of 1024 byte blocks.
By increasing the block size, disk accesses in the new file
system may transfer up to four times as much information per
disk transaction.
In large files, several
4096 byte blocks may be allocated from the same cylinder so that
even larger data transfers are possible before requiring a seek.
.PP
The main problem with
larger blocks is that most UNIX
file systems are composed of many small files.
A uniformly large block size wastes space.
Table 1 shows the effect of file system
block size on the amount of wasted space in the file system.
The files measured to obtain these figures reside on
one of our time sharing
systems that has roughly 1.2 gigabytes of on-line storage.
The measurements are based on the active user file systems containing
about 920 megabytes of formatted space.
.KF
.DS B
.TS
box tab(:);
l|l|l
a|n|l.
Space used:% waste:Organization
_
775.2 Mb:0.0:Data only, no separation between files
807.8 Mb:4.2:Data only, each file starts on 512 byte boundary
828.7 Mb:6.9:Data + inodes, 512 byte block UNIX file system
866.5 Mb:11.8:Data + inodes, 1024 byte block UNIX file system
948.5 Mb:22.4:Data + inodes, 2048 byte block UNIX file system
1128.3 Mb:45.6:Data + inodes, 4096 byte block UNIX file system
.TE
Table 1 \- Amount of wasted space as a function of block size.
.DE
.KE
The space wasted is calculated to be the percentage of space
on the disk not containing user data.
As the block size on the disk
increases, the waste rises quickly, to an intolerable
45.6% waste with 4096 byte file system blocks.
.PP
To be able to use large blocks without undue waste,
small files must be stored in a more efficient way.
The new file system accomplishes this goal by allowing the division
of a single file system block into one or more
.I "fragments".
The file system fragment size is specified
at the time that the file system is created;
each file system block can optionally be broken into
2, 4, or 8 fragments, each of which is addressable.
The lower bound on the size of these fragments is constrained
by the disk sector size,
typically 512 bytes.
The block map associated with each cylinder group
records the space available in a cylinder group
at the fragment level;
to determine if a block is available, aligned fragments are examined.
Figure 1 shows a piece of a map from a 4096/1024 file system.
.KF
.DS B
.TS
box tab(:);
l|c c c c.
Bits in map:XXXX:XXOO:OOXX:OOOO
Fragment numbers:0-3:4-7:8-11:12-15
Block numbers:0:1:2:3
.TE
Figure 1 \- Example layout of blocks and fragments in a 4096/1024 file system.
.DE
.KE
Each bit in the map records the status of a fragment;
an ``X'' shows that the fragment is in use,
while an ``O'' shows that the fragment is available for allocation.
In this example,
fragments 0\-5, 10, and 11 are in use,
while fragments 6\-9 and 12\-15 are free.
Fragments of adjoining blocks cannot be used as a full block,
even if they are large enough.
In this example,
fragments 6\-9 cannot be allocated as a full block;
only fragments 12\-15 can be coalesced into a full block.
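The aligned-fragment rule from Figure 1 can be sketched directly; the map is given as a string of ``X'' and ``O'' characters as in the figure:

```python
def free_blocks(frag_map, frags_per_block):
    """Return the block numbers whose aligned fragments are all free.
    frag_map uses 'X' for in-use and 'O' for free, one character per
    fragment, as in Figure 1 (illustrative sketch)."""
    free = []
    for b in range(len(frag_map) // frags_per_block):
        piece = frag_map[b * frags_per_block:(b + 1) * frags_per_block]
        if piece == "O" * frags_per_block:
            free.append(b)
    return free

# Figure 1's map: fragments 6-9 are free but straddle two blocks, so
# only block 3 (fragments 12-15) can be allocated as a full block.
print(free_blocks("XXXXXXOOOOXXOOOO", 4))  # [3]
```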
.PP
On a file system with a block size of 4096 bytes
and a fragment size of 1024 bytes,
a file is represented by zero or more 4096 byte blocks of data,
and possibly a single fragmented block.
If a file system block must be fragmented to obtain
space for a small amount of data,
the remaining fragments of the block are made
available for allocation to other files.
As an example, consider an 11000 byte file stored on
a 4096/1024 byte file system.
This file would use two full size blocks and one
three fragment portion of another block.
If no block with three aligned fragments is
available at the time the file is created,
a full size block is split, yielding the necessary
fragments and a single unused fragment.
This remaining fragment can be allocated to another file as needed.
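The 11000 byte example works out as follows (a sketch of the arithmetic, not of the allocator itself):

```python
def blocks_and_frags(size, bsize=4096, fsize=1024):
    """Full blocks plus trailing fragments needed for a file of the
    given size on a bsize/fsize file system."""
    full = size // bsize
    tail = size - full * bsize
    frags = -(-tail // fsize)        # ceiling division
    return full, frags

# 11000 bytes on a 4096/1024 file system: two full blocks and a
# three fragment portion of a third block.
print(blocks_and_frags(11000))  # (2, 3)
```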
.PP
Space is allocated to a file when a program does a \fIwrite\fP
system call.
Each time data is written to a file, the system checks to see if
the size of the file has increased.*
.FS
* A program may be overwriting data in the middle of an existing file,
in which case space would already have been allocated.
.FE
If the file needs to be expanded to hold the new data,
one of three conditions exists:
.IP 1)
There is enough space left in an already allocated
block or fragment to hold the new data.
The new data is written into the available space.
.IP 2)
The file contains no fragmented blocks (and the last
block in the file
contains insufficient space to hold the new data).
If space exists in a block already allocated,
the space is filled with new data.
If the remainder of the new data contains more than
a full block of data, a full block is allocated and
the first full block of new data is written there.
This process is repeated until less than a full block
of new data remains.
If the remaining new data to be written will
fit in less than a full block,
a block with the necessary fragments is located,
otherwise a full block is located.
The remaining new data is written into the located space.
.IP 3)
The file contains one or more fragments (and the
fragments contain insufficient space to hold the new data).
If the size of the new data plus the size of the data
already in the fragments exceeds the size of a full block,
a new block is allocated.
The contents of the fragments are copied
to the beginning of the block
and the remainder of the block is filled with new data.
The process then continues as in (2) above.
Otherwise, if the new data to be written will
fit in less than a full block,
a block with the necessary fragments is located,
otherwise a full block is located.
The contents of the existing fragments,
appended with the new data,
are written into the allocated space.
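The three cases can be sketched as a classification on sizes alone; the real allocator works on block maps, and the fragment-granular rounding here is an assumption made for illustration:

```python
def expand_case(old_size, write_bytes, bsize=4096, fsize=1024):
    """Classify a file-extending write into the three cases above."""
    allocated = -(-old_size // fsize) * fsize   # space held, fragment-granular
    if old_size + write_bytes <= allocated:
        return 1   # fits in an already allocated block or fragment
    if allocated % bsize == 0:
        return 2   # no fragmented tail block: allocate blocks, then fragments
    return 3       # fragmented tail: may be copied into a larger block

print(expand_case(11000, 200))   # 1: fits in the fragments already held
print(expand_case(8192, 100))    # 2: last block full, allocate a fragment
print(expand_case(11000, 5000))  # 3: the tail fragments must be grown
```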
.PP
The problem with expanding a file one fragment at a
time is that data may be copied many times as a
fragmented block expands to a full block.
Fragment reallocation can be minimized
if the user program writes a full block at a time,
except for a partial block at the end of the file.
Since file systems with different block sizes may reside on
the same system,
the file system interface has been extended to provide
application programs the optimal size for a read or write.
For files the optimal size is the block size of the file system
on which the file is being accessed.
For other objects, such as pipes and sockets,
the optimal size is the underlying buffer size.
This feature is used by the Standard
Input/Output Library,
a package used by most user programs.
This feature is also used by
certain system utilities such as archivers and loaders
that do their own input and output management
and need the highest possible file system bandwidth.
.PP
The amount of wasted space in the 4096/1024 byte new file system
organization is empirically observed to be about the same as in the
1024 byte old file system organization.
A file system with 4096 byte blocks and 512 byte fragments
has about the same amount of wasted space as the 512 byte
block UNIX file system.
The new file system uses less space
than the 512 byte or 1024 byte
file systems for indexing information for
large files and the same amount of space
for small files.
These savings are offset by the need to use
more space for keeping track of available free blocks.
The net result is about the same disk utilization
when a new file system's fragment size
equals an old file system's block size.
.PP
In order for the layout policies to be effective,
a file system cannot be kept completely full.
For each file system there is a parameter, termed
the free space reserve, that
gives the minimum acceptable percentage of file system
blocks that should be free.
If the number of free blocks drops below this level
only the system administrator can continue to allocate blocks.
The value of this parameter may be changed at any time,
even when the file system is mounted and active.
The transfer rates that appear in section 4 were measured on file
systems kept less than 90% full (a reserve of 10%).
If the number of free blocks falls to zero,
the file system throughput tends to be cut in half,
because of the inability of the file system to localize
blocks in a file.
If a file system's performance degrades because
of overfilling, it may be restored by removing
files until the amount of free space once again
reaches the minimum acceptable level.
Access rates for files created during periods of little
free space may be restored by moving their data once enough
space is available.
The free space reserve must be added to the
percentage of waste when comparing the organizations given
in Table 1.
Thus, the percentage of waste in
an old 1024 byte UNIX file system is roughly
comparable to a new 4096/512 byte file system
with the free space reserve set at 5%.
(Compare 11.8% wasted with the old file system
to 6.9% waste + 5% reserved space in the
new file system.)
.NH 2
File system parameterization
.PP
Except for the initial creation of the free list,
the old file system ignores the parameters of the underlying hardware.
It has no information about either the physical characteristics
of the mass storage device,
or the hardware that interacts with it.
A goal of the new file system is to parameterize the
processor capabilities and
mass storage characteristics
so that blocks can be allocated in an
optimum configuration-dependent way.
Parameters used include the speed of the processor,
the hardware support for mass storage transfers,
and the characteristics of the mass storage devices.
Disk technology is constantly improving and
a given installation can have several different disk technologies
running on a single processor.
Each file system is parameterized so that it can be
adapted to the characteristics of the disk on which
it is placed.
.PP
For mass storage devices such as disks,
the new file system tries to allocate new blocks
on the same cylinder as the previous block in the same file.
Optimally, these new blocks will also be
rotationally well positioned.
The distance between ``rotationally optimal'' blocks varies greatly;
it can be a consecutive block
or a rotationally delayed block
depending on system characteristics.
On a processor with an input/output channel that does not require
any processor intervention between mass storage transfer requests,
two consecutive disk blocks can often be accessed
without suffering lost time because of an intervening disk revolution.
For processors without input/output channels,
the main processor must field an interrupt and
prepare for a new disk transfer.
The expected time to service this interrupt and
schedule a new disk transfer depends on the
speed of the main processor.
.PP
The physical characteristics of each disk include
the number of blocks per track and the rate at which
the disk spins.
The allocation routines use this information to calculate
the number of milliseconds required to skip over a block.
The characteristics of the processor include
the expected time to service an interrupt and schedule a
new disk transfer.
Given a block allocated to a file,
the allocation routines calculate the number of blocks to
skip over so that the next block in the file will
come into position under the disk head in the expected
amount of time that it takes to start a new
disk transfer operation.
For programs that sequentially access large amounts of data,
this strategy minimizes the amount of time spent waiting for
the disk to position itself.
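The skip calculation can be sketched as follows; the rotational speed and blocks-per-track figures are illustrative assumptions, not values fixed by the design:

```python
import math

def blocks_to_skip(setup_ms, rpm=3600, blocks_per_track=8):
    """Blocks that sweep past the head while the processor services the
    interrupt and schedules the next transfer (illustrative sketch)."""
    ms_per_rev = 60000.0 / rpm                  # ~16.7 ms at 3600 rpm
    ms_per_block = ms_per_rev / blocks_per_track
    return math.ceil(setup_ms / ms_per_block)

# A processor needing 4 ms to start the next transfer should skip
# 2 blocks on this hypothetical drive.
print(blocks_to_skip(4))  # 2
```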
.PP
To ease the calculation of finding rotationally optimal blocks,
the cylinder group summary information includes
a count of the available blocks in a cylinder
group at different rotational positions.
Eight rotational positions are distinguished,
so the resolution of the
summary information is 2 milliseconds for a typical 3600
revolution per minute drive.
The super-block contains a vector of lists called
.I "rotational layout tables".
The vector is indexed by rotational position.
Each component of the vector
lists the index into the block map for every data block contained
in its rotational position.
When looking for an allocatable block,
the system first looks through the summary counts for a rotational
position with a non-zero block count.
It then uses the index of the rotational position to find the appropriate
list to use to index through
only the relevant parts of the block map to find a free block.
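The two-step lookup can be sketched as follows; the data structures are simplified stand-ins for the summary counts and rotational layout tables:

```python
def find_free_block(summary_counts, layout_tables, block_map):
    """Scan the per-position summary counts for a rotational position
    with free blocks, then examine only that position's entries in the
    block map. block_map[i] is True when block i is free (sketch)."""
    for pos, count in enumerate(summary_counts):
        if count == 0:
            continue                  # nothing free at this position
        for idx in layout_tables[pos]:
            if block_map[idx]:
                return idx
    return None

# Position 0 has no free blocks, so only position 1's list is searched.
print(find_free_block([0, 1], [[0, 2], [1, 3]], [False, False, False, True]))  # 3
```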
.PP
The parameter that defines the
minimum number of milliseconds between the completion of a data
transfer and the initiation of
another data transfer on the same cylinder
can be changed at any time,
even when the file system is mounted and active.
If a file system is parameterized to lay out blocks with
a rotational separation of 2 milliseconds,
and the disk pack is then moved to a system that has a
processor requiring 4 milliseconds to schedule a disk operation,
the throughput will drop precipitously because of lost disk revolutions
on nearly every block.
If the eventual target machine is known,
the file system can be parameterized for it
even though it is initially created on a different processor.
Even if the move is not known in advance,
the rotational layout delay can be reconfigured after the disk is moved
so that all further allocation is done based on the
characteristics of the new host.
.NH 2
Layout policies
.PP
The file system layout policies are divided into two distinct parts.
At the top level are global policies that use file system
wide summary information to make decisions regarding
the placement of new inodes and data blocks.
These routines are responsible for deciding the
placement of new directories and files.
They also calculate rotationally optimal block layouts,
and decide when to force a long seek to a new cylinder group
because there are insufficient blocks left
in the current cylinder group to do reasonable layouts.
Below the global policy routines are
the local allocation routines that use a locally optimal scheme to
lay out data blocks.
.PP
Two methods for improving file system performance are to increase
the locality of reference to minimize seek latency
as described by [Trivedi80], and
to improve the layout of data to make larger transfers possible
as described by [Nevalainen77].
The global layout policies try to improve performance
by clustering related information.
They cannot attempt to localize all data references,
but must also try to spread unrelated data
among different cylinder groups.
If too much localization is attempted,
the local cylinder group may run out of space
forcing the data to be scattered to non-local cylinder groups.
Taken to an extreme,
total localization can result in a single huge cluster of data
resembling the old file system.
The global policies try to balance the two conflicting
goals of localizing data that is concurrently accessed
while spreading out unrelated data.
.PP
One allocatable resource is inodes.
Inodes are used to describe both files and directories.
Inodes of files in the same directory are frequently accessed together.
For example, the ``list directory'' command often accesses
the inode for each file in a directory.
The layout policy tries to place all the inodes of
files in a directory in the same cylinder group.
To ensure that files are distributed throughout the disk,
a different policy is used for directory allocation.
A new directory is placed in a cylinder group that has a greater
than average number of free inodes,
and the smallest number of directories already in it.
The intent of this policy is to allow the inode clustering policy
to succeed most of the time.
The allocation of inodes within a cylinder group is done using a
next free strategy.
Although this allocates the inodes randomly within a cylinder group,
all the inodes for a particular cylinder group can be read with
8 to 16 disk transfers.
(At most 16 disk transfers are required because a cylinder
group may have no more than 2048 inodes.)
This puts a small and constant upper bound on the number of
disk transfers required to access the inodes
for all the files in a directory.
In contrast, the old file system typically requires
one disk transfer to fetch the inode for each file in a directory.
.PP
The other major resource is data blocks.
Since data blocks for a file are typically accessed together,
the policy routines try to place all data
blocks for a file in the same cylinder group,
preferably at rotationally optimal positions in the same cylinder.
The problem with allocating all the data blocks
in the same cylinder group is that large files will
quickly use up available space in the cylinder group,
forcing a spill over to other areas.
Further, using all the space in a cylinder group
causes future allocations for any file in the cylinder group
to also spill to other areas.
Ideally none of the cylinder groups should ever become completely full.
The heuristic solution chosen is to
redirect block allocation
to a different cylinder group
when a file exceeds 48 kilobytes,
and at every megabyte thereafter.*
.FS
* The first spill over point at 48 kilobytes is the point
at which a file on a 4096 byte block file system first
requires a single indirect block. This appears to be
a natural first point at which to redirect block allocation.
The other spillover points are chosen with the intent of
forcing block allocation to be redirected when a
file has used about 25% of the data blocks in a cylinder group.
In observing the new file system in day to day use, the heuristics appear
to work well in minimizing the number of completely filled
cylinder groups.
.FE
The newly chosen cylinder group is selected from those cylinder
groups that have a greater than average number of free blocks left.
Although big files tend to be spread out over the disk,
a megabyte of data is typically accessible before
a long seek must be performed,
and the cost of one long seek per megabyte is small.
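The spill-over schedule (48 kilobytes, then every megabyte) can be sketched as:

```python
def next_spill(size_bytes):
    """Size at which block allocation is next redirected to a new
    cylinder group: 48 KB, then 1 MB, 2 MB, 3 MB, ... (sketch of the
    heuristic described above)."""
    KB, MB = 1024, 1024 * 1024
    if size_bytes < 48 * KB:
        return 48 * KB
    return (size_bytes // MB + 1) * MB

print(next_spill(0))           # 49152 (48 KB)
print(next_spill(500 * 1024))  # 1048576 (1 MB)
```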
.PP
The global policy routines call local allocation routines with
requests for specific blocks.
The local allocation routines will
always allocate the requested block
if it is free; otherwise they
allocate a free block of the requested size that is
rotationally closest to the requested block.
If the global layout policies had complete information,
they could always request unused blocks and
the allocation routines would be reduced to simple bookkeeping.
However, maintaining complete information is costly;
thus the implementation of the global layout policy
uses heuristics that employ only partial information.
.PP
If a requested block is not available, the local allocator uses
a four level allocation strategy:
.IP 1)
Use the next available block rotationally closest
to the requested block on the same cylinder. It is assumed
here that head switching time is zero. On disk
controllers where this is not the case, it may be possible
to incorporate the time required to switch between disk platters
when constructing the rotational layout tables. This, however,
has not yet been tried.
.IP 2)
If there are no blocks available on the same cylinder,
use a block within the same cylinder group.
.IP 3)
If that cylinder group is entirely full,
quadratically hash the cylinder group number to choose
another cylinder group to look for a free block.
.IP 4)
Finally if the hash fails, apply an exhaustive search
to all cylinder groups.
.PP
Quadratic hash is used because of its speed in finding
unused slots in nearly full hash tables [Knuth75].
File systems that are parameterized to maintain at least
10% free space rarely use this strategy.
File systems that are run without maintaining any free
space typically have so few free blocks that almost any
allocation is random;
the most important characteristic of
the strategy used under such conditions is that the strategy be fast.
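Levels 3 and 4 of the strategy can be sketched as quadratic probing over the cylinder group numbers, with an exhaustive scan as the last resort; has_free stands in for the real free-block check:

```python
def find_cylinder_group(start, ncg, has_free):
    """Quadratically hash from the preferred cylinder group, then fall
    back to an exhaustive search (sketch of levels 3 and 4 above)."""
    for i in range(1, ncg):
        cg = (start + i * i) % ncg        # quadratic probe sequence
        if has_free(cg):
            return cg
    for cg in range(ncg):                 # exhaustive fallback
        if has_free(cg):
            return cg
    return None

# With 8 groups and free space only in group 5, the probe sequence from
# group 0 misses it and the exhaustive pass finds it.
print(find_cylinder_group(0, 8, lambda cg: cg == 5))  # 5
```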
.ds RH Performance
.sp 2
.ne 1i