/Objects/dictnotes.txt


NOTES ON OPTIMIZING DICTIONARIES
================================

Principal Use Cases for Dictionaries
------------------------------------

Passing keyword arguments
    Typically, one read and one write for 1 to 3 elements.
    Occurs frequently in normal Python code.

Class method lookup
    Dictionaries vary in size, with 8 to 16 elements being common.
    Usually written once with many lookups.
    When base classes are used, there are many failed lookups
    followed by a lookup in a base class.

Instance attribute lookup and Global variables
    Dictionaries vary in size.  4 to 10 elements are common.
    Both reads and writes are common.

Builtins
    Frequent reads.  Almost never written.
    Size 126 interned strings (as of Py2.3b1).
    A few keys are accessed much more frequently than others.

Uniquification
    Dictionaries of any size.  Bulk of the work is in creation.
    Repeated writes to a smaller set of keys.
    Single read of each key.
    Some use cases have two consecutive accesses to the same key.

    * Removing duplicates from a sequence:
        dict.fromkeys(seqn).keys()

    * Counting elements in a sequence:
        for e in seqn:
            d[e] = d.get(e, 0) + 1

    * Accumulating references in a dictionary of lists:
        for pagenumber, page in enumerate(pages):
            for word in page:
                d.setdefault(word, []).append(pagenumber)

    Note, the second example is a use case characterized by a get and set
    to the same key.  There are similar use cases with a __contains__
    followed by a get, set, or del to the same key.  Part of the
    justification for d.setdefault is combining the two lookups into one.

Membership Testing
    Dictionaries of any size.  Created once and then rarely changed.
    Single write to each key.
    Many calls to __contains__() or has_key().
    Similar access patterns occur with replacement dictionaries
    such as with the % formatting operator.

Dynamic Mappings
    Characterized by deletions interspersed with adds and replacements.
    Performance benefits greatly from the re-use of dummy entries.

Data Layout (assuming a 32-bit box with 64 bytes per cache line)
----------------------------------------------------------------

Small dicts (8 entries) are attached to the dictobject structure
and the whole group nearly fills two consecutive cache lines.

Larger dicts use the first half of the dictobject structure (one cache
line) and a separate, contiguous block of entries (at 12 bytes each
for a total of 5.333 entries per cache line).
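
A quick back-of-the-envelope check of those figures (a sketch; the
12-byte entry assumes the three 4-byte fields of a 32-bit PyDictEntry:
me_hash, me_key, me_value):

    ENTRY_BYTES = 3 * 4             # me_hash + me_key + me_value on 32 bits
    CACHE_LINE_BYTES = 64
    print(CACHE_LINE_BYTES / float(ENTRY_BYTES))      # 5.333 entries/line
    print(8 * ENTRY_BYTES / float(CACHE_LINE_BYTES))  # 8 entries = 1.5 lines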

Tunable Dictionary Parameters
-----------------------------

* PyDict_MINSIZE.  Currently set to 8.
    Must be a power of two.  New dicts have to zero-out every cell.
    Each additional 8 consumes 1.5 cache lines.  Increasing improves
    the sparseness of small dictionaries but costs time to read in
    the additional cache lines if they are not already in cache.
    That case is common when keyword arguments are passed.

* Maximum dictionary load in PyDict_SetItem.  Currently set to 2/3.
    Increasing this ratio makes dictionaries more dense, resulting
    in more collisions.  Decreasing it improves sparseness at the
    expense of spreading entries over more cache lines and at the
    cost of total memory consumed.

    The load test occurs in highly time-sensitive code.  Efforts
    to make the test more complex (for example, varying the load
    for different sizes) have degraded performance.

* Growth rate upon hitting maximum load.  Currently set to *2.
    Raising this to *4 results in half the number of resizes,
    less effort to resize, better sparseness for some (but not
    all) dict sizes, and potentially doubles memory consumption
    depending on the size of the dictionary.  Setting to *4
    eliminates every other resize step.

* Maximum sparseness (minimum dictionary load).  What percentage
    of entries can be unused before the dictionary shrinks to
    free up memory and speed up iteration?  (The current CPython
    code does not represent this parameter directly.)

* Shrinkage rate upon exceeding maximum sparseness.  The current
    CPython code never even checks sparseness when deleting a
    key.  When a new key is added, it resizes based on the number
    of active keys, so that the addition may trigger shrinkage
    rather than growth.  (A sketch of this resize decision follows
    the list.)
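
A small Python model may make the mechanics above concrete.  This is a
sketch of the policy as described in this list, not the C code:
needs_resize and new_size are illustrative names, 'fill' counts active
plus dummy slots, and 'used' counts active keys only.

    def needs_resize(fill, table_size):
        # Maximum-load test: resize once the table is more than 2/3 full.
        # Dummy entries count toward 'fill', so heavy deletion can also
        # push a table over the threshold.
        return fill * 3 >= table_size * 2

    def new_size(used, growth=2, minsize=8):
        # Pick the smallest power-of-two table that holds 'used' active
        # keys at the chosen growth factor.  Because the target is based
        # on active keys only, a table bloated with dummies may come out
        # *smaller*: the shrink-on-insert behavior noted above.
        size = minsize
        while size <= used * growth:
            size <<= 1
        return size

For example, new_size(6) == 16, so a 32-slot table reduced to a handful
of active keys is rebuilt at half its size once an insertion trips the
load test.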

Tune-ups should be measured across a broad range of applications and
use cases.  A change to any parameter will help in some situations and
hurt in others.  The key is to find settings that help the most common
cases and do the least damage to the less common cases.  Results will
vary dramatically depending on the exact number of keys, whether the
keys are all strings, whether reads or writes dominate, and the exact
hash values of the keys (some sets of values have fewer collisions than
others).  Any one test or benchmark is likely to prove misleading.

While making a dictionary more sparse reduces collisions, it impairs
iteration and key listing.  Those methods loop over every potential
entry.  Doubling the size of a dictionary results in twice as many
non-overlapping memory accesses for keys(), items(), values(),
__iter__(), iterkeys(), iteritems(), itervalues(), and update().
Also, every dictionary iterates at least twice, once for the memset()
when it is created and once by dealloc().

Dictionary operations involving only a single key can be O(1) unless
resizing is possible.  By checking for a resize only when the
dictionary can grow (and may *require* resizing), other operations
remain O(1), and the odds of resize thrashing or memory fragmentation
are reduced.  In particular, an algorithm that empties a dictionary
by repeatedly invoking .pop will see no resizing, which might
not be necessary at all because the dictionary is eventually
discarded entirely.
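
The no-resize-on-delete behavior is observable from pure Python.  The
sketch below assumes a CPython build where sys.getsizeof() reflects the
allocated table, which holds on the builds I am aware of:

    import sys

    d = dict.fromkeys(range(1000))
    grown = sys.getsizeof(d)
    while d:
        d.popitem()                    # deletions never trigger a resize
    assert sys.getsizeof(d) == grown   # the empty dict keeps its big table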

Results of Cache Locality Experiments
-------------------------------------

When an entry is retrieved from memory, 4.333 adjacent entries are also
retrieved into a cache line.  Since accessing items in cache is *much*
cheaper than a cache miss, an enticing idea is to probe the adjacent
entries as a first step in collision resolution.  Unfortunately, the
introduction of any regularity into collision searches results in more
collisions than the current random chaining approach.

Exploiting cache locality at the expense of additional collisions fails
to pay off when the entries are already loaded in cache (the expense
is paid with no compensating benefit).  This occurs in small dictionaries
where the whole dictionary fits into a pair of cache lines.  It also
occurs frequently in large dictionaries which have a common access pattern
where some keys are accessed much more frequently than others.  The
more popular entries *and* their collision chains tend to remain in cache.

To exploit cache locality, change the collision resolution section
in lookdict() and lookdict_string().  Set i^=1 at the top of the
loop and move the i = (i << 2) + i + perturb + 1 to an unrolled
version of the loop.
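
One way to read that suggestion, modeled in Python: the first generator
follows the probe recurrence from lookdict() (with PERTURB_SHIFT = 5 as
in dictobject.c); the second inserts the i ^= 1 adjacent-slot probe,
which stays on the same cache line, before each pseudo-random jump.

    PERTURB_SHIFT = 5

    def current_probes(h, mask):
        # Existing random-chaining probe order in lookdict().
        # Model assumes h >= 0 (the C code uses an unsigned perturb).
        i, perturb = h & mask, h
        while True:
            yield i & mask
            i = (i << 2) + i + perturb + 1      # i = 5*i + perturb + 1
            perturb >>= PERTURB_SHIFT

    def cache_friendly_probes(h, mask):
        # Proposed variant: probe the adjacent entry first (it shares
        # a cache line with the previous probe), then jump as before.
        i, perturb = h & mask, h
        while True:
            yield i & mask
            yield (i ^ 1) & mask                # same cache line, cheap
            i = (i << 2) + i + perturb + 1
            perturb >>= PERTURB_SHIFT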

This optimization strategy can be leveraged in several ways:

* If the dictionary is kept sparse (through the tunable parameters),
    then the occurrence of additional collisions is lessened.

* If lookdict() and lookdict_string() are specialized for small dicts
    and for large dicts, then the versions for large dicts can be given
    an alternate search strategy without increasing collisions in small dicts
    which already have the maximum benefit of cache locality.

* If the use case for a dictionary is known to have a random key
    access pattern (as opposed to a more common pattern with a Zipf's law
    distribution), then there will be more benefit for large dictionaries
    because any given key is no more likely than another to already be
    in cache.

* In use cases with paired accesses to the same key, the second access
    is always in cache and gets no benefit from efforts to further improve
    cache locality.

Optimizing the Search of Small Dictionaries
-------------------------------------------

If lookdict() and lookdict_string() are specialized for smaller dictionaries,
then a custom search approach can be implemented that exploits the small
search space and cache locality.

* The simplest example is a linear search of contiguous entries.  This is
    simple to implement, guaranteed to terminate rapidly, never searches
    the same entry twice, and precludes the need to check for dummy entries
    (see the sketch after this list).

* A more advanced example is a self-organizing search so that the most
    frequently accessed entries get probed first.  The organization
    adapts if the access pattern changes over time.  Treaps are ideally
    suited for self-organization with the most common entries at the
    top of the heap and a rapid binary search pattern.  Most probes and
    results are all located at the top of the tree allowing them all to
    be located in one or two cache lines.

* Also, small dictionaries may be made more dense, perhaps filling all
    eight cells to take the maximum advantage of two cache lines.
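
A Python sketch of the linear-search idea, using a flat list of
(hash, key, value) triples in place of the C entry array (SmallDict and
its method names are illustrative, not a proposed API):

    class SmallDict:
        # Model of a linearly searched small table: entries are stored
        # contiguously, so a lookup is one pass over at most a few slots
        # and never needs a dummy-entry check.
        def __init__(self):
            self.entries = []           # list of (hash, key, value)

        def lookup(self, key):
            h = hash(key)
            for eh, ekey, evalue in self.entries:
                # Compare hashes first; fall back to full equality.
                if eh == h and (ekey is key or ekey == key):
                    return evalue
            raise KeyError(key)

        def store(self, key, value):
            h = hash(key)
            for n, (eh, ekey, _) in enumerate(self.entries):
                if eh == h and (ekey is key or ekey == key):
                    self.entries[n] = (h, key, value)
                    return
            self.entries.append((h, key, value))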

Strategy Pattern
----------------

Consider allowing the user to set the tunable parameters or to select a
particular search method.  Since some dictionary use cases have known
sizes and access patterns, the user may be able to provide useful hints.

1) For example, if membership testing or lookups dominate runtime and memory
   is not at a premium, the user may benefit from setting the maximum load
   ratio at 5% or 10% instead of the usual 66.7%.  This will sharply
   curtail the number of collisions but will increase iteration time.
   The builtin namespace is a prime example of a dictionary that can
   benefit from being highly sparse.

2) Dictionary creation time can be shortened in cases where the ultimate
   size of the dictionary is known in advance.  The dictionary can be
   pre-sized so that no resize operations are required during creation.
   Not only does this save resizes, but the key insertion will go
   more quickly because the first half of the keys will be inserted into
   a more sparse environment than before.  The preconditions for this
   strategy arise whenever a dictionary is created from a key or item
   sequence and the number of *unique* keys is known (see the sketch
   after this list).

3) If the key space is large and the access pattern is known to be random,
   then search strategies exploiting cache locality can be fruitful.
   The preconditions for this strategy arise in simulations and
   numerical analysis.

4) If the keys are fixed and the access pattern strongly favors some of
   the keys, then the entries can be stored contiguously and accessed
   with a linear search or treap.  This exploits knowledge of the data,
   cache locality, and a simplified search routine.  It also eliminates
   the need to test for dummy entries on each probe.  The preconditions
   for this strategy arise in symbol tables and in the builtin dictionary.
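
Strategy 2 reduces to choosing the final table size up front.  A sketch
of that arithmetic under the 2/3 maximum-load rule (presized_size is an
illustrative helper; CPython does not expose pre-sizing to Python code):

    def presized_size(n_unique, max_load=2.0/3.0, minsize=8):
        # Smallest power-of-two table that keeps n_unique keys at or
        # below the maximum load ratio (2/3 by default; 5-10% for the
        # highly sparse dictionaries of strategy 1).
        size = minsize
        while n_unique > size * max_load:
            size <<= 1
        return size

    # 1000 unique keys fit a 2048-slot table at 2/3 load, but want a
    # 16384-slot table if the load ratio is capped at 10%:
    assert presized_size(1000) == 2048
    assert presized_size(1000, max_load=0.10) == 16384

The same computation with a smaller max_load covers strategy 1: far
fewer collisions, at a proportionally larger memory cost.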

Readonly Dictionaries
---------------------

Some dictionary use cases pass through a build stage and then move to a
more heavily exercised lookup stage with no further changes to the
dictionary.

An idea that emerged on python-dev is to be able to convert a dictionary
to a read-only state.  This can help prevent programming errors and also
provide knowledge that can be exploited for lookup optimization.

The dictionary can be immediately rebuilt (eliminating dummy entries),
resized (to an appropriate level of sparseness), and the keys can be
jostled (to minimize collisions).  The lookdict() routine can then
eliminate the test for dummy entries (saving about 1/4 of the time
spent in the collision resolution loop).
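
Parts of this are reachable from present-day Python, as an approximation
of the idea rather than the python-dev proposal itself: copying a
dictionary rebuilds it with no dummy entries, and types.MappingProxyType
(Python 3) supplies the read-only view.

    import types

    def freeze(d):
        rebuilt = dict(d)       # a fresh copy carries no dummy entries
        return types.MappingProxyType(rebuilt)   # writes raise TypeError

    frozen = freeze({'x': 1, 'y': 2})
    print(frozen['x'])          # reads work as before
    # frozen['z'] = 3           # TypeError: does not support item assignment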

An additional possibility is to insert links into the empty spaces
so that dictionary iteration can proceed in len(d) steps instead of
(mp->ma_mask + 1) steps.  Alternatively, a separate tuple of keys can be
kept just for iteration.

Caching Lookups
---------------

The idea is to exploit key access patterns by anticipating future lookups
based on previous lookups.

The simplest incarnation is to save the most recently accessed entry.
This gives optimal performance for use cases where every get is followed
by a set or del to the same key.
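
A sketch of that simplest incarnation as a Python-level wrapper.  The
real change would live inside lookdict(), caching the last entry
pointer; CachingDict only models the behavior (and at Python speed the
bookkeeping would cost more than it saves):

    _missing = object()

    class CachingDict(dict):
        # Remember the most recently fetched pair so that a paired
        # access (a get followed by a set or del of the same key)
        # can skip the second hash-table probe.
        def __init__(self, *args, **kwds):
            dict.__init__(self, *args, **kwds)
            self._last_key = _missing
            self._last_value = None

        def __getitem__(self, key):
            if key == self._last_key:
                return self._last_value          # cache hit: no probe
            value = dict.__getitem__(self, key)
            self._last_key, self._last_value = key, value
            return value

        def __setitem__(self, key, value):
            dict.__setitem__(self, key, value)
            self._last_key, self._last_value = key, value

        def __delitem__(self, key):
            if key == self._last_key:
                self._last_key = _missing        # invalidate the cache
            dict.__delitem__(self, key)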