  1. ********************************
  2. Functional Programming HOWTO
  3. ********************************
  4. :Author: A. M. Kuchling
  5. :Release: 0.31
  6. (This is a first draft. Please send comments/error reports/suggestions to
  7. amk@amk.ca.)
  8. In this document, we'll take a tour of Python's features suitable for
  9. implementing programs in a functional style. After an introduction to the
  10. concepts of functional programming, we'll look at language features such as
  11. :term:`iterator`\s and :term:`generator`\s and relevant library modules such as
  12. :mod:`itertools` and :mod:`functools`.
  13. Introduction
  14. ============
  15. This section explains the basic concept of functional programming; if you're
  16. just interested in learning about Python language features, skip to the next
  17. section.
  18. Programming languages support decomposing problems in several different ways:
  19. * Most programming languages are **procedural**: programs are lists of
  20. instructions that tell the computer what to do with the program's input. C,
  21. Pascal, and even Unix shells are procedural languages.
  22. * In **declarative** languages, you write a specification that describes the
  23. problem to be solved, and the language implementation figures out how to
  24. perform the computation efficiently. SQL is the declarative language you're
  25. most likely to be familiar with; a SQL query describes the data set you want
  26. to retrieve, and the SQL engine decides whether to scan tables or use indexes,
  27. which subclauses should be performed first, etc.
  28. * **Object-oriented** programs manipulate collections of objects. Objects have
  29. internal state and support methods that query or modify this internal state in
  30. some way. Smalltalk and Java are object-oriented languages. C++ and Python
  31. are languages that support object-oriented programming, but don't force the
  32. use of object-oriented features.
  33. * **Functional** programming decomposes a problem into a set of functions.
  34. Ideally, functions only take inputs and produce outputs, and don't have any
  35. internal state that affects the output produced for a given input. Well-known
  36. functional languages include the ML family (Standard ML, OCaml, and other
  37. variants) and Haskell.
  38. The designers of some computer languages choose to emphasize one
  39. particular approach to programming. This often makes it difficult to
  40. write programs that use a different approach. Other languages are
  41. multi-paradigm languages that support several different approaches.
  42. Lisp, C++, and Python are multi-paradigm; you can write programs or
  43. libraries that are largely procedural, object-oriented, or functional
  44. in all of these languages. In a large program, different sections
  45. might be written using different approaches; the GUI might be
  46. object-oriented while the processing logic is procedural or
  47. functional, for example.
  48. In a functional program, input flows through a set of functions. Each function
  49. operates on its input and produces some output. Functional style discourages
  50. functions with side effects that modify internal state or make other changes
  51. that aren't visible in the function's return value. Functions that have no side
  52. effects at all are called **purely functional**. Avoiding side effects means
  53. not using data structures that get updated as a program runs; every function's
  54. output must only depend on its input.
  55. Some languages are very strict about purity and don't even have assignment
  56. statements such as ``a=3`` or ``c = a + b``, but it's difficult to avoid all
  57. side effects. Printing to the screen or writing to a disk file are side
effects, for example. In Python, a ``print`` statement or a call to
``time.sleep(1)`` returns no useful value; both are executed only for their
side effects of sending some text to the screen or pausing execution for a
second.
  62. Python programs written in functional style usually won't go to the extreme of
  63. avoiding all I/O or all assignments; instead, they'll provide a
  64. functional-appearing interface but will use non-functional features internally.
  65. For example, the implementation of a function will still use assignments to
  66. local variables, but won't modify global variables or have other side effects.
  67. Functional programming can be considered the opposite of object-oriented
  68. programming. Objects are little capsules containing some internal state along
  69. with a collection of method calls that let you modify this state, and programs
  70. consist of making the right set of state changes. Functional programming wants
  71. to avoid state changes as much as possible and works with data flowing between
  72. functions. In Python you might combine the two approaches by writing functions
  73. that take and return instances representing objects in your application (e-mail
  74. messages, transactions, etc.).
  75. Functional design may seem like an odd constraint to work under. Why should you
  76. avoid objects and side effects? There are theoretical and practical advantages
  77. to the functional style:
  78. * Formal provability.
  79. * Modularity.
  80. * Composability.
  81. * Ease of debugging and testing.
  82. Formal provability
  83. ------------------
  84. A theoretical benefit is that it's easier to construct a mathematical proof that
  85. a functional program is correct.
  86. For a long time researchers have been interested in finding ways to
  87. mathematically prove programs correct. This is different from testing a program
  88. on numerous inputs and concluding that its output is usually correct, or reading
  89. a program's source code and concluding that the code looks right; the goal is
  90. instead a rigorous proof that a program produces the right result for all
  91. possible inputs.
  92. The technique used to prove programs correct is to write down **invariants**,
  93. properties of the input data and of the program's variables that are always
  94. true. For each line of code, you then show that if invariants X and Y are true
  95. **before** the line is executed, the slightly different invariants X' and Y' are
  96. true **after** the line is executed. This continues until you reach the end of
  97. the program, at which point the invariants should match the desired conditions
  98. on the program's output.
  99. Functional programming's avoidance of assignments arose because assignments are
  100. difficult to handle with this technique; assignments can break invariants that
  101. were true before the assignment without producing any new invariants that can be
  102. propagated onward.
  103. Unfortunately, proving programs correct is largely impractical and not relevant
  104. to Python software. Even trivial programs require proofs that are several pages
  105. long; the proof of correctness for a moderately complicated program would be
  106. enormous, and few or none of the programs you use daily (the Python interpreter,
  107. your XML parser, your web browser) could be proven correct. Even if you wrote
  108. down or generated a proof, there would then be the question of verifying the
  109. proof; maybe there's an error in it, and you wrongly believe you've proved the
  110. program correct.
  111. Modularity
  112. ----------
  113. A more practical benefit of functional programming is that it forces you to
  114. break apart your problem into small pieces. Programs are more modular as a
  115. result. It's easier to specify and write a small function that does one thing
  116. than a large function that performs a complicated transformation. Small
  117. functions are also easier to read and to check for errors.
  118. Ease of debugging and testing
  119. -----------------------------
  120. Testing and debugging a functional-style program is easier.
  121. Debugging is simplified because functions are generally small and clearly
  122. specified. When a program doesn't work, each function is an interface point
  123. where you can check that the data are correct. You can look at the intermediate
  124. inputs and outputs to quickly isolate the function that's responsible for a bug.
  125. Testing is easier because each function is a potential subject for a unit test.
  126. Functions don't depend on system state that needs to be replicated before
  127. running a test; instead you only have to synthesize the right input and then
  128. check that the output matches expectations.
  129. Composability
  130. -------------
  131. As you work on a functional-style program, you'll write a number of functions
  132. with varying inputs and outputs. Some of these functions will be unavoidably
  133. specialized to a particular application, but others will be useful in a wide
  134. variety of programs. For example, a function that takes a directory path and
  135. returns all the XML files in the directory, or a function that takes a filename
  136. and returns its contents, can be applied to many different situations.
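For instance, a minimal sketch of two such reusable helpers (the names ``xml_files()`` and ``file_contents()`` are made up for illustration; only the standard :mod:`glob` and :mod:`os.path` modules are assumed)::

    import glob
    import os

    def xml_files(dir_path):
        "Return a list of paths to the XML files in a directory."
        return glob.glob(os.path.join(dir_path, '*.xml'))

    def file_contents(filename):
        "Return the entire contents of a file as a string."
        f = open(filename)
        try:
            return f.read()
        finally:
            f.close()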
  137. Over time you'll form a personal library of utilities. Often you'll assemble
  138. new programs by arranging existing functions in a new configuration and writing
  139. a few functions specialized for the current task.
  140. Iterators
  141. =========
  142. I'll start by looking at a Python language feature that's an important
  143. foundation for writing functional-style programs: iterators.
  144. An iterator is an object representing a stream of data; this object returns the
  145. data one element at a time. A Python iterator must support a method called
  146. ``next()`` that takes no arguments and always returns the next element of the
  147. stream. If there are no more elements in the stream, ``next()`` must raise the
  148. ``StopIteration`` exception. Iterators don't have to be finite, though; it's
  149. perfectly reasonable to write an iterator that produces an infinite stream of
  150. data.
  151. The built-in :func:`iter` function takes an arbitrary object and tries to return
  152. an iterator that will return the object's contents or elements, raising
  153. :exc:`TypeError` if the object doesn't support iteration. Several of Python's
  154. built-in data types support iteration, the most common being lists and
  155. dictionaries. An object is called an **iterable** object if you can get an
  156. iterator for it.
  157. You can experiment with the iteration interface manually:
  158. >>> L = [1,2,3]
  159. >>> it = iter(L)
  160. >>> print it
  161. <...iterator object at ...>
  162. >>> it.next()
  163. 1
  164. >>> it.next()
  165. 2
  166. >>> it.next()
  167. 3
  168. >>> it.next()
  169. Traceback (most recent call last):
  170. File "<stdin>", line 1, in ?
  171. StopIteration
  172. >>>
  173. Python expects iterable objects in several different contexts, the most
  174. important being the ``for`` statement. In the statement ``for X in Y``, Y must
  175. be an iterator or some object for which ``iter()`` can create an iterator.
  176. These two statements are equivalent::
    for i in iter(obj):
        print i

    for i in obj:
        print i
  181. Iterators can be materialized as lists or tuples by using the :func:`list` or
  182. :func:`tuple` constructor functions:
  183. >>> L = [1,2,3]
  184. >>> iterator = iter(L)
  185. >>> t = tuple(iterator)
  186. >>> t
  187. (1, 2, 3)
  188. Sequence unpacking also supports iterators: if you know an iterator will return
  189. N elements, you can unpack them into an N-tuple:
  190. >>> L = [1,2,3]
  191. >>> iterator = iter(L)
  192. >>> a,b,c = iterator
  193. >>> a,b,c
  194. (1, 2, 3)
  195. Built-in functions such as :func:`max` and :func:`min` can take a single
  196. iterator argument and will return the largest or smallest element. The ``"in"``
  197. and ``"not in"`` operators also support iterators: ``X in iterator`` is true if
  198. X is found in the stream returned by the iterator. You'll run into obvious
  199. problems if the iterator is infinite; ``max()``, ``min()``, and ``"not in"``
  200. will never return, and if the element X never appears in the stream, the
  201. ``"in"`` operator won't return either.
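A brief illustration with a finite iterator:

>>> L = [3, 7, 2]
>>> max(iter(L))
7
>>> 7 in iter(L)
True
>>> 8 not in iter(L)
True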
  202. Note that you can only go forward in an iterator; there's no way to get the
  203. previous element, reset the iterator, or make a copy of it. Iterator objects
  204. can optionally provide these additional capabilities, but the iterator protocol
  205. only specifies the ``next()`` method. Functions may therefore consume all of
  206. the iterator's output, and if you need to do something different with the same
  207. stream, you'll have to create a new iterator.
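For example, once an iterator has been exhausted it stays empty; you have to call ``iter()`` again to start over:

>>> L = [1, 2, 3]
>>> it = iter(L)
>>> list(it)
[1, 2, 3]
>>> list(it)        # the iterator is now exhausted
[]
>>> list(iter(L))   # create a fresh iterator from the list
[1, 2, 3]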
  208. Data Types That Support Iterators
  209. ---------------------------------
  210. We've already seen how lists and tuples support iterators. In fact, any Python
  211. sequence type, such as strings, will automatically support creation of an
  212. iterator.
  213. Calling :func:`iter` on a dictionary returns an iterator that will loop over the
  214. dictionary's keys:
  215. .. not a doctest since dict ordering varies across Pythons
  216. ::
>>> m = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,
...      'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}
>>> for key in m:
...     print key, m[key]
  221. Mar 3
  222. Feb 2
  223. Aug 8
  224. Sep 9
  225. Apr 4
  226. Jun 6
  227. Jul 7
  228. Jan 1
  229. May 5
  230. Nov 11
  231. Dec 12
  232. Oct 10
  233. Note that the order is essentially random, because it's based on the hash
  234. ordering of the objects in the dictionary.
  235. Applying ``iter()`` to a dictionary always loops over the keys, but dictionaries
  236. have methods that return other iterators. If you want to iterate over keys,
  237. values, or key/value pairs, you can explicitly call the ``iterkeys()``,
  238. ``itervalues()``, or ``iteritems()`` methods to get an appropriate iterator.
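For example, ``iteritems()`` returns ``(key, value)`` pairs; here the pairs are sorted first so that the output order is predictable:

>>> m = {'Jan': 1, 'Feb': 2, 'Mar': 3}
>>> for month, number in sorted(m.iteritems()):
...     print month, number
Feb 2
Jan 1
Mar 3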
  239. The :func:`dict` constructor can accept an iterator that returns a finite stream
  240. of ``(key, value)`` tuples:
  241. >>> L = [('Italy', 'Rome'), ('France', 'Paris'), ('US', 'Washington DC')]
  242. >>> dict(iter(L))
  243. {'Italy': 'Rome', 'US': 'Washington DC', 'France': 'Paris'}
  244. Files also support iteration by calling the ``readline()`` method until there
  245. are no more lines in the file. This means you can read each line of a file like
  246. this::
    for line in file:
        # do something for each line
        ...
  250. Sets can take their contents from an iterable and let you iterate over the set's
  251. elements::
    S = set((2, 3, 5, 7, 11, 13))
    for i in S:
        print i
  255. Generator expressions and list comprehensions
  256. =============================================
  257. Two common operations on an iterator's output are 1) performing some operation
  258. for every element, 2) selecting a subset of elements that meet some condition.
  259. For example, given a list of strings, you might want to strip off trailing
  260. whitespace from each line or extract all the strings containing a given
  261. substring.
  262. List comprehensions and generator expressions (short form: "listcomps" and
  263. "genexps") are a concise notation for such operations, borrowed from the
  264. functional programming language Haskell (http://www.haskell.org). You can strip
  265. all the whitespace from a stream of strings with the following code::
  266. line_list = [' line 1\n', 'line 2 \n', ...]
  267. # Generator expression -- returns iterator
  268. stripped_iter = (line.strip() for line in line_list)
  269. # List comprehension -- returns list
  270. stripped_list = [line.strip() for line in line_list]
  271. You can select only certain elements by adding an ``"if"`` condition::
    stripped_list = [line.strip() for line in line_list
                     if line != ""]
  274. With a list comprehension, you get back a Python list; ``stripped_list`` is a
  275. list containing the resulting lines, not an iterator. Generator expressions
  276. return an iterator that computes the values as necessary, not needing to
  277. materialize all the values at once. This means that list comprehensions aren't
  278. useful if you're working with iterators that return an infinite stream or a very
  279. large amount of data. Generator expressions are preferable in these situations.
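You can see this laziness by pulling values out of a generator expression one at a time:

>>> line_list = ['  line 1\n', 'line 2  \n']
>>> stripped_iter = (line.strip() for line in line_list)
>>> stripped_iter.next()
'line 1'
>>> stripped_iter.next()
'line 2'
>>> stripped_iter.next()
Traceback (most recent call last):
  ...
StopIteration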
  280. Generator expressions are surrounded by parentheses ("()") and list
  281. comprehensions are surrounded by square brackets ("[]"). Generator expressions
  282. have the form::
    ( expression for expr in sequence1
                 if condition1
                 for expr2 in sequence2
                 if condition2
                 for expr3 in sequence3 ...
                 if condition3
                 for exprN in sequenceN
                 if conditionN )
  291. Again, for a list comprehension only the outside brackets are different (square
  292. brackets instead of parentheses).
  293. The elements of the generated output will be the successive values of
  294. ``expression``. The ``if`` clauses are all optional; if present, ``expression``
  295. is only evaluated and added to the result when ``condition`` is true.
  296. Generator expressions always have to be written inside parentheses, but the
  297. parentheses signalling a function call also count. If you want to create an
  298. iterator that will be immediately passed to a function you can write::
  299. obj_total = sum(obj.count for obj in list_all_objects())
  300. The ``for...in`` clauses contain the sequences to be iterated over. The
  301. sequences do not have to be the same length, because they are iterated over from
  302. left to right, **not** in parallel. For each element in ``sequence1``,
  303. ``sequence2`` is looped over from the beginning. ``sequence3`` is then looped
  304. over for each resulting pair of elements from ``sequence1`` and ``sequence2``.
  305. To put it another way, a list comprehension or generator expression is
  306. equivalent to the following Python code::
    for expr1 in sequence1:
        if not (condition1):
            continue   # Skip this element

        for expr2 in sequence2:
            if not (condition2):
                continue   # Skip this element

            ...
            for exprN in sequenceN:
                if not (conditionN):
                    continue   # Skip this element

                # Output the value of
                # the expression.
  319. This means that when there are multiple ``for...in`` clauses but no ``if``
  320. clauses, the length of the resulting output will be equal to the product of the
  321. lengths of all the sequences. If you have two lists of length 3, the output
  322. list is 9 elements long:
  323. .. doctest::
   :options: +NORMALIZE_WHITESPACE

   >>> seq1 = 'abc'
   >>> seq2 = (1,2,3)
   >>> [(x,y) for x in seq1 for y in seq2]
   [('a', 1), ('a', 2), ('a', 3),
    ('b', 1), ('b', 2), ('b', 3),
    ('c', 1), ('c', 2), ('c', 3)]
  331. To avoid introducing an ambiguity into Python's grammar, if ``expression`` is
  332. creating a tuple, it must be surrounded with parentheses. The first list
  333. comprehension below is a syntax error, while the second one is correct::
  334. # Syntax error
  335. [ x,y for x in seq1 for y in seq2]
  336. # Correct
  337. [ (x,y) for x in seq1 for y in seq2]
  338. Generators
  339. ==========
  340. Generators are a special class of functions that simplify the task of writing
  341. iterators. Regular functions compute a value and return it, but generators
  342. return an iterator that returns a stream of values.
  343. You're doubtless familiar with how regular function calls work in Python or C.
  344. When you call a function, it gets a private namespace where its local variables
  345. are created. When the function reaches a ``return`` statement, the local
  346. variables are destroyed and the value is returned to the caller. A later call
  347. to the same function creates a new private namespace and a fresh set of local
  348. variables. But, what if the local variables weren't thrown away on exiting a
  349. function? What if you could later resume the function where it left off? This
  350. is what generators provide; they can be thought of as resumable functions.
  351. Here's the simplest example of a generator function:
  352. .. testcode::
    def generate_ints(N):
        for i in range(N):
            yield i
  356. Any function containing a ``yield`` keyword is a generator function; this is
  357. detected by Python's :term:`bytecode` compiler which compiles the function
  358. specially as a result.
  359. When you call a generator function, it doesn't return a single value; instead it
  360. returns a generator object that supports the iterator protocol. On executing
  361. the ``yield`` expression, the generator outputs the value of ``i``, similar to a
  362. ``return`` statement. The big difference between ``yield`` and a ``return``
  363. statement is that on reaching a ``yield`` the generator's state of execution is
  364. suspended and local variables are preserved. On the next call to the
  365. generator's ``.next()`` method, the function will resume executing.
  366. Here's a sample usage of the ``generate_ints()`` generator:
  367. >>> gen = generate_ints(3)
  368. >>> gen
  369. <generator object generate_ints at ...>
  370. >>> gen.next()
  371. 0
  372. >>> gen.next()
  373. 1
  374. >>> gen.next()
  375. 2
  376. >>> gen.next()
  377. Traceback (most recent call last):
  378. File "stdin", line 1, in ?
  379. File "stdin", line 2, in generate_ints
  380. StopIteration
  381. You could equally write ``for i in generate_ints(5)``, or ``a,b,c =
  382. generate_ints(3)``.
  383. Inside a generator function, the ``return`` statement can only be used without a
  384. value, and signals the end of the procession of values; after executing a
  385. ``return`` the generator cannot return any further values. ``return`` with a
  386. value, such as ``return 5``, is a syntax error inside a generator function. The
  387. end of the generator's results can also be indicated by raising
  388. ``StopIteration`` manually, or by just letting the flow of execution fall off
  389. the bottom of the function.
  390. You could achieve the effect of generators manually by writing your own class
  391. and storing all the local variables of the generator as instance variables. For
  392. example, returning a list of integers could be done by setting ``self.count`` to
  393. 0, and having the ``next()`` method increment ``self.count`` and return it.
  394. However, for a moderately complicated generator, writing a corresponding class
  395. can be much messier.
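As a rough sketch (not code from the standard library), a hand-written class equivalent to the three-line ``generate_ints()`` generator shown above might look like this::

    class GenerateInts:
        "Hand-written iterator equivalent to generate_ints(N)."

        def __init__(self, N):
            self.count = 0
            self.N = N

        def __iter__(self):
            return self

        def next(self):
            if self.count >= self.N:
                raise StopIteration
            value = self.count
            self.count += 1
            return value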
  396. The test suite included with Python's library, ``test_generators.py``, contains
  397. a number of more interesting examples. Here's one generator that implements an
  398. in-order traversal of a tree using generators recursively. ::
    # A recursive generator that generates Tree leaves in in-order.
    def inorder(t):
        if t:
            for x in inorder(t.left):
                yield x

            yield t.label

            for x in inorder(t.right):
                yield x
  407. Two other examples in ``test_generators.py`` produce solutions for the N-Queens
  408. problem (placing N queens on an NxN chess board so that no queen threatens
  409. another) and the Knight's Tour (finding a route that takes a knight to every
  410. square of an NxN chessboard without visiting any square twice).
  411. Passing values into a generator
  412. -------------------------------
  413. In Python 2.4 and earlier, generators only produced output. Once a generator's
  414. code was invoked to create an iterator, there was no way to pass any new
  415. information into the function when its execution is resumed. You could hack
  416. together this ability by making the generator look at a global variable or by
  417. passing in some mutable object that callers then modify, but these approaches
  418. are messy.
  419. In Python 2.5 there's a simple way to pass values into a generator.
  420. :keyword:`yield` became an expression, returning a value that can be assigned to
  421. a variable or otherwise operated on::
  422. val = (yield i)
  423. I recommend that you **always** put parentheses around a ``yield`` expression
  424. when you're doing something with the returned value, as in the above example.
  425. The parentheses aren't always necessary, but it's easier to always add them
  426. instead of having to remember when they're needed.
  427. (PEP 342 explains the exact rules, which are that a ``yield``-expression must
  428. always be parenthesized except when it occurs at the top-level expression on the
  429. right-hand side of an assignment. This means you can write ``val = yield i``
  430. but have to use parentheses when there's an operation, as in ``val = (yield i)
  431. + 12``.)
  432. Values are sent into a generator by calling its ``send(value)`` method. This
  433. method resumes the generator's code and the ``yield`` expression returns the
  434. specified value. If the regular ``next()`` method is called, the ``yield``
  435. returns ``None``.
  436. Here's a simple counter that increments by 1 and allows changing the value of
  437. the internal counter.
  438. .. testcode::
    def counter (maximum):
        i = 0
        while i < maximum:
            val = (yield i)
            # If value provided, change counter
            if val is not None:
                i = val
            else:
                i += 1
  448. And here's an example of changing the counter:
  449. >>> it = counter(10)
  450. >>> print it.next()
  451. 0
  452. >>> print it.next()
  453. 1
  454. >>> print it.send(8)
  455. 8
  456. >>> print it.next()
  457. 9
  458. >>> print it.next()
  459. Traceback (most recent call last):
  460. File "t.py", line 15, in ?
  461. print it.next()
  462. StopIteration
  463. Because ``yield`` will often be returning ``None``, you should always check for
  464. this case. Don't just use its value in expressions unless you're sure that the
``send()`` method will be the only method used to resume your generator function.
  466. In addition to ``send()``, there are two other new methods on generators:
  467. * ``throw(type, value=None, traceback=None)`` is used to raise an exception
  468. inside the generator; the exception is raised by the ``yield`` expression
  469. where the generator's execution is paused.
  470. * ``close()`` raises a :exc:`GeneratorExit` exception inside the generator to
  471. terminate the iteration. On receiving this exception, the generator's code
  472. must either raise :exc:`GeneratorExit` or :exc:`StopIteration`; catching the
  473. exception and doing anything else is illegal and will trigger a
  474. :exc:`RuntimeError`. ``close()`` will also be called by Python's garbage
  475. collector when the generator is garbage-collected.
  476. If you need to run cleanup code when a :exc:`GeneratorExit` occurs, I suggest
  477. using a ``try: ... finally:`` suite instead of catching :exc:`GeneratorExit`.
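For example, a generator that reads from a file can close it in a ``finally`` clause; the clause runs whether the generator is exhausted normally, closed with ``close()``, or garbage-collected (a sketch; ``read_stripped_lines()`` is an invented helper, not a library function)::

    def read_stripped_lines(path):
        f = open(path)
        try:
            for line in f:
                yield line.rstrip('\n')
        finally:
            # Runs on normal exhaustion and when close() raises
            # GeneratorExit at the yield expression.
            f.close()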
  478. The cumulative effect of these changes is to turn generators from one-way
  479. producers of information into both producers and consumers.
  480. Generators also become **coroutines**, a more generalized form of subroutines.
  481. Subroutines are entered at one point and exited at another point (the top of the
  482. function, and a ``return`` statement), but coroutines can be entered, exited,
  483. and resumed at many different points (the ``yield`` statements).
  484. Built-in functions
  485. ==================
  486. Let's look in more detail at built-in functions often used with iterators.
  487. Two of Python's built-in functions, :func:`map` and :func:`filter`, are somewhat
  488. obsolete; they duplicate the features of list comprehensions but return actual
  489. lists instead of iterators.
  490. ``map(f, iterA, iterB, ...)`` returns a list containing ``f(iterA[0], iterB[0]),
  491. f(iterA[1], iterB[1]), f(iterA[2], iterB[2]), ...``.
  492. >>> def upper(s):
...     return s.upper()
  494. >>> map(upper, ['sentence', 'fragment'])
  495. ['SENTENCE', 'FRAGMENT']
  496. >>> [upper(s) for s in ['sentence', 'fragment']]
  497. ['SENTENCE', 'FRAGMENT']
  498. As shown above, you can achieve the same effect with a list comprehension. The
  499. :func:`itertools.imap` function does the same thing but can handle infinite
  500. iterators; it'll be discussed later, in the section on the :mod:`itertools` module.
  501. ``filter(predicate, iter)`` returns a list that contains all the sequence
  502. elements that meet a certain condition, and is similarly duplicated by list
  503. comprehensions. A **predicate** is a function that returns the truth value of
  504. some condition; for use with :func:`filter`, the predicate must take a single
  505. value.
  506. >>> def is_even(x):
...     return (x % 2) == 0
  508. >>> filter(is_even, range(10))
  509. [0, 2, 4, 6, 8]
  510. This can also be written as a list comprehension:
  511. >>> [x for x in range(10) if is_even(x)]
  512. [0, 2, 4, 6, 8]
  513. :func:`filter` also has a counterpart in the :mod:`itertools` module,
  514. :func:`itertools.ifilter`, that returns an iterator and can therefore handle
  515. infinite sequences just as :func:`itertools.imap` can.
  516. ``reduce(func, iter, [initial_value])`` doesn't have a counterpart in the
  517. :mod:`itertools` module because it cumulatively performs an operation on all the
  518. iterable's elements and therefore can't be applied to infinite iterables.
  519. ``func`` must be a function that takes two elements and returns a single value.
  520. :func:`reduce` takes the first two elements A and B returned by the iterator and
  521. calculates ``func(A, B)``. It then requests the third element, C, calculates
  522. ``func(func(A, B), C)``, combines this result with the fourth element returned,
  523. and continues until the iterable is exhausted. If the iterable returns no
  524. values at all, a :exc:`TypeError` exception is raised. If the initial value is
  525. supplied, it's used as a starting point and ``func(initial_value, A)`` is the
  526. first calculation.
  527. >>> import operator
  528. >>> reduce(operator.concat, ['A', 'BB', 'C'])
  529. 'ABBC'
  530. >>> reduce(operator.concat, [])
  531. Traceback (most recent call last):
  532. ...
  533. TypeError: reduce() of empty sequence with no initial value
  534. >>> reduce(operator.mul, [1,2,3], 1)
  535. 6
  536. >>> reduce(operator.mul, [], 1)
  537. 1
  538. If you use :func:`operator.add` with :func:`reduce`, you'll add up all the
  539. elements of the iterable. This case is so common that there's a special
  540. built-in called :func:`sum` to compute it:
  541. >>> reduce(operator.add, [1,2,3,4], 0)
  542. 10
  543. >>> sum([1,2,3,4])
  544. 10
  545. >>> sum([])
  546. 0
  547. For many uses of :func:`reduce`, though, it can be clearer to just write the
  548. obvious :keyword:`for` loop::
    # Instead of:
    product = reduce(operator.mul, [1,2,3], 1)

    # You can write:
    product = 1
    for i in [1,2,3]:
        product *= i
  555. ``enumerate(iter)`` counts off the elements in the iterable, returning 2-tuples
  556. containing the count and each element.
  557. >>> for item in enumerate(['subject', 'verb', 'object']):
...     print item
  559. (0, 'subject')
  560. (1, 'verb')
  561. (2, 'object')
  562. :func:`enumerate` is often used when looping through a list and recording the
  563. indexes at which certain conditions are met::
    f = open('data.txt', 'r')
    for i, line in enumerate(f):
        if line.strip() == '':
            print 'Blank line at line #%i' % i
  568. ``sorted(iterable, [cmp=None], [key=None], [reverse=False])`` collects all the
  569. elements of the iterable into a list, sorts the list, and returns the sorted
  570. result. The ``cmp``, ``key``, and ``reverse`` arguments are passed through to
  571. the constructed list's ``.sort()`` method. ::
  572. >>> import random
  573. >>> # Generate 8 random numbers between [0, 10000)
  574. >>> rand_list = random.sample(range(10000), 8)
  575. >>> rand_list
  576. [769, 7953, 9828, 6431, 8442, 9878, 6213, 2207]
  577. >>> sorted(rand_list)
  578. [769, 2207, 6213, 6431, 7953, 8442, 9828, 9878]
  579. >>> sorted(rand_list, reverse=True)
  580. [9878, 9828, 8442, 7953, 6431, 6213, 2207, 769]
  581. (For a more detailed discussion of sorting, see the Sorting mini-HOWTO in the
  582. Python wiki at http://wiki.python.org/moin/HowTo/Sorting.)
  583. The ``any(iter)`` and ``all(iter)`` built-ins look at the truth values of an
  584. iterable's contents. :func:`any` returns True if any element in the iterable is
  585. a true value, and :func:`all` returns True if all of the elements are true
  586. values:
  587. >>> any([0,1,0])
  588. True
  589. >>> any([0,0,0])
  590. False
  591. >>> any([1,1,1])
  592. True
  593. >>> all([0,1,0])
  594. False
  595. >>> all([0,0,0])
  596. False
  597. >>> all([1,1,1])
  598. True
  599. Small functions and the lambda expression
  600. =========================================
  601. When writing functional-style programs, you'll often need little functions that
  602. act as predicates or that combine elements in some way.
  603. If there's a Python built-in or a module function that's suitable, you don't
  604. need to define a new function at all::
  605. stripped_lines = [line.strip() for line in lines]
  606. existing_files = filter(os.path.exists, file_list)
  607. If the function you need doesn't exist, you need to write it. One way to write
  608. small functions is to use the ``lambda`` statement. ``lambda`` takes a number
  609. of parameters and an expression combining these parameters, and creates a small
  610. function that returns the value of the expression::
  611. lowercase = lambda x: x.lower()
  612. print_assign = lambda name, value: name + '=' + str(value)
  613. adder = lambda x, y: x+y
  614. An alternative is to just use the ``def`` statement and define a function in the
  615. usual way::
    def lowercase(x):
        return x.lower()

    def print_assign(name, value):
        return name + '=' + str(value)

    def adder(x,y):
        return x + y
  622. Which alternative is preferable? That's a style question; my usual course is to
  623. avoid using ``lambda``.
  624. One reason for my preference is that ``lambda`` is quite limited in the
  625. functions it can define. The result has to be computable as a single
  626. expression, which means you can't have multiway ``if... elif... else``
  627. comparisons or ``try... except`` statements. If you try to do too much in a
  628. ``lambda`` statement, you'll end up with an overly complicated expression that's
  629. hard to read. Quick, what's the following code doing?
  630. ::
  631. total = reduce(lambda a, b: (0, a[1] + b[1]), items)[1]
  632. You can figure it out, but it takes time to disentangle the expression to figure
out what's going on. Using a short nested ``def`` statement makes things a
  634. little bit better::
    def combine (a, b):
        return 0, a[1] + b[1]

    total = reduce(combine, items)[1]
  638. But it would be best of all if I had simply used a ``for`` loop::
    total = 0
    for a, b in items:
        total += b
  642. Or the :func:`sum` built-in and a generator expression::
  643. total = sum(b for a,b in items)
  644. Many uses of :func:`reduce` are clearer when written as ``for`` loops.
  645. Fredrik Lundh once suggested the following set of rules for refactoring uses of
  646. ``lambda``:
  647. 1) Write a lambda function.
  648. 2) Write a comment explaining what the heck that lambda does.
  649. 3) Study the comment for a while, and think of a name that captures the essence
  650. of the comment.
  651. 4) Convert the lambda to a def statement, using that name.
  652. 5) Remove the comment.
  653. I really like these rules, but you're free to disagree
  654. about whether this lambda-free style is better.
  655. The itertools module
  656. ====================
  657. The :mod:`itertools` module contains a number of commonly-used iterators as well
  658. as functions for combining several iterators. This section will introduce the
  659. module's contents by showing small examples.
  660. The module's functions fall into a few broad classes:
  661. * Functions that create a new iterator based on an existing iterator.
  662. * Functions for treating an iterator's elements as function arguments.
  663. * Functions for selecting portions of an iterator's output.
  664. * A function for grouping an iterator's output.
  665. Creating new iterators
  666. ----------------------
  667. ``itertools.count(n)`` returns an infinite stream of integers, increasing by 1
  668. each time. You can optionally supply the starting number, which defaults to 0::
  669. itertools.count() =>
  670. 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...
  671. itertools.count(10) =>
  672. 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...
  673. ``itertools.cycle(iter)`` saves a copy of the contents of a provided iterable
  674. and returns a new iterator that returns its elements from first to last. The
  675. new iterator will repeat these elements infinitely. ::
  676. itertools.cycle([1,2,3,4,5]) =>
  677. 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, ...
  678. ``itertools.repeat(elem, [n])`` returns the provided element ``n`` times, or
  679. returns the element endlessly if ``n`` is not provided. ::
  680. itertools.repeat('abc') =>
  681. abc, abc, abc, abc, abc, abc, abc, abc, abc, abc, ...
  682. itertools.repeat('abc', 5) =>
  683. abc, abc, abc, abc, abc
  684. ``itertools.chain(iterA, iterB, ...)`` takes an arbitrary number of iterables as
  685. input, and returns all the elements of the first iterator, then all the elements
  686. of the second, and so on, until all of the iterables have been exhausted. ::
  687. itertools.chain(['a', 'b', 'c'], (1, 2, 3)) =>
  688. a, b, c, 1, 2, 3
  689. ``itertools.izip(iterA, iterB, ...)`` takes one element from each iterable and
  690. returns them in a tuple::
  691. itertools.izip(['a', 'b', 'c'], (1, 2, 3)) =>
  692. ('a', 1), ('b', 2), ('c', 3)
  693. It's similar to the built-in :func:`zip` function, but doesn't construct an
  694. in-memory list and exhaust all the input iterators before returning; instead
  695. tuples are constructed and returned only if they're requested. (The technical
  696. term for this behaviour is `lazy evaluation
  697. <http://en.wikipedia.org/wiki/Lazy_evaluation>`__.)
  698. This iterator is intended to be used with iterables that are all of the same
  699. length. If the iterables are of different lengths, the resulting stream will be
  700. the same length as the shortest iterable. ::
  701. itertools.izip(['a', 'b'], (1, 2, 3)) =>
  702. ('a', 1), ('b', 2)
  703. You should avoid doing this, though, because an element may be taken from the
  704. longer iterators and discarded. This means you can't go on to use the iterators
  705. further because you risk skipping a discarded element.
  706. ``itertools.islice(iter, [start], stop, [step])`` returns a stream that's a
  707. slice of the iterator. With a single ``stop`` argument, it will return the
  708. first ``stop`` elements. If you supply a starting index, you'll get
  709. ``stop-start`` elements, and if you supply a value for ``step``, elements will
  710. be skipped accordingly. Unlike Python's string and list slicing, you can't use
  711. negative values for ``start``, ``stop``, or ``step``. ::
  712. itertools.islice(range(10), 8) =>
  713. 0, 1, 2, 3, 4, 5, 6, 7
  714. itertools.islice(range(10), 2, 8) =>
  715. 2, 3, 4, 5, 6, 7
  716. itertools.islice(range(10), 2, 8, 2) =>
  717. 2, 4, 6
  718. ``itertools.tee(iter, [n])`` replicates an iterator; it returns ``n``
  719. independent iterators that will all return the contents of the source iterator.
  720. If you don't supply a value for ``n``, the default is 2. Replicating iterators
  721. requires saving some of the contents of the source iterator, so this can consume
  722. significant memory if the iterator is large and one of the new iterators is
  723. consumed more than the others. ::
  724. itertools.tee( itertools.count() ) =>
  725. iterA, iterB
  726. where iterA ->
  727. 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...
  728. and iterB ->
  729. 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...
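Since several of these iterators produce infinite streams, ``islice()`` is handy for looking at just the first few values:

>>> import itertools
>>> list(itertools.islice(itertools.count(10), 5))
[10, 11, 12, 13, 14]
>>> list(itertools.islice(itertools.cycle(['a', 'b']), 5))
['a', 'b', 'a', 'b', 'a']
>>> list(itertools.chain('ab', [1, 2]))
['a', 'b', 1, 2]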
  730. Calling functions on elements
  731. -----------------------------
  732. Two functions are used for calling other functions on the contents of an
  733. iterable.
  734. ``itertools.imap(f, iterA, iterB, ...)`` returns a stream containing
  735. ``f(iterA[0], iterB[0]), f(iterA[1], iterB[1]), f(iterA[2], iterB[2]), ...``::
  736. itertools.imap(operator.add, [5, 6, 5], [1, 2, 3]) =>
  737. 6, 8, 8
  738. The ``operator`` module contains a set of functions corresponding to Python's
  739. operators. Some examples are ``operator.add(a, b)`` (adds two values),
  740. ``operator.ne(a, b)`` (same as ``a!=b``), and ``operator.attrgetter('id')``
  741. (returns a callable that fetches the ``"id"`` attribute).
  742. ``itertools.starmap(func, iter)`` assumes that the iterable will return a stream
  743. of tuples, and calls ``f()`` using these tuples as the arguments::
    itertools.starmap(os.path.join,
                      [('/usr', 'bin', 'java'), ('/bin', 'python'),
                       ('/usr', 'bin', 'perl'), ('/usr', 'bin', 'ruby')])
    =>
      /usr/bin/java, /bin/python, /usr/bin/perl, /usr/bin/ruby
  749. Selecting elements
  750. ------------------
  751. Another group of functions chooses a subset of an iterator's elements based on a
  752. predicate.
  753. ``itertools.ifilter(predicate, iter)`` returns all the elements for which the
  754. predicate returns true::
    def is_even(x):
        return (x % 2) == 0

    itertools.ifilter(is_even, itertools.count()) =>
      0, 2, 4, 6, 8, 10, 12, 14, ...
  759. ``itertools.ifilterfalse(predicate, iter)`` is the opposite, returning all
  760. elements for which the predicate returns false::
  761. itertools.ifilterfalse(is_even, itertools.count()) =>
  762. 1, 3, 5, 7, 9, 11, 13, 15, ...
  763. ``itertools.takewhile(predicate, iter)`` returns elements for as long as the
  764. predicate returns true. Once the predicate returns false, the iterator will
  765. signal the end of its results.
  766. ::
    def less_than_10(x):
        return (x < 10)

    itertools.takewhile(less_than_10, itertools.count()) =>
      0, 1, 2, 3, 4, 5, 6, 7, 8, 9

    itertools.takewhile(is_even, itertools.count()) =>
      0
  773. ``itertools.dropwhile(predicate, iter)`` discards elements while the predicate
  774. returns true, and then returns the rest of the iterable's results.
  775. ::
  776. itertools.dropwhile(less_than_10, itertools.count()) =>
  777. 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...
  778. itertools.dropwhile(is_even, itertools.count()) =>
  779. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ...
  780. Grouping elements
  781. -----------------
  782. The last function I'll discuss, ``itertools.groupby(iter, key_func=None)``, is
  783. the most complicated. ``key_func(elem)`` is a function that can compute a key
  784. value for each element returned by the iterable. If you don't supply a key
  785. function, the key is simply each element itself.
  786. ``groupby()`` collects all the consecutive elements from the underlying iterable
  787. that have the same key value, and returns a stream of 2-tuples containing a key
  788. value and an iterator for the elements with that key.
  789. ::
    city_list = [('Decatur', 'AL'), ('Huntsville', 'AL'), ('Selma', 'AL'),
                 ('Anchorage', 'AK'), ('Nome', 'AK'),
                 ('Flagstaff', 'AZ'), ('Phoenix', 'AZ'), ('Tucson', 'AZ'),
                 ...
                ]

    def get_state ((city, state)):
        return state

    itertools.groupby(city_list, get_state) =>
      ('AL', iterator-1),
      ('AK', iterator-2),
      ('AZ', iterator-3), ...

    where
    iterator-1 =>
      ('Decatur', 'AL'), ('Huntsville', 'AL'), ('Selma', 'AL')
    iterator-2 =>
      ('Anchorage', 'AK'), ('Nome', 'AK')
    iterator-3 =>
      ('Flagstaff', 'AZ'), ('Phoenix', 'AZ'), ('Tucson', 'AZ')
  808. ``groupby()`` assumes that the underlying iterable's contents will already be
  809. sorted based on the key. Note that the returned iterators also use the
  810. underlying iterable, so you have to consume the results of iterator-1 before
  811. requesting iterator-2 and its corresponding key.
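Here's a small concrete run, grouping words by their first letter; note that the input is already sorted by that key:

>>> import itertools
>>> words = ['apple', 'ant', 'bee', 'bear', 'cat']
>>> def first_letter(word):
...     return word[0]
>>> for letter, group in itertools.groupby(words, first_letter):
...     print letter, list(group)
a ['apple', 'ant']
b ['bee', 'bear']
c ['cat']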
  812. The functools module
  813. ====================
  814. The :mod:`functools` module in Python 2.5 contains some higher-order functions.
  815. A **higher-order function** takes one or more functions as input and returns a
  816. new function. The most useful tool in this module is the
  817. :func:`functools.partial` function.
  818. For programs written in a functional style, you'll sometimes want to construct
  819. variants of existing functions that have some of the parameters filled in.
  820. Consider a Python function ``f(a, b, c)``; you may wish to create a new function
  821. ``g(b, c)`` that's equivalent to ``f(1, b, c)``; you're filling in a value for
  822. one of ``f()``'s parameters. This is called "partial function application".
  823. The constructor for ``partial`` takes the arguments ``(function, arg1, arg2,
  824. ... kwarg1=value1, kwarg2=value2)``. The resulting object is callable, so you
  825. can just call it to invoke ``function`` with the filled-in arguments.
  826. Here's a small but realistic example::
    import functools

    def log (message, subsystem):
        "Write the contents of 'message' to the specified subsystem."
        print '%s: %s' % (subsystem, message)
        ...

    server_log = functools.partial(log, subsystem='server')
    server_log('Unable to open socket')
  834. The operator module
  835. -------------------
  836. The :mod:`operator` module was mentioned earlier. It contains a set of
  837. functions corresponding to Python's operators. These functions are often useful
  838. in functional-style code because they save you from writing trivial functions
  839. that perform a single operation.
  840. Some of the functions in this module are:
  841. * Math operations: ``add()``, ``sub()``, ``mul()``, ``div()``, ``floordiv()``,
  842. ``abs()``, ...
  843. * Logical operations: ``not_()``, ``truth()``.
  844. * Bitwise operations: ``and_()``, ``or_()``, ``invert()``.
  845. * Comparisons: ``eq()``, ``ne()``, ``lt()``, ``le()``, ``gt()``, and ``ge()``.
  846. * Object identity: ``is_()``, ``is_not()``.
  847. Consult the operator module's documentation for a complete list.
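One function not in the list above, ``operator.itemgetter()``, is especially handy as a ``key`` argument; it returns a callable that fetches the given index from its argument:

>>> import operator
>>> pairs = [('one', 1), ('three', 3), ('two', 2)]
>>> sorted(pairs, key=operator.itemgetter(1))
[('one', 1), ('two', 2), ('three', 3)]
>>> map(operator.itemgetter(0), pairs)
['one', 'three', 'two']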
  848. The functional module
  849. ---------------------
  850. Collin Winter's `functional module <http://oakwinter.com/code/functional/>`__
  851. provides a number of more advanced tools for functional programming. It also
  852. reimplements several Python built-ins, trying to make them more intuitive to
  853. those used to functional programming in other languages.
  854. This section contains an introduction to some of the most important functions in
  855. ``functional``; full documentation can be found at `the project's website
  856. <http://oakwinter.com/code/functional/documentation/>`__.
  857. ``compose(outer, inner, unpack=False)``
  858. The ``compose()`` function implements function composition. In other words, it
  859. returns a wrapper around the ``outer`` and ``inner`` callables, such that the
  860. return value from ``inner`` is fed directly to ``outer``. That is, ::
    >>> def add(a, b):
    ...     return a + b
    ...
    >>> def double(a):
    ...     return 2 * a
    ...
    >>> compose(double, add)(5, 6)
    22
  869. is equivalent to ::
  870. >>> double(add(5, 6))
  871. 22
  872. The ``unpack`` keyword is provided to work around the fact that Python functions
  873. are not always `fully curried <http://en.wikipedia.org/wiki/Currying>`__. By
  874. default, it is expected that the ``inner`` function will return a single object
  875. and that the ``outer`` function will take a single argument. Setting the
  876. ``unpack`` argument causes ``compose`` to expect a tuple from ``inner`` which
  877. will be expanded before being passed to ``outer``. Put simply, ::
  878. compose(f, g)(5, 6)
  879. is equivalent to::
  880. f(g(5, 6))
  881. while ::
  882. compose(f, g, unpack=True)(5, 6)
  883. is equivalent to::
  884. f(*g(5, 6))
  885. Even though ``compose()`` only accepts two functions, it's trivial to build up a
  886. version that will compose any number of functions. We'll use ``reduce()``,
  887. ``compose()`` and ``partial()`` (the last of which is provided by both
  888. ``functional`` and ``functools``). ::
  889. from functional import compose, partial
  890. multi_compose = partial(reduce, compose)
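Given the behaviour of ``compose()`` described above, the resulting ``multi_compose`` chains a whole list of functions together, applying the last function in the list first. A hypothetical use, reusing the ``add()`` and ``double()`` functions from the earlier example (and assuming the third-party ``functional`` module is installed)::

    # Hypothetical usage; relies only on the compose() behaviour shown above.
    quadruple_sum = multi_compose([double, double, add])
    quadruple_sum(5, 6)    # same as double(double(add(5, 6))), i.e. 44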
  891. We can also use ``map()``, ``compose()`` and ``partial()`` to craft a version of
  892. ``"".join(...)`` that converts its arguments to string::
  893. from functional import compose, partial
  894. join = compose("".join, partial(map, str))
  895. ``flip(func)``
  896. ``flip()`` wraps the callable in ``func`` and causes it to receive its
  897. non-keyword arguments in reverse order. ::
  898. >>> def triple(a, b, c):
...     return (a, b, c)
  900. ...
  901. >>> triple(5, 6, 7)
  902. (5, 6, 7)
  903. >>>
  904. >>> flipped_triple = flip(triple)
  905. >>> flipped_triple(5, 6, 7)
  906. (7, 6, 5)
  907. ``foldl(func, start, iterable)``
  908. ``foldl()`` takes a binary function, a starting value (usually some kind of
  909. 'zero'), and an iterable. The function is applied to the starting value and the
  910. first element of the list, then the result of that and the second element of the
  911. list, then the result of that and the third element of the list, and so on.
  912. This means that a call such as::
  913. foldl(f, 0, [1, 2, 3])
  914. is equivalent to::
  915. f(f(f(0, 1), 2), 3)
  916. ``foldl()`` is roughly equivalent to the following recursive function::
    def foldl(func, start, seq):
        if len(seq) == 0:
            return start

        return foldl(func, func(start, seq[0]), seq[1:])
  921. Speaking of equivalence, the above ``foldl`` call can be expressed in terms of
  922. the built-in ``reduce`` like so::
  923. reduce(f, [1, 2, 3], 0)
  924. We can use ``foldl()``, ``operator.concat()`` and ``partial()`` to write a
  925. cleaner, more aesthetically-pleasing version of Python's ``"".join(...)``
  926. idiom::
    from functional import foldl, partial
    from operator import concat

    join = partial(foldl, concat, "")
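A quick check of the result (again assuming the third-party ``functional`` module is installed):

>>> join(['a', 'bb', 'c'])
'abbc'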
  929. Revision History and Acknowledgements
  930. =====================================
  931. The author would like to thank the following people for offering suggestions,
  932. corrections and assistance with various drafts of this article: Ian Bicking,
  933. Nick Coghlan, Nick Efford, Raymond Hettinger, Jim Jewett, Mike Krell, Leandro
  934. Lameiro, Jussi Salmela, Collin Winter, Blake Winton.
  935. Version 0.1: posted June 30 2006.
  936. Version 0.11: posted July 1 2006. Typo fixes.
  937. Version 0.2: posted July 10 2006. Merged genexp and listcomp sections into one.
  938. Typo fixes.
  939. Version 0.21: Added more references suggested on the tutor mailing list.
  940. Version 0.30: Adds a section on the ``functional`` module written by Collin
  941. Winter; adds short section on the operator module; a few other edits.
  942. References
  943. ==========
  944. General
  945. -------
  946. **Structure and Interpretation of Computer Programs**, by Harold Abelson and
  947. Gerald Jay Sussman with Julie Sussman. Full text at
  948. http://mitpress.mit.edu/sicp/. In this classic textbook of computer science,
  949. chapters 2 and 3 discuss the use of sequences and streams to organize the data
  950. flow inside a program. The book uses Scheme for its examples, but many of the
  951. design approaches described in these chapters are applicable to functional-style
  952. Python code.
  953. http://www.defmacro.org/ramblings/fp.html: A general introduction to functional
  954. programming that uses Java examples and has a lengthy historical introduction.
  955. http://en.wikipedia.org/wiki/Functional_programming: General Wikipedia entry
  956. describing functional programming.
  957. http://en.wikipedia.org/wiki/Coroutine: Entry for coroutines.
  958. http://en.wikipedia.org/wiki/Currying: Entry for the concept of currying.
  959. Python-specific
  960. ---------------
  961. http://gnosis.cx/TPiP/: The first chapter of David Mertz's book
  962. :title-reference:`Text Processing in Python` discusses functional programming
  963. for text processing, in the section titled "Utilizing Higher-Order Functions in
  964. Text Processing".
  965. Mertz also wrote a 3-part series of articles on functional programming
  966. for IBM's DeveloperWorks site; see
  967. `part 1 <http://www-128.ibm.com/developerworks/library/l-prog.html>`__,
  968. `part 2 <http://www-128.ibm.com/developerworks/library/l-prog2.html>`__, and
`part 3 <http://www-128.ibm.com/developerworks/linux/library/l-prog3.html>`__.
  970. Python documentation
  971. --------------------
  972. Documentation for the :mod:`itertools` module.
  973. Documentation for the :mod:`operator` module.
  974. :pep:`289`: "Generator Expressions"
  975. :pep:`342`: "Coroutines via Enhanced Generators" describes the new generator
  976. features in Python 2.5.
  977. .. comment
  978. Topics to place
  979. -----------------------------
  980. XXX os.walk()
  981. XXX Need a large example.
  982. But will an example add much? I'll post a first draft and see
  983. what the comments say.
  984. .. comment
  985. Original outline:
  986. Introduction
  987. Idea of FP
  988. Programs built out of functions
  989. Functions are strictly input-output, no internal state
  990. Opposed to OO programming, where objects have state
  991. Why FP?
  992. Formal provability
  993. Assignment is difficult to reason about
  994. Not very relevant to Python
  995. Modularity
  996. Small functions that do one thing
  997. Debuggability:
  998. Easy to test due to lack of state
  999. Easy to verify output from intermediate steps
  1000. Composability
  1001. You assemble a toolbox of functions that can be mixed
  1002. Tackling a problem
  1003. Need a significant example
  1004. Iterators
  1005. Generators
  1006. The itertools module
  1007. List comprehensions
  1008. Small functions and the lambda statement
  1009. Built-in functions
  1010. map
  1011. filter
  1012. reduce
  1013. .. comment
  1014. Handy little function for printing part of an iterator -- used
  1015. while writing this document.
    import sys
    import itertools

    def print_iter(it):
        # Print the first ten elements of an iterator, comma-separated.
        elems = list(itertools.islice(it, 10))
        if not elems:
            return
        for elem in elems[:-1]:
            sys.stdout.write(str(elem))
            sys.stdout.write(', ')
        print elems[-1]