
/Lib/test/README

+++++++++++++++++++++++++++++++
Writing Python Regression Tests
+++++++++++++++++++++++++++++++

:Author: Skip Montanaro
:Contact: skip@pobox.com

Introduction
============

If you add a new module to Python or modify the functionality of an existing
module, you should write one or more test cases to exercise that new
functionality.  There are different ways to do this within the regression
testing facility provided with Python; any particular test should use only
one of these options.  Each option requires writing a test module using the
conventions of the selected option:

    - unittest_ based tests
    - doctest_ based tests
    - "traditional" Python test modules

Regardless of the mechanics of the testing approach you choose,
you will be writing unit tests (isolated tests of functions and objects
defined by the module) using white box techniques.  Unlike black box
testing, where you only have the external interfaces to guide your test case
writing, in white box testing you can see the code being tested and tailor
your test cases to exercise it more completely.  In particular, you will be
able to refer to the C and Python code in the CVS repository when writing
your regression test cases.

.. _unittest: http://www.python.org/doc/current/lib/module-unittest.html
.. _doctest: http://www.python.org/doc/current/lib/module-doctest.html
unittest-based tests
--------------------
The unittest_ framework is based on the ideas of unit testing as espoused
by Kent Beck and the `Extreme Programming`_ (XP) movement.  The specific
interface provided by the framework is closely based on the JUnit_
Java implementation of Beck's original Smalltalk test framework.  Please
see the documentation of the unittest_ module for detailed information on
the interface and general guidelines on writing unittest-based tests.

The test_support helper module provides a function for use by
unittest-based tests in the Python regression testing framework,
``run_unittest()``.  This is the primary way of running tests in the
standard library.  You can pass it any number of the following:

- classes derived from or instances of ``unittest.TestCase`` or
  ``unittest.TestSuite``.  These will be handed off to unittest for
  conversion into a proper ``TestSuite`` instance.

- a string; this must be a key in ``sys.modules``.  The module associated
  with that string will be scanned by
  ``unittest.TestLoader.loadTestsFromModule``.  This is usually seen as
  ``test_support.run_unittest(__name__)`` in a test module's ``test_main()``
  function.  This has the advantage of picking up new tests automatically,
  without you having to add each new test case manually.

All test methods in the Python regression framework have names that
start with "``test_``" and use lower-case names with words separated by
underscores.

Test methods should *not* have docstrings!  The unittest module prints
the docstring if there is one, but otherwise prints the function name
and the full class name.  When there's a problem with a test, the
latter information makes it easier to find the source for the test
than the docstring would.

All unittest-based tests in the Python test suite use boilerplate that
looks like this (with minor variations)::

    import unittest
    from test import test_support

    class MyTestCase1(unittest.TestCase):

        # Define setUp and tearDown only if needed

        def setUp(self):
            unittest.TestCase.setUp(self)
            ... additional initialization...

        def tearDown(self):
            ... additional finalization...
            unittest.TestCase.tearDown(self)

        def test_feature_one(self):
            # Testing feature one
            ...unit test for feature one...

        def test_feature_two(self):
            # Testing feature two
            ...unit test for feature two...

        ...etc...

    class MyTestCase2(unittest.TestCase):
        ...same structure as MyTestCase1...

    ...etc...

    def test_main():
        test_support.run_unittest(__name__)

    if __name__ == "__main__":
        test_main()

This has the advantage that it allows the unittest module to be used
as a script to run individual tests as well as working well with the
regrtest framework.

.. _Extreme Programming: http://www.extremeprogramming.org/
.. _JUnit: http://www.junit.org/

doctest-based tests
-------------------
Tests written to use doctest_ are actually part of the docstrings for
the module being tested.  Each test is written as a display of an
interactive session, including the Python prompts, statements that would
be typed by the user, and the output of those statements (including
tracebacks, although only the exception message needs to be retained).
The module in the test package is simply a wrapper that causes doctest
to run over the tests in the module.  The test for the difflib module
provides a convenient example::

    import difflib
    from test import test_support
    test_support.run_doctest(difflib)

If the test is successful, nothing is written to stdout (so you should not
create a corresponding output/test_difflib file), but running regrtest
with -v will give a detailed report, the same as if passing -v to doctest.

A second argument can be passed to run_doctest to tell doctest to search
``sys.argv`` for -v instead of using test_support's idea of verbosity.  This
is useful for writing doctest-based tests that aren't simply running a
doctest'ed Lib module, but contain the doctests themselves.  Then at
times you may want to run such a test directly as a doctest, independent
of the regrtest framework.  The tail end of test_descrtut.py is a good
example::

    def test_main(verbose=None):
        from test import test_support, test_descrtut
        test_support.run_doctest(test_descrtut, verbose)

    if __name__ == "__main__":
        test_main(1)

If run via regrtest, ``test_main()`` is called (by regrtest) without
specifying verbose, and then test_support's idea of verbosity is used.  But
when run directly, ``test_main(1)`` is called, and then doctest's idea of
verbosity is used.

See the documentation for the doctest module for information on
writing tests using the doctest framework.

"traditional" Python test modules
---------------------------------
The mechanics of how the "traditional" test system operates are fairly
straightforward.  When a test case is run, its output is compared with the
expected output stored in .../Lib/test/output.  If the test runs to
completion and the actual and expected outputs match, the test succeeds;
if not, it fails.  If an ``ImportError`` or ``test_support.TestSkipped``
error is raised, the test is not run.

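A minimal "traditional" test module might look like the sketch below.  The
module and function names are hypothetical, a stand-in ``TestFailed`` is
defined so the sketch is self-contained, and the ``print()`` function form
is used so it also runs on modern interpreters::

```python
# Hypothetical test_spam.py -- a "traditional" test module sketch.

class TestFailed(Exception):
    """Stand-in for test.test_support.TestFailed."""

def spam(n):
    """Hypothetical function under test: doubles its argument."""
    return n * 2

# Traditional tests run to completion at import time and print their
# results; regrtest compares this printed output against the
# expected-output file Lib/test/output/test_spam.
print("spam(2) =", spam(2))
print("spam(0) =", spam(0))

# A detected failure is signalled by raising TestFailed.
if spam(3) != 6:
    raise TestFailed("spam(3) should be 6")
```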
Executing Test Cases
====================
If you are writing test cases for module spam, you need to create a file
in .../Lib/test named test_spam.py.  In addition, if the tests are expected
to write to stdout during a successful run, you also need to create an
expected-output file in .../Lib/test/output named test_spam ("..."
represents the top-level directory in the Python source tree, the directory
containing the configure script).  If needed, generate the initial version
of the test output file by executing::

    ./python Lib/test/regrtest.py -g test_spam.py

from the top-level directory.

Any time you modify test_spam.py you need to generate a new expected-output
file.  Don't forget to desk-check the generated output to make sure it's
really what you expected to find!  All in all, it's usually better not to
have an expected-output file (note that doctest- and unittest-based tests
do not).

To run a single test after modifying a module, simply run regrtest.py
without the -g flag::

    ./python Lib/test/regrtest.py test_spam.py

While debugging a regression test, you can of course execute it
independently of the regression testing framework and see what it prints::

    ./python Lib/test/test_spam.py

To run the entire test suite:

- [UNIX, and other platforms where "make" works] Make the "test" target at
  the top level::

    make test

- [WINDOWS] Run rt.bat from your PCBuild directory.  Read the comments at
  the top of rt.bat for the use of the special -d, -O and -q options
  processed by rt.bat.

- [OTHER] You can simply execute the two runs of regrtest (optimized and
  non-optimized) directly::

    ./python Lib/test/regrtest.py
    ./python -O Lib/test/regrtest.py

But note that this way picks up whatever .pyc and .pyo files happen to be
lying around.  The makefile and rt.bat approaches run the tests twice, the
first time removing all .pyc and .pyo files from the subtree rooted at Lib/.

Test cases generate output based upon values computed by the test code.
When executed, regrtest.py compares the actual output generated by executing
the test case with the expected output and reports success or failure.  It
stands to reason that if the actual and expected outputs are to match, they
must not contain any machine dependencies.  This means your test cases
should not print out absolute machine addresses (e.g. the return value of
the id() builtin function) or floating point numbers with large numbers of
significant digits (unless you understand what you are doing!).

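As a short sketch of the machine-dependence point (the class used here is
hypothetical): ``id()`` yields an address-like value that differs between
runs and machines, so a test should print only stable properties instead::

```python
class Spam(object):
    pass

obj = Spam()

# Machine-dependent: this value differs between runs and machines,
# so it can never match a stored expected-output file.
unstable = hex(id(obj))

# Machine-independent: print only stable properties, and limit the
# number of significant digits in floating point results.
print(type(obj).__name__)     # always prints: Spam
print(round(0.1 + 0.2, 6))    # prints: 0.3
```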

Test Case Writing Tips
======================
Writing good test cases is a skilled task and is too complex to discuss in
detail in this short document.  Many books have been written on the subject.
I'll show my age by suggesting that Glenford Myers' `"The Art of Software
Testing"`_, published in 1979, is still the best introduction to the subject
available.  It is short (177 pages), easy to read, and discusses the major
elements of software testing, though its publication predates the
object-oriented software revolution, so it doesn't cover that subject at
all.  Unfortunately, it is very expensive (about $100 new).  If you can
borrow it or find it used (around $20), I strongly urge you to pick up a
copy.

The most important goal when writing test cases is to break things.  A test
case that doesn't uncover a bug is much less valuable than one that does.
In designing test cases you should pay attention to the following:

    * Your test cases should exercise all the functions and objects defined
      in the module, not just the ones meant to be called by users of your
      module.  This may require you to write test code that uses the module
      in ways you don't expect (explicitly calling internal functions, for
      example - see test_atexit.py).

    * You should consider any boundary values that may tickle exceptional
      conditions (e.g. if you were writing regression tests for division,
      you might well want to generate tests with numerators and denominators
      at the limits of floating point and integer numbers on the machine
      performing the tests, as well as a denominator of zero).

    * You should exercise as many paths through the code as possible.  This
      may not always be possible, but is a goal to strive for.  In
      particular, when considering if statements (or their equivalent), you
      want to create test cases that exercise both the true and false
      branches.  For loops, you should create test cases that exercise the
      loop zero, one and multiple times.

    * You should test with obviously invalid input.  If you know that a
      function requires an integer input, try calling it with other types of
      objects to see how it responds.

    * You should test with obviously out-of-range input.  If the domain of a
      function is only defined for positive integers, try calling it with a
      negative integer.

    * If you are going to fix a bug that wasn't uncovered by an existing
      test, try to write a test case that exposes the bug (preferably before
      fixing it).

    * If you need to create a temporary file, you can use the filename in
      ``test_support.TESTFN`` to do so.  It is important to remove the file
      when done; other tests should be able to use the name without cleaning
      up after your test.

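Several of these tips (boundary values, invalid input, exercising both
branches) can be sketched with a small unittest-based test for a
hypothetical division helper; plain unittest is used so the sketch also
runs outside the regrtest framework::

```python
import unittest

def divide(numerator, denominator):
    """Hypothetical helper under test."""
    if denominator == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    return numerator / denominator

class DivideBoundaryTest(unittest.TestCase):

    def test_true_and_false_branches(self):
        # Exercise both branches of the denominator check.
        self.assertEqual(divide(4, 2), 2.0)
        self.assertRaises(ZeroDivisionError, divide, 1, 0)

    def test_floating_point_limits(self):
        # Boundary values near the limits of floating point.
        self.assertEqual(divide(1e308, 1e308), 1.0)

    def test_invalid_input(self):
        # Obviously invalid input: a string instead of a number.
        self.assertRaises(TypeError, divide, "spam", 2)
```

Dropped into Lib/test as a test_spam.py with a ``test_main()`` that calls
``test_support.run_unittest(__name__)``, cases like these would be picked
up by regrtest automatically.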
.. _"The Art of Software Testing":
        http://www.amazon.com/exec/obidos/ISBN=0471043281

Regression Test Writing Rules
=============================
Each test case is different.  There is no "standard" form for a Python
regression test case, though there are some general rules (note that
these mostly apply only to the "classic" tests; unittest_- and doctest_-
based tests should follow the conventions natural to those frameworks):

    * If your test case detects a failure, raise ``TestFailed`` (found in
      ``test.test_support``).

    * Import everything you'll need as early as possible.

    * If you'll be importing objects from a module that is at least
      partially platform-dependent, only import those objects you need for
      the current test case to avoid spurious ``ImportError`` exceptions
      that prevent the test from running to completion.

    * Print all your test case results using the ``print`` statement.  For
      non-fatal errors, print an error message (or omit a successful
      completion print) to indicate the failure, but proceed instead of
      raising ``TestFailed``.

    * Use ``assert`` sparingly, if at all.  It's usually better to just
      print what you got, and rely on regrtest's got-vs-expected comparison
      to catch deviations from what you expect.  ``assert`` statements
      aren't executed at all when regrtest is run in -O mode, and, because
      they cause the test to stop immediately, they can lead to a long &
      tedious test-fix, test-fix, test-fix, ... cycle when things are badly
      broken (and note that "badly broken" often includes running the test
      suite for the first time on new platforms or under new implementations
      of the language).

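The platform-dependent import rule can be sketched like this; the stand-in
``TestSkipped`` is defined locally so the sketch is self-contained, and
``os.getpid`` merely plays the role of a facility that might be missing on
some platform::

```python
class TestSkipped(Exception):
    """Stand-in for test.test_support.TestSkipped."""

# Import only the single object this test needs.  A broad import of
# every name at once could raise ImportError on platforms missing some
# unrelated facility and prevent the whole test from running.
try:
    from os import getpid
except ImportError:
    raise TestSkipped("getpid not available on this platform")

print("pid is positive:", getpid() > 0)
```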
Miscellaneous
=============
There is a test_support module in the test package you can import for
your test case.  Import this module using either::

    import test.test_support

or::

    from test import test_support

test_support provides the following useful objects:

    * ``TestFailed`` - raise this exception when your regression test
      detects a failure.

    * ``TestSkipped`` - raise this if the test could not be run because the
      platform doesn't offer all the required facilities (like large
      file support), even if all the required modules are available.

    * ``ResourceDenied`` - this is raised when a test requires a resource
      that is not available.  Primarily used by ``requires()``.

    * ``verbose`` - you can use this variable to control print output.  Many
      modules use it.  Search for "verbose" in the test_*.py files to see
      lots of examples.

    * ``forget(module_name)`` - attempts to cause Python to "forget" that it
      loaded a module and erases any associated PYC files.

    * ``is_resource_enabled(resource)`` - returns a boolean based on whether
      the resource is enabled or not.

    * ``requires(resource [, msg])`` - if the required resource is not
      available, the ``ResourceDenied`` exception is raised.

    * ``verify(condition, reason='test failed')`` - use this instead of::

          assert condition[, reason]

      ``verify()`` has two advantages over ``assert``:  it works even in -O
      mode, and it raises ``TestFailed`` on failure instead of
      ``AssertionError``.

    * ``have_unicode`` - true if Unicode is available, false otherwise.

    * ``is_jython`` - true if the interpreter is Jython, false otherwise.

    * ``TESTFN`` - a string that should always be used as the filename when
      you need to create a temp file.  Also use ``try``/``finally`` to
      ensure that your temp files are deleted before your test completes.
      Note that you cannot unlink an open file on all operating systems, so
      also be sure to close temp files before trying to unlink them.

    * ``sortdict(dict)`` - acts like ``repr(dict.items())``, but sorts the
      items first.  This is important when printing a dict value, because
      the order of items produced by ``dict.items()`` is not defined by the
      language.

    * ``findfile(file)`` - you can call this function to locate a file
      somewhere along sys.path or in the Lib/test tree - see
      test_linuxaudiodev.py for an example of its use.

    * ``fcmp(x,y)`` - you can call this function to compare two floating
      point numbers when you expect them to be only approximately equal
      within a fuzz factor (``test_support.FUZZ``, which defaults to 1e-6).

    * ``check_syntax_error(testcase, statement)`` - make sure that the
      statement is *not* correct Python syntax.

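The ``TESTFN`` advice above can be sketched as follows; a literal stand-in
name is used here so the sketch runs without the test package::

```python
import os

TESTFN = "@test"   # stand-in for test_support.TESTFN

f = open(TESTFN, "w")
try:
    f.write("spam\n")
    f.close()              # close before unlinking; not every OS can
                           # unlink an open file
    f = open(TESTFN)
    data = f.read()
    f.close()
finally:
    # Guarantee cleanup so other tests can reuse the name.
    if os.path.exists(TESTFN):
        os.unlink(TESTFN)

print(repr(data))          # prints: 'spam\n'
```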

Some Non-Obvious regrtest Features
==================================
    * Automagic test detection:  When you create a new test file
      test_spam.py, you do not need to modify regrtest (or anything else)
      to advertise its existence.  regrtest searches for and runs all
      modules in the test directory with names of the form test_xxx.py.

    * Miranda output:  If, when running test_spam.py, regrtest does not
      find an expected-output file test/output/test_spam, regrtest
      pretends that it did find one, containing the single line::

          test_spam

      This allows new tests that don't expect to print anything to stdout
      to skip creating expected-output files.

    * Two-stage testing:  To run test_spam.py, regrtest imports test_spam
      as a module.  Most tests run to completion as a side-effect of
      getting imported.  After importing test_spam, regrtest also executes
      ``test_spam.test_main()``, if test_spam has a ``test_main`` attribute.
      This is rarely required with the "traditional" Python tests, and
      you shouldn't create a module global with name test_main unless
      you're specifically exploiting this gimmick.  This usage does
      prove useful with unittest-based tests, however: defining
      a ``test_main()`` which is run by regrtest and a script stub in the
      test module ("``if __name__ == '__main__': test_main()``") allows
      the test to be used like any other Python test and also to work
      with the unittest.py-as-a-script approach, allowing a developer
      to run specific tests from the command line.