Module: testing.tools

Inheritance diagram for IPython.testing.tools

Generic testing tools.

In particular, this module exposes a set of top-level assert* functions that can be used in place of nose.tools.assert* in method generators (the ones in nose cannot, at least as of nose 0.10.4).

Classes

AssertNotPrints

class IPython.testing.tools.AssertNotPrints(s, channel='stdout', suppress=True)

Bases: IPython.testing.tools.AssertPrints

Context manager for checking that certain output isn’t produced.

Counterpart of AssertPrints.

__init__(s, channel='stdout', suppress=True)
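
For example, a minimal usage sketch (with the default suppress=True the captured output is swallowed, so nothing is echoed; the check passes because "def" never appears on stdout):

>>> from IPython.testing.tools import AssertNotPrints
>>> with AssertNotPrints("def"):
...     print("abc")
...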

AssertPrints

class IPython.testing.tools.AssertPrints(s, channel='stdout', suppress=True)

Bases: object

Context manager for testing that code prints certain text.

Examples

>>> with AssertPrints("abc", suppress=False):
...     print("abcd")
...     print("def")
...
abcd
def

__init__(s, channel='stdout', suppress=True)

TempFileMixin

class IPython.testing.tools.TempFileMixin

Bases: object

Utility class to create temporary Python/IPython files.

Meant as a mixin class for test cases.

__init__()

mktmp(src, ext='.py')

Make a valid Python temp file.

tearDown()
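
A hedged sketch of the intended use as a mixin for a unittest.TestCase; the self.fname attribute (the path of the file written by mktmp) is an assumption about the current implementation, and tearDown() is expected to close and remove the temp file:

>>> import unittest
>>> from IPython.testing.tools import TempFileMixin
>>> class ScriptTestCase(TempFileMixin, unittest.TestCase):
...     def test_source_runs(self):
...         self.mktmp("x = 1\n")           # write source to a temp .py file
...         src = open(self.fname).read()   # self.fname assumed to hold the path
...         exec(compile(src, self.fname, 'exec'))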

Functions

IPython.testing.tools.check_pairs(func, pairs)

Utility function for the common case of checking a function with a sequence of input/output pairs.

Parameters:

func : callable

The function to be tested. Should accept a single argument.

pairs : iterable

A list of (input, expected_output) tuples.

Returns:

None. Raises an AssertionError if any output does not match the expected value.
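
For example, a sketch checking a one-argument function against a few pairs (it returns None silently on success):

>>> from IPython.testing.tools import check_pairs
>>> def double(x):
...     return 2 * x
>>> check_pairs(double, [(1, 2), (3, 6), ('ab', 'abab')])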

IPython.testing.tools.default_argv()

Return a valid default argv for creating testing instances of IPython.

IPython.testing.tools.default_config()

Return a config object with good defaults for testing.
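
The two are typically used together when constructing test instances; a minimal sketch (the exact flags and config values returned are version-dependent):

>>> from IPython.testing.tools import default_argv, default_config
>>> argv = default_argv()    # safe command-line flags for a test instance
>>> cfg = default_config()   # Config object with test-friendly defaults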

IPython.testing.tools.full_path(startPath, files)

Make full paths for all the listed files, based on startPath.

Only the base part of startPath is kept, since this routine is typically used with a script’s __file__ variable as startPath. The base of startPath is then prepended to all the listed files, forming the output list.

Parameters:

startPath : string

Initial path to use as the base for the results. This path is split using os.path.split() and only its first component is kept.

files : string or list

One or more files.

Examples

>>> full_path('/foo/bar.py',['a.txt','b.txt'])
['/foo/a.txt', '/foo/b.txt']
>>> full_path('/foo',['a.txt','b.txt'])
['/a.txt', '/b.txt']

If a single file is given, the output is still a list:

>>> full_path('/foo', 'a.txt')
['/a.txt']

IPython.testing.tools.ipexec(fname, options=None)

Utility to call ‘ipython filename’.

Starts IPython with a minimal and safe configuration to make startup as fast as possible.

Note that this starts IPython in a subprocess!

Parameters:

fname : str

Name of file to be executed (should have .py or .ipy extension).

options : optional, list

Extra command-line flags to be passed to IPython.

Returns:

(stdout, stderr) of the ipython subprocess.
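
A hedged sketch: write a throwaway script, run it through ipexec, and inspect the captured streams (this genuinely spawns a subprocess, so it is comparatively slow):

>>> import os, tempfile
>>> from IPython.testing.tools import ipexec
>>> with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
...     _ = f.write("print('hello')")
...
>>> out, err = ipexec(f.name)
>>> 'hello' in out
True
>>> os.unlink(f.name)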

IPython.testing.tools.ipexec_validate(fname, expected_out, expected_err='', options=None)

Utility to call ‘ipython filename’ and validate output/error.

This function raises an AssertionError if the validation fails.

Note that this starts IPython in a subprocess!

Parameters:

fname : str

Name of the file to be executed (should have .py or .ipy extension).

expected_out : str

Expected stdout of the process.

expected_err : optional, str

Expected stderr of the process.

options : optional, list

Extra command-line flags to be passed to IPython.

Returns:

None
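
A minimal sketch, assuming a script test.py that prints exactly 'hello' (the file name here is hypothetical):

>>> from IPython.testing.tools import ipexec_validate
>>> ipexec_validate('test.py', 'hello')   # raises AssertionError on mismatch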

IPython.testing.tools.make_tempfile(*args, **kwds)

Create an empty, named, temporary file for the duration of the context.
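
The *args, **kwds signature is an artifact of the contextmanager decorator; the wrapped function appears to take the name of the file to create, so a sketch under that assumption looks like:

>>> import os
>>> from IPython.testing.tools import make_tempfile
>>> with make_tempfile('scratch.txt'):   # assumed: single name argument
...     os.path.exists('scratch.txt')    # empty file exists inside the block
...
True
>>> os.path.exists('scratch.txt')        # and is removed on exit
False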

IPython.testing.tools.mute_warn(*args, **kwds)

Context manager that temporarily silences IPython's warning output.

IPython.testing.tools.parse_test_output(txt)

Parse the output of a test run and return errors, failures.

Parameters:

txt : str

Text output of a test run, assumed to contain a line of one of the following forms:

'FAILED (errors=1)'
'FAILED (failures=1)'
'FAILED (errors=1, failures=1)'

Returns:

nerr, nfail : number of errors and failures.
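
For instance, a sketch feeding it a typical failing-run summary (the pair is returned as a tuple):

>>> from IPython.testing.tools import parse_test_output
>>> parse_test_output("Ran 5 tests in 0.1s\n\nFAILED (errors=1, failures=2)")
(1, 2)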