Module: testing.tools

Generic testing tools.


4 Classes

class IPython.testing.tools.TempFileMixin

Bases: object

Utility class to create temporary Python/IPython files.

Meant as a mixin class for test cases.

mktmp(src, ext='.py')

Make a valid Python temp file.
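For example, a minimal usage sketch of the mixin (assuming mktmp stores the temporary file's path on self.fname, as in the IPython source):

import unittest
from IPython.testing.tools import TempFileMixin

class TestTempScript(TempFileMixin, unittest.TestCase):
    def test_source_written(self):
        # mktmp writes the source to a temporary .py file; its path is
        # assumed to be stored on self.fname by the mixin.
        self.mktmp("x = 1 + 1\n")
        with open(self.fname) as f:
            self.assertIn("x = 1 + 1", f.read())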


IPython.testing.tools.MyStringIO

alias of StringIO

class IPython.testing.tools.AssertPrints(s, channel='stdout', suppress=True)

Bases: object

Context manager for testing that code prints certain text.


>>> with AssertPrints("abc", suppress=False):
...     print("abcd")
...     print("def")
...
abcd
def

__init__(s, channel='stdout', suppress=True)

class IPython.testing.tools.AssertNotPrints(s, channel='stdout', suppress=True)

Bases: IPython.testing.tools.AssertPrints

Context manager for checking that certain output isn’t produced.

Counterpart of AssertPrints.
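A usage sketch (illustrative only); with the default suppress=True the captured output is not echoed, and the block passes because the forbidden text never appears:

from IPython.testing.tools import AssertNotPrints

with AssertNotPrints("Error"):
    print("all good")   # passes: "Error" is not in the captured stdout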

14 Functions

IPython.testing.tools.full_path(startPath, files)

Make full paths for all the listed files, based on startPath.

Only the base part of startPath is kept, since this routine is typically used with a script’s __file__ variable as startPath. The base of startPath is then prepended to all the listed files, forming the output list.


startPath : string

Initial path to use as the base for the results. This path is split using os.path.split() and only its first component is kept.

files : string or list

One or more files.


>>> full_path('/foo/bar.py',['a.txt','b.txt'])
['/foo/a.txt', '/foo/b.txt']
>>> full_path('/foo',['a.txt','b.txt'])
['/a.txt', '/b.txt']

If a single file is given, the output is still a list:

>>> full_path('/foo','a.txt')
['/a.txt']

IPython.testing.tools.parse_test_output(txt)

Parse the output of a test run and return errors, failures.


txt : str

Text output of a test run, assumed to contain a line of one of the following forms:

'FAILED (errors=1)'
'FAILED (failures=1)'
'FAILED (errors=1, failures=1)'

nerr, nfail

Number of errors and failures, respectively.
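For instance, given a typical unittest summary line, the expected behaviour is roughly (illustrative sketch):

>>> from IPython.testing.tools import parse_test_output
>>> parse_test_output("Ran 5 tests\n\nFAILED (errors=1, failures=2)\n")
(1, 2)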


IPython.testing.tools.default_argv()

Return a valid default argv for creating testing instances of IPython.


IPython.testing.tools.default_config()

Return a config object with good defaults for testing.
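The two helpers are typically used together when building test instances; a sketch:

from IPython.testing.tools import default_argv, default_config

argv = default_argv()    # list of command-line flags for a quiet, fast startup
cfg = default_config()   # Config object with test-friendly defaults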


IPython.testing.tools.get_ipython_cmd(as_string=False)

Return the appropriate IPython command line name. By default, this returns a list that can be passed to subprocess.Popen, for example, but passing as_string=True returns the IPython command as a single string.


as_string : bool

Flag to return the command as a string instead of a list.
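A hedged sketch of both call styles:

from IPython.testing.tools import get_ipython_cmd

cmd_list = get_ipython_cmd()               # list suitable for subprocess.Popen
cmd_str = get_ipython_cmd(as_string=True)  # the same command joined into one string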

IPython.testing.tools.ipexec(fname, options=None, commands=())

Utility to call ‘ipython filename’.

Starts IPython with a minimal and safe configuration to make startup as fast as possible.

Note that this starts IPython in a subprocess!


fname : str

Name of file to be executed (should have .py or .ipy extension).

options : optional, list

Extra command-line flags to be passed to IPython.

commands : optional, list

Commands to send on stdin.


(stdout, stderr) of ipython subprocess.
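A usage sketch (the file name and expected text are illustrative):

from IPython.testing.tools import ipexec

# Runs `ipython myscript.py` in a subprocess with a minimal configuration.
out, err = ipexec("myscript.py")
assert "expected output" in out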

IPython.testing.tools.ipexec_validate(fname, expected_out, expected_err='', options=None, commands=())

Utility to call ‘ipython filename’ and validate output/error.

This function raises an AssertionError if the validation fails.

Note that this starts IPython in a subprocess!


fname : str

Name of the file to be executed (should have .py or .ipy extension).

expected_out : str

Expected stdout of the process.

expected_err : optional, str

Expected stderr of the process.

options : optional, list

Extra command-line flags to be passed to IPython.
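A usage sketch (file name and expected output are illustrative):

from IPython.testing.tools import ipexec_validate

# Raises AssertionError if hello.py's stdout does not match "hello".
ipexec_validate("hello.py", expected_out="hello")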



IPython.testing.tools.check_pairs(func, pairs)

Utility function for the common case of checking a function with a sequence of input/output pairs.


func : callable

The function to be tested. Should accept a single argument.

pairs : iterable

A list of (input, expected_output) tuples.


None. Raises an AssertionError if any output does not match the expected value.
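For example, a doctest-style sketch using a builtin as the function under test (passes silently, so no output is shown):

>>> from IPython.testing.tools import check_pairs
>>> check_pairs(str.upper, [("a", "A"), ("bc", "BC")])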



IPython.testing.tools.make_tempfile(name)

Create an empty, named, temporary file for the duration of the context.
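A sketch of the context-manager usage, assuming the file given by name is created empty on entry and removed on exit (the file name is illustrative):

from IPython.testing.tools import make_tempfile

with make_tempfile("scratch.txt"):
    ...  # "scratch.txt" exists (empty) inside this block and is removed afterwards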

IPython.testing.tools.monkeypatch(obj, name, attr)

Context manager to replace attribute named name in obj with attr.
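For example, a sketch that temporarily stubs out time.sleep:

import time
from IPython.testing.tools import monkeypatch

with monkeypatch(time, "sleep", lambda seconds: None):
    time.sleep(10)   # returns immediately inside the context
# the original time.sleep is restored on exit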


IPython.testing.tools.help_output_test(subcommand='')

Test that ipython [subcommand] -h works.


IPython.testing.tools.help_all_output_test(subcommand='')

Test that ipython [subcommand] --help-all works.
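Both helpers launch IPython in a subprocess; a sketch checking the profile subcommand (the subcommand name is illustrative):

from IPython.testing.tools import help_output_test, help_all_output_test

help_output_test("profile")      # runs `ipython profile -h`
help_all_output_test("profile")  # runs `ipython profile --help-all`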

IPython.testing.tools.assert_big_text_equal(a, b, chunk_size=80)

Assert that large strings are equal.

Zooms in on first chunk that differs, to give better info than vanilla assertEqual for large text blobs.
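A sketch of the intended use; on mismatch, the failure message points at the first differing chunk rather than dumping both strings:

from IPython.testing.tools import assert_big_text_equal

a = "line\n" * 1000
assert_big_text_equal(a, a)   # passes
# assert_big_text_equal(a, a + "extra") would fail, reporting the first differing chunk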