
Writing for Pytest

Overview:

  • Teaching: 10 min
  • Exercises: 10 min

Questions

  • How do I access pytest functionality?
  • How do I skip tests?
  • How do I incorporate conditional testing?
  • How can I run a test with multiple parameters?
  • How can I test for errors?

Objectives

  • Know that pytest provides decorators to modify the behaviour of tests.
  • Know how to skip tests, apply conditions, and parametrize tests.
  • Know how to run pytest conditionally.

So far we have seen how to run tests using pytest. In this section we will learn how to write tests and take advantage of the powerful pytest.mark decorators to make these tests more useful and informative.

Skipping tests

In the previous section we saw that pytest could flag tests as SKIPPED and report information as to why this was the case. Let's look at one of the test functions to see how this was achieved.

# mypkg/test/test_mymath.py
@pytest.mark.skip(reason="Not yet implemented.")
def test_div():
    """ Test the div function. """
    assert div(9, 2) == pytest.approx(4.5)

Here we have written a test for an as-yet-unimplemented function, div, that divides one number by another and returns the result. For a simple function like this the expected output is obvious, so it's easy to write the test before the function is even implemented. We are asserting what the output of the function should be, not how it should be implemented.

Writing tests before implementing functionality is often good practice and is referred to as test-driven development. Writing tests first can help to better structure your code. Once a test is written you should write the minimum functionality that makes the test pass, then add more tests, and refine.
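
As a minimal illustration of this cycle (using a hypothetical is_even function that is not part of mypkg), the test is written first and the implementation follows:

# A hypothetical example, not part of mypkg: the test comes first...
def test_is_even():
    assert is_even(4)
    assert not is_even(7)

# ...and then the minimum implementation that makes it pass.
def is_even(n):
    return n % 2 == 0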

We have marked the test to be skipped by using @pytest.mark.skip with a reason given in the parentheses. Don't worry about this funny syntax. It is an example of what's known as a function decorator. We are wrapping our test function inside another function called skip.
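
Note that the snippets from test_mymath.py assume the usual imports at the top of the test file. The exact import line below is a guess, since it depends on how mypkg is laid out, but it would look something like this:

# mypkg/test/test_mymath.py (top of file; the import path is an assumption)
import pytest

from mypkg.mymodule import add, div, mul, sub, greaterThan, bigSum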

In [1]:
!pytest mypkg/test/test_mymath.py::test_div -rs
============================= test session starts ==============================
platform linux -- Python 3.6.3, pytest-3.3.1, py-1.5.0, pluggy-0.6.0
rootdir: /home/lester/Code/siremol.org/chryswoods.com/python_and_data/testing, inifile:
collected 1 item                                                               

mypkg/test/test_mymath.py s                                              [100%]
=========================== short test summary info ============================
SKIP [1] mypkg/test/test_mymath.py:30: Not yet implemented.

========================== 1 skipped in 0.01 seconds ===========================
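
Skips do not have to be applied as decorators: pytest.skip() can also be called from inside a test body at run time. A small self-contained sketch (not part of mypkg):

import sys

import pytest

def test_runtime_skip():
    """ Skip imperatively, from inside the test body. """
    if sys.version_info[0] < 3:
        pytest.skip("Requires Python 3.")
    assert list(range(3)) == [0, 1, 2]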

Let's look at another reason why we might want to skip a test.

# mypkg/test/test_errors.py
@pytest.mark.skipif(sys.platform != 'win32', reason="Only runs on Windows.")
def test_BSoD():
    blueScreenOfDeath()

Here the test is marked with a conditional skip. The test will only be run if the host operating system is Windows. Adding conditional skips like this allows your test suite to be robust and portable.

In [2]:
!pytest mypkg/test/test_errors.py::test_BSoD -rs
============================= test session starts ==============================
platform linux -- Python 3.6.3, pytest-3.3.1, py-1.5.0, pluggy-0.6.0
rootdir: /home/lester/Code/siremol.org/chryswoods.com/python_and_data/testing, inifile:
collected 1 item                                                               

mypkg/test/test_errors.py s                                              [100%]
=========================== short test summary info ============================
SKIP [1] mypkg/test/test_errors.py:12: Only runs on Windows.

========================== 1 skipped in 0.00 seconds ===========================
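
Conditional skips are not limited to the operating system. For example (a sketch, not part of mypkg), you might skip on the interpreter version, or when an optional dependency is missing:

import sys

import pytest

@pytest.mark.skipif(sys.version_info < (3, 7), reason="Relies on ordered dictionaries (Python 3.7+).")
def test_dict_order():
    d = {"a": 1, "b": 2, "c": 3}
    assert list(d) == ["a", "b", "c"]

def test_optional_dependency():
    # importorskip skips this test if numpy cannot be imported.
    np = pytest.importorskip("numpy")
    assert np.array([1, 2, 3]).sum() == 6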

Parametrizing tests

As we have already seen, it is usually desirable to run a test for a range of different input parameters. With pytest it is easy to parametrize our test functions.

# mypkg/test/test_mymath.py
@pytest.mark.parametrize("x", [1, 2])
@pytest.mark.parametrize("y", [3, 4])
def test_mul(x, y):
    """ Test the mul function. """
    assert mul(x, y) == mul(y, x)

Here the function test_mul is parametrized with two parameters, x and y. By marking the test in this manner it will be executed using all possible parameter pairs (x, y), i.e. (1, 3), (1, 4), (2, 3), (2, 4).

In [3]:
!pytest mypkg/test/test_mymath.py::test_mul -v
============================= test session starts ==============================
platform linux -- Python 3.6.3, pytest-3.3.1, py-1.5.0, pluggy-0.6.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/lester/Code/siremol.org/chryswoods.com/python_and_data/testing, inifile:
collected 4 items                                                              

mypkg/test/test_mymath.py::test_mul[3-1] PASSED                          [ 25%]
mypkg/test/test_mymath.py::test_mul[3-2] PASSED                          [ 50%]
mypkg/test/test_mymath.py::test_mul[4-1] PASSED                          [ 75%]
mypkg/test/test_mymath.py::test_mul[4-2] PASSED                          [100%]

=========================== 4 passed in 0.01 seconds ===========================

Tests can also be parametrized in a different way.

# mypkg/test/test_mymath.py
@pytest.mark.parametrize("x, y, expected",
                        [(1, 2, -1),
                         (7, 3,  4),
                         (21, 58, -37)])
def test_sub(x, y, expected):
    """ Test the sub function. """
    assert sub(x, y) == -sub(y, x) == expected

Here we are passing a list containing different parameter sets, with the names of the parameters matched against the arguments of the test function. Each set of parameters contains the two values to be tested, x and y, as well as the expected outcome of the test. This allows the use of a single assert statement in the body of the test function. Can you think why having a single assertion is a good thing?

In [4]:
!pytest mypkg/test/test_mymath.py::test_sub -v
============================= test session starts ==============================
platform linux -- Python 3.6.3, pytest-3.3.1, py-1.5.0, pluggy-0.6.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/lester/Code/siremol.org/chryswoods.com/python_and_data/testing, inifile:
collected 3 items                                                              

mypkg/test/test_mymath.py::test_sub[1-2--1] PASSED                       [ 33%]
mypkg/test/test_mymath.py::test_sub[7-3-4] PASSED                        [ 66%]
mypkg/test/test_mymath.py::test_sub[21-58--37] PASSED                    [100%]

=========================== 3 passed in 0.01 seconds ===========================
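
The test IDs above, such as test_sub[1-2--1], are generated from the parameter values. If you would prefer readable names, parametrize also accepts an ids argument; the labels below are made up for illustration:

# mypkg/test/test_mymath.py (sketch: the id labels are arbitrary)
@pytest.mark.parametrize("x, y, expected",
                         [(1, 2, -1),
                          (7, 3, 4),
                          (21, 58, -37)],
                         ids=["small", "mixed", "large"])
def test_sub_named(x, y, expected):
    """ Test the sub function with readable test IDs. """
    assert sub(x, y) == -sub(y, x) == expected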

Remember that it's also important to test for conditions not being met. Here we use an if condition inside the test function to change the assert statement depending on the input parameters.

# mypkg/test/test_mymath.py
@pytest.mark.parametrize("x, y",
                        [(108, 56),
                         (-64, -333),
                         (3, 7),
                         (74, 15)])
def test_greaterThan(x, y):
    """ Test the greaterThan function. """
    if x > y:
        assert greaterThan(x, y)
    else:
        assert not greaterThan(x, y)
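
An alternative sketch of the same test passes the expected truth value as a third parameter, which keeps a single assert statement in the body:

# mypkg/test/test_mymath.py (sketch: an alternative to the if/else above)
@pytest.mark.parametrize("x, y, expected",
                         [(108, 56, True),
                          (-64, -333, True),
                          (3, 7, False),
                          (74, 15, True)])
def test_greaterThan_expected(x, y, expected):
    """ Test the greaterThan function against an expected result. """
    assert greaterThan(x, y) == expected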

Expected failures

By using marks we can also indicate that we expect a particular test to fail.

# mypkg/test/test_mymath.py
@pytest.mark.xfail(reason="Broken code. Working on a fix.")
def test_add():
    """ Test the add function. """
    assert add(1, 1) ==  2
    assert add(1, 2) == add(2, 1) == 3

This is good practice. Rather than hiding tests for our buggy code, we are acknowledging that we are aware of the problem and are working on a fix. The user can query the expected failures and see the reasons for their inclusion. Once a bug has been fixed it is important to keep the test as part of your codebase. That way you'll know whenever a change reintroduces a bug that was previously fixed. This is known as a regression test.
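
By default, an xfail test that unexpectedly passes is reported as XPASS rather than as a failure. If you want an unexpected pass to count as a failure, the xfail mark accepts a strict option; a sketch:

# mypkg/test/test_mymath.py (sketch: strict expected failure)
@pytest.mark.xfail(reason="Broken code. Working on a fix.", strict=True)
def test_add_strict():
    """ Test the add function; an unexpected pass will fail the run. """
    assert add(1, 1) == 2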

Testing exceptions

In the previous session you learned how to use exceptions to handle run-time errors in programs. Pytest provides a way of testing your code for known exceptions. For example, suppose we had a function that raises an IndexError:

# mypkg/mymodule.py
def indexError():
    """ A function that raises an IndexError. """
    a = []
    a[3]

We could then write a test to validate that the error is thrown as expected:

# mypkg/test/test_errors.py
def test_indexError():
    with pytest.raises(IndexError):
        indexError()

In [5]:
!pytest mypkg/test/test_errors.py::test_indexError
============================= test session starts ==============================
platform linux -- Python 3.6.3, pytest-3.3.1, py-1.5.0, pluggy-0.6.0
rootdir: /home/lester/Code/siremol.org/chryswoods.com/python_and_data/testing, inifile:
collected 1 item                                                               

mypkg/test/test_errors.py .                                              [100%]

=========================== 1 passed in 0.00 seconds ===========================
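
pytest.raises can also be used to inspect the exception itself, either by capturing it or by matching its message against a regular expression. A sketch using the same indexError function (the exact message wording comes from Python itself):

# mypkg/test/test_errors.py (sketch)
def test_indexError_message():
    # Capture the exception for further inspection.
    with pytest.raises(IndexError) as excinfo:
        indexError()
    assert "out of range" in str(excinfo.value)

    # Or match the message against a regular expression directly.
    with pytest.raises(IndexError, match="out of range"):
        indexError()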

Custom attributes

It's possible to mark test functions with any attribute you like. For example:

# mypkg/test/test_mymath.py
@pytest.mark.slow
def test_bigSum():
    """ Test the bigSum function. """
    assert bigSum() == 20000000100000000

Here we have marked the test_bigSum function with the attribute slow in order to indicate that it takes a while to run. From the command line it is possible to run or skip tests with a particular mark.

pytest mypkg -m "slow"        # only run the slow tests
pytest mypkg -m "not slow"    # skip the slow tests

The custom attribute can just be a label, as in this case, or could be your own function decorator.

In [6]:
!pytest mypkg -m "slow"
============================= test session starts ==============================
platform linux -- Python 3.6.3, pytest-3.3.1, py-1.5.0, pluggy-0.6.0
rootdir: /home/lester/Code/siremol.org/chryswoods.com/python_and_data/testing, inifile:
collected 17 items                                                             

mypkg/test/test_mymath.py .                                              [100%]

============================= 16 tests deselected ==============================
=================== 1 passed, 16 deselected in 2.71 seconds ====================
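
Marks can also be stacked on a single test, and the -m option accepts boolean expressions, so you can select quite specific subsets of your suite. A sketch (the smoke mark here is just an illustrative label):

# mypkg/test/test_mymath.py (sketch: stacking custom marks)
@pytest.mark.slow
@pytest.mark.smoke
def test_bigSum_smoke():
    """ Test the bigSum function. """
    assert bigSum() == 20000000100000000

# Selected from the command line with, for example:
#   pytest mypkg -m "slow and smoke"
#   pytest mypkg -m "slow and not smoke"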

Exercises

Here you'll be modifying the following files:

  • mypkg/mymodule.py
  • mypkg/test/test_mymath.py
  • mypkg/test/test_errors.py

After each exercise, verify that your updated tests work by re-running pytest.

1

Fix the bug in the add function in mymodule.py and delete the xfail mark from test_add (since we now expect the test to pass).

2

Parametrize the test_add function so that it can work with a single assert statement. Make sure you test floating point addition too.

3

Write functionality for the div function in mymodule.py and remove the skip mark from test_div.

4

Add a mark to the test_mul function to indicate that it is critical. Run pytest only for this critical test.


5

Add a test to test_errors.py to test the function keyError from mymodule.py. This function throws a KeyError, i.e. it tries to access a dictionary using an unknown key.

Bonus

Just because you see tests pass doesn't mean that a piece of software is trustworthy. With a limited number of tests that use a small range of parameters, how can you be sure that the output is correct in all cases? It's also important to remember that tests are themselves just code, so are also prone to errors and bugs. A poor software developer is likely to write poor tests. When writing software it is your job to break things (and then fix them).

Can you find the bug in test_isLucky?

Further reading

Pytest fixtures allow objects to be initialised before test functions are run. This enables objects to be re-used across different tests, which is particularly useful when instantiating objects is complicated or time consuming. Similarly, a fixture can perform clean-up once the test functions have finished.
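
As a taste of what this looks like (a sketch, not part of mypkg), a fixture is a decorated function whose result is injected into any test that names it as an argument, and code after a yield runs as clean-up:

import pytest

@pytest.fixture
def numbers():
    # Set-up: runs before each test that requests this fixture.
    data = [1, 2, 3]
    yield data
    # Tear-down: runs after the test has finished.
    data.clear()

def test_sum(numbers):
    assert sum(numbers) == 6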

Pytest also provides functionality for mocking or monkey patching modules and environments. This allows you to set up fake objects and environments, so that your tests can run in situations where they otherwise wouldn't be able to.
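
For example (a sketch, not part of mypkg), the built-in monkeypatch fixture can fake an environment variable for the duration of a single test:

import os

def getUser():
    """ Return the name of the current user from the environment. """
    return os.environ["USER"]

def test_getUser(monkeypatch):
    # The fake value is undone automatically when the test finishes.
    monkeypatch.setenv("USER", "test_user")
    assert getUser() == "test_user"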

Key Points:

  • Pytest provides decorators to modify the simple execution of tests
  • Modifying tests allows you to mark known failures or tests for unwritten features
  • Just because you test, bugs may still be present in your code