So far we have seen how to run tests using pytest. In this section we will learn how to write tests and take advantage of the powerful pytest.mark to make these tests more useful and informative.
In the previous section we saw that pytest could flag tests as SKIPPED and report information as to why this was the case. Let's look at one of the test functions to see how this was achieved.
# mypkg/test/test_mymath.py
@pytest.mark.skip(reason="Not yet implemented.")
def test_div():
""" Test the div function. """
assert div(9, 2) == pytest.approx(4.5)
Here we have written a test for an, as yet, unimplemented function div that divides one number by another and returns the result. For a simple function like this the expected output is obvious, so it's easy to write a test before the function is even implemented. We are asserting what the output of the function should be, not how it should be implemented.
Writing tests before implementing functionality is often good practice and is referred to as test-driven development. Writing tests first can help you to better structure your code. Once a test is written, you should write the minimum functionality that makes the test pass, then add more tests, and refine.
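For instance, once the test above is in place, a minimal implementation that makes it pass might look something like the sketch below (the eventual implementation in mypkg/mymodule.py may well differ).
# mypkg/mymodule.py
def div(x, y):
    """ Divide x by y and return the result. """
    return x / y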
We have marked the test to be skipped by using @pytest.mark.skip with a reason given in the parentheses. Don't worry about this funny syntax. It is an example of what's known as a function decorator. We are wrapping our test function inside another function called skip.
!pytest mypkg/test/test_mymath.py::test_div -rs
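As an aside, here is a minimal sketch of a decorator in general, so you can see there is nothing magical going on. The names shout and greet are made up for illustration and are not part of mypkg: a decorator is just a function that takes another function and returns a wrapped version of it.
def shout(func):
    """ A decorator that upper-cases the result of the wrapped function. """
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return "hello, " + name

print(greet("pytest"))  # prints "HELLO, PYTEST"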
Let's look at another reason why we might want to skip a test.
# mypkg/test/test_errors.py
@pytest.mark.skipif(sys.platform != 'win32', reason="Only runs on Windows.")
def test_BSoD():
    blueScreenOfDeath()
Here the test is marked with a conditional skip. The test will only be run if the host operating system is Windows. Adding conditional skips like this allows your test suite to be robust and portable.
!pytest mypkg/test/test_errors.py::test_BSoD -rs
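As another sketch, you could skip a test based on the Python version rather than the operating system. The test below, test_dictOrder, is hypothetical and not part of mypkg:
# A hypothetical conditional skip based on the Python version.
@pytest.mark.skipif(sys.version_info < (3, 7), reason="Requires Python 3.7, where dictionaries preserve insertion order.")
def test_dictOrder():
    """ Test that dictionaries preserve insertion order. """
    d = {"a": 1, "b": 2, "c": 3}
    assert list(d) == ["a", "b", "c"]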
As we have already seen, it is usually desirable to run a test for a range of different input parameters. With pytest it is easy to parametrize our test functions.
# mypkg/test/test_mymath.py
@pytest.mark.parametrize("x", [1, 2])
@pytest.mark.parametrize("y", [3, 4])
def test_mul(x, y):
""" Test the mul function. """
assert mul(x, y) == mul(y, x)
Here the function test_mul is parametrized with two parameters, x and y. By marking the test in this manner it will be executed using all possible parameter pairs (x, y), i.e. (1, 3), (1, 4), (2, 3), (2, 4).
!pytest mypkg/test/test_mymath.py::test_mul -v
Tests can also be parametrized in a different way.
# mypkg/test/test_mymath.py
@pytest.mark.parametrize("x, y, expected",
[(1, 2, -1),
(7, 3, 4),
(21, 58, -37)])
def test_sub(x, y, expected):
""" Test the sub function. """
assert sub(x, y) == -sub(y, x) == expected
Here we are passing a list containing different parameter sets, with the names of the parameters matched against the arguments of the test function. Each set of parameters contains the two values to be tested, x and y, as well as the expected outcome of the test. This allows the use of a single assert statement in the body of the test function. Can you think why having a single assertion is a good thing?
!pytest mypkg/test/test_mymath.py::test_sub -v
Remember that it's also important to test for conditions not being met. Here we use an if condition inside the test function to change the assert statement depending on the input parameters.
# mypkg/test/test_mymath.py
@pytest.mark.parametrize("x, y",
[(108, 56),
(-64, -333),
(3, 7),
(74, 15)])
def test_greaterThan(x, y):
""" Test the greaterThan function. """
if x > y:
assert greaterThan(x, y)
else:
assert not greaterThan(x, y)
By using marks we can also indicate that we expect a particular test to fail.
# mypkg/test/test_mymath.py
@pytest.mark.xfail(reason="Broken code. Working on a fix.")
def test_add():
""" Test the add function. """
assert add(1, 1) == 2
assert add(1, 2) == add(2, 1) == 3
This is good practice. Rather than hiding tests for our buggy code, we are acknowledging that we are aware of the problem and are working on a fix. The user can query the expected failures and see the reasons for their inclusion. Once a bug has been fixed it is important to keep the test as part of your codebase. That way you'll know whenever a change reintroduces a bug that was previously fixed. This is known as a regression test.
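For instance, once the bug in add has been fixed, the xfail mark would simply be removed and the very same test would live on as a regression test (a sketch of how the file might then look):
# mypkg/test/test_mymath.py
def test_add():
    """ Test the add function. """
    assert add(1, 1) == 2
    assert add(1, 2) == add(2, 1) == 3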
In the previous session you learned how to use exceptions to handle run-time errors in programs. Pytest provides a way of testing your code for known exceptions. For example, suppose we had a function that raises an IndexError:
# mypkg/mymodule.py
def indexError():
""" A function that raises an IndexError. """
a = []
a[3]
We could then write a test to validate that the error is thrown as expected:
# mypkg/test/test_errors.py
def test_indexError():
    with pytest.raises(IndexError):
        indexError()
!pytest mypkg/test/test_errors.py::test_indexError
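If you also want to check the error message, pytest.raises accepts a match argument that is treated as a regular expression and compared against the message. The pattern below assumes the standard CPython wording of the IndexError raised by indexing an empty list, and test_indexErrorMessage is a hypothetical addition to the test file:
# mypkg/test/test_errors.py
def test_indexErrorMessage():
    with pytest.raises(IndexError, match="out of range"):
        indexError()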
It's possible to mark test functions with any attribute you like. For example:
# mypkg/test/test_mymath.py
@pytest.mark.slow
def test_bigSum():
""" Test the bigSum function. """
assert bigSum() == 20000000100000000
Here we have marked the test_bigSum function with the attribute slow in order to indicate that it takes a while to run. From the command line it is possible to run or skip tests with a particular mark.
pytest mypkg -m "slow" # only run the slow tests
pytest mypkg -m "not slow" # skip the slow tests
The custom attribute can just be a label, as in this case, or could be your own function decorator.
!pytest mypkg -m "slow"
Here you'll be modifying the following files:
mypkg/mymodule.py
mypkg/test/test_mymath.py
mypkg/test/test_errors.py
After each exercise, verify that your updated tests work by re-running pytest.
Pytest fixtures allow objects to be initialised before test functions are run. This enables objects to be re-used across different tests, which is particularly useful when instantiating objects is complicated or time-consuming. Similarly, a fixture can perform clean-up once the test functions have finished.
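A minimal sketch of a fixture might look like the following. The fixture name numbers and the tests that use it are made up for illustration; any test function that lists the fixture name as an argument receives the object it returns.
import pytest

@pytest.fixture
def numbers():
    """ Provide a list of numbers that several tests can share. """
    return [1, 2, 3, 4, 5]

def test_sum(numbers):
    assert sum(numbers) == 15

def test_length(numbers):
    assert len(numbers) == 5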
Pytest also provides functionality for mocking or monkey patching modules and environments. This allows you to set up fake objects and environments, enabling your tests to run in situations where they otherwise wouldn't be able to.
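As a sketch, the built-in monkeypatch fixture can be used to fake an environment variable for the duration of a single test (test_fakeHome below is hypothetical):
import os

def test_fakeHome(monkeypatch):
    """ Temporarily override the HOME environment variable. """
    monkeypatch.setenv("HOME", "/fake/home")
    assert os.environ["HOME"] == "/fake/home"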