# Python testing reference

This document is a reference for common testing patterns in a Django/Python project using Pytest.

Contents:

<!-- Run :InsertToc to update -->

- [Set-up and tear-down](#set-up-and-tear-down)
  - [Creating Django model fixtures](#creating-django-model-fixtures)
  - [Creating other forms of fixture](#creating-other-forms-of-fixture)
- [Mocks](#mocks)
  - [Stubbing](#stubbing)
    - [Passing stubs as arguments](#passing-stubs-as-arguments)
    - [Stubbing Django model instances](#stubbing-django-model-instances)
    - [Stubbing multiple return values](#stubbing-multiple-return-values)
    - [Stubbing function calls that raise an exception](#stubbing-function-calls-that-raise-an-exception)
    - [Stubbing HTTP responses](#stubbing-http-responses)
    - [Overriding Django settings](#overriding-django-settings)
    - [Controlling the system clock](#controlling-the-system-clock)
  - [Spying](#spying)
    - [How to use spys](#how-to-use-spys)
    - [Verifying a spy was called correctly](#verifying-a-spy-was-called-correctly)
    - [Verifying _all_ calls to a spy](#verifying-_all_-calls-to-a-spy)
    - [Verifying unordered calls](#verifying-unordered-calls)
    - [Verifying partial calls](#verifying-partial-calls)
    - [Extracting information about how spy was called](#extracting-information-about-how-spy-was-called)
    - [Spying without stubbing](#spying-without-stubbing)
  - [Checking values with sentinels](#checking-values-with-sentinels)
- [Controlling external dependencies](#controlling-external-dependencies)
  - [Using temporary files](#using-temporary-files)
- [Functional testing](#functional-testing)
  - [High quality functional tests](#high-quality-functional-tests)
  - [Django views](#django-views)
    - [Testing error responses](#testing-error-responses)
    - [Filling in forms](#filling-in-forms)
  - [Django management commands](#django-management-commands)
  - [Click commands](#click-commands)
- [Running tests](#running-tests)
  - [Capturing output](#capturing-output)
- [Using Pytest fixtures](#using-pytest-fixtures)
  - [Shared fixtures](#shared-fixtures)
  - [Prefer to inject factories](#prefer-to-inject-factories)
- [Writing high quality code and tests](#writing-high-quality-code-and-tests)
  - [Anti-patterns](#anti-patterns)
  - [Resources](#resources)

## Set-up and tear-down

Tools and patterns for setting up the world how you want it (and cleaning up afterwards).

### Creating Django model fixtures

For Django models, the basic pattern with [factory boy](https://factoryboy.readthedocs.io/) is:

```py
from datetime import datetime

import factory

from foobar import models


class Frob(factory.django.DjangoModelFactory):
    # For fields that need to be unique.
    sequence_field = factory.Sequence(lambda n: f"Bar{n}")

    # For fields where we want to compute the value at runtime.
    datetime_field = factory.LazyFunction(datetime.now)

    # For fields computed from the value of other fields.
    computed_field = factory.LazyAttribute(lambda obj: f"foo-{obj.sequence_field}")

    # Referring to other factories.
    bar = factory.SubFactory("tests.factories.foobar.Bar")

    class Meta:
        model = models.Frob
```

Using [post-generation hooks](https://factoryboy.readthedocs.io/en/stable/reference.html#factory.post_generation):

```py
class MyFactory(factory.Factory):
    blah = factory.PostGeneration(lambda obj, create, extracted, **kwargs: 42)


MyFactory(
    blah=42,      # Passed in the 'extracted' argument of the lambda
    blah__foo=1,  # Passed in kwargs as 'foo': 1
    blah__baz=2,  # Passed in kwargs as 'baz': 2
    blah_bar=3,   # Not passed to the hook
)
```

### Creating other forms of fixture

Factory boy can also be used to create other object types, such as dicts. Do this by specifying the class to be instantiated in the `Meta.model` field:

```py
import factory


class Payload(factory.Factory):
    name = "Alan"
    age = 40

    class Meta:
        model = dict


assert Payload() == {"name": "Alan", "age": 40}
```

There's also a convenient `factory.DictFactory` class that can be used for `dict` factories. If the dict has fields that aren't valid Python keyword args (e.g.
they include hyphens or shadow built-in keywords like `from`), use the `rename` meta arg:

```py
class AwkwardDict(factory.DictFactory):
    # Named with a trailing underscore as we can't use 'from'.
    from_ = "Person"
    # Named with an underscore as we can't use a hyphen.
    is_nice = False

    class Meta:
        rename = {"from_": "from", "is_nice": "is-nice"}


assert AwkwardDict() == {"from": "Person", "is-nice": False}
```

This is useful for writing concise tests that pass a complex object as an input.

## Mocks

Python's mock library is very flexible. It's helpful to distinguish between two ways that mock objects are used:

- _Stubs_: where the behaviour of the mock object is specified _before_ the act phase of a test.
- _Spys_: where the mock's calls are inspected _after_ the act phase of a test.

Equivalently, you can think of mocks as either being actors (stubs) or critics (spys).

### Stubbing

Stubbing involves replacing an argument or collaborator with your own version so you can specify its behaviour in advance.

#### Passing stubs as arguments

When passing stubs as arguments to the target, prefer [`mock.create_autospec`](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.create_autospec) so function/attribute calls are checked:

```py
from unittest import mock

from foobar.vendor import acme


def test_with_spec():
    # Function stubs will have their arguments checked against the real
    # function signature. Here `attributes` is a placeholder for any
    # attribute values you want to configure on the stub.
    fn = mock.create_autospec(spec=acme.do_thing, **attributes)

    # Use instance=True when stubbing a class instance.
    client = mock.create_autospec(spec=acme.Client, instance=True, **attributes)
```

- Don't pass instantiated class instances as the `spec` argument; use `instance=True` instead.
- Be aware that `create_autospec` can have poor performance if it needs to traverse a large graph of objects.
- Be aware that you can't stub a `name` attribute when calling `mock.create_autospec` or via `mock.Mock()`.
Instead, either call `configure_mock(name=...)` on the mock instance or assign the `name` attribute in a separate statement:

```py
m = mock.create_autospec(spec=SomeClass, instance=True)
m.name = "..."
```

#### Stubbing Django model instances

Use the following formula to create stubbed Django model instances that can be assigned as foreign keys:

```py
from unittest import mock

from django.db import models


def test_django_model_instance():
    instance = mock.create_autospec(
        spec=models.SomeModel,
        instance=True,
        **fields,
        _state=mock.create_autospec(
            spec=models.base.ModelState, spec_set=True, db=None, adding=True
        ),
    )
```

This is useful for writing isolated unit tests that involve Django model instances.

#### Stubbing multiple return values

Assign an iterable as a mock's `side_effect`:

```py
stub = mock.create_autospec(spec=SomeClass)
stub.method.side_effect = [1, 2, 3]

assert stub.method() == 1
assert stub.method() == 2
assert stub.method() == 3
```

#### Stubbing function calls that raise an exception

Assign an exception as a mock's `side_effect`:

```py
import pytest

stub = mock.create_autospec(spec=SomeClass)
stub.method.side_effect = ValueError("Bad!")

with pytest.raises(ValueError):
    stub.method()
```

#### Stubbing HTTP responses

Use the [`responses`](https://pypi.org/project/responses/) library. It provides a decorator and a clean API for stubbing the responses to HTTP requests:

```py
import responses


@responses.activate
def test_something():
    responses.add(
        method=responses.POST,
        url="https://taylor.rest/",
        status=200,
        json={
            "author": "Taylor Swift",
            "quote": "Bring on all the pretenders!",
        },
    )
```

- You can pass `body` instead of `json`.
- `url` can be a compiled regex.

#### Overriding Django settings

Use Django's [`@override_settings`](https://docs.djangoproject.com/en/dev/topics/testing/tools/#django.test.override_settings) decorator to override scalar settings:

```py
from django.test import override_settings


@override_settings(TIME_ZONE="Europe/London")
def test_something():
    ...
```

Pytest-Django includes an equivalent [`settings` Pytest fixture](https://pytest-django.readthedocs.io/en/latest/helpers.html#settings):

```py
def test_run(settings):
    # Assignments to the `settings` object will be reverted when this test completes.
    settings.FOO = 1
    run()
```

Use Django's [`@modify_settings`](https://docs.djangoproject.com/en/dev/topics/testing/tools/#django.test.modify_settings) decorator to prepend/append to list settings:

```py
from django.test import modify_settings


@modify_settings(MIDDLEWARE={
    "prepend": "some.other.thing",
    "append": "some.alternate.thing",
})
def test_something_with_middleware():
    ...
```

Both `override_settings` and `modify_settings` can be used as class decorators but only on `TestCase` subclasses.

#### Controlling the system clock

Calling the system clock in tests is generally a bad idea as it can lead to flakiness. It's better to pass in the relevant dates or datetimes or, if that isn't possible, to use [`time_machine`](https://github.com/adamchainz/time-machine):

```py
import time_machine


def test_something():
    # Can pass a string, date/datetime instance, lambda function or iterable.
    with time_machine.travel(dt, tick=True):
        pass
```

or [`freezegun`](https://github.com/spulec/freezegun):

```py
import freezegun


def test_something():
    # Can pass a string, date/datetime instance or lambda function.
    with freezegun.freeze_time(dt):
        pass
```

Note:

- Both can be used as decorators.
- Within the context block, use `time_machine.travel(other_dt)` or `freezegun.move_to(other_dt)` to move time to a specified value.

The `time_machine.travel` decorator is useful for debugging flaky tests that fail when run at certain times (like during the DST changeover day). To recreate the flaky failure, pin time to when the test failed on your CI service:

```py
@time_machine.travel("2021-03-28T23:15Z")
def test_that_failed_last_night():
    ...
```

### Spying

Spying involves replacing an argument to the system-under-test, or one of its collaborators, with a fake version so you can verify how it was called. Spys can be created as `unittest.mock.Mock` instances using `mock.create_autospec`.

If stubs are _actors_, then spys are _critics_.

#### How to use spys

Here's an example of passing a spy as an **argument to the system-under-test**:

```py
from unittest import mock

from foobar.vendor import acme
from foobar import usecase


def test_client_called_correctly():
    # Create spy.
    client = mock.create_autospec(spec=acme.Client, instance=True)

    # Pass spy object as an argument.
    usecase.run(client=client, x=100)

    # Check spy was called correctly.
    client.do_the_thing.assert_called_with(x=100)
```

Here's an example of using a spy for a **collaborator of the system-under-test**:

```py
from unittest import mock

from foobar.vendor import acme
from foobar import usecase


@mock.patch.object(usecase, "get_client")
def test_client_called_correctly(get_client):
    # Create spy and ensure the factory function returns it.
    client = mock.create_autospec(spec=acme.Client, instance=True)
    get_client.return_value = client

    # Here the client object is constructed from within the use case by calling
    # a `get_client` factory function.
    usecase.run(x=100)

    # Check spy was called correctly.
    client.do_the_thing.assert_called_with(x=100)
```

As you can see, the use of dependency injection in the first example leads to simpler tests.
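A side benefit of creating spys with `create_autospec` is that calls which don't match the real signature fail immediately, rather than being silently recorded. Here's a minimal, self-contained sketch of that behaviour — the `Client` class is a hypothetical stand-in for a real collaborator:

```python
from unittest import mock


class Client:
    """Hypothetical stand-in for a real collaborator."""

    def do_the_thing(self, x: int) -> None:
        raise NotImplementedError  # Never called: the autospec replaces it.


def test_autospec_checks_signatures():
    client = mock.create_autospec(spec=Client, instance=True)

    # A call matching the real signature is recorded as normal.
    client.do_the_thing(x=100)
    client.do_the_thing.assert_called_with(x=100)

    # A call the real method couldn't accept raises TypeError, instead of
    # silently passing as it would with a bare mock.Mock().
    try:
        client.do_the_thing(y=1)
    except TypeError:
        pass
    else:
        raise AssertionError("autospec should reject unknown kwargs")
```

This is why a bare `mock.Mock()` spy is risky: if the real collaborator's API drifts, tests against an unspecced mock keep passing.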
#### Verifying a spy was called correctly

Objects from Python's `unittest.mock` library provide several [`assert_*` methods](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.assert_called) that can be used to verify how a spy was called:

- `assert_called`
- `assert_called_once`
- `assert_called_with` (only checks the _last_ call to the spy)
- `assert_called_once_with`
- `assert_any_call`
- `assert_not_called`
- `assert_has_calls`

#### Verifying _all_ calls to a spy

Note that `assert_has_calls` shouldn't be used to check _all_ calls to the spy as it won't fail if additional calls are made. For that, it's better to use the `call_args_list` property:

```py
assert spy.call_args_list == [
    mock.call(x=1),
    mock.call(x=2),
]
```

#### Verifying unordered calls

If the order in which a spy is called is not important, use this pattern:

```py
assert len(spy.call_args_list) == 2
assert mock.call(x=1) in spy.call_args_list
assert mock.call(x=2) in spy.call_args_list
```

#### Verifying partial calls

If you only want to make an assertion about _some_ of the arguments passed to a spy, use the [`unittest.mock.ANY` helper](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.ANY), which passes equality checks with _everything_:

```py
m.assert_called_with(x=100, y=mock.ANY)
```

#### Extracting information about how spy was called

Spys have several attributes that store how they were called.
```py
Mock.called          # bool for whether the spy was called
Mock.call_count      # how many times the spy was called
Mock.call_args       # a tuple of (args, kwargs) of how the spy was LAST called
Mock.call_args_list  # a list of calls
Mock.method_calls    # a list of methods and attributes called
Mock.mock_calls      # a list of ALL calls to the spy (and its methods and attributes)
```

The `call` objects returned by `Mock.call_args` and `Mock.call_args_list` are two-tuples of (positional args, keyword args), but the `call` objects returned by `Mock.method_calls` and `Mock.mock_calls` are three-tuples of (name, positional args, keyword args).

Use `unittest.mock.call` objects to make assertions about calls:

```py
assert mock_function.call_args_list == [mock.call(x=1), mock.call(x=2)]
assert mock_object.method_calls == [mock.call.add(1), mock.call.delete(x=1)]
```

To make fine-grained assertions about function or method calls, you can use the `call_args` property:

```py
_, call_kwargs = some_mocked_function.call_args
assert "succeeded" in call_kwargs["message"]
```

#### Spying without stubbing

You can wrap an object with a mock so that method calls are forwarded on but also recorded for later examination.

For _direct_ collaborators, use something like:

```py
from unittest import mock

from foobar.vendors import client
from foobar import usecases


def test_injected_client_called_correctly():
    client_spy = mock.Mock(wraps=client)

    usecases.do_the_thing(client_spy, x=100)

    client_spy.some_method.assert_called_with(x=100)
```

For _indirect_ collaborators, use `mock.patch.object`:

```py
from unittest import mock

from foobar.vendors import client
from foobar import usecases


@mock.patch.object(usecases, "client", wraps=client)
def test_collaborator_client_called_correctly(client_spy):
    usecases.do_the_thing(x=100)

    client_spy.some_method.assert_called_with(x=100)
```

### Checking values with sentinels

[Sentinels](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.sentinel) provide
on-demand unique objects and are useful for passing into the system-under-test when the actual value of the argument isn't important.

```py
@mock.patch.object(somemodule, "collaborator")
def test_passing_sentinel(collaborator):
    arg = mock.sentinel.BAZ

    somemodule.target(arg)

    collaborator.assert_called_with(bar=arg)
```

- It makes it explicit that the test is using a stand-in object.
- Any attribute access other than `.name` raises `AttributeError`.

Reading:

- <https://www.seanh.cc/2017/03/17/sentinel/>

## Controlling external dependencies

### Using temporary files

For tests that need to write something to a file location without leaving detritus around after the test run has finished. This should only be needed where a _filepath_ is an argument to the system-under-test, such as in functional tests. For other types of test, it is preferable to pass file-like objects as arguments so tests can pass `io.StringIO` instances.

Here's how to create a temporary CSV file using Python's `tempfile` module:

```python
import csv
import tempfile

from django.core.management import call_command


def test_csv_import():
    with tempfile.NamedTemporaryFile(mode="w") as f:
        writer = csv.writer(f)
        writer.writerow(["EA:E2001BND", "01/10/2020", 0.584955, -0.229834])
        f.flush()  # Ensure buffered data is written to disk before the command reads it.

        # Call the management command passing the CSV filepath as an argument.
        call_command("import_csv_file", f.name)
```

The same thing can be done using [Pytest's `tmp_path` fixture](https://docs.pytest.org/en/stable/tmpdir.html#the-tmp-path-fixture), which provides a [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html) object:

```python
import csv

from django.core.management import call_command


def test_csv_import(tmp_path):
    # Create temporary CSV file.
    csv_file = tmp_path / "temp.csv"
    with csv_file.open("w") as f:
        writer = csv.writer(f)
        writer.writerow(["EA:E2001BND", "01/10/2020", 0.584955, -0.229834])

    # Call the management command passing the CSV filepath as an argument.
    call_command("import_csv_file", csv_file)
```

Pytest provides a few other fixtures for creating temporary files and folders:

- [`tmp_path_factory`](https://docs.pytest.org/en/stable/tmpdir.html#the-tmp-path-factory-fixture) — a _session_-scoped fixture for creating `pathlib.Path` temporary directories.
- [`tmpdir`](https://docs.pytest.org/en/stable/tmpdir.html#the-tmpdir-fixture) — a _function_-scoped fixture for creating `py.path.local` temporary directories.
- [`tmpdir_factory`](https://docs.pytest.org/en/stable/tmpdir.html#the-tmpdir-factory-fixture) — a _session_-scoped fixture for creating `py.path.local` temporary directories.

## Functional testing

End-to-end tests that trigger the system via an external interface such as an HTTP request or CLI invocation.

### High quality functional tests

Functional tests will necessarily be slow and fail with less-than-helpful error messages. That's ok - the value they provide is regression protection. You can sleep well at night knowing that all your units are plumbed together correctly.

Follow these patterns when writing functional tests:

- Explicitly comment each phase of a test to explain what is going on. Don't rely on the test name or a docstring.
- Strive to make the test as end-to-end as possible. Exercise the system using an external call (like an HTTP request) and only mock calls to external services.
- Ensure all relevant settings are explicitly defined in the test set-up. Don't rely on implicit setting values.

### Django views

Use [`django-webtest`](https://github.com/django-webtest/django-webtest) for testing Django views. It provides a readable API for clicking on buttons and [submitting forms](https://docs.pylonsproject.org/projects/webtest/en/latest/forms.html).

#### Testing error responses

Pass `status="*"` so 4XX or 5XX responses don't raise an exception.

#### Filling in forms

To fill in a multi-checkbox widget, assign a list of the values to select.
For Django model widgets, this is the PKs of the selected models:

```py
form = page.forms["my_form"]
form["roles"] = [some_role.pk, other_role.pk]
response = form.submit()
```

### Django management commands

Use something like this:

```py
import datetime
import io

import time_machine
from dateutil import tz
from django.core.management import call_command


def test_some_command():
    # Capture output streams.
    stdout = io.StringIO()
    stderr = io.StringIO()

    # Control the time when the management command runs.
    run_at = datetime.datetime(2021, 2, 14, 12, tzinfo=tz.gettz("Europe/London"))
    with time_machine.travel(run_at):
        call_command("some_command_name", stdout=stdout, stderr=stderr)

    # Check command output (if any).
    assert stdout.getvalue() == "..."
    assert stderr.getvalue() == "..."

    # Check side-effects.
```

or using Octo's private pytest fixtures:

```py
import time_machine

from django.core.management import call_command


def test_some_command(command, factory):
    run_at = factory.local.dt("2021-03-25 15:12:00")

    # Run the command at a fixed point in time.
    with time_machine.travel(run_at):
        result = command.run("some_command_name")

    # Check command output (if any).
    assert result.stdout.getvalue() == "..."
    assert result.stderr.getvalue() == "..."

    # Check side-effects.
```

### Click commands

Use something like this:

```py
# tests/functional/conftest.py
import pytest
from click.testing import CliRunner


@pytest.fixture
def runner():
    yield CliRunner(
        # Provide a dictionary of environment variables so that configuration
        # parsing works. Don't provide any values though - ensure each test
        # specifies the values relevant to it.
        env=dict(...)
    )


# tests/functional/test_command.py
import main
import time_machine


def test_some_command(runner):
    # Run command at a fixed point in time, specifying any relevant env vars.
    with time_machine.travel(dt, tick=True):
        result = runner.invoke(
            main.cli,
            args=["name-of-command"],
            catch_exceptions=False,
            env={
                "VENDOR_API_KEY": "xxx",
            },
        )

    # Check exit code.
    assert result.exit_code == 0, result.exception

    # Check side-effects.
```

## Running tests

### Capturing output

By default, Pytest captures output but shows it if the test fails. Use `-s` to prevent output capturing — this is required for `ipdb` breakpoints to work but not for `pdb` or `pdbpp`.

## Using Pytest fixtures

### Shared fixtures

Fixtures defined in a `conftest.py` module can be used in several ways:

- Apply to a single test by adding the fixture name as an argument.
- Apply to every test in a class by decorating the class with `@pytest.mark.usefixtures("...")`.
- Apply to every test in a module by defining a module-level `pytestmark` variable:

  ```py
  pytestmark = pytest.mark.usefixtures("...")
  ```

- Apply to every test in a test suite using the `pytest.ini` file:

  ```dosini
  [pytest]
  usefixtures = ...
  ```

See [docs on the `usefixtures` mark][usefixtures].

[usefixtures]: https://docs.pytest.org/en/7.1.x/how-to/fixtures.html#use-fixtures-in-classes-and-modules-with-usefixtures

### Prefer to inject factories

It's tricky to configure Pytest fixtures, so it's best to inject a _factory_ function or class that can be called with configuration arguments.

## Writing high quality code and tests

High quality code is [easy to change](https://codeinthehole.com/tips/easy-to-change/).

### Anti-patterns

Some anti-patterns for unit tests:

- _Lots of mocks_ - this indicates your unit under test has too many collaborators.
- _Nested mocks_ - this indicates your unit under test knows intimate details about its collaborators (that it shouldn't know).
- _Mocking indirect collaborators_ - it's best to mock the direct collaborators of the unit being tested, not those further down the call chain. Use of `mock.patch` (instead of `mock.patch.object`) is a smell of this problem.
- _Careless factory usage_ - beware of factories creating lots of unnecessary related objects, which can expose test flakiness around ordering (as the test assumes there's only one of something).
- _Conditional logic in tests_ - this is sometimes done to share some set-up steps but often makes the test much harder to understand. It's almost always better to create separate tests (with no conditional logic) and find another way to share common code.

Rules of thumb:

- Design code to use dependency injection and pass in adapters that handle IO. This includes clients for third-party APIs and services for talking to the network, file system or database.
- Keep IO separate from business logic. You want your business logic to live in side-effect-free, pure functions.

### Resources

Useful talks:

- [Fast test, slow test](https://www.youtube.com/watch?v=RAxiiRPHS9k&ab_channel=NextDayVideo) by Gary Bernhardt, Pycon 2012
- [Stop using mocks](https://www.youtube.com/watch?v=rk-f3B-eMkI&ab_channel=PyConUS) by Harry Percival, Pycon 2020
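As a closing illustration of the dependency-injection and IO-separation rules of thumb above, here is a minimal sketch (all names — `Mailer`, `send_reminder`, `FakeMailer` — are hypothetical, not from this codebase):

```python
from typing import Protocol


class Mailer(Protocol):
    """Port describing an IO-performing collaborator."""

    def send(self, to: str, body: str) -> None: ...


def build_reminder(name: str) -> str:
    # Pure business logic: trivially testable without any mocks.
    return f"Hi {name}, your invoice is due."


def send_reminder(mailer: Mailer, to: str, name: str) -> None:
    # IO is delegated to the injected adapter.
    mailer.send(to=to, body=build_reminder(name))


class FakeMailer:
    """Test double that records calls instead of sending email."""

    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))


def test_send_reminder():
    mailer = FakeMailer()
    send_reminder(mailer, to="a@example.com", name="Alan")
    assert mailer.sent == [("a@example.com", "Hi Alan, your invoice is due.")]
```

Because the pure function and the adapter boundary are separate, most tests need no mocking library at all — a hand-rolled fake or the real pure function suffices.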