Pytest Reference

About Pytest Reference

The Pytest Reference is a structured, searchable cheat sheet for the pytest testing framework. It covers eight categories — Basics, Fixtures, Markers, Parametrize, Mocking, Plugins, Config, and CLI — giving Python developers and QA engineers a single place to look up any pytest pattern. Each entry includes a concise description and a real code example, making it faster to write and debug tests without toggling between documentation tabs.

pytest is the de facto standard testing framework for Python projects, used by teams building web applications, data pipelines, CLI tools, and libraries. Its fixture system allows setup and teardown logic to be declared once and reused across tests with fine-grained scope control (function, module, session). The parametrize decorator enables data-driven testing with a single test function covering dozens of input/output combinations. This reference covers all of these patterns, including advanced use cases like indirect parametrization, yield fixtures for setup/teardown, and the built-in `tmp_path` fixture for temporary file testing.

The reference also covers the mocking tools available in pytest: the built-in `monkeypatch` fixture for patching attributes, environment variables, and file system paths; `unittest.mock.patch` and `MagicMock` for creating mock objects; and the `pytest-mock` plugin that provides a `mocker` fixture as a convenient wrapper. On the plugins side, it includes `pytest-cov` for coverage reports, `pytest-xdist` for parallel test execution, `pytest-asyncio` for async test support, and `pytest-django` for Django database fixtures. The CLI section covers the most useful flags like `-v`, `-k`, `-x`, `--lf`, `-m`, and `--durations`.

Key Features

  • Covers 8 categories: Basics, Fixtures, Markers, Parametrize, Mocking, Plugins, Config, CLI
  • Fixture scopes: function (default), module, session — with yield-based setup/teardown examples
  • Parametrize patterns: basic, multi-param combinations, custom IDs, indirect parametrization
  • Mocking: monkeypatch.setattr/setenv, unittest.mock.patch, MagicMock, mock.call_count
  • Plugins: pytest-cov, pytest-xdist (-n auto), pytest-asyncio, pytest-mock, pytest-django
  • Config files: pytest.ini, pyproject.toml, setup.cfg, conftest.py hooks, filterwarnings
  • CLI flags: -v, -k, -x, --lf, -m, --tb=short, -s, --durations with usage examples
  • 100% client-side — no sign-up, no download, no data sent to any server

Frequently Asked Questions

What is a pytest fixture and when should I use one?

A pytest fixture is a function decorated with `@pytest.fixture` that provides setup data or resources to test functions. Fixtures promote reuse — you define shared logic (like creating a database connection, a test client, or sample data) once and inject it into any test function by naming it as a parameter. Use fixtures whenever multiple tests share the same setup code. For cleanup, use `yield` inside the fixture: code before `yield` runs before the test, and code after `yield` runs as teardown.
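A minimal sketch of this pattern, using a hypothetical `db_connection` fixture (the dict stands in for a real connection object):

```python
import pytest

@pytest.fixture
def db_connection():
    # Setup: runs before each test that requests this fixture.
    conn = {"connected": True, "queries": []}
    yield conn
    # Teardown: runs after the test finishes, even if it failed.
    conn["connected"] = False

def test_query(db_connection):
    # The fixture is injected simply by naming it as a parameter.
    db_connection["queries"].append("SELECT 1")
    assert db_connection["connected"]
```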

What is the difference between fixture scopes?

Fixture scope controls how often a fixture is created and destroyed. The default scope is "function", meaning a new fixture instance is created for each test. "class" scope shares one instance across a test class, "module" scope across all tests in a module, "package" scope across a package, and "session" scope across the entire test run. Use wider scopes for expensive resources like database connections or application servers, and the narrow "function" scope for anything that must be isolated between tests.

How does @pytest.mark.parametrize work?

`@pytest.mark.parametrize` lets you run the same test function with multiple sets of inputs. You specify argument names as a comma-separated string and a list of value tuples: `@pytest.mark.parametrize("x,y,expected", [(1, 2, 3), (5, 5, 10)])`. pytest generates a separate test case for each tuple. You can add multiple `@parametrize` decorators to get the Cartesian product of all combinations. Use `pytest.param(..., id="name")` to give test cases human-readable IDs in the output.
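The decorator described above can be sketched like this, including a `pytest.param` case with a custom ID:

```python
import pytest

@pytest.mark.parametrize("x,y,expected", [
    (1, 2, 3),
    (5, 5, 10),
    # pytest.param gives this case a readable ID in the test output.
    pytest.param(-1, 1, 0, id="cancels-out"),
])
def test_add(x, y, expected):
    # pytest runs this function once per tuple above.
    assert x + y == expected
```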

What is monkeypatch and how is it different from unittest.mock?

`monkeypatch` is a pytest built-in fixture that temporarily replaces attributes, environment variables, dictionary entries, and file system paths for the duration of a single test, then automatically restores the originals after the test. It is simpler for straightforward attribute replacement. `unittest.mock.patch` (and `MagicMock`) is more powerful for creating full mock objects with return values, call tracking, and side effects. Both approaches work well in pytest; `pytest-mock` provides a `mocker` fixture that wraps `unittest.mock` in the pytest style.
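A side-by-side sketch of the two styles; `read_home` and the mock client are hypothetical examples:

```python
import os
import pytest
from unittest.mock import MagicMock

def read_home():
    return os.environ.get("HOME", "")

# monkeypatch: simple replacement, automatically reverted after the test.
def test_home(monkeypatch):
    monkeypatch.setenv("HOME", "/tmp/fake-home")
    assert read_home() == "/tmp/fake-home"

# MagicMock: a full mock object with return values and call tracking.
def test_mock_client():
    client = MagicMock()
    client.get.return_value = {"status": 200}
    assert client.get("/health") == {"status": 200}
    assert client.get.call_count == 1
```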

How do I measure test coverage with pytest?

Install the `pytest-cov` plugin (`pip install pytest-cov`) and run `pytest --cov=myapp --cov-report=html tests/`. This generates an HTML coverage report showing which lines are covered by your tests. You can also use `--cov-report=term-missing` for a terminal output that shows uncovered line numbers. Configure coverage settings in `pyproject.toml` or `.coveragerc` to exclude files, set a minimum coverage threshold, or control branch coverage.
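A sketch of such coverage settings in `pyproject.toml`; the omit pattern and threshold are example values, not defaults:

```toml
[tool.coverage.run]
branch = true                  # measure branch coverage, not just lines
omit = ["*/migrations/*"]      # exclude generated files from the report

[tool.coverage.report]
fail_under = 80                # fail the run if total coverage drops below 80%
show_missing = true            # list uncovered line numbers in terminal output
```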

How do I skip a test or mark it as expected to fail?

Use `@pytest.mark.skip(reason="...")` to unconditionally skip a test. Use `@pytest.mark.skipif(condition, reason="...")` to skip based on a runtime condition, such as the OS platform or a missing optional dependency. Use `@pytest.mark.xfail(reason="...")` to mark a test as an expected failure — pytest will report it as "xfailed" if it fails (expected) or "xpassed" if it unexpectedly passes, which can be treated as an error with `xfail_strict = true` in your config.
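All three markers in one sketch; the reasons and the rounding example are illustrative:

```python
import sys
import pytest

@pytest.mark.skip(reason="feature removed in v2")
def test_legacy():
    ...

@pytest.mark.skipif(sys.platform == "win32", reason="POSIX-only paths")
def test_posix_paths():
    assert "/" in "/tmp/x"

@pytest.mark.xfail(reason="known float-representation quirk")
def test_rounding():
    # Fails: 2.675 is stored as slightly less than 2.675 in binary,
    # so banker's rounding yields 2.67. pytest reports this as "xfailed".
    assert round(2.675, 2) == 2.68
```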

How do I run only the tests that failed last time?

Use `pytest --lf` (or `--last-failed`) to run only the tests that failed in the previous run. pytest stores results in a `.pytest_cache` directory. If no tests failed last time, `--lf` falls back to running the full suite. You can combine it with `-x` (stop on first failure) as `pytest --lf -x` to quickly iterate on a failing test without running the entire suite each time.
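A typical fast-iteration loop built from these flags (test paths and the keyword are hypothetical):

```shell
pytest tests/                 # full run; failures are recorded in .pytest_cache
pytest --lf -x                # rerun only last failures, stop on the first one
pytest --lf -x -k "checkout"  # narrow further to tests matching a keyword
```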

How do I configure pytest for my project?

pytest can be configured in `pytest.ini`, `pyproject.toml` (under `[tool.pytest.ini_options]`), or `setup.cfg` (under `[tool:pytest]`). Common settings include `testpaths` to specify where tests live, `addopts` to set default CLI flags (like `-v --tb=short`), `markers` to register custom markers (required with `--strict-markers`), and `filterwarnings` to treat specific warnings as errors or ignore them. The `conftest.py` file is used to share fixtures and hook functions across multiple test files.
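A minimal `pyproject.toml` sketch pulling these settings together; the paths, marker name, and warning filter are example values:

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "-v --tb=short --strict-markers"
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
]
filterwarnings = [
    "error",                                    # treat warnings as errors
    "ignore::DeprecationWarning:somepackage.*", # except from this package
]
```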