pytest-codspeed
pytest-codspeed is a pytest plugin that integrates performance benchmarking into your Python test suite. It allows developers to measure and track the performance of their code, providing support for CPU simulation, wall-time, and memory usage measurements. The library is actively maintained with frequent releases, currently at version 4.3.0, and aims to provide highly consistent benchmark results, especially when run in CI environments.
Warnings
- breaking v4.0.0 introduced `CodSpeedHQ/instrument-hooks`, which changes how instrument states are controlled. This may cause slight measurement differences in tiny microbenchmarks compared to previous versions.
- gotcha Running benchmarks locally with the `--codspeed` flag will display results in the console but will *not* produce performance reports or integrate with the CodSpeed dashboard. For full performance reporting, regression tracking, and consistent measurements, benchmarks must be run in a CI/CD environment using the official CodSpeed runner (e.g., GitHub Action).
- gotcha When using the `walltime` instrument (enabled via `--codspeed-mode walltime`), avoid running multiple benchmark processes in parallel (e.g., with `pytest-xdist`). Parallel execution can lead to noisy and inconsistent measurements. For optimal consistency in walltime mode, use isolated machines like CodSpeed Hosted Macro Runners.
- gotcha Python 3.9 or newer is required. Older Python versions are not supported.
- gotcha Benchmarks are still tests: always include assertions in your benchmark functions to verify that the code being measured is producing the expected output. This ensures that you're not just measuring speed, but the speed of correct functionality.
Install
pip install pytest-codspeed
Imports
- pytest.mark.benchmark
import pytest

@pytest.mark.benchmark
def test_my_function(): ...
- benchmark fixture
def test_my_function(benchmark):
    result = benchmark(lambda: my_code_to_measure())
    assert result is not None
Quickstart
import pytest

def sum_squares(arr):
    """Sum the squares of the numbers in an array."""
    total = 0
    for x in arr:
        total += x * x
    return total

@pytest.mark.benchmark
def test_sum_squares_full_function():
    """Benchmark an entire test function using the decorator."""
    # This entire function's execution will be measured
    assert sum_squares(range(1000)) == 332833500

def mean(data):
    return sum(data) / len(data)

def test_mean_performance_fine_grained(benchmark):
    """Benchmark a specific part of a test function using the fixture."""
    data = list(range(1_000_000))  # Setup not measured
    # Only the lambda function's execution will be measured
    result = benchmark(lambda: mean(data))
    assert result == 499999.5

# To run locally: pytest your_test_file.py --codspeed
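Benchmarks also compose with standard pytest features. For example, `pytest.mark.parametrize` can measure the same function across several input sizes, producing one benchmark per parameter. A sketch (the `fib` helper is illustrative):

```python
import pytest

def fib(n):
    """Iterative Fibonacci: returns the n-th Fibonacci number."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

@pytest.mark.parametrize("n", [10, 100, 1000])
@pytest.mark.benchmark
def test_fib(n):
    # Each parametrized case is reported as its own benchmark;
    # the assertion keeps it a real test, not just a timing.
    assert fib(n) > 0
```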