{"id":5433,"library":"pytest-race","title":"pytest-race","description":"pytest-race is a plugin for the pytest testing framework that helps detect and test race conditions in your Python codebase. It provides a `start_race` fixture that runs a target callable concurrently in multiple threads. The current version is 0.2.0; the project has a slow release cadence and was last updated in June 2022.","status":"active","version":"0.2.0","language":"en","source_language":"en","source_url":"https://github.com/idlesign/pytest-race","tags":["pytest","testing","race conditions","concurrency","plugin","multithreading"],"install":[{"cmd":"pip install pytest-race","lang":"bash","label":"Install with pip"}],"dependencies":[{"reason":"Core testing framework; required for fixtures and test execution.","package":"pytest","optional":false}],"imports":[{"note":"start_race is a pytest fixture and is injected directly into test functions as an argument.","symbol":"start_race","correct":"def test_my_race_condition(start_race):"}],"quickstart":{"code":"from time import sleep\nfrom random import randint\n\nACCUMULATOR = 0  # This global variable is prone to race conditions.\n\ndef test_race_condition_example(start_race):\n    \"\"\"Demonstrates testing a race condition with pytest-race.\"\"\"\n    global ACCUMULATOR\n    ACCUMULATOR = 0  # Reset for each test run.\n\n    def actual_test_target():\n        global ACCUMULATOR\n        increment = randint(1, 10000)\n        accumulator_before = ACCUMULATOR\n        sleep(0.01)  # Simulate latency so the race condition can manifest.\n        ACCUMULATOR += increment\n\n        # Assert that the accumulator increased by exactly 'increment'\n        # (this assertion will likely fail due to the race condition).\n        # In a real test, you'd assert against expected concurrent behavior.\n        assert accumulator_before + increment == ACCUMULATOR\n\n    # Run 'actual_test_target' in 2 threads simultaneously.\n    start_race(threads_num=2, target=actual_test_target)\n\n# To run this test:\n# 1. Save it as e.g. test_race_example.py\n# 2. Run `pytest -s` (the -s flag shows print output if you add any)\n","lang":"python","description":"This quickstart demonstrates how to use the `start_race` fixture to test a simple race condition involving a global accumulator. The `actual_test_target` function is executed concurrently by multiple threads, and an assertion detects whether the race condition leads to unexpected results. The `sleep` call widens the window in which the race condition can occur."},"warnings":[{"fix":"Ensure that shared resources in your `target` callable are either properly locked/synchronized or designed to be thread-safe. Alternatively, craft your assertions to reflect the non-deterministic but acceptable outcomes of concurrent operations, or to specifically catch the race condition you expect.","message":"When writing concurrent tests, be extremely careful with shared mutable state (such as global variables or shared objects). The `start_race` fixture merely runs code in multiple threads; it is up to you to design the test so that it exposes and asserts against the race condition, which usually involves shared state that is not properly synchronized.","severity":"gotcha","affected_versions":"All"},{"fix":"Perform thread-safe logging or accumulate results within the `target` callable, then perform final assertions on the collected data in the main test function (after `start_race` returns).","message":"Pytest itself is single-threaded. Avoid using pytest-specific assertion helpers or primitives (e.g., `pytest.warns()`, `pytest.raises()`, `caplog`, `capsys`) from the threads spawned via `start_race`'s `target` callable, as these are not designed to be thread-safe.","severity":"gotcha","affected_versions":"All"},{"fix":"Design assertions to be robust against expected timing variations; consider `pytest.approx()` for numerical comparisons, or implement retries/waits with timeouts inside the target function before asserting, where appropriate for your use case. Re-run flaky tests repeatedly to confirm the issue.","message":"Race condition tests are inherently timing-dependent and can be flaky. Overly strict or poorly timed assertions can produce intermittent failures that mask the actual race condition or yield false positives.","severity":"gotcha","affected_versions":"All"}],"env_vars":null,"last_verified":"2026-04-13T00:00:00.000Z","next_check":"2026-07-12T00:00:00.000Z"}