{"id":7598,"library":"pyperf","title":"pyperf: Python Benchmark Suite","description":"pyperf is a Python module designed to write, run, and analyze benchmarks. It provides a robust API for reliable performance measurements, including automatic calibration, multi-process execution, statistical analysis with outlier detection, and comprehensive metadata collection. Currently at version 2.10.0, pyperf maintains an active release cadence and requires Python 3.9 or newer.","status":"active","version":"2.10.0","language":"en","source_language":"en","source_url":"https://github.com/psf/pyperf","tags":["benchmarking","performance","profiling","testing"],"install":[{"cmd":"pip install pyperf","lang":"bash","label":"Install latest version"}],"dependencies":[{"reason":"Required for memory tracking features (e.g., --track-memory), especially on macOS.","package":"psutil","optional":true}],"imports":[{"symbol":"Runner","correct":"from pyperf import Runner"},{"note":"The library was renamed from 'perf' to 'pyperf' in version 1.6.0. Old imports will cause a ModuleNotFoundError.","wrong":"import perf","symbol":"pyperf","correct":"import pyperf"}],"quickstart":{"code":"import pyperf\nimport time\n\ndef my_benchmark_func():\n    time.sleep(0.001)\n\nrunner = pyperf.Runner()\n# Benchmark a simple function\nrunner.bench_func('sleep_1ms', my_benchmark_func)\n\n# Or benchmark a statement\n# runner.timeit(\n#     name=\"sort a sorted list\",\n#     stmt=\"sorted(s, key=f)\",\n#     setup=\"f = lambda x: x; s = list(range(1000))\"\n# )\n# Results are printed to stdout by default. Use -o output.json for file output.","lang":"python","description":"This quickstart demonstrates benchmarking a simple function with `pyperf.Runner`: define the function to be benchmarked, then pass it to the runner. The output includes the mean execution time and standard deviation. For more complex scenarios, `runner.timeit()` benchmarks single statements, and results can be saved to a JSON file with the `-o` command-line option when running the script."},"warnings":[{"fix":"Upgrade your Python environment to 3.9 or newer. For Python 2.7, use `pyperf==1.7.1`.","message":"pyperf 2.x requires Python 3.9 or newer. Python 3.8 and older are no longer supported. Attempting to use newer pyperf versions on older Python versions will result in a `RuntimeError`.","severity":"breaking","affected_versions":">=2.0.0"},{"fix":"Update all imports from `import perf` to `import pyperf` in your code.","message":"The project was renamed from `perf` to `pyperf` in version 1.6.0. Old `import perf` statements will cause `ModuleNotFoundError`.","severity":"breaking","affected_versions":">=1.6.0"},{"fix":"Replace calls to `pyperf.perf_counter()` with `time.perf_counter()`.","message":"`pyperf.perf_counter()` was deprecated in version 2.0.0. Use Python's built-in `time.perf_counter()` directly instead.","severity":"deprecated","affected_versions":">=2.0.0"},{"fix":"Rerun the benchmark with more `--runs`, `--values`, or `--loops` (e.g., `python3 your_script.py --runs 40 --values 20`). Consider running `python3 -m pyperf system tune` to prepare your system for stable benchmarking.","message":"Benchmarks may report 'WARNING: the benchmark result may be unstable' due to high variance, often caused by system jitter or too few measurement runs/values. This can lead to unreliable performance figures.","severity":"gotcha","affected_versions":"All"},{"fix":"Install the `psutil` dependency: `pip install psutil`.","message":"Memory tracking with the `--track-memory` option requires the optional `psutil` package. If `psutil` is not installed, memory tracking will not function, particularly on macOS, and may produce incomplete data.","severity":"gotcha","affected_versions":"All"},{"fix":"Update your code and command-line scripts to use 'value' terminology. For instance, replace `runner.bench_sample_func()` with `runner.bench_time_func()`.","message":"In pyperf 2.0.0, the terminology changed from 'sample' to 'value' across the API and command-line options. For example, `Benchmark.get_samples()` became `Benchmark.get_values()`, and `--samples` became `--values`.","severity":"breaking","affected_versions":">=2.0.0"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Update your import statements from `import perf` to `import pyperf`, and update any other references to `perf` in your code.","cause":"The library was renamed from 'perf' to 'pyperf' in version 1.6.0. Your code is trying to import the old name.","error":"ModuleNotFoundError: No module named 'perf'"},{"fix":"Upgrade your Python environment to version 3.9 or higher. If you must use Python 2.7, install an older version of pyperf: `pip install pyperf==1.7.1`.","cause":"You are attempting to run a pyperf version (>=2.0.0) that is incompatible with your current Python interpreter (e.g., Python 3.8 or older).","error":"RuntimeError: pyperf requires Python 3.9 or newer"},{"fix":"Increase the number of runs, values, and/or loops by passing options like `--runs N`, `--values N`, or `--loops N` to your benchmark command. Running `python3 -m pyperf system tune` can also help reduce system jitter.","cause":"pyperf detected significant variance in the benchmark results, indicating unstable measurements or too few data points.","error":"WARNING: the benchmark result may be unstable * the maximum (X us) is Y% greater than the mean (Z us)"},{"fix":"Update your code to use the newer method name: `runner.bench_time_func()`. Similarly, update any command-line options like `--samples` to `--values`.","cause":"In pyperf 2.0.0, the `bench_sample_func` method (and similar 'sample' terminology) was renamed to `bench_time_func` (and 'value' terminology).","error":"AttributeError: 'Runner' object has no attribute 'bench_sample_func'"}]}