pyperf: Python Benchmark Suite
pyperf is a Python module for writing, running, and analyzing benchmarks. It provides a robust API for reliable performance measurements, including automatic calibration, multi-process execution, statistical analysis with outlier detection, and comprehensive metadata collection. Currently at version 2.10.0, pyperf is actively maintained with regular releases and requires Python 3.9 or newer.
Common errors
- ModuleNotFoundError: No module named 'perf'
  Cause: The library was renamed from 'perf' to 'pyperf' in version 1.6.0; the code is importing the old name.
  Fix: Change `import perf` to `import pyperf`, and update any remaining references to `perf` within your code.
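For code that may run against either package name, a guarded import can bridge the rename. This is a sketch, not a pattern from pyperf's documentation; the `None` fallback simply marks that neither name is installed:

```python
# Prefer the current package name; fall back to the pre-1.6.0 name.
try:
    import pyperf
except ImportError:
    try:
        import perf as pyperf  # old package name, pre-1.6.0 installs only
    except ImportError:
        pyperf = None  # neither name is installed
```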
- RuntimeError: pyperf requires Python 3.9 or newer
  Cause: The installed pyperf version (>= 2.0.0) is incompatible with the current Python interpreter (3.8 or older).
  Fix: Upgrade your Python environment to 3.9 or newer. If you must use Python 2.7, pin an older release: `pip install pyperf==1.7.1`.
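The same interpreter floor can be checked up front, before pyperf is even imported. The helper name below is hypothetical, mirroring the check pyperf performs rather than reproducing its code:

```python
import sys

def check_python_version(min_version=(3, 9)):
    """Hypothetical guard mirroring pyperf 2.x's interpreter floor."""
    if sys.version_info < min_version:
        raise RuntimeError(
            "pyperf 2.x requires Python %d.%d or newer" % min_version
        )

check_python_version()  # raises RuntimeError on Python 3.8 and older
```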
- WARNING: the benchmark result may be unstable * the maximum (X us) is Y% greater than the mean (Z us)
  Cause: pyperf detected high variance in the benchmark results, indicating unstable measurements or too few data points.
  Fix: Increase the number of runs, values, and/or loops with options like `--runs N`, `--values N`, `--loops N`. Running `python3 -m pyperf system tune` can also reduce system jitter.
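The warning compares the maximum value against the mean. A rough recreation of that check, with an illustrative 50% cutoff that is an assumption here, not pyperf's exact rule:

```python
import statistics

def is_unstable(values, threshold=0.50):
    """Flag a run whose maximum exceeds the mean by more than `threshold`.
    The 50% default is an illustrative cutoff, not pyperf's exact rule."""
    mean = statistics.mean(values)
    return (max(values) - mean) / mean > threshold

print(is_unstable([1.0, 1.02, 0.98, 1.01]))  # tight cluster -> False
print(is_unstable([1.0, 1.02, 0.98, 2.4]))   # large outlier  -> True
```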
- AttributeError: 'Runner' object has no attribute 'bench_sample_func'
  Cause: pyperf 2.0.0 renamed `bench_sample_func` (and the 'sample' terminology generally) to `bench_time_func` (and 'value' terminology).
  Fix: Call `runner.bench_time_func()` instead, and update command-line options such as `--samples` to `--values`.
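Code that must read results from both pre- and post-2.0.0 pyperf can dispatch on whichever accessor exists. The wrapper name is a hypothetical helper, not part of pyperf's API:

```python
def get_benchmark_values(benchmark):
    """Read measured values from a pyperf Benchmark across the 2.0.0
    rename: 'samples' became 'values'. Works with either API."""
    if hasattr(benchmark, "get_values"):
        return benchmark.get_values()       # pyperf >= 2.0.0
    return benchmark.get_samples()          # pyperf < 2.0.0
```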
Warnings
- breaking pyperf 2.x requires Python 3.9 or newer. Python 3.8 and older are no longer supported. Attempting to use newer pyperf versions on older Python versions will result in a `RuntimeError`.
- breaking The project was renamed from `perf` to `pyperf` in version 1.6.0. Old `import perf` statements will cause `ModuleNotFoundError`.
- deprecated `pyperf.perf_counter()` was deprecated in version 2.0.0. It is recommended to use Python's built-in `time.perf_counter()` directly.
- gotcha Benchmarks may report 'WARNING: the benchmark result may be unstable' due to high variance, often from system jitter or insufficient measurement runs/values. This can lead to unreliable performance figures.
- gotcha Memory tracking using the `--track-memory` option requires the optional `psutil` package. If `psutil` is not installed, memory tracking will not function, particularly on macOS, and may lead to unexpected behavior or incomplete data.
- breaking In pyperf 2.0.0, the terminology changed from 'sample' to 'value' across the API and command-line options. For example, `Benchmark.get_samples()` became `Benchmark.get_values()`, and `--samples` became `--values`.
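Per the deprecation note above, timing callbacks should use the stdlib `time.perf_counter()` directly. A sketch of a callback in the shape `Runner.bench_time_func` expects (run the workload `loops` times, return total elapsed seconds); the workload and names are illustrative:

```python
import json
import time

def bench_json_dumps(loops):
    """Illustrative bench_time_func-style callback: run the workload
    `loops` times and return the elapsed time in seconds, measured with
    the stdlib time.perf_counter() (pyperf.perf_counter is deprecated)."""
    data = {"key": list(range(100))}
    t0 = time.perf_counter()
    for _ in range(loops):
        json.dumps(data)
    return time.perf_counter() - t0

# With pyperf installed, this would be registered as:
# runner = pyperf.Runner()
# runner.bench_time_func("json_dumps", bench_json_dumps)
```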
Install
- pip install pyperf
Imports
- Runner
from pyperf import Runner
- pyperf
import pyperf  # formerly 'import perf'; renamed in version 1.6.0
Quickstart
import pyperf
import time
def my_benchmark_func():
time.sleep(0.001)
runner = pyperf.Runner()
# Benchmark a simple function
runner.bench_func('sleep_1ms', my_benchmark_func)
# Or benchmark a statement
# runner.timeit(
# name="sort a sorted list",
# stmt="sorted(s, key=f)",
# setup="f = lambda x: x; s = list(range(1000))"
# )
# Results are printed to stdout by default. Use -o output.json for file output.