pyperf: Python Benchmark Suite

2.10.0 · active · verified Thu Apr 16

pyperf is a Python module for writing, running, and analyzing benchmarks. It provides an API for reliable performance measurements, including automatic calibration, multi-process execution, statistical analysis with outlier detection, and comprehensive metadata collection. Currently at version 2.10.0, pyperf maintains an active release cadence and requires Python 3.9 or newer.

Common errors

Warnings

Install

Imports

Quickstart

This quickstart benchmarks a simple function with `pyperf.Runner`: define the function to measure, then pass it to `runner.bench_func()`. The output includes the mean execution time and standard deviation. For benchmarking a single statement, use `runner.timeit()` instead, and pass the `-o` command-line option when running the script to save results to a JSON file.

import pyperf
import time

def my_benchmark_func():
    time.sleep(0.001)

runner = pyperf.Runner()
# Benchmark a simple function
runner.bench_func('sleep_1ms', my_benchmark_func)

# Or benchmark a statement
# runner.timeit(
#     name="sort a sorted list",
#     stmt="sorted(s, key=f)",
#     setup="f = lambda x: x; s = list(range(1000))"
# )
# Results are printed to stdout by default. Use -o output.json for file output.
