MLCflow

MLCflow is an automation interface for CPU/GPU benchmarking, providing a unified Python API to run, manage, and compare benchmark experiments. It supports hardware detection, resource monitoring, and result aggregation. The current PyPI version is 1.2.1, with a release cadence of approximately monthly.

pip install mlcflow
error ModuleNotFoundError: No module named 'mlcflow'
cause Package not installed or installed with wrong name (e.g., 'mlc-flow').
fix Install under the exact PyPI name: pip install mlcflow. A similarly named package (e.g., mlc-flow) does not provide the mlcflow module.
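To confirm the install resolved to the right distribution, query the installed metadata with the standard library (no mlcflow API assumed); this should print 1.2.1 on the current release:

python -c "import importlib.metadata as m; print(m.version('mlcflow'))"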
error KeyError: 'summary' on results
cause Assuming `results` is a single object with a `.summary()` method; `run()` actually returns a list.
fix Use `results[0].summary()`, or iterate over the list.
gotcha The `run()` method returns a list of result dicts, not a single dict. Iterate over the list, or use `results[0]` if only one config was run.
fix Access metrics by indexing first: `results[0]['metric']`.
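A minimal sketch of the list-handling pattern, assuming the MLCBench API shown in the example at the end of this page; 'metric' is a placeholder key for whatever your benchmark actually reports:

from mlcflow import MLCBench

bench = MLCBench()
config = bench.create_config('cpu', 'simple', extra='--iterations 10')
results = bench.run(config)  # always a list, even for a single config

# Single config: index first, then summarize.
print(results[0].summary())

# Multiple results: iterate over the list.
for result in results:
    print(result['metric'])  # placeholder key; see the gotcha above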
deprecated The `BenchmarkConfig` parameter `iterations` is deprecated since v1.1.0; use `extra='--iterations N'` instead.
fix Pass iterations via `extra` parameter: `bench.create_config('cpu', 'simple', extra='--iterations 10')`.
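A before/after sketch of the migration; the deprecated keyword form is shown for contrast and assumes `create_config` forwarded `iterations` to `BenchmarkConfig`:

# Deprecated since v1.1.0:
# config = bench.create_config('cpu', 'simple', iterations=10)

# Current form: pass the iteration count through `extra`.
config = bench.create_config('cpu', 'simple', extra='--iterations 10')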
gotcha When changing hardware (e.g., GPU type), you must create a new `MLCBench` instance; reusing the same instance may cache old hardware info.
fix Create a fresh `MLCBench()` for each hardware configuration.
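A sketch of the one-instance-per-target pattern, using the same assumed API; the 'gpu' config name is illustrative:

from mlcflow import MLCBench

# Separate instances, so no stale hardware info is cached between targets.
cpu_bench = MLCBench()
cpu_results = cpu_bench.run(cpu_bench.create_config('cpu', 'simple'))

gpu_bench = MLCBench()  # fresh instance after switching to the GPU
gpu_results = gpu_bench.run(gpu_bench.create_config('gpu', 'simple'))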

Create a simple CPU benchmark, run it, and print a summary of the first result.

from mlcflow import MLCBench

bench = MLCBench()
# 'iterations' is deprecated since v1.1.0; pass it through `extra`.
config = bench.create_config('cpu', 'simple', extra='--iterations 10')
results = bench.run(config)  # run() returns a list of results
print(results[0].summary())  # index first; the list itself has no summary()