CMA-ES (pycma) for Numerical Optimization
CMA-ES (pycma) is a Python implementation of the Covariance Matrix Adaptation Evolution Strategy, a robust randomized derivative-free numerical optimization algorithm. It is designed for challenging non-convex, ill-conditioned, multi-modal, rugged, and noisy problems in continuous and mixed-integer search spaces. The library is currently at version 4.4.4 and maintains an active release cadence with regular improvements and bug fixes.
Warnings
- breaking The return signature of `cma.fmin` changed significantly. Prior to version 2.4.2, `cma.fmin` returned a 10-tuple. Since version 2.4.2, `cma.fmin2` (the recommended interface) returns a `(x_best, es)` tuple, where `es` is a `CMAEvolutionStrategy` instance containing all detailed results in `es.result`. Update old code to use `fmin2` and access results through the `es` object.
- breaking The `cma.constraints_handling.BoundTransform` class was deprecated/temporarily missing in version 4.1.0. While re-added in 4.4.2 for compatibility, the recommended and stable way to handle bound constraints is `cma.BoundDomainTransform`.
- gotcha Optimization in 1-D is explicitly not supported and raises a `ValueError`. CMA-ES is designed for problems of dimension two and higher.
- gotcha Older versions of `pycma` (prior to 4.4.2) experienced plotting issues when used with `matplotlib > 3.8.0`. These compatibility problems have been addressed in recent releases.
- gotcha The `cma.purecma` submodule, while having minimal dependencies (no `numpy` requirement), is significantly slower than the main `cma` implementation, which heavily relies on `numpy` for performance.
Install
-
pip install cma
Imports
- fmin2
import cma
cma.fmin2(...)
- CMAEvolutionStrategy
import cma
es = cma.CMAEvolutionStrategy(...)
es.ask()
es.tell(...)
Quickstart
import cma
import numpy as np
def rosenbrock(x):
    """The Rosenbrock function for demonstration."""
    return sum(
        100.0 * (x[i + 1] - x[i]**2)**2 + (1 - x[i])**2
        for i in range(len(x) - 1)
    )
# Define initial solution and initial step-size (sigma)
# For a 10-dimensional problem, starting near the origin.
initial_solution = 10 * [0.1] # [0.1, 0.1, ..., 0.1]
initial_sigma = 0.5
# Run the CMA-ES optimization using fmin2
# fmin2 returns (x_best, CMAEvolutionStrategy_instance)
x_best, es = cma.fmin2(
    rosenbrock,
    initial_solution,
    initial_sigma,
    options={'maxfevals': 10000, 'verb_log': 0}  # Limit evaluations, suppress logging for quick run
)
print(f"Optimization finished after {es.result.evaluations} evaluations.")
print(f"Best solution found: {np.round(x_best, 4)}")
print(f"Objective value at best solution: {es.result.fbest}")
# Detailed results can be accessed via es.result or es.result_pretty()
# print(es.result_pretty())