opt-einsum

3.4.0 · verified Tue May 12 · auth: no · python install: verified

opt-einsum is a Python library that optimizes the contraction order of Einstein summation expressions, significantly reducing the execution time of einsum-like operations across backends such as NumPy, Dask, PyTorch, TensorFlow, and JAX. It finds efficient pairwise contraction paths and dispatches the individual contractions to highly optimized routines such as BLAS or cuBLAS where possible. The library is currently at version 3.4.0 and is actively maintained; its path-finding logic powers `numpy.einsum(..., optimize=True)` and serves as an optional optimization backend for `torch.einsum` when installed.

pip install opt-einsum
error ValueError: invalid subscript 'Ų' in einstein sum subscripts string, subscripts must be letters
cause With very complex expressions, or when `memory_limit` forces extra intermediate steps, the planner can generate an intermediate `einsum` equation with more unique indices than there are single lowercase letters (a-z). `opt_einsum` maps the overflow indices to Unicode symbols, which backends that only accept ASCII letters reject as invalid subscripts.
fix Simplify the einsum expression, break it into smaller parts, or try a different optimization strategy (e.g. optimize='greedy') if the default path or memory_limit triggers this. If using contract_path, inspect the returned path info to see the intermediate expressions.
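As a sketch of that inspection step, `opt_einsum.contract_path` returns both the chosen pairwise contraction order and a path-info report listing every intermediate equation (the operand shapes here are arbitrary):

```python
import numpy as np
from opt_einsum import contract_path

# Arbitrary small operands for illustration.
a = np.random.rand(8, 8, 8)
b = np.random.rand(8, 8)

# contract_path does no arithmetic; it only plans the contraction.
path, info = contract_path('ijk,kl->ijl', a, b)

print(path)  # pairwise contraction order, e.g. [(0, 1)]
print(info)  # report: complete contraction, intermediates, FLOP estimates
```

For an expression that raises the subscript error, the printed report shows the intermediate equations the planner generated, which is where the extra indices come from.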
error ImportError: cannot import name 'tensorflow' from 'opt_einsum.backends'
cause This error occurs when `opt_einsum` attempts to load its TensorFlow backend but cannot find the `tensorflow` module as expected, often due to an incomplete or corrupted `opt_einsum` installation or an environment issue.
fix Uninstall and reinstall opt_einsum (pip uninstall opt_einsum && pip install opt-einsum), and ensure TensorFlow is correctly installed and importable in the same Python environment.
error AttributeError: module 'torch.backends' has no attribute 'opt_einsum'
cause In some newer PyTorch versions, `torch.backends.opt_einsum` might not be directly exposed as a public attribute, even though `opt_einsum` can still be used as a backend for `torch.einsum` if installed.
fix To interact with the backend explicitly, `import torch.backends.opt_einsum` directly; it is a submodule, not an eagerly loaded attribute of `torch.backends`. Otherwise, simply ensure opt-einsum is installed (pip install opt-einsum) and `torch.einsum` will use it automatically for path optimization.
error ValueError: axes don't match array (e.g. np.einsum('bdc,ac->ab', a, b, optimize=True) fails while optimize=False works)
cause With `optimize=True`, `numpy.einsum` delegates path finding to `opt_einsum`-style algorithms and may execute intermediate contractions via `tensordot`, which does not support every implicit-summation or index pattern that the naive `einsum` implementation handles. Mismatched or implicitly summed axes in an intermediate step then surface as this error.
fix Fall back to `optimize=False` for the affected expression, use `opt_einsum.contract` directly, or consider upgrading NumPy, where several such path-execution bugs have been fixed over time.
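One defensive pattern (a sketch; the shapes are illustrative) is to catch the failure and fall back to the unoptimized contraction, which handles the index pattern directly:

```python
import numpy as np

a = np.random.rand(4, 5, 6)  # indices 'bdc'
b = np.random.rand(3, 6)     # indices 'ac'

try:
    # May raise ValueError on NumPy versions where the optimized
    # path mishandles this index pattern.
    result = np.einsum('bdc,ac->ab', a, b, optimize=True)
except ValueError:
    # Naive left-to-right contraction: slower but more permissive.
    result = np.einsum('bdc,ac->ab', a, b, optimize=False)

print(result.shape)  # (3, 4)
```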
breaking The `path` keyword argument of `opt_einsum.contract` has been renamed to `optimize` to align with NumPy's API; `path` still works but is deprecated and will be removed in a future version.
fix Replace `path=...` with `optimize=...` in calls to `opt_einsum.contract`.
gotcha When using `numpy.einsum`, explicitly setting `optimize=True` (or a specific path strategy) is crucial for performance. Without it, `numpy.einsum` defaults to a left-to-right contraction order, which can be highly inefficient for complex expressions. `opt_einsum.contract` applies optimization by default.
fix Always use `np.einsum(..., optimize=True)` or `opt_einsum.contract(...)` for performance-critical einsum operations.
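To see what the flag buys you, `np.einsum_path` (standard NumPy) reports the contraction order that `optimize=True` would use, plus its estimated speedup over the naive order; the sizes here are illustrative:

```python
import numpy as np

N = 10
C = np.random.rand(N, N)
I = np.random.rand(N, N, N, N)

# Plan only; no contraction is performed.
path, report = np.einsum_path('pi,qj,ijkl,rk,sl->pqrs', C, C, I, C, C,
                              optimize='greedy')

print(path)    # ['einsum_path', ...] pairwise order
print(report)  # FLOP counts, largest intermediate, estimated speedup
```

The returned `path` can be passed back as `optimize=path` to skip re-planning the same expression on repeated calls.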
gotcha Finding the truly optimal contraction path for an einsum expression is an NP-hard problem. While `opt-einsum` offers an 'optimal' strategy, it can scale factorially with the number of terms and quickly become intractable for many tensors. For larger expressions, heuristic algorithms like 'greedy' or 'random-greedy' are used.
fix For complex or many-tensor contractions, prefer `optimize='auto'` (the default) or explicitly choose a heuristic path like `'greedy'` or `'random-greedy-128'` to balance path quality and computation time. Avoid `'optimal'` for expressions with more than a few tensors.
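A sketch comparing strategies on a five-term contraction, small enough that 'optimal' is still tractable; `opt_cost` is the planner's FLOP estimate for the chosen path:

```python
import numpy as np
from opt_einsum import contract_path

N = 10
C = np.random.rand(N, N)
I = np.random.rand(N, N, N, N)
eq = 'pi,qj,ijkl,rk,sl->pqrs'

for strategy in ('greedy', 'random-greedy', 'optimal'):
    path, info = contract_path(eq, C, C, I, C, C, optimize=strategy)
    # opt_cost: estimated FLOPs for the path this strategy found
    print(f"{strategy:>13}: path={path} flops={int(info.opt_cost)}")
```

For a handful of tensors the heuristics usually match 'optimal'; the gap, and the planning cost of 'optimal', both grow as terms are added.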
gotcha The `memory_limit` parameter in `opt_einsum.contract` can constrain the size of intermediate tensors. While useful for memory management, imposing a limit can make contractions exponentially slower to perform if it restricts the optimizer from finding the most efficient path. The default is `None`, meaning no memory limit.
fix Carefully consider the trade-off between memory usage and performance when setting `memory_limit`. Only use it if memory constraints are strict, and be aware of potential performance degradation.
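A minimal sketch of the trade-off; `memory_limit` caps intermediate size in number of elements (not bytes), and here the cap is set to the size of the largest input:

```python
import numpy as np
from opt_einsum import contract

N = 10
C = np.random.rand(N, N)
I = np.random.rand(N, N, N, N)
eq = 'pi,qj,ijkl,rk,sl->pqrs'

# Constrain every intermediate to at most N**4 elements; the planner
# may have to pick a slower contraction order to comply.
limited = contract(eq, C, C, I, C, C, memory_limit=N**4)

# Default (memory_limit=None): no cap, fastest discovered path.
unlimited = contract(eq, C, C, I, C, C)

print(np.allclose(limited, unlimited))  # same result, possibly different plan
```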
breaking `numpy` is a required dependency for `opt-einsum` and must be installed for `opt-einsum` functionality (and any `numpy.einsum` usage). The `ModuleNotFoundError` indicates `numpy` was not found in the environment.
fix Ensure `numpy` is installed in the environment (e.g., `pip install numpy`) before using `opt-einsum` or any related functionality.
| python | os / libc     | status | wheel install | import | disk  |
|--------|---------------|--------|---------------|--------|-------|
| 3.9    | alpine (musl) | wheel  | -             | 0.03s  | 17.8M |
| 3.9    | alpine (musl) | -      | -             | 0.03s  | 17.8M |
| 3.9    | slim (glibc)  | wheel  | 1.7s          | 0.02s  | 18M   |
| 3.9    | slim (glibc)  | -      | -             | 0.02s  | 18M   |
| 3.10   | alpine (musl) | wheel  | -             | 0.03s  | 18.3M |
| 3.10   | alpine (musl) | -      | -             | 0.03s  | 18.3M |
| 3.10   | slim (glibc)  | wheel  | 1.5s          | 0.02s  | 19M   |
| 3.10   | slim (glibc)  | -      | -             | 0.02s  | 19M   |
| 3.11   | alpine (musl) | wheel  | -             | 0.05s  | 20.3M |
| 3.11   | alpine (musl) | -      | -             | 0.05s  | 20.3M |
| 3.11   | slim (glibc)  | wheel  | 1.6s          | 0.04s  | 21M   |
| 3.11   | slim (glibc)  | -      | -             | 0.04s  | 21M   |
| 3.12   | alpine (musl) | wheel  | -             | 0.04s  | 12.2M |
| 3.12   | alpine (musl) | -      | -             | 0.04s  | 12.2M |
| 3.12   | slim (glibc)  | wheel  | 1.5s          | 0.04s  | 13M   |
| 3.12   | slim (glibc)  | -      | -             | 0.04s  | 13M   |
| 3.13   | alpine (musl) | wheel  | -             | 0.04s  | 11.9M |
| 3.13   | alpine (musl) | -      | -             | 0.04s  | 11.8M |
| 3.13   | slim (glibc)  | wheel  | 1.5s          | 0.03s  | 12M   |
| 3.13   | slim (glibc)  | -      | -             | 0.04s  | 12M   |

This quickstart demonstrates how to use `opt_einsum.contract` as a drop-in replacement for `numpy.einsum` to automatically optimize the tensor contraction order and achieve significant performance improvements.

import numpy as np
from opt_einsum import contract

N = 10
C = np.random.rand(N, N)
I = np.random.rand(N, N, N, N)

# Unoptimized numpy.einsum contracts strictly left to right (for comparison)
result_np = np.einsum('pi,qj,ijkl,rk,sl->pqrs', C, C, I, C, C)

# opt_einsum.contract finds an efficient pairwise contraction order
result_opt = contract('pi,qj,ijkl,rk,sl->pqrs', C, C, I, C, C)

print(f"Optimized result shape: {result_opt.shape}")
print(f"Results match: {np.allclose(result_np, result_opt)}")