cotengra - Hyper optimized tensor network contraction
cotengra is a Python library for hyper-optimized contraction of large tensor networks and einsum expressions. It provides advanced pathfinding algorithms, including hyper-optimization over multiple heuristics, to minimize computational cost (FLOPs and memory). The current version is 0.7.5, and the project is actively maintained, with regular releases focused on performance, new optimization strategies, and bug fixes.
Common errors
- ModuleNotFoundError: No module named 'optuna'
  - cause: Using `HyperOptimizer` or `ContractionTree.tune()` with an optimization method that requires `optuna` or `cmaes` without the respective library installed.
  - fix: Install the required optional dependency: `pip install cotengra[optuna]` or `pip install cotengra[cmaes]` (or both with `pip install cotengra[optuna,cmaes]`).
- ValueError: input '...' has shape '(D1,D2,...)' but need '(...)' for index 'x'
  - cause: The dimension implied by the einsum expression for a given index does not match the actual dimension of the corresponding array.
  - fix: Check the `expr` string carefully and ensure the entries of `shapes` match the order and dimensions of the indices of each tensor.
- TypeError: 'NoneType' object is not callable (often related to multiprocessing in notebooks)
  - cause: Multiprocessing pools (used during optimization) can fail to initialize correctly in interactive environments like Jupyter notebooks, or on Windows without proper guards.
  - fix: In interactive environments, consider constructing `HyperOptimizer` with `parallel=False` to run sequentially. In scripts that use multiprocessing, guard the code that creates child processes with `if __name__ == '__main__':`.
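The `if __name__ == '__main__':` guard mentioned in the last fix can be sketched with a plain stdlib process pool; a script that runs cotengra's parallel hyper-optimization would use the same structure (the `score` function here is a made-up stand-in for one optimization trial, not cotengra API):

```python
# Sketch of the main-guard pattern that parallel hyper-optimization
# relies on when worker processes are spawned (e.g. on Windows/macOS,
# where child processes re-import the script).
from concurrent.futures import ProcessPoolExecutor

def score(trial):
    # hypothetical stand-in for one hyper-optimization trial
    return trial * trial

def main():
    # child processes are only created inside the guard below,
    # so re-importing this module never re-creates the pool
    with ProcessPoolExecutor(max_workers=2) as pool:
        return list(pool.map(score, range(4)))

if __name__ == '__main__':
    print(main())  # [0, 1, 4, 9]
```

In a notebook, skipping the pool entirely (sequential execution) sidesteps the problem, which is what passing `parallel=False` achieves.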
Warnings
- breaking The method for determining the number of workers for non-distributed pools changed significantly. It now prioritizes the `COTENGRA_NUM_WORKERS` environment variable, then `OMP_NUM_THREADS`, then `os.cpu_count()`.
- gotcha The default hyper-optimizer used by the `'auto'` preset changed from `optuna` to `cmaes` (if available), potentially altering performance characteristics for existing code.
- gotcha Behavior of `strip_exponent` with `gather_slices` and `check_zero` changed. Slices with zero value will now return `float('-inf')` as exponent, and `gather_slices` combines exponents.
- gotcha Warnings about missing recommended dependencies (e.g., `optuna`, `cmaes`) are now only issued when those dependencies would actually be used (e.g., during hyper-optimization calls).
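The worker-count resolution order from the breaking change above can be illustrated with a small sketch (the helper name `resolve_num_workers` is illustrative only, not cotengra's internal function):

```python
import os

def resolve_num_workers(env=None):
    # mirrors the documented priority: COTENGRA_NUM_WORKERS,
    # then OMP_NUM_THREADS, then the machine's CPU count
    env = os.environ if env is None else env
    for var in ('COTENGRA_NUM_WORKERS', 'OMP_NUM_THREADS'):
        if var in env:
            return int(env[var])
    return os.cpu_count()

print(resolve_num_workers({'COTENGRA_NUM_WORKERS': '4'}))  # 4
print(resolve_num_workers({'OMP_NUM_THREADS': '8'}))       # 8
```

Setting `COTENGRA_NUM_WORKERS=1` before launching a script is therefore a simple way to pin pool size, e.g. for reproducible benchmarks.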
Install
- pip install cotengra
- pip install cotengra[optuna,cmaes]
Imports
- ContractionTree
from cotengra import ContractionTree
- HyperOptimizer
from cotengra import HyperOptimizer
- einsum_tree
from cotengra import einsum_tree
Quickstart
import cotengra as ctg
import numpy as np
# Define an einsum expression and corresponding tensor shapes
expr = 'ijkl,klmn,mnop->ijop'
shapes = [(2,3,4,5), (4,5,6,7), (6,7,8,9)]
arrays = [np.random.rand(*s) for s in shapes]
# Build a hyper-optimized ContractionTree for the expression
# (for real use, consider increasing max_time and max_repeats)
opt = ctg.HyperOptimizer(max_repeats=16, max_time=5, progbar=False)
tree = ctg.einsum_tree(expr, *shapes, optimize=opt)
# Perform the contraction
result = tree.contract(arrays)
print("Contraction result shape:", result.shape)
print("Optimized path:", tree.get_path())
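A contraction path such as `[(0, 1), (0, 1)]` means: contract tensors 0 and 1 first, then contract the intermediate with the remaining tensor. Following a fixed path by hand with plain NumPy shows it gives the same answer as contracting everything at once (a sketch: the path here is illustrative, not necessarily what the optimizer returns):

```python
import numpy as np

expr = 'ijkl,klmn,mnop->ijop'
shapes = [(2, 3, 4, 5), (4, 5, 6, 7), (6, 7, 8, 9)]
arrays = [np.random.rand(*s) for s in shapes]

# step 1 of the path [(0, 1), (0, 1)]: contract tensors 0 and 1
# over their shared indices k, l
t01 = np.einsum('ijkl,klmn->ijmn', arrays[0], arrays[1])
# step 2: contract the intermediate with tensor 2 over m, n
manual = np.einsum('ijmn,mnop->ijop', t01, arrays[2])

# same result as a single direct contraction
direct = np.einsum(expr, *arrays)
print(np.allclose(manual, direct))  # True
```

Pairwise contraction is what cotengra optimizes: the choice of which pair to contract at each step determines the FLOP count and peak intermediate size.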