{"id":9624,"library":"cotengra","title":"cotengra - Hyper optimized tensor network contraction","description":"cotengra is a Python library for the hyper-optimized contraction of large tensor networks and einsums. It provides advanced pathfinding algorithms, including hyper-optimization, to minimize computational cost (FLOPs and memory). The current version is 0.7.5, and it is actively maintained with regular releases focusing on performance enhancements, new optimization strategies, and bug fixes.","status":"active","version":"0.7.5","language":"en","source_language":"en","source_url":"https://github.com/jcmgray/cotengra/","tags":["tensor network","einsum","optimization","physics","quantum computing","high performance computing"],"install":[{"cmd":"pip install cotengra","lang":"bash","label":"Basic installation"},{"cmd":"pip install cotengra[optuna,cmaes]","lang":"bash","label":"With recommended hyper-optimization backends"}],"dependencies":[{"reason":"Provides the 'optuna' hyper-optimization backend for pathfinding. Recommended for optimal performance.","package":"optuna","optional":true},{"reason":"Provides the 'cmaes' hyper-optimization backend. Used by default for the 'auto' preset from v0.7.0 if available.","package":"cmaes","optional":true}],"imports":[{"symbol":"ContractionTree","correct":"from cotengra import ContractionTree"},{"symbol":"HyperOptimizer","correct":"from cotengra import HyperOptimizer"},{"symbol":"contract_path","correct":"from cotengra import contract_path"}],"quickstart":{"code":"import cotengra as ctg\nimport numpy as np\n\n# Define an einsum expression and corresponding tensor shapes\nexpr = 'ijkl,klmn,mnop->ijop'\nshapes = [(2, 3, 4, 5), (4, 5, 6, 7), (6, 7, 8, 9)]\narrays = [np.random.rand(*s) for s in shapes]\n\n# Search for an optimized contraction path with the hyper-optimizer\n# (for real problems, consider increasing max_time and max_repeats)\nopt = ctg.HyperOptimizer(max_time=5, max_repeats=16)\n\n# Build the ContractionTree for this expression and these shapes\ntree = ctg.einsum_tree(expr, *shapes, optimize=opt)\n\n# Perform the contraction\nresult = tree.contract(arrays)\n\nprint(\"Contraction result shape:\", result.shape)\nprint(\"Optimized path:\", tree.get_path())","lang":"python","description":"This quickstart demonstrates how to define a tensor network as an einsum expression with array shapes, search for an optimized contraction path with a `HyperOptimizer`, build the corresponding `ContractionTree` via `einsum_tree`, and perform the contraction. It also shows how to inspect the result's shape and the optimized path."},"warnings":[{"fix":"Explicitly set `n_workers` in `HyperOptimizer` or `ContractionTree` methods, or manage the `COTENGRA_NUM_WORKERS` environment variable for consistent parallel execution behavior.","message":"The method for determining the number of workers for non-distributed pools changed significantly. It now prioritizes the `COTENGRA_NUM_WORKERS` environment variable, then `OMP_NUM_THREADS`, then `os.cpu_count()`.","severity":"breaking","affected_versions":">=0.6.1"},{"fix":"To explicitly use `optuna` or `cmaes`, specify `optimizer_options={'optlib': 'optuna'}` or `{'optlib': 'cmaes'}` when initializing `HyperOptimizer`.","message":"The default hyper-optimizer used by the `'auto'` preset changed from `optuna` to `cmaes` (if available), potentially altering performance characteristics for existing code.","severity":"gotcha","affected_versions":">=0.7.0"},{"fix":"Review code that relies on the exact numerical representation of exponents when using `strip_exponent` with slicing, especially for zero-value slices. Adapt expectations for `float('-inf')` and combined exponents.","message":"Behavior of `strip_exponent` with `gather_slices` and `check_zero` changed. Slices with zero value now return `float('-inf')` as their exponent, and `gather_slices` combines exponents.","severity":"gotcha","affected_versions":">=0.7.2"},{"fix":"Ensure `optuna` or `cmaes` are installed (e.g., `pip install cotengra[optuna,cmaes]`) if you intend to use advanced hyper-optimization features, to avoid runtime warnings and ensure full functionality.","message":"Warnings about missing recommended dependencies (e.g., `optuna`, `cmaes`) are now only issued when those dependencies would actually be used (e.g., during hyper-optimization calls).","severity":"gotcha","affected_versions":">=0.7.5"}],"env_vars":null,"last_verified":"2026-04-17T00:00:00.000Z","next_check":"2026-07-16T00:00:00.000Z","problems":[{"fix":"Install the required optional dependency: `pip install cotengra[optuna]` or `pip install cotengra[cmaes]` (or both with `pip install cotengra[optuna,cmaes]`).","cause":"Attempting to use `HyperOptimizer` with an optimization method that requires `optuna` or `cmaes` without having the respective library installed.","error":"ModuleNotFoundError: No module named 'optuna'"},{"fix":"Carefully check the `expr` string and ensure that the `shapes` provided in the list match the order and dimensions of the indices for each tensor.","cause":"The dimensions implied by the einsum expression for a given index do not match the actual dimension of the corresponding array.","error":"ValueError: input '...' has shape '(D1,D2,...)' but need '(...)' for index 'x'"},{"fix":"When using `HyperOptimizer` in an interactive environment, consider setting `parallel=False` or an explicit worker count. If using a script with custom multiprocessing, ensure code that creates child processes is guarded by `if __name__ == '__main__':`.","cause":"Multiprocessing pools (used for optimization) can fail to initialize correctly in interactive environments like Jupyter notebooks, or on Windows without proper guards.","error":"TypeError: 'NoneType' object is not callable (often related to multiprocessing in notebooks)"}]}