{"id":3331,"library":"einx","title":"einx: Universal Notation for Tensor Operations","description":"einx is a Python library that provides a universal interface to formulate tensor operations in frameworks such as NumPy, PyTorch, JAX, TensorFlow, and MLX, using an Einstein-inspired notation. It offers a streamlined approach to complex tensor manipulations, often by compiling operations to backend-specific function calls, which helps minimize overhead. The current version is 0.4.3, with frequent minor releases that deliver fixes and add support for new backends.","status":"active","version":"0.4.3","language":"en","source_language":"en","source_url":"https://github.com/fferflo/einx","tags":["tensor operations","einsum","numpy","pytorch","jax","tensorflow","mlx","backend-agnostic","deep learning"],"install":[{"cmd":"pip install einx","lang":"bash","label":"Base installation"},{"cmd":"pip install einx[torch]","lang":"bash","label":"For PyTorch-specific dependencies (enforces version requirements)"}],"dependencies":[{"reason":"Backend for tensor operations","package":"numpy","optional":true},{"reason":"Backend for tensor operations","package":"torch","optional":true},{"reason":"Backend for tensor operations","package":"jax","optional":true},{"reason":"Backend for tensor operations","package":"tensorflow","optional":true},{"reason":"Backend for tensor operations","package":"mlx","optional":true},{"reason":"Backend for tensor operations","package":"tinygrad","optional":true},{"reason":"Required for Array API backend support","package":"array-api-compat","optional":true}],"imports":[{"symbol":"einx","correct":"import einx"},{"note":"Most operations are accessed directly from the `einx` module.","symbol":"einx.sum","correct":"import einx\n# ...\neinx.sum(...)"}],"quickstart":{"code":"import einx\nimport numpy as np # Can be any supported backend like torch, jax, tensorflow, mlx\n\nx = np.ones((10, 20, 30))\nprint(f\"Input shape: {x.shape}\")\n\n# Sum-reduction along the 
second axis (marked with brackets)\ny = einx.sum(\"a [b] c\", x)\nprint(f\"Output shape after sum: {y.shape}\")\n\n# Permute and (un)flatten axes with rearrange\nz = einx.rearrange(\"a (b c) d -> (b a) c d\", x, b=2)\nprint(f\"Output shape after rearrange: {z.shape}\")","lang":"python","description":"This example demonstrates basic tensor operations using `einx` with a NumPy array. `einx` automatically detects and uses an available backend (e.g., NumPy, PyTorch, JAX) for the tensor operations. The string notation defines how axes are manipulated: bracketed axes are passed to the underlying operation, while the remaining axes are vectorized over."},"warnings":[{"fix":"Ensure your desired tensor backend (e.g., `pip install torch`) is installed. For PyTorch, use `pip install einx[torch]`.","message":"einx itself is a lightweight notation library and does not include its own tensor implementation. It requires a separate tensor framework (e.g., NumPy, PyTorch, JAX, TensorFlow) to be installed and available in your environment to perform operations. For PyTorch, explicitly installing with `pip install einx[torch]` is recommended to ensure compatible backend versions.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Review any code that traces or compiles `einx` operations and ensure that any necessary backend functions are explicitly imported or accessed via the backend's module rather than assuming `einx`'s presence in the compiled context.","message":"Starting from `v0.2.1`, compiled `einx` functions no longer implicitly include the `einx` namespace in their dependency graph. Instead, they directly import and use the backend's namespace (e.g., `import torch`). 
If you were previously relying on `einx` being implicitly available within traced or compiled graphs, this change will break such workflows.","severity":"breaking","affected_versions":">=0.2.1"},{"fix":"Adjust static type checking configurations or explicit type hints to account for `einx`'s more flexible `typing.Any` annotations for tensor parameters.","message":"Version `0.4.3` changed tensor parameter annotations from `typing.TypeVar` to `typing.Any`. This fixed issues where the previous strict typing did not always hold (e.g., with mixed-type inputs or backend-dependent output types). Although this is a fix, users relying on strict static analysis with earlier versions might notice changes in type checking behavior or need to adjust their assumptions about tensor type propagation.","severity":"gotcha","affected_versions":">=0.4.3"},{"fix":"Consult the updated documentation and tutorials (especially the 'How does the notation work?' section) to understand the full vectorization analogy and ensure your `einx` expressions align with this paradigm.","message":"Version `0.4.0` fully embraced vectorization as its core abstraction, defining expressions by analogy with loop notation. While intended as an improvement for clarity and consistency, users accustomed to an older understanding of `einx` expressions might need to re-evaluate how complex notations are interpreted, especially concerning implicit loop structures and vectorized operations.","severity":"gotcha","affected_versions":">=0.4.0"}],"env_vars":null,"last_verified":"2026-04-11T00:00:00.000Z","next_check":"2026-07-10T00:00:00.000Z"}