SevenNet (sevenn)
SevenNet is a Python library implementing Scalable EquiVariance Enabled Neural Networks, primarily for atomistic simulations and materials science. It integrates with the Atomic Simulation Environment (ASE) for molecular dynamics, energy, and force calculations. The library is actively developed, with frequent minor releases and dedicated checkpoint releases for new pre-trained models. The current stable version is 0.12.1.
Common errors
- `AttributeError: module 'sevenn' has no attribute 'train'`
  - cause: Using the old CLI command style (e.g., `sevenn.train.run()`) after the CLI refactor in `v0.11.1`.
  - fix: Use the new subcommand-based CLI: `from sevenn.cli import run` and call `run(['train', ...])`, or execute `sevenn train ...` from the terminal.
- `RuntimeError: Could not find a suitable E3nn version. The installed version is X.Y.Z, but SevenNet requires A.B.C.`
  - cause: The installed `e3nn` version is incompatible with your `sevenn` version.
  - fix: Reinstall `sevenn` so that a compatible `e3nn` is pulled in via its dependencies, or manually pin `e3nn` to the range required by your `sevenn` version (e.g., `pip install 'e3nn>=0.5.0,<0.6.0'` if that range is specified).
- `ModuleNotFoundError: No module named 'ninja'`
  - cause: Enabling or using the FlashTP feature without the `ninja` package installed.
  - fix: Install it with `pip install ninja`. This dependency is required for FlashTP's optimized kernel compilation.
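To catch the `e3nn` mismatch before it surfaces as a runtime error, a small helper can compare the installed version against a required range. This is a sketch using only the standard library; the default bounds shown are illustrative, not the authoritative requirements for any particular `sevenn` release:

```python
from importlib.metadata import version, PackageNotFoundError

def parse_version(v):
    """Turn '0.5.1' into (0, 5, 1) for tuple comparison (drops non-numeric tags)."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def in_range(installed, lower, upper):
    """True if lower <= installed < upper, compared as version tuples."""
    return parse_version(lower) <= parse_version(installed) < parse_version(upper)

def check_e3nn(lower="0.5.0", upper="0.6.0"):
    """Report whether the installed e3nn falls inside [lower, upper)."""
    try:
        installed = version("e3nn")
    except PackageNotFoundError:
        return "e3nn is not installed"
    status = ("OK" if in_range(installed, lower, upper)
              else f"outside required range [{lower}, {upper})")
    return f"e3nn {installed}: {status}"

print(check_e3nn())
```

Run this once after changing either package; adjust the bounds to whatever your installed `sevenn` version documents.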
Warnings
- breaking The command-line interface (CLI) was significantly refactored in v0.11.1, changing from a single `sevenn_train` or `sevenn_inference` script to a subcommand-based structure (e.g., `sevenn train`, `sevenn inference`).
- gotcha SevenNet has strict compatibility requirements with the `e3nn` library. Versions `0.10.4` and older are compatible with `e3nn < 0.5.0`, while `0.11.1` and newer likely require `e3nn >= 0.5.0` or a specific range. Installing an incompatible `e3nn` version can lead to runtime errors.
- gotcha Using the FlashTP feature for accelerated calculations requires the `ninja` package to be installed, as it's used for compiling optimized kernels. If `ninja` is not present, FlashTP will fail to initialize.
- deprecated Inconsistent argument naming for FlashTP-related settings in CLI and configuration files was fixed in `v0.12.1`. Older scripts or configs might use deprecated argument names.
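Given the v0.11.1 CLI break above, older shell scripts can be migrated mechanically. The helper below is hypothetical (not part of `sevenn`); it only encodes the script-to-subcommand mapping described above:

```python
# Hypothetical migration helper -- not part of sevenn itself.
# Maps the pre-v0.11.1 entry-point scripts to the new subcommand form.
OLD_TO_NEW = {
    "sevenn_train": "sevenn train",
    "sevenn_inference": "sevenn inference",
}

def migrate_command(cmd):
    """Rewrite an old-style command line to the new subcommand CLI."""
    head, _, rest = cmd.strip().partition(" ")
    new_head = OLD_TO_NEW.get(head, head)  # leave unknown commands untouched
    return f"{new_head} {rest}".strip()

print(migrate_command("sevenn_train input.yaml"))  # -> sevenn train input.yaml
```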
Install
- `pip install sevenn`
- `pip install sevenn[gpu]`
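After installing, a quick check confirms that `sevenn` (and `ninja`, needed for FlashTP) is visible to your interpreter. A minimal sketch using the standard library:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

for pkg in ("sevenn", "e3nn", "ninja"):
    v = installed_version(pkg)
    print(f"{pkg}: {v if v is not None else 'NOT INSTALLED'}")
```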
Imports
- SevenNetCalculator
  - `from sevenn import SevenNetCalculator`
  - `from sevenn.calculator import SevenNetCalculator`
- SevenNet
  - `from sevenn import SevenNet`
  - `from sevenn.model import SevenNet`
- run
  - `import sevenn.train`
  - `from sevenn.cli import run`
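Both import paths are listed for each symbol because the public location has shifted across releases. Where code must support multiple `sevenn` versions, a generic fallback import keeps the calling code stable. This is a sketch of the pattern, not a `sevenn` API:

```python
from importlib import import_module

def import_first(candidates):
    """Try each 'module:attr' candidate in order; return the first that resolves.

    Useful when a symbol has moved between releases and more than one
    import path is seen in the wild.
    """
    errors = []
    for spec in candidates:
        module_name, _, attr = spec.partition(":")
        try:
            module = import_module(module_name)
            return getattr(module, attr) if attr else module
        except (ImportError, AttributeError) as exc:
            errors.append(f"{spec}: {exc}")
    raise ImportError("no candidate importable: " + "; ".join(errors))

# Example with the two SevenNetCalculator paths listed above:
# calc_cls = import_first(["sevenn.calculator:SevenNetCalculator",
#                          "sevenn:SevenNetCalculator"])
```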
Quickstart
import os
import numpy as np
from ase.build import bulk
from sevenn.calculator import SevenNetCalculator
# Assuming a pre-trained model checkpoint path
# Replace with your actual model path or a path to a downloaded checkpoint
# Example: sevenn_cp download 'SevenNet-Omni-i8' will download to current dir
MODEL_PATH = os.environ.get('SEVENN_MODEL_PATH', 'SevenNet-Omni-i8.ckpt')
# Create an ASE atom object
atoms = bulk('Si', 'diamond', a=5.43, cubic=True)
atoms.set_positions(atoms.get_positions() + np.random.rand(*atoms.get_positions().shape) * 0.1)
# Initialize the SevenNetCalculator
calculator = SevenNetCalculator(
path=MODEL_PATH,
device='cuda' if os.environ.get('SEVENN_USE_CUDA', 'false').lower() == 'true' else 'cpu'
)
atoms.calc = calculator  # atoms.set_calculator() is deprecated in recent ASE
# Get energy and forces
energy = atoms.get_potential_energy()
forces = atoms.get_forces()
stress = atoms.get_stress()
print(f"Energy: {energy:.4f} eV")
print(f"Forces (first atom): {forces[0]}")
print(f"Stress: {stress}")