{"id":24551,"library":"sae-lens","title":"SAE Lens","description":"SAE Lens is a library for training, loading, and analyzing sparse autoencoders (SAEs) on transformer language models. Current version is 6.43.0, with frequent releases (multiple versions per month).","status":"active","version":"6.43.0","language":"python","source_language":"en","source_url":"https://github.com/decoderesearch/SAELens","tags":["sparse-autoencoder","interpretability","mechanistic-interpretability","transformers"],"install":[{"cmd":"pip install sae-lens","lang":"bash","label":"Install from PyPI"}],"dependencies":[{"reason":"Core dependency for tensor operations and model loading","package":"torch","optional":false},{"reason":"Used for model hooks and activation caching","package":"transformer-lens","optional":false},{"reason":"For loading activation datasets","package":"datasets","optional":false},{"reason":"Optional for experiment logging","package":"wandb","optional":true}],"imports":[{"note":"Standard import for loading a pretrained SAE.","symbol":"SAE","correct":"from sae_lens import SAE"},{"note":"SAEConfig is in sae_lens.config, not top-level.","wrong":"from sae_lens import SAEConfig","symbol":"SAEConfig","correct":"from sae_lens.config import SAEConfig"},{"note":"Wraps a HookedTransformer to cache activations.","symbol":"HookedSAETransformer","correct":"from sae_lens import HookedSAETransformer"}],"quickstart":{"code":"from sae_lens import SAE\nfrom transformer_lens import HookedTransformer\n\nmodel = HookedTransformer.from_pretrained(\"gpt2-small\", device=\"cpu\")\nsae, cfg_dict, sparsity = SAE.from_pretrained(release=\"gpt2-small-res-jb\", sae_id=\"blocks.0.hook_resid_pre\", device=\"cpu\")\nsae.to(\"cpu\")\n\n# Example: get SAE feature activations for a prompt\nprompt = \"Hello, world!\"\n_, cache = model.run_with_cache(prompt, names_filter=[sae.cfg.hook_name])\nact = cache[sae.cfg.hook_name]\nsae_acts = sae.encode(act)\nprint(sae_acts.shape)\n","lang":"python","description":"Load a pretrained SAE and compute feature activations for a prompt."},"warnings":[{"fix":"Update to: SAE.from_pretrained(release=..., sae_id=...)","message":"In v6.x, the `SAE.from_pretrained` signature changed: the `release` argument is now the first positional argument and required. Old code using `SAE.from_pretrained(sae_id=...)` without `release` will break.","severity":"breaking","affected_versions":"<6.0"},{"fix":"Ensure model and SAE are on the same device (use sae.to(device) and model.to(device)).","message":"SAE expects activations on the same device as the SAE itself. Cross-device (e.g., model on GPU, SAE on CPU) can cause silent errors or crashes.","severity":"gotcha","affected_versions":"all"},{"fix":"Switch to using `model.run_with_cache` and pass the activations directly to `sae.encode`.","message":"The `cache` parameter in `SAE.encode` is deprecated and will be removed in a future version. Use `HookedSAETransformer` or `model.run_with_cache` explicitly.","severity":"deprecated","affected_versions":">=6.30.0"}],"env_vars":null,"last_verified":"2026-05-01T00:00:00.000Z","next_check":"2026-07-30T00:00:00.000Z","problems":[{"fix":"Run `pip install sae-lens`.","cause":"Package not installed.","error":"ModuleNotFoundError: No module named 'sae_lens'"},{"fix":"Check that activations are a 3D tensor on the same device as the SAE. Use `sae.encode(act)` where `act` is shape (batch, seq_len, d_model).","cause":"Activations passed to SAE.encode have incorrect shape or are on wrong device.","error":"AssertionError: Expected activation shape (batch, seq_len, d_model) but got ..."},{"fix":"Use a valid release name from `sae_lens.known_releases()` or check the docs.","cause":"Provided release name does not exist in the SAE registry.","error":"ValueError: Unknown release: ..."}],"ecosystem":"pypi","meta_description":null,"install_score":null,"install_tag":null,"quickstart_score":null,"quickstart_tag":null}