FAIR ESM (Evolutionary Scale Modeling)
FAIR ESM provides pretrained transformer language models for proteins, including ESM-2 and ESM-1b. Version 2.0.0 adds new models and enhancements. The library is actively maintained by Meta AI.
pip install fair-esm
Common errors
error RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
cause Half precision not supported on CPU.
fix Use torch.float32, or run on GPU with torch.cuda.amp if supported.
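A minimal sketch of the float32 workaround; model.float() is the generic PyTorch cast, not an ESM-specific call:

import torch
from esm.pretrained import esm2_t33_650M_UR50D

model, alphabet = esm2_t33_650M_UR50D()
model = model.float().eval()  # force float32 weights; half precision fails on CPU
# On a GPU, mixed precision can be tried instead:
# with torch.cuda.amp.autocast():
#     results = model.cuda()(batch_tokens.cuda())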
error KeyError: 'representations'
cause Incorrect repr_layers argument; model does not have the requested layer.
fix Set repr_layers to a valid layer number, e.g., [33] for ESM-2 650M, which has 33 layers.
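To avoid hard-coding the layer index, it can be read off the model; a sketch assuming an ESM-2 model, whose class exposes a num_layers attribute:

from esm.pretrained import esm2_t33_650M_UR50D

model, alphabet = esm2_t33_650M_UR50D()
final_layer = model.num_layers  # 33 for the 650M model
# results = model(batch_tokens, repr_layers=[final_layer])
# token_reps = results["representations"][final_layer]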
error ModuleNotFoundError: No module named 'esm'
cause Package not installed or wrong import name.
fix Run pip install fair-esm, then import as esm (not fair_esm).
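A quick sanity check that the install worked; note the PyPI name (fair-esm) differs from the import name (esm):

import esm             # ModuleNotFoundError here means pip install fair-esm is still needed
print(esm.pretrained)  # confirms the pretrained loaders are importable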
error AttributeError: module 'esm' has no attribute 'pretrained'
cause Outdated fair-esm install (pre-2.0.0), or a local module named esm shadowing the package.
fix Upgrade with pip install --upgrade fair-esm, or use torch.hub.load for older versions.
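For older installs, the torch.hub route still loads models directly from the repository (a sketch; weights are downloaded on first use):

import torch

model, alphabet = torch.hub.load("facebookresearch/esm:main", "esm2_t33_650M_UR50D")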
Warnings
breaking In v2.0.0, model loading through torch.hub is deprecated; use esm.pretrained instead.
fix Use from esm.pretrained import ... instead of torch.hub.load('facebookresearch/esm', ...)
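The migration side by side, with the deprecated call left as a comment:

# deprecated: model, alphabet = torch.hub.load('facebookresearch/esm', 'esm2_t33_650M_UR50D')
from esm.pretrained import esm2_t33_650M_UR50D
model, alphabet = esm2_t33_650M_UR50D()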
breaking Model loader names in esm.pretrained can change between releases; esm1b_t33_650M_UR50S() still works, but verify names against the current model list.
fix Refer to the model zoo documentation for updated model names.
gotcha GPU memory use is very high (e.g., ESM-2 15B requires ~300GB); smaller models like 650M are recommended for most users.
fix Use esm2_t33_650M_UR50D (650M params) for typical usage.
gotcha The return_contacts flag in forward() can cause OOM; only use it if needed.
fix Set return_contacts=False (the default) unless you need the attention-based contact maps.
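When contact maps are actually needed, the results dict carries a "contacts" key; a sketch reusing model and batch_tokens from the Quickstart below:

with torch.no_grad():
    results = model(batch_tokens, return_contacts=True)
contacts = results["contacts"]  # attention-based contact map, (batch, seq_len, seq_len)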
deprecated The esm.model submodule is being reorganized; direct class imports may break in future versions.
fix Use esm.pretrained for model loading and esm for Alphabet.
Imports
- ESM2
  from esm.pretrained import esm2_t48_15B_UR50D
- Alphabet
  wrong:   from fair_esm import Alphabet
  correct: from esm import Alphabet
Quickstart
import torch
from esm.pretrained import esm2_t33_650M_UR50D
from esm import Alphabet

# Load the 650M-parameter ESM-2 model and its alphabet
model, alphabet = esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()  # disable dropout for deterministic inference

data = [
    ("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"),
]
batch_labels, batch_strs, batch_tokens = batch_converter(data)

with torch.no_grad():
    results = model(batch_tokens, repr_layers=[33], return_contacts=True)

# Per-token embeddings from the final (33rd) layer
token_representations = results["representations"][33]
print(token_representations.shape)  # (1, seq_len + 2, 1280): BOS/EOS included
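A common follow-up is pooling the per-token embeddings into one vector per sequence, skipping the BOS/EOS positions; this mirrors the usage shown in the upstream README:

# Per-sequence representation: mean over residue tokens only (positions 1..len)
batch_lens = (batch_tokens != alphabet.padding_idx).sum(1)
sequence_representations = []
for i, tokens_len in enumerate(batch_lens):
    sequence_representations.append(token_representations[i, 1 : tokens_len - 1].mean(0))
print(sequence_representations[0].shape)  # torch.Size([1280]) for the 650M model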