timm
Version 1.0.15 (verified Tue May 12)
PyTorch Image Models — collection of SOTA vision models, pretrained weights, layers, optimizers, and training utilities by Ross Wightman. Current version is 1.0.15 (Mar 2026). Primary weight source is now Hugging Face Hub. Import path for layers changed: timm.models.layers → timm.layers.
pip install timm

Common errors

error ModuleNotFoundError: No module named 'timm.models.layers' ↓
cause The import path for utility layers in `timm` moved from `timm.models.layers` to `timm.layers` in the 0.9 release (still the case in 1.0.15); only a deprecation shim for top-level imports remains, so direct submodule imports fail.
fix Replace `from timm.models.layers import ...` with `from timm.layers import ...`.

error OSError: Specified pretrained weights did not exist for model 'model_name'. Check your entry and try again. ↓
cause The model name passed to `timm.create_model` may be incorrect, a network issue may be blocking the download from Hugging Face Hub, or the specified variant has no pretrained weights available.
fix Verify the model name against `timm.list_models(pretrained=True)`, ensure an active internet connection, and check that the variant actually ships pretrained weights.

error RuntimeError: Expected 4D input for conv2d, got 3D input instead ↓
cause The input tensor passed to the model's `forward` method is missing the batch dimension; `timm` models expect input in `[batch_size, channels, height, width]` format.
fix Add a batch dimension to your input tensor, typically with `input_tensor.unsqueeze(0)` for a single image.

error AttributeError: 'ResNet' object has no attribute 'fc' (or 'EfficientNet' object has no attribute 'classifier') ↓
cause The name of the final classification layer (head) varies across `timm` architectures; common names include `head`, `fc`, and `classifier`.
fix Inspect the model's structure (e.g. `print(model)`) to find the head attribute (often `model.head`), or call `model.get_classifier()`, which returns the head regardless of its attribute name.

Warnings
breaking timm.models.layers module moved to timm.layers in 0.9.x. Direct module imports (import timm.models.layers.module) fail. Only top-level from timm.models.layers import name still works via deprecation shim — which will be removed. ↓
fix Replace from timm.models.layers import X with from timm.layers import X throughout your codebase.
breaking Model naming changed to architecture.pretrained_tag format in 0.9+. Old names like resnet50_21k still work via deprecation remapping but new weight variants are only accessible via the new format (e.g. resnet50.a1_in1k). ↓
fix Use timm.list_models() to discover available model names. For specific weight variants use the architecture.tag format.
breaking Pretrained weights now loaded from Hugging Face Hub (https://huggingface.co/timm) not GitHub releases. Old GitHub release URLs hardcoded in custom code will 404. ↓
fix Use timm.create_model(name, pretrained=True) — weight URLs are managed automatically. Do not hardcode weight URLs.
gotcha timm.create_model with num_classes=0 removes the classifier entirely and returns features. num_classes=None is NOT the same — it keeps the default head. Setting wrong num_classes silently produces wrong output shapes. ↓
fix For feature extraction: num_classes=0 (removes head). For fine-tuning with N classes: num_classes=N (replaces head with random init). Use model.reset_classifier(num_classes=N) to change after creation.
gotcha Each model has its own expected preprocessing (mean, std, input size). Using generic ImageNet normalization values directly instead of model-specific config produces degraded accuracy. ↓
fix Always use timm's preprocessing utilities: config = timm.data.resolve_data_config({}, model=model); transform = timm.data.create_transform(**config).
gotcha Not all model variants have pretrained weights — timm lists models without weights too. timm.create_model('some_model', pretrained=True) raises RuntimeError if no weights exist for that variant. ↓
fix Use timm.list_models(pretrained=True) to list only models with available weights. Or check: timm.list_pretrained().
gotcha Installing `timm` can hit dependency conflicts with other packages in the environment (e.g. `torch`, `torchvision`, `Pillow`, `packaging`) or with specific Python versions, surfacing as `ERROR: ResolutionImpossible` during `pip install`. ↓
fix Install into a clean virtual environment. If conflicts persist, install known-compatible versions of the heavy dependencies (`torch`, `torchvision`) first, then install `timm`.
Install compatibility (stale; last tested 2026-05-12)
python  os / libc      status  wheel  install  import  disk
3.9     alpine (musl)  -       -      -        -       -
3.9     slim (glibc)   -       -      -        -       -
3.10    alpine (musl)  -       -      -        -       -
3.10    slim (glibc)   -       -      -        11.32s  4.7G
3.11    alpine (musl)  -       -      -        -       -
3.11    slim (glibc)   -       -      -        15.07s  4.8G
3.12    alpine (musl)  -       -      -        -       -
3.12    slim (glibc)   -       -      -        14.21s  4.8G
3.13    alpine (musl)  -       -      -        -       -
3.13    slim (glibc)   -       -      -        13.20s  4.8G
Imports

- timm.layers

wrong:
from timm.models.layers import PatchEmbed, Mlp, DropPath  # moved in 0.9; deprecated mapping exists but will be removed

correct:
from timm.layers import PatchEmbed, Mlp, DropPath  # or: import timm.layers

- create_model

wrong:
# Old-style weight loading from GitHub releases (no longer the primary source)
model = timm.create_model('resnet50', pretrained=True)
# Weights now come from the HF Hub: https://huggingface.co/timm

correct:
import timm

# Load with pretrained weights
model = timm.create_model('resnet50', pretrained=True)

# Load a specific weight variant using the architecture.tag format
model = timm.create_model('resnet50.a1_in1k', pretrained=True)

# Custom num_classes for fine-tuning
model = timm.create_model('efficientnet_b0', pretrained=True, num_classes=10)

# Multi-scale feature extraction (returns intermediate feature maps, no classifier)
model = timm.create_model('resnet50', features_only=True, pretrained=True)
Quickstart (stale; last tested 2026-04-23)
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform
# List available models
print(timm.list_models('resnet*')[:5])
# Load pretrained model
model = timm.create_model('efficientnet_b0.ra_in1k', pretrained=True)
model.eval()
# Get model-specific preprocessing
config = resolve_data_config({}, model=model)
transform = create_transform(**config)
# Inference
img = Image.open('image.jpg').convert('RGB')
tensor = transform(img).unsqueeze(0)
with torch.no_grad():
    output = model(tensor)  # [1, 1000] logits
probs = torch.softmax(output, dim=1)
top5 = torch.topk(probs, 5)
# Fine-tune with custom head
model = timm.create_model('resnet50', pretrained=True, num_classes=10)