Hugging Face Hub
Official Python client for the Hugging Face Hub. Handles model/dataset/Space downloading, uploading, caching, repo management, and Hub API interactions. Used as a dependency by transformers, datasets, sentence-transformers, and most HF ecosystem packages. The CLI was previously huggingface-cli; in v1.x it was rebranded as hf (also available as a standalone pip install hf package for CLI-only use). Import name: huggingface_hub (underscore). Package name: huggingface-hub (hyphen). These are different from the hf CLI package.
Warnings
- breaking The CLI command changed from huggingface-cli to hf in v1.x. huggingface-cli still works as an alias, but hf is now the canonical name. CI scripts using huggingface-cli login should keep working, but new docs use hf auth login.
- breaking huggingface_hub switched from requests to httpx as the HTTP backend in v1.x (aligned with transformers v5). Code that patches or mocks requests for HF Hub calls will silently stop working.
- gotcha Gated models (meta-llama/*, google/gemma-*, etc.) require: (1) a HF account, (2) accepted license on the model page, (3) a valid HF_TOKEN. from_pretrained() raises a 401/403 with a confusing error if any of these are missing.
- gotcha snapshot_download() and hf_hub_download() cache by default. With the default revision="main" and network access, each call re-resolves the branch head, but with local_files_only=True or HF_HUB_OFFLINE=1 the cached version is returned unconditionally. In long-running services that resolve the revision once at startup, a stale model version can persist silently; pin an explicit revision (commit hash or tag) when reproducibility matters.
- gotcha Package name is huggingface-hub (hyphen) but import name is huggingface_hub (underscore). Modern pip normalizes the two spellings to the same package, so the real trap is the import: import huggingface-hub is a SyntaxError; always import huggingface_hub. Neither is the same as the hf CLI package.
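The cache gotchas above are easier to debug once you know where the hub cache lives. A minimal stdlib-only sketch of the default resolution (an assumption based on the documented layout: HF_HUB_CACHE wins, then HF_HOME, then ~/.cache/huggingface, with snapshots under a hub/ subdirectory):

```python
import os

def resolve_hub_cache() -> str:
    """Approximate the default cache-dir resolution (assumption:
    HF_HUB_CACHE > HF_HOME > ~/.cache/huggingface precedence)."""
    explicit = os.environ.get("HF_HUB_CACHE")
    if explicit:
        return explicit
    hf_home = os.environ.get("HF_HOME") or os.path.join(
        os.path.expanduser("~"), ".cache", "huggingface"
    )
    return os.path.join(hf_home, "hub")

print(resolve_hub_cache())
```

Inspecting this directory is often the fastest way to confirm which snapshot a long-running service is actually serving.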
Install
- pip install huggingface-hub
- pip install "huggingface-hub[hf-xet]"
- pip install "huggingface-hub[torch]"
- uvx hf version
Imports
- hf_hub_download
from huggingface_hub import hf_hub_download
- snapshot_download
from huggingface_hub import snapshot_download
- login
from huggingface_hub import login
Quickstart
from huggingface_hub import hf_hub_download, snapshot_download, login
import os

# Authenticate (or set HF_TOKEN env var)
# login(token=os.environ["HF_TOKEN"])

# Download a single file
config_path = hf_hub_download(
    repo_id="bert-base-uncased",
    filename="config.json",
)
print(config_path)  # local cached path

# Download entire model repo
model_dir = snapshot_download(repo_id="Qwen/Qwen2.5-0.5B-Instruct")
print(model_dir)  # local directory with all model files

# Upload a file
from huggingface_hub import upload_file

upload_file(
    path_or_fileobj="./my_model.bin",
    path_in_repo="model.bin",
    repo_id="your-username/your-model",
    token=os.environ["HF_TOKEN"],
)

# Search models
from huggingface_hub import HfApi

api = HfApi()
models = list(api.list_models(filter="text-classification", limit=5))
for m in models:
    print(m.id)
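For scripted environments, the Quickstart steps above have CLI equivalents. A sketch assuming the v1.x hf command (huggingface-cli remains an alias); these require a logged-in account and network access:

```shell
# Log in (stores a token; same effect as login() above)
hf auth login

# Download a single file, or a whole repo, into the cache
hf download bert-base-uncased config.json
hf download Qwen/Qwen2.5-0.5B-Instruct

# Upload a local file into a repo
hf upload your-username/your-model ./my_model.bin model.bin
```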