Hugging Face Hub

1.5.0 · active · verified Sat Feb 28

Official Python client for the Hugging Face Hub. Handles model/dataset/Space downloads, uploads, caching, repo management, and Hub API interactions. Used as a dependency by transformers, datasets, sentence-transformers, and most HF ecosystem packages. The CLI was previously huggingface-cli; in v1.x it was rebranded as hf (also installable on its own as the CLI-only hf package via pip install hf). Import name: huggingface_hub (underscore). Package name: huggingface-hub (hyphen). Both are distinct from the hf CLI package.

Warnings

Install

pip install huggingface-hub

Imports

from huggingface_hub import hf_hub_download, snapshot_download, upload_file, login, HfApi

Quickstart

Files are cached locally under HF_HOME (default: ~/.cache/huggingface). hf_hub_download returns the local cached path. Gated models (Llama, Gemma) require login() or an HF_TOKEN whose account has accepted the model license.
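The cache-location rule above can be sketched as a small stdlib helper. This is a hypothetical re-implementation (the function name resolve_hub_cache is invented here), assuming the documented precedence of HF_HUB_CACHE over HF_HOME, with the hub cache living in a hub/ subfolder:

```python
import os
from pathlib import Path

def resolve_hub_cache(env: dict) -> Path:
    # Hypothetical sketch of the documented precedence:
    # HF_HUB_CACHE > HF_HOME/hub > ~/.cache/huggingface/hub
    if "HF_HUB_CACHE" in env:
        return Path(env["HF_HUB_CACHE"])
    hf_home = Path(env.get("HF_HOME", Path.home() / ".cache" / "huggingface"))
    return hf_home / "hub"

print(resolve_hub_cache({}))                       # <home>/.cache/huggingface/hub
print(resolve_hub_cache({"HF_HOME": "/data/hf"}))  # /data/hf/hub
```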

from huggingface_hub import hf_hub_download, snapshot_download, login
import os

# Authenticate (or set HF_TOKEN env var)
# login(token=os.environ["HF_TOKEN"])

# Download a single file
config_path = hf_hub_download(
    repo_id="bert-base-uncased",
    filename="config.json"
)
print(config_path)  # local cached path

# Download entire model repo
model_dir = snapshot_download(repo_id="Qwen/Qwen2.5-0.5B-Instruct")
print(model_dir)  # local directory with all model files

# Upload a file
from huggingface_hub import upload_file
upload_file(
    path_or_fileobj="./my_model.bin",
    path_in_repo="model.bin",
    repo_id="your-username/your-model",
    token=os.environ["HF_TOKEN"]
)

# Search models
from huggingface_hub import HfApi
api = HfApi()
models = list(api.list_models(filter="text-classification", limit=5))
for m in models:
    print(m.id)
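
Inside the hub cache, each downloaded repo gets its own folder named after the repo type and id. A minimal sketch of that documented naming convention (the helper repo_cache_folder is hypothetical, assuming slashes in the repo id map to "--"):

```python
def repo_cache_folder(repo_id: str, repo_type: str = "model") -> str:
    # Hypothetical helper mirroring the documented cache layout:
    # "{type}s--{org}--{name}", with "/" in the repo id replaced by "--".
    return f"{repo_type}s--" + repo_id.replace("/", "--")

print(repo_cache_folder("bert-base-uncased"))           # models--bert-base-uncased
print(repo_cache_folder("Qwen/Qwen2.5-0.5B-Instruct"))  # models--Qwen--Qwen2.5-0.5B-Instruct
```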