{"id":214,"library":"accelerate","title":"Accelerate","description":"Hugging Face library to run PyTorch training across any distributed configuration with minimal code changes. Current version is 1.13.0 (Mar 2026). Requires Python >=3.10. Core pattern: Accelerator() + accelerator.prepare() + accelerator.backward(). Must run accelerate config before first use.","status":"active","version":"1.13.0","language":"python","source_language":"en","source_url":"https://github.com/huggingface/accelerate/releases","tags":["training","distributed","huggingface","mixed-precision","deepspeed","fsdp","multi-gpu"],"install":[{"cmd":"pip install accelerate","lang":"bash","label":"Standard"},{"cmd":"accelerate config","lang":"bash","label":"Required first-time setup (interactive CLI)"},{"cmd":"python -c \"from accelerate.utils import write_basic_config; write_basic_config(mixed_precision='fp16')\"","lang":"bash","label":"Non-interactive basic config"}],"dependencies":[{"reason":"Required. Not installed automatically — install PyTorch separately first.","package":"torch>=1.10.0","optional":false},{"reason":"Required. Installed automatically.","package":"huggingface-hub","optional":false},{"reason":"Required. Installed automatically.","package":"safetensors","optional":false},{"reason":"Optional. Required for DeepSpeed ZeRO integration.","package":"deepspeed","optional":true}],"imports":[{"note":"For notebook_launcher (Colab/Jupyter multi-GPU), Accelerator() must be initialized INSIDE the training function, never at module/notebook level.","wrong":"# Module-level Accelerator initialization breaks notebook_launcher multi-GPU\naccelerator = Accelerator()  # at top of notebook cell\n\ndef training_function():\n    # ValueError: Accelerator should only be initialized inside your training function","symbol":"Accelerator","correct":"from accelerate import Accelerator\n\ndef training_function():\n    # Accelerator MUST be initialized inside the training function for notebook_launcher\n    accelerator = Accelerator(mixed_precision='fp16')\n    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)\n    \n    for batch in dataloader:\n        optimizer.zero_grad()\n        loss = model(batch)\n        accelerator.backward(loss)  # NOT loss.backward()\n        optimizer.step()"},{"note":"Always use accelerator.backward(loss) instead of loss.backward(). Direct loss.backward() bypasses Accelerate's mixed precision gradient scaling and gradient accumulation logic.","wrong":"loss = criterion(outputs, targets)\nloss.backward()  # bypasses mixed precision scaling and gradient accumulation handling","symbol":"accelerator.backward","correct":"loss = criterion(outputs, targets)\naccelerator.backward(loss)"}],"quickstart":{"code":"from accelerate import Accelerator\nimport torch\nimport torch.nn as nn\n\ndef train():\n    accelerator = Accelerator(mixed_precision='bf16')\n    \n    model = nn.Linear(10, 1)\n    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)\n    dataloader = ...  # your DataLoader\n    \n    # prepare() handles device placement and distributed wrapping\n    model, optimizer, dataloader = accelerator.prepare(\n        model, optimizer, dataloader\n    )\n    \n    model.train()\n    for batch in dataloader:\n        optimizer.zero_grad()\n        outputs = model(batch['input'])\n        loss = nn.functional.mse_loss(outputs, batch['target'])\n        accelerator.backward(loss)  # not loss.backward()\n        optimizer.step()\n    \n    # Save on main process only\n    accelerator.wait_for_everyone()\n    if accelerator.is_main_process:\n        accelerator.save_model(model, 'output/')","lang":"python","description":"Core Accelerate pattern. Run with: accelerate launch train.py"},"warnings":[{"fix":"Run accelerate config once after install, or programmatically: from accelerate.utils import write_basic_config; write_basic_config(). For CI: set ACCELERATE_CONFIG_FILE env var pointing to a pre-built config.","message":"accelerate config must be run before first use. Without a config file, Accelerate falls back to single-process CPU mode silently — multi-GPU training simply won't use multiple GPUs.","severity":"breaking","affected_versions":"all"},{"fix":"Upgrade Python to 3.10+. Pin accelerate<1.13.0 for Python 3.9 environments.","message":"Python 3.9 support dropped in 1.13.0. Accelerate now requires Python >=3.10.","severity":"breaking","affected_versions":">= 1.13.0"},{"fix":"Always initialize Accelerator() inside the training function passed to notebook_launcher. Never create it at notebook cell level or module level when using multi-GPU in notebooks.","message":"Accelerator() initialized outside the training function raises ValueError when using notebook_launcher for multi-GPU. Silently falls back to 1 GPU without error if no notebook_launcher is used.","severity":"breaking","affected_versions":"all"},{"fix":"Use torch.serialization.add_safe_globals([ListConfig]) to allowlist custom types, or pass weights_only=False to the underlying load call if the checkpoint source is trusted.","message":"accelerator.load_state() fails with PyTorch 2.6+ due to torch.load weights_only=True default flip. Optimizer states with custom objects (omegaconf.ListConfig, etc.) raise UnpicklingError.","severity":"breaking","affected_versions":">= 1.6.0 with PyTorch >= 2.6"},{"fix":"With DeepSpeed, create a separate Accelerator instance per model, or merge models before wrapping.","message":"DeepSpeed integration: only one nn.Module per Accelerator instance is supported. Passing multiple models to accelerator.prepare() with DeepSpeed raises AssertionError.","severity":"breaking","affected_versions":"all"},{"fix":"Use: accelerate launch script.py --my-arg value. If ambiguous: accelerate launch -- script.py --my-arg value.","message":"accelerate launch ignores Python script argument ordering. Flags intended for the script must come after --, otherwise they are parsed as accelerate launch flags.","severity":"gotcha","affected_versions":"all"},{"fix":"Replace all loss.backward() calls with accelerator.backward(loss) throughout the training loop.","message":"loss.backward() instead of accelerator.backward(loss) silently bypasses mixed precision gradient scaling. Training proceeds but gradients are wrong under fp16/bf16 — numerical instability or NaN loss.","severity":"gotcha","affected_versions":"all"},{"fix":"Ensure that build-essential tools, including a C compiler (e.g., gcc, g++), are installed in your environment before attempting to install Python packages that require compilation. For Alpine-based images, this typically involves `apk add build-base python3-dev`.","message":"Installation of core dependencies like numpy fails due to missing C compilers in the environment, particularly common in minimal Docker images (e.g., Alpine). This prevents packages requiring compilation from being built from source.","severity":"breaking","affected_versions":"all"}],"env_vars":null,"last_verified":"2026-05-12T11:06:01.126Z","next_check":"2026-06-26T00:00:00.000Z","problems":[{"fix":"Install the library using 'pip install accelerate'.","cause":"The 'accelerate' library is not installed or not accessible in the current Python environment.","error":"ModuleNotFoundError: No module named 'accelerate'"},{"fix":"Ensure 'accelerate' is installed and accessible by checking the installation path and verifying the PATH environment variable.","cause":"The 'accelerate' command-line tool is not found, possibly due to installation issues or PATH misconfiguration.","error":"bash: accelerate: command not found"},{"fix":"Verify the correct module name and import statement; refer to the 'accelerate' documentation for accurate usage.","cause":"Attempting to import a non-existent 'partialstate' module from the 'accelerate' library.","error":"ImportError: cannot import name 'partialstate' from 'accelerate'"},{"fix":"Update the 'openai' library to the latest version and check the documentation for the correct usage of 'ChatCompletion'.","cause":"The 'openai' module does not have an attribute named 'ChatCompletion', possibly due to an outdated version or incorrect import.","error":"AttributeError: module 'openai' has no attribute 'ChatCompletion'"},{"fix":"Ensure that the variable is properly initialized and assigned an iterable value before iteration.","cause":"An operation is attempting to iterate over a 'None' object, indicating that a variable expected to be iterable is 'None'.","error":"TypeError: 'NoneType' object is not iterable"}],"ecosystem":"pypi","meta_description":null,"install_score":0,"install_tag":"stale","quickstart_score":0,"quickstart_tag":"stale","pypi_latest":null,"install_checks":{"last_tested":"2026-05-12","tag":"stale","tag_description":"widespread failures or data too old to trust","results":[{"runtime":"python:3.10-alpine","python_version":"3.10","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.10-slim","python_version":"3.10","os_libc":"slim (glibc)","variant":"default","exit_code":0,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":4,"mem_mb":63.1,"disk_size":"4.7G"},{"runtime":"python:3.11-alpine","python_version":"3.11","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.11-slim","python_version":"3.11","os_libc":"slim (glibc)","variant":"default","exit_code":0,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":6.93,"mem_mb":68.2,"disk_size":"4.7G"},{"runtime":"python:3.12-alpine","python_version":"3.12","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.12-slim","python_version":"3.12","os_libc":"slim (glibc)","variant":"default","exit_code":0,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":7.68,"mem_mb":66.6,"disk_size":"4.7G"},{"runtime":"python:3.13-alpine","python_version":"3.13","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.13-slim","python_version":"3.13","os_libc":"slim (glibc)","variant":"default","exit_code":0,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":5.69,"mem_mb":66.9,"disk_size":"4.7G"},{"runtime":"python:3.9-alpine","python_version":"3.9","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.9-slim","python_version":"3.9","os_libc":"slim (glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null}]},"quickstart_checks":{"last_tested":"2026-04-23","tag":"stale","tag_description":"widespread failures or data too old to trust","results":[{"runtime":"python:3.10-alpine","exit_code":1},{"runtime":"python:3.10-slim","exit_code":-1},{"runtime":"python:3.11-alpine","exit_code":1},{"runtime":"python:3.11-slim","exit_code":-1},{"runtime":"python:3.12-alpine","exit_code":1},{"runtime":"python:3.12-slim","exit_code":-1},{"runtime":"python:3.13-alpine","exit_code":1},{"runtime":"python:3.13-slim","exit_code":-1},{"runtime":"python:3.9-alpine","exit_code":1},{"runtime":"python:3.9-slim","exit_code":-1}]}}