NVIDIA CUDA Toolkit (PyPI meta-package)
The `cuda-toolkit` meta-package on PyPI installs NVIDIA CUDA runtime libraries and related components (such as cuBLAS and cuDNN) into Python environments. It provides no Python API of its own; instead it serves as an underlying dependency that deep learning frameworks like PyTorch and TensorFlow link against to leverage NVIDIA GPUs. The current version is 13.2.0, and updates generally track NVIDIA's main CUDA Toolkit releases, typically a few times per year.
Warnings
- gotcha The `cuda-toolkit` PyPI package does NOT install NVIDIA GPU drivers. You must install a compatible driver for your GPU from NVIDIA's website before attempting to use CUDA.
- gotcha This package is a meta-package for C++/binary libraries and does not provide direct Python imports. You interact with CUDA through deep learning frameworks (e.g., PyTorch, TensorFlow) or specialized libraries (e.g., Numba, JAX) that link against these installed binaries.
- breaking Major CUDA version changes (e.g., from 11.x to 12.x) can introduce incompatibilities with existing deep learning frameworks. Frameworks like PyTorch or TensorFlow are typically built against specific CUDA versions and might not work correctly with significantly newer or older CUDA installations.
- gotcha Running `pip install cuda-toolkit` primarily provides runtime libraries. For full CUDA development (e.g., compiling custom CUDA kernels with `nvcc`), you may still need a full system-wide NVIDIA CUDA Toolkit installation from NVIDIA's developer site, which includes compilers, debuggers, and development headers.
- gotcha Despite `pip` installations, certain applications or older scripts might still rely on environment variables like `CUDA_HOME` or `LD_LIBRARY_PATH` to locate CUDA libraries. PyPI installations typically place libraries in Python's site-packages, which might not be on the system's default search paths for all tools.
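The last warning above can be worked around programmatically. The sketch below, assuming the common wheel layout that unpacks shared libraries under an `nvidia/` directory inside site-packages (the exact layout varies by package and version), locates pip-installed CUDA runtime libraries and exposes them via `LD_LIBRARY_PATH` for tools spawned as child processes. `find_cuda_runtime_libs` is a hypothetical helper name, not part of any package's API.

```python
import glob
import os
import site

def find_cuda_runtime_libs():
    """Search site-packages for pip-installed CUDA runtime shared libraries.

    Assumption: CUDA wheels unpack their .so files under an ``nvidia/``
    directory inside site-packages (Linux naming convention shown here).
    """
    hits = []
    for root in site.getsitepackages():
        pattern = os.path.join(root, "nvidia", "**", "libcudart.so*")
        hits.extend(glob.glob(pattern, recursive=True))
    return hits

libs = find_cuda_runtime_libs()
if libs:
    # Some tools only honor LD_LIBRARY_PATH; note this only affects child
    # processes, not libraries already loaded into the current process.
    lib_dirs = sorted({os.path.dirname(p) for p in libs})
    os.environ["LD_LIBRARY_PATH"] = os.pathsep.join(
        lib_dirs + [os.environ.get("LD_LIBRARY_PATH", "")]
    ).rstrip(os.pathsep)

print(f"Found {len(libs)} candidate CUDA runtime libraries")
```

On a machine without pip-installed CUDA wheels the search simply returns an empty list, so the snippet is safe to run anywhere.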
Install
- pip install cuda-toolkit
Imports
- No direct imports
This package is a meta-package for C++/binary libraries and does not provide direct Python imports. Functionality is exposed via frameworks like PyTorch or TensorFlow.
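Since there is nothing to import, one framework-independent way to confirm a CUDA runtime is reachable is to query the C API directly through `ctypes`. This is a minimal sketch; `cuda_runtime_version` is a hypothetical helper, and it only finds libraries on the loader's default search path (so a pip-only installation may still return None, per the warning above).

```python
import ctypes
import ctypes.util

def cuda_runtime_version():
    """Query the CUDA runtime version via cudaRuntimeGetVersion, if possible.

    Returns an int such as 12020 (major * 1000 + minor * 10), or None when
    no CUDA runtime library can be found and loaded.
    """
    name = ctypes.util.find_library("cudart")
    if name is None:
        return None
    try:
        lib = ctypes.CDLL(name)
    except OSError:
        return None
    version = ctypes.c_int(0)
    # cudaRuntimeGetVersion returns 0 (cudaSuccess) on success.
    if lib.cudaRuntimeGetVersion(ctypes.byref(version)) != 0:
        return None
    return version.value

print(cuda_runtime_version())
```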
Quickstart
import torch

if torch.cuda.is_available():
    print(f"CUDA is available! Device name: {torch.cuda.get_device_name(0)}")
    print(f"CUDA version: {torch.version.cuda}")
    # get_device_capability reports the GPU's compute capability, not a CUDA version.
    print(f"Device compute capability: {torch.cuda.get_device_capability(0)}")
else:
    print("CUDA is NOT available. Check your NVIDIA drivers and cuda-toolkit installation.")