NVIDIA CUDA Runtime (CUDA 12)
This package distributes the native CUDA Runtime libraries as Python wheels, letting Python applications pull in the core GPU runtime as an ordinary pip dependency. It is part of NVIDIA's initiative to offer native Python support for CUDA, simplifying GPU-based parallel processing for high-performance computing, data science, and AI workloads. The current version is 12.9.79, with releases generally tracking the NVIDIA CUDA Toolkit.
Warnings
- breaking A GPU driver that is too old for the installed CUDA version causes runtime errors (e.g., 'CUDA driver version is insufficient for CUDA runtime version'). The driver must be new enough to support the installed CUDA toolkit version.
- gotcha This package provides CUDA runtime libraries, not the full CUDA development toolkit (e.g., it does not include `nvcc`). If you need to compile CUDA code (e.g., custom kernels or certain libraries from source), a separate, full CUDA Toolkit installation is required.
- gotcha CUDA binaries and libraries (like `cudart64_12.dll` on Windows or `libcudart.so.12` on Linux) must be discoverable via the relevant environment variables (`LD_LIBRARY_PATH` on Linux; `PATH` and `CUDA_PATH` on Windows). Incorrect setup can result in applications failing to find the CUDA runtime.
- gotcha When upgrading CUDA (especially major versions), some older APIs or functions might be deprecated or behave differently, potentially requiring modifications to existing CUDA C/C++ code.
- gotcha For CUDA 12.2 and newer, applications that hang during the first kernel launch can often be fixed by setting the `CUDA_MODULE_LOADING` environment variable to `EAGER`.
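As a sketch of the discoverability point above: the wheel installs its shared libraries under `site-packages`, and that directory can be located programmatically. The `nvidia/cuda_runtime/lib` layout below is an assumption based on how the cu12 wheels are typically packaged; verify it against your installation.

```python
import importlib.util
from pathlib import Path


def cuda_runtime_lib_dir():
    """Return the wheel's bundled CUDA library directory, or None if absent.

    Assumes the typical NVIDIA wheel layout:
    site-packages/nvidia/cuda_runtime/lib (an assumption, not guaranteed).
    """
    spec = importlib.util.find_spec("nvidia")
    if spec is None or not spec.submodule_search_locations:
        return None
    for base in spec.submodule_search_locations:
        lib = Path(base) / "cuda_runtime" / "lib"
        if lib.is_dir():
            return lib
    return None


lib_dir = cuda_runtime_lib_dir()
if lib_dir is not None:
    # The dynamic loader reads LD_LIBRARY_PATH at process start, so export this
    # in the shell (or set it for child processes), not mid-process.
    print(f"Add to LD_LIBRARY_PATH: {lib_dir}")
else:
    print("nvidia-cuda-runtime-cu12 wheel not found; relying on a system CUDA install")
```

Note that setting `LD_LIBRARY_PATH` inside an already-running process does not affect that process's own library lookups; it only helps subprocesses.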
Install
- pip install nvidia-cuda-runtime-cu12
Imports
- Not directly imported
This package primarily provides native shared libraries. High-level Python libraries (e.g., PyTorch, TensorFlow, or the `cuda-python` project's `cuda.core` module) implicitly link against and utilize these runtime components, rather than requiring direct imports of `nvidia_cuda_runtime_cu12` itself in user code.
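Because the package ships only shared libraries, the lowest-level way to exercise them is to load the runtime directly. A minimal sketch using `ctypes`, assuming the standard platform library names (`libcudart.so.12` on Linux, `cudart64_12.dll` on Windows) are on the loader search path:

```python
import ctypes

# Try the platform-specific runtime library names in turn.
cudart = None
for name in ("libcudart.so.12", "cudart64_12.dll"):
    try:
        cudart = ctypes.CDLL(name)
        break
    except OSError:
        cudart = None

if cudart is not None:
    version = ctypes.c_int(0)
    # cudaRuntimeGetVersion encodes the version as 1000*major + 10*minor,
    # e.g. 12.9 -> 12090; a return value of 0 means cudaSuccess.
    if cudart.cudaRuntimeGetVersion(ctypes.byref(version)) == 0:
        major, minor = divmod(version.value, 1000)
        print(f"CUDA runtime {major}.{minor // 10}")
else:
    print("CUDA runtime library not found on the loader search path")
```

This only touches the runtime library itself; real applications should go through a higher-level binding such as `cuda-python` rather than hand-rolled `ctypes` calls.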
Quickstart
import torch

if torch.cuda.is_available():
    print(f"CUDA is available! Version: {torch.version.cuda}")
    print(f"Number of GPUs: {torch.cuda.device_count()}")
    print(f"Current GPU name: {torch.cuda.get_device_name(0)}")
else:
    print("CUDA is not available. Please check your installation and drivers.")
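Since the environment variables above (see the warnings section) are read when a process starts, a reliable pattern is to prepare the environment and launch the CUDA-using program as a child process. A sketch, using a trivial inline child command as a stand-in for a real script:

```python
import os
import subprocess
import sys

# Prepare the child environment up front: CUDA reads these at process start,
# so setting them here is more reliable than mutating os.environ after the
# runtime has already initialized in the current process.
env = dict(os.environ)
env["CUDA_MODULE_LOADING"] = "EAGER"  # first-launch-hang workaround on CUDA 12.2+

# Stand-in child process; a real invocation would run your CUDA script instead.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['CUDA_MODULE_LOADING'])"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # -> EAGER
```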