NVIDIA CUDA Runtime (CUDA 12)

Version 12.9.79 · verified Tue May 12

This package provides the native CUDA Runtime libraries as Python wheels, giving Python applications the core runtime functionality needed for GPU acceleration. It is part of NVIDIA's initiative to offer native Python support for CUDA, simplifying GPU-based parallel processing for high-performance computing, data science, and AI workloads. The current version is 12.9.79, with releases generally tracking the NVIDIA CUDA Toolkit.

pip install nvidia-cuda-runtime-cu12
error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully.
cause This error during `pip install` often indicates a failure in building a dependent package's metadata due to an improperly configured CUDA development environment, a missing CUDA Toolkit, or an incompatible `nvcc` compiler version.
fix
Ensure the full NVIDIA CUDA Toolkit is installed and on your system's PATH, with a compatible nvcc compiler available, if the package requires building CUDA extensions. For nvidia-cuda-runtime-cu12 itself, keep pip and setuptools up to date and consider installing nvidia-pyindex first: pip install --upgrade setuptools pip wheel && pip install nvidia-pyindex.
error RuntimeError: CUDA error: an illegal memory access was encountered
cause This runtime error typically points to an issue where a CUDA kernel attempts to access memory it doesn't have permission for, often due to out-of-bounds access, invalid pointers, or driver/hardware issues.
fix
Debug your CUDA code for memory access violations. For Python libraries, ensure correct input shapes and data types, update GPU drivers, or set the CUDA_LAUNCH_BLOCKING=1 environment variable to get synchronous error reporting during debugging.
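A minimal sketch of that debugging setup: the variable must be set before the library that owns the CUDA context is imported, so the assignment comes first (the commented `torch` import is illustrative; any CUDA-backed library behaves the same way).

```python
import os

# CUDA_LAUNCH_BLOCKING must be set *before* the CUDA context is created,
# i.e. before importing torch (or whichever library launches kernels).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# import torch  # kernel launches now run synchronously, so the reported
#               # stack trace points at the launch that actually failed

print(os.environ["CUDA_LAUNCH_BLOCKING"])
```

Setting the variable inside the script only works if it happens before the CUDA context exists; otherwise export it in the shell before launching Python.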
error ImportError: libcudnn.so.9: cannot open shared object file: No such file or directory
cause This error (or similar for other .so or .dll files like `libcublas.so.12`, `libcusparse.so.12`) means that a required CUDA-dependent dynamic library cannot be found by the system's dynamic linker, often because its path is not included in `LD_LIBRARY_PATH` (Linux) or it's missing on the system (Windows: `OSError: [WinError 126] The specified module could not be found`).
fix
Ensure the NVIDIA CUDA Toolkit and cuDNN (if needed) are properly installed, and their library paths are added to your system's LD_LIBRARY_PATH environment variable on Linux (e.g., export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH) or to the system PATH on Windows.
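As a quick diagnostic for this class of failure, the standard library can ask the system linker machinery whether each CUDA library is discoverable at all; a `None` result reproduces the "cannot open shared object file" condition for that library. (On Linux, `find_library` consults ldconfig's cache, so a freshly exported `LD_LIBRARY_PATH` may additionally be needed at import time.)

```python
import ctypes.util

# Probe whether the dynamic linker can locate each CUDA library.
# None means the library is not discoverable - the same condition
# behind "cannot open shared object file" at import time.
results = {}
for name in ("cudart", "cublas", "cudnn"):
    results[name] = ctypes.util.find_library(name)
    print(f"lib{name}: {results[name] or 'NOT FOUND'}")
```

A `NOT FOUND` line tells you which component's directory still needs to be installed or added to the linker's search path.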
error RuntimeError: CUDA error: no kernel image is available for execution on the device
cause This error occurs when the GPU cannot find a suitable 'kernel image' to execute, usually due to a mismatch between the compiled CUDA code's compute capability and the actual GPU's architecture, or an outdated GPU driver.
fix
Ensure your NVIDIA GPU drivers are up-to-date. If compiling CUDA code, ensure it's compiled for the correct compute capability of your GPU. For pre-built binaries (like PyTorch wheels), ensure you've installed the version compatible with your GPU's CUDA capability and your installed CUDA runtime.
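To check the mismatch concretely, you can query the GPU's compute capability and compare it with the architectures your binary was built for. This sketch shells out to `nvidia-smi` (the `compute_cap` query field assumes a reasonably recent driver) and degrades gracefully when no driver is present:

```python
import shutil
import subprocess

# Query the GPU's compute capability via nvidia-smi; a "no kernel image"
# error means the binary was not compiled for this architecture.
smi = shutil.which("nvidia-smi")
if smi is None:
    report = "nvidia-smi not found - install or update the NVIDIA driver"
else:
    proc = subprocess.run(
        [smi, "--query-gpu=name,compute_cap", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    report = proc.stdout.strip() or proc.stderr.strip()
print(report)
```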
error RuntimeError: CUDA Setup failed despite GPU being available.
cause This specific error, often reported by higher-level libraries (e.g., `bitsandbytes`), indicates that while a GPU is detected, the library cannot properly initialize or interact with the CUDA environment, often due to incorrect `LD_LIBRARY_PATH` settings or an incomplete CUDA installation.
fix
Verify your CUDA Toolkit installation. Run python -m bitsandbytes (or the equivalent for the failing library) to inspect CUDA library paths. Add the necessary CUDA library directories to your LD_LIBRARY_PATH environment variable.
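When inspecting paths, it helps to know which CUDA libraries the `nvidia-*` wheels actually installed. On Linux these wheels unpack their shared objects under `site-packages/nvidia/<component>/lib`; this sketch lists them (on Windows the files would be `.dll`s instead):

```python
import pathlib
import site

# List the shared libraries shipped by installed nvidia-* wheels; these
# are the directories a failing library may need on LD_LIBRARY_PATH.
found = []
for sp in site.getsitepackages():
    root = pathlib.Path(sp) / "nvidia"
    if root.is_dir():
        found.extend(sorted(root.rglob("*.so*")))

for so in found:
    print(so)
print(f"{len(found)} CUDA shared libraries found")
```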
breaking Incompatible NVIDIA GPU drivers with the installed CUDA version can lead to runtime errors (e.g., 'CUDA driver version is insufficient'). The GPU driver must be sufficiently new to support the installed CUDA toolkit version.
fix Ensure your NVIDIA GPU drivers are updated to a version compatible with CUDA 12.x. Check NVIDIA's official documentation for driver requirements corresponding to your specific CUDA version.
gotcha This package provides CUDA runtime libraries, not the full CUDA development toolkit (e.g., it does not include `nvcc`). If you need to compile CUDA code (e.g., custom kernels or certain libraries from source), a separate, full CUDA Toolkit installation is required.
fix For compilation, download and install the complete NVIDIA CUDA Toolkit from developer.nvidia.com/cuda-downloads alongside this runtime package. Ensure `nvcc` is in your system's PATH.
gotcha CUDA binaries and libraries (like `cudart64_12.dll` on Windows or `libcudart.so.12` on Linux) need to be correctly discoverable via system environment variables (`PATH`, `LD_LIBRARY_PATH` on Linux, or `CUDA_PATH` on Windows). Incorrect setup can result in applications failing to find the CUDA runtime.
fix Manually add the CUDA binary and library paths (e.g., `/usr/local/cuda/bin`, `/usr/local/cuda/lib64` on Linux, or `%CUDA_PATH%\bin`, `%CUDA_PATH%\lib\x64` on Windows) to your system's environment variables. Restart your shell or IDE after changes.
gotcha When upgrading CUDA (especially major versions), some older APIs or functions might be deprecated or behave differently, potentially requiring modifications to existing CUDA C/C++ code.
fix Consult the NVIDIA CUDA Toolkit Release Notes and Programming Guide for details on deprecated features and API changes during major upgrades. Thoroughly test your applications after upgrading.
gotcha For CUDA versions 12.2 and newer, applications that exhibit hanging during the first kernel launch might resolve the issue by setting the `CUDA_MODULE_LOADING` environment variable to `EAGER`.
fix Set `CUDA_MODULE_LOADING=EAGER` in your environment before running the application. For example, `export CUDA_MODULE_LOADING=EAGER && python my_script.py` on Linux/macOS.
breaking The 'torch' Python package is not installed or cannot be found in the current Python environment. This prevents the application from importing the necessary PyTorch libraries.
fix Ensure that the 'torch' package is installed in your Python environment using pip (e.g., `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121` for CUDA 12.1, adjusting for your specific CUDA version or 'cpu' if no GPU is used). Verify that the correct Python interpreter and environment are being used.
breaking The package you are trying to install (e.g., `nvidia-cuda-runtime-cu12`) is a placeholder on PyPI.org. These packages are hosted on the NVIDIA Python Package Index, and direct installation from PyPI will result in a runtime error indicating an incorrect package source.
fix To install packages from the NVIDIA Python Package Index, first install the `nvidia-pyindex` package: `pip install nvidia-pyindex`. Then, proceed with the installation of the desired CUDA runtime package, e.g., `pip install nvidia-cuda-runtime-cu12`.
python  os / libc      status       install  import  disk
3.9     alpine (musl)  build_error  -        -       -
3.9     slim (glibc)   wheel        1.8s     -       27M
3.10    alpine (musl)  build_error  -        -       -
3.10    slim (glibc)   wheel        1.6s     -       28M
3.11    alpine (musl)  build_error  -        -       -
3.11    slim (glibc)   wheel        1.7s     -       29M
3.12    alpine (musl)  build_error  -        -       -
3.12    slim (glibc)   wheel        1.6s     -       21M
3.13    alpine (musl)  build_error  -        -       -
3.13    slim (glibc)   wheel        1.6s     -       21M

While `nvidia-cuda-runtime-cu12` itself doesn't offer direct high-level Python APIs, its successful installation allows frameworks like PyTorch to leverage the CUDA runtime. This snippet demonstrates how to verify that a CUDA-enabled PyTorch (which depends on this runtime) can detect and utilize your GPU.

import torch

if torch.cuda.is_available():
    print(f"CUDA is available! Version: {torch.version.cuda}")
    print(f"Number of GPUs: {torch.cuda.device_count()}")
    print(f"Current GPU name: {torch.cuda.get_device_name(0)}")
else:
    print("CUDA is not available. Please check your installation and drivers.")