{"id":1588,"library":"nvidia-cuda-runtime","title":"NVIDIA CUDA Runtime","description":"The `nvidia-cuda-runtime` package provides essential native CUDA runtime libraries (e.g., `libcudart.so`) required for Python applications to utilize NVIDIA GPUs. It ensures these libraries are discoverable within a Python environment, typically without requiring a full system-wide CUDA Toolkit installation. It is primarily a dependency for deep learning frameworks like PyTorch and TensorFlow, rather than offering direct Python APIs. The current version is 13.2.51, with releases tied to NVIDIA CUDA Toolkit updates.","status":"active","version":"13.2.51","language":"en","source_language":"en","source_url":"https://github.com/NVIDIA/nvidia-driver-container/tree/main/packaging/pypi","tags":["cuda","gpu","runtime","nvidia","deep learning utility","native libraries"],"install":[{"cmd":"pip install nvidia-cuda-runtime","lang":"bash","label":"Install the package"}],"dependencies":[],"imports":[{"note":"This package's effect is to make native CUDA shared libraries available within the Python environment for other GPU-accelerated Python libraries to load.","symbol":"No direct Python imports","correct":"This package primarily provides native shared libraries and does not expose Python APIs for direct import and usage."}],"quickstart":{"code":"# This package does not provide direct Python APIs to import.\n# Its purpose is to make native CUDA libraries available for other Python libraries.\n# You can verify its effect by checking a CUDA-enabled library like PyTorch:\n\ntry:\n    import torch\n    if torch.cuda.is_available():\n        print(\"PyTorch detected CUDA and can use NVIDIA GPU.\")\n        print(f\"CUDA Version: {torch.version.cuda}\")\n        print(f\"Number of GPUs: {torch.cuda.device_count()}\")\n        if torch.cuda.device_count() > 0:\n            print(f\"Current GPU: {torch.cuda.get_device_name(0)}\")\n    else:\n        print(\"PyTorch cannot detect CUDA. GPU acceleration is not available.\")\nexcept ImportError:\n    print(\"PyTorch is not installed. Please install 'torch' to verify CUDA availability.\")\n\nprint(\"\\nNote: nvidia-cuda-runtime provides native shared objects (e.g., libcudart.so) that CUDA-enabled libraries load at runtime.\")\nprint(\"It does not expose Python functions for direct import and usage by end-users.\")","lang":"python","description":"Since `nvidia-cuda-runtime` doesn't offer direct Python APIs, this quickstart verifies that a CUDA-enabled library (such as PyTorch) can detect and use the GPU, confirming that the runtime libraries provided by this package are effective."},"warnings":[{"fix":"Always check the compatibility matrix provided by your deep learning framework for the required CUDA Toolkit and driver versions. If installing `nvidia-cuda-runtime` via `pip`, ensure its version aligns with what your framework expects. Consider using the `pip install torch --index-url https://download.pytorch.org/whl/cuXXX` method, which typically pulls in the correct runtime dependencies.","message":"Version mismatches between `nvidia-cuda-runtime`, your NVIDIA GPU driver, the CUDA Toolkit version expected by your deep learning framework (e.g., PyTorch, TensorFlow), and the `cudnn` libraries can lead to runtime errors, crashes, or incorrect behavior. Ensure all components are compatible.","severity":"breaking","affected_versions":"All versions"},{"fix":"If you need to compile CUDA code or use tools like `nvcc`, install the full NVIDIA CUDA Toolkit either system-wide or via a container (e.g., Docker).","message":"This package provides *only* the CUDA runtime shared libraries (e.g., `libcudart.so`), not the full CUDA development toolkit (which includes compilers like `nvcc`, headers, and other development tools). You cannot compile CUDA C++ code with just this package.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Be mindful of your `LD_LIBRARY_PATH` environment variable. In virtual environments, `nvidia-cuda-runtime` is designed to handle library discovery for you. If conflicts arise, consider creating clean environments, using containers, or explicitly managing `LD_LIBRARY_PATH` to prioritize the desired CUDA installation.","message":"Using `nvidia-cuda-runtime` alongside a system-wide CUDA Toolkit installation can lead to library-path conflicts. The Python environment's library paths (where this package installs its shared objects) may take precedence over the system paths, causing an unintended version of the CUDA libraries to be loaded.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Do not attempt to `import nvidia_cuda_runtime` or expect Python functions from this package. Verify its presence and functionality by checking CUDA availability within a high-level library like PyTorch (`torch.cuda.is_available()`) or TensorFlow.","message":"The `nvidia-cuda-runtime` package does not expose any direct Python APIs for users to import or call. Its sole purpose is to make the underlying native CUDA shared libraries discoverable by other Python libraries that depend on them.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-09T00:00:00.000Z","next_check":"2026-07-08T00:00:00.000Z"}