NVIDIA CUDA Runtime

13.2.51 · active · verified Thu Apr 09

The `nvidia-cuda-runtime` package provides essential native CUDA runtime libraries (e.g., `libcudart.so`) required for Python applications to utilize NVIDIA GPUs. It ensures these libraries are discoverable within a Python environment, typically without requiring a full system-wide CUDA Toolkit installation. It is primarily a dependency for deep learning frameworks like PyTorch and TensorFlow, rather than offering direct Python APIs. The current version is 13.2.51, with releases tied to NVIDIA CUDA Toolkit updates.
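Because the package's job is simply to place `libcudart.so` where a Python environment can find it, you can check for the library directly. The `nvidia/cuda_runtime/lib` layout below is an assumption based on how NVIDIA's versioned PyPI wheels are packaged:

```python
# Look for the CUDA runtime shared library inside site-packages.
# The "nvidia/cuda_runtime/lib" path is an assumed wheel layout.
import glob
import os
import sysconfig

site_dir = sysconfig.get_paths()["purelib"]
pattern = os.path.join(site_dir, "nvidia", "cuda_runtime", "lib", "libcudart.so*")
matches = glob.glob(pattern)
msg = (
    f"Found CUDA runtime: {matches[0]}"
    if matches
    else "libcudart not found; nvidia-cuda-runtime may not be installed."
)
print(msg)
```

This only confirms the shared object is present; whether a framework can actually use it still depends on a working NVIDIA driver.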

Install
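
The runtime installs from PyPI with pip. NVIDIA publishes it under CUDA-major-versioned package names, so the exact name below is an assumption for the 13.x series:

```shell
# Assumed versioned package name for the CUDA 13.x series;
# the 12.x series is published as nvidia-cuda-runtime-cu12.
pip install nvidia-cuda-runtime-cu13
```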

Imports

None. This package exposes no importable Python modules; it only ships native shared libraries that other frameworks load.

Quickstart

Because `nvidia-cuda-runtime` offers no direct Python APIs, the quickstart verifies that a CUDA-enabled library (such as PyTorch) can detect and use the GPU, which confirms that the runtime libraries provided by this package are being found.

# This package does not provide direct Python APIs to import.
# Its purpose is to make native CUDA libraries available for other Python libraries.
# You can verify its effect by checking a CUDA-enabled library like PyTorch:

try:
    import torch
    if torch.cuda.is_available():
        print("PyTorch detected CUDA and can use NVIDIA GPU.")
        print(f"CUDA Version: {torch.version.cuda}")
        print(f"Number of GPUs: {torch.cuda.device_count()}")
        if torch.cuda.device_count() > 0:
            print(f"Current GPU: {torch.cuda.get_device_name(0)}")
    else:
        print("PyTorch cannot detect CUDA. GPU acceleration is not available.")
except ImportError:
    print("PyTorch is not installed. Please install 'torch' to verify CUDA availability.")

print("\nNote: nvidia-cuda-runtime ships native shared objects (e.g., libcudart.so) inside site-packages;")
print("frameworks locate and load them themselves. It exposes no Python functions for direct import by end-users.")
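If you want to query the runtime without pulling in a full framework, the library can be loaded through `ctypes` and asked for its version via `cudaRuntimeGetVersion`. The sonames tried below are assumptions (`libcudart.so.13` for the 13.x series, `libcudart.so.12` for 12.x):

```python
# Minimal sketch: load libcudart via ctypes and report its version.
# Sonames are assumptions; the loop falls back across candidates.
import ctypes

status = "libcudart could not be loaded"
for soname in ("libcudart.so.13", "libcudart.so.12", "libcudart.so"):
    try:
        cudart = ctypes.CDLL(soname)
    except OSError:
        continue
    version = ctypes.c_int(0)
    # cudaRuntimeGetVersion(int*) returns cudaSuccess (0) on success;
    # the version is encoded as 1000 * major + 10 * minor.
    if cudart.cudaRuntimeGetVersion(ctypes.byref(version)) == 0:
        major, minor = version.value // 1000, (version.value % 1000) // 10
        status = f"CUDA runtime version: {major}.{minor}"
    break
print(status)
```

Unlike the PyTorch check above, this reports the runtime version even on machines where no framework is installed, though actually running kernels still requires a compatible NVIDIA driver.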
