NVIDIA CUDA Runtime
The `nvidia-cuda-runtime` package provides the native CUDA runtime libraries (e.g., `libcudart.so`) that Python applications need in order to use NVIDIA GPUs. It makes these libraries discoverable within a Python environment, typically without requiring a full system-wide CUDA Toolkit installation. The package is primarily a dependency for deep learning frameworks such as PyTorch and TensorFlow and does not offer direct Python APIs. The current version is 13.2.51, with releases tied to NVIDIA CUDA Toolkit updates.
Warnings
- breaking Version mismatches between `nvidia-cuda-runtime`, your NVIDIA GPU driver, the CUDA Toolkit version expected by your deep learning framework (e.g., PyTorch, TensorFlow), and the `cudnn` libraries can lead to runtime errors, crashes, or incorrect behavior. Ensure all components are compatible.
- gotcha This package provides *only* the CUDA runtime shared libraries (e.g., `libcudart.so`), not the full CUDA development toolkit (which includes compilers like `nvcc`, headers, and other development tools). You cannot compile CUDA C++ code with just this package.
- gotcha Using `nvidia-cuda-runtime` alongside a system-wide CUDA Toolkit installation can lead to `LD_LIBRARY_PATH` conflicts. The Python environment's library paths (populated by this package) may take precedence over, or be shadowed by, the system paths, causing an unintended version of the CUDA libraries to be loaded.
- gotcha The `nvidia-cuda-runtime` package does not expose any direct Python APIs for users to import or call. Its sole purpose is to make the underlying native CUDA shared libraries discoverable for other Python libraries that depend on them.
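To investigate the library-path conflict described above, it can help to enumerate every place the dynamic loader might resolve `libcudart` from. The sketch below is a diagnostic aid, not part of this package's API; the `site-packages/nvidia/` layout and the `/usr/local/cuda*` location are assumptions based on how NVIDIA's CUDA wheels and the system Toolkit are conventionally laid out.

```python
# Diagnostic sketch: list locations from which libcudart might be loaded,
# to help spot conflicts between pip-installed and system-wide copies.
# The "nvidia" subdirectory layout and /usr/local/cuda* paths are assumptions
# about conventional install locations; adjust for your environment.
import os
import site
from pathlib import Path

def candidate_cudart_paths():
    """Collect places libcudart might be resolved from, in rough search order."""
    candidates = []
    # 1. LD_LIBRARY_PATH entries take precedence for the dynamic loader.
    for entry in os.environ.get("LD_LIBRARY_PATH", "").split(os.pathsep):
        if entry:
            candidates.append(("LD_LIBRARY_PATH", entry))
    # 2. pip-installed NVIDIA wheels typically unpack under site-packages/nvidia/.
    for sp in site.getsitepackages():
        nvidia_dir = Path(sp) / "nvidia"
        if nvidia_dir.is_dir():
            for lib in nvidia_dir.rglob("libcudart.so*"):
                candidates.append(("site-packages", str(lib)))
    # 3. Common system-wide CUDA Toolkit locations.
    for lib in Path("/usr/local").glob("cuda*/lib64/libcudart.so*"):
        candidates.append(("system", str(lib)))
    return candidates

for origin, path in candidate_cudart_paths():
    print(f"{origin}: {path}")
```

If the same `libcudart` major version shows up under both a site-packages path and a system path, the first one on the loader's search path wins, which is exactly the situation the warning above describes.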
Install
- pip install nvidia-cuda-runtime
Imports
- No direct Python imports
This package primarily provides native shared libraries and does not expose Python APIs for direct import and usage.
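Although there is nothing to import, the standard library can still show what native files an installed runtime wheel ships. The snippet below is a sketch using `importlib.metadata`; the distribution name is an assumption, since on PyPI the runtime wheels are published per CUDA major release (e.g., `nvidia-cuda-runtime-cu12`), so substitute the name that matches your install.

```python
# Sketch: list the libcudart shared objects an installed CUDA runtime wheel
# ships, using only the standard library. The default distribution name is an
# assumption (the wheels are versioned per CUDA major, e.g.
# nvidia-cuda-runtime-cu12); pass the name that matches your environment.
from importlib import metadata

def list_cudart_files(dist_name="nvidia-cuda-runtime-cu12"):
    """Return libcudart files recorded for a distribution, or [] if not installed."""
    try:
        files = metadata.files(dist_name) or []
    except metadata.PackageNotFoundError:
        return []
    return [str(f) for f in files if f.name.startswith("libcudart")]

for path in list_cudart_files():
    print(path)
```

An empty result simply means the distribution is not installed under that name, which keeps the snippet safe to run in any environment.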
Quickstart
# This package does not provide direct Python APIs to import.
# Its purpose is to make native CUDA libraries available to other Python libraries.
# You can verify its effect by checking a CUDA-enabled library such as PyTorch:
try:
    import torch

    if torch.cuda.is_available():
        print("PyTorch detected CUDA and can use an NVIDIA GPU.")
        print(f"CUDA Version: {torch.version.cuda}")
        print(f"Number of GPUs: {torch.cuda.device_count()}")
        if torch.cuda.device_count() > 0:
            print(f"Current GPU: {torch.cuda.get_device_name(0)}")
    else:
        print("PyTorch cannot detect CUDA. GPU acceleration is not available.")
except ImportError:
    print("PyTorch is not installed. Install 'torch' to verify CUDA availability.")

print("\nNote: nvidia-cuda-runtime provides native shared objects and makes them discoverable on the library search path.")
print("It does not expose Python functions for direct import by end users.")
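For a check that does not depend on PyTorch, the runtime library can also be probed directly with `ctypes`. This is a sketch, not an API of this package: it assumes a `libcudart.so.*` is resolvable by the dynamic loader and calls the CUDA Runtime entry point `cudaRuntimeGetVersion`, returning None gracefully when no library is found.

```python
# Sketch: query the CUDA runtime version directly via ctypes, bypassing any
# framework. Assumes some libcudart.so.* is resolvable by the dynamic loader;
# returns None gracefully when it is not (e.g., on a machine without CUDA).
import ctypes

def cuda_runtime_version():
    """Return (major, minor) of the loadable CUDA runtime, or None if unavailable."""
    for name in ("libcudart.so.13", "libcudart.so.12", "libcudart.so"):
        try:
            lib = ctypes.CDLL(name)
            break
        except OSError:
            continue
    else:
        return None
    version = ctypes.c_int(0)
    # cudaRuntimeGetVersion encodes the version as major*1000 + minor*10
    # and returns 0 (cudaSuccess) on success.
    if lib.cudaRuntimeGetVersion(ctypes.byref(version)) != 0:
        return None
    return version.value // 1000, (version.value % 1000) // 10

print(cuda_runtime_version())
```

On a machine where the wheel (or a system Toolkit) has put `libcudart` on the search path, this prints a tuple such as a major/minor pair; elsewhere it prints None, which itself confirms the library is not discoverable.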