NVIDIA CUDA Runtime (cu11)

11.8.89 · active · verified Sat Apr 11

The `nvidia-cuda-runtime-cu11` package distributes the native CUDA Runtime libraries (such as `libcudart`) for Python environments. It is a foundational component: rather than exposing user-facing Python APIs of its own, it bundles the shared libraries that higher-level CUDA-aware Python libraries load in order to leverage NVIDIA GPUs, without requiring a system-wide CUDA Toolkit installation. The current version is 11.8.89, and the package is actively maintained by NVIDIA.
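Because the package only ships shared libraries, one way to confirm it is present is to look for them on disk. The sketch below searches site-packages for the bundled runtime library; the `nvidia/cuda_runtime/lib` layout and the `libcudart.so*` filename pattern are assumptions based on how NVIDIA's cu11 wheels are typically packaged on Linux.

```python
import sysconfig
from pathlib import Path
from typing import Optional

def find_cudart() -> Optional[Path]:
    """Locate the libcudart shared library bundled by nvidia-cuda-runtime-cu11.

    Assumes the nvidia/cuda_runtime/lib wheel layout and Linux .so naming;
    returns None if the package does not appear to be installed.
    """
    site_packages = Path(sysconfig.get_paths()["purelib"])
    lib_dir = site_packages / "nvidia" / "cuda_runtime" / "lib"
    if not lib_dir.is_dir():
        return None
    # The library file is versioned, e.g. libcudart.so.11.0.
    matches = sorted(lib_dir.glob("libcudart.so*"))
    return matches[0] if matches else None

print(find_cudart() or "nvidia-cuda-runtime-cu11 does not appear to be installed")
```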

Install

pip install nvidia-cuda-runtime-cu11

Quickstart

This package doesn't expose direct Python classes or functions for general use. Its presence enables other CUDA-aware Python libraries (like PyTorch or TensorFlow) to utilize the GPU. This quickstart demonstrates how to verify CUDA availability using PyTorch, a common library that depends on CUDA runtime libraries.

# This package primarily provides runtime libraries.
# To verify successful installation and CUDA availability in a Python environment,
# you typically check via a framework that utilizes CUDA, like PyTorch.
# Ensure 'torch' is installed (e.g., pip install torch --index-url https://download.pytorch.org/whl/cu118)

try:
    import torch
    if torch.cuda.is_available():
        print(f"CUDA is available! Device name: {torch.cuda.get_device_name(0)}")
    else:
        print("CUDA is not available according to PyTorch.")
except ImportError:
    print("PyTorch not installed. Install it to verify CUDA availability:")
    print("pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118")
except Exception as e:
    print(f"An error occurred while checking CUDA with PyTorch: {e}")
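If you prefer not to install a full framework just to check the runtime, you can also load `libcudart` directly with `ctypes` and call `cudaRuntimeGetVersion`, a CUDA Runtime API that works even when no GPU device is present. This is a minimal sketch: the default soname `libcudart.so.11.0` assumes Linux (on Windows the DLL is named differently, e.g. `cudart64_110.dll`), and the library must be discoverable by the dynamic loader.

```python
import ctypes
from typing import Optional, Tuple

def runtime_version(soname: str = "libcudart.so.11.0") -> Optional[Tuple[int, int]]:
    """Query the CUDA Runtime version via ctypes, without any framework.

    Returns (major, minor), or None if the library cannot be loaded or
    the call fails. The default soname assumes a Linux install.
    """
    try:
        cudart = ctypes.CDLL(soname)
    except OSError:
        return None
    version = ctypes.c_int()
    # cudaRuntimeGetVersion returns 0 (cudaSuccess) on success and encodes
    # the version as 1000 * major + 10 * minor (e.g. 11080 for CUDA 11.8).
    if cudart.cudaRuntimeGetVersion(ctypes.byref(version)) != 0:
        return None
    return version.value // 1000, (version.value % 1000) // 10

v = runtime_version()
print(f"CUDA Runtime {v[0]}.{v[1]}" if v else "libcudart is not loadable")
```

Note that a loadable `libcudart` only confirms the runtime library itself; actually executing kernels still requires a compatible NVIDIA driver and device.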
