NVIDIA CUDA Runtime (CUDA 12)

12.9.79 · active · verified Sat Mar 28

This package provides the native CUDA Runtime libraries as Python wheels, giving Python applications access to the core CUDA runtime functionality needed for GPU acceleration. It is part of NVIDIA's initiative to offer native Python support for CUDA, simplifying GPU-based parallel processing for high-performance computing, data science, and AI workloads. The current version is 12.9.79, with releases generally tracking the NVIDIA CUDA Toolkit.

Warnings

Install
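
The wheel is published on PyPI under the package name `nvidia-cuda-runtime-cu12` (the name used in the Quickstart below); a typical installation looks like:

```shell
# Install the CUDA 12 runtime wheel from PyPI
pip install nvidia-cuda-runtime-cu12

# Or pin the version shown above
pip install nvidia-cuda-runtime-cu12==12.9.79
```

In most cases you will not install this wheel directly: frameworks such as PyTorch declare it (or a sibling `nvidia-*-cu12` wheel) as a dependency and pull it in automatically.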

Imports
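
The wheel ships native shared libraries rather than a high-level importable module, so the most reliable programmatic check is for the installed distribution itself. A minimal sketch using the standard library (the helper name `cuda_runtime_wheel_version` is ours, not part of the package):

```python
from importlib.metadata import version, PackageNotFoundError

def cuda_runtime_wheel_version():
    """Return the installed nvidia-cuda-runtime-cu12 version string, or None."""
    try:
        return version("nvidia-cuda-runtime-cu12")
    except PackageNotFoundError:
        return None

print(cuda_runtime_wheel_version())
```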

Quickstart

While `nvidia-cuda-runtime-cu12` itself doesn't expose a high-level Python API, its successful installation allows frameworks like PyTorch to locate and use the CUDA runtime. This snippet demonstrates how to verify that a CUDA-enabled PyTorch build (which depends on this runtime) can detect and use your GPU.

import torch

if torch.cuda.is_available():
    print(f"CUDA is available! Version: {torch.version.cuda}")
    print(f"Number of GPUs: {torch.cuda.device_count()}")
    print(f"Current GPU name: {torch.cuda.get_device_name(0)}")
else:
    print("CUDA is not available. Please check your installation and drivers.")
