NVIDIA CUDA NVRTC for CUDA 12.x

12.9.86 · active · verified Sat Mar 28

NVRTC is NVIDIA's runtime compilation library for CUDA C++: it compiles CUDA C++ source code into PTX (Parallel Thread Execution) assembly at run time. This Python package, `nvidia-cuda-nvrtc-cu12`, provides the native runtime libraries (DLLs/SOs) for CUDA 12.x, enabling dynamic code generation and execution on NVIDIA GPUs. It is a fundamental component of the CUDA Toolkit, actively maintained by NVIDIA, and a prerequisite for Python libraries that rely on JIT CUDA compilation.
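
As a hedged sketch of what NVRTC itself does, the snippet below compiles a trivial kernel to PTX by calling `libnvrtc` directly through `ctypes`. The library filenames tried (`libnvrtc.so.12`, `nvrtc64_120_0.dll`, `libnvrtc.dylib`) are assumptions based on common CUDA 12 layouts; no GPU is required, since NVRTC only emits PTX. Higher-level frameworks wrap this same C API for you.

```python
import ctypes

# A minimal CUDA C++ kernel to feed through NVRTC.
KERNEL = b"""
extern "C" __global__ void axpy(float a, float *x, float *y) {
    y[threadIdx.x] = a * x[threadIdx.x];
}
"""

def compile_to_ptx(source: bytes):
    """Compile CUDA C++ source to PTX text, or return None when libnvrtc
    cannot be loaded. Library names below are assumed CUDA 12 conventions."""
    for name in ("libnvrtc.so.12", "nvrtc64_120_0.dll", "libnvrtc.dylib"):
        try:
            lib = ctypes.CDLL(name)
            break
        except OSError:
            continue
    else:
        return None

    prog = ctypes.c_void_p()
    # nvrtcCreateProgram(&prog, src, name, numHeaders, headers, includeNames)
    if lib.nvrtcCreateProgram(ctypes.byref(prog), source, b"axpy.cu",
                              0, None, None) != 0:
        return None
    # Compile with no extra options; a trivial kernel should succeed.
    if lib.nvrtcCompileProgram(prog, 0, None) != 0:
        lib.nvrtcDestroyProgram(ctypes.byref(prog))
        return None
    size = ctypes.c_size_t()
    lib.nvrtcGetPTXSize(prog, ctypes.byref(size))
    buf = ctypes.create_string_buffer(size.value)
    lib.nvrtcGetPTX(prog, buf)
    lib.nvrtcDestroyProgram(ctypes.byref(prog))
    return buf.value.decode()

ptx = compile_to_ptx(KERNEL)
print("PTX generated" if ptx else "libnvrtc not found on this system")
```

On a machine with the wheel (or a CUDA Toolkit) installed, the returned PTX contains the `axpy` entry point; elsewhere the helper degrades gracefully to `None`.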

Warnings

Install
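
A minimal sketch of the expected install step (standard pip usage; a virtual environment is assumed but optional):

```shell
# Install the CUDA 12.x NVRTC runtime libraries from PyPI
pip install nvidia-cuda-nvrtc-cu12
```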

Quickstart

The `nvidia-cuda-nvrtc-cu12` package does not expose a direct Python API for end-user import. Instead, it provides the underlying native NVRTC runtime libraries that are consumed by other CUDA-enabled Python frameworks like PyTorch or CuPy. This quickstart demonstrates verifying CUDA's availability using these frameworks, which implicitly confirms that `nvidia-cuda-nvrtc-cu12` is correctly installed as a runtime component. You might need to install `torch` or `cupy` separately (e.g., `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121` or `pip install cupy-cuda12x`).

# This package primarily provides native runtime libraries (DLLs/SOs)
# for CUDA's NVRTC component, which are consumed by other higher-level
# CUDA-enabled Python libraries (e.g., PyTorch, CuPy) for JIT compilation.
# There is no direct Python API exposed by this specific package for end-user import.
# The following code demonstrates verifying CUDA's availability, which implicitly
# relies on the correctly installed underlying CUDA runtime components like NVRTC.

try:
    import torch
    print(f"PyTorch CUDA available: {torch.cuda.is_available()}")
    if torch.cuda.is_available():
        print(f"PyTorch CUDA version: {torch.version.cuda}")
        print(f"Current CUDA device: {torch.cuda.get_device_name(0)}")
except ImportError:
    print("PyTorch not installed. Cannot verify CUDA availability via PyTorch.")

try:
    import cupy
    print(f"CuPy CUDA available: {cupy.cuda.is_available()}")
    if cupy.cuda.is_available():
        # CuPy exposes the CUDA runtime version via runtimeGetVersion()
        print(f"CuPy CUDA runtime version: {cupy.cuda.runtime.runtimeGetVersion()}")
        name = cupy.cuda.runtime.getDeviceProperties(0)["name"]
        if isinstance(name, bytes):  # some CuPy versions return bytes
            name = name.decode()
        print(f"Current CUDA device: {name}")
except ImportError:
    print("CuPy not installed. Cannot verify CUDA availability via CuPy.")

# A successful installation of nvidia-cuda-nvrtc-cu12 means these libraries
# are available for use by such frameworks.
