NVIDIA cuDNN for CUDA 13.x

9.20.0.48 · active · verified Thu Apr 09

This package provides the cuDNN runtime libraries for CUDA 13.x, essential for accelerating deep learning operations on NVIDIA GPUs. It's a low-level library primarily consumed by deep learning frameworks like TensorFlow and PyTorch. The current version is 9.20.0.48, with new releases typically tied to cuDNN and CUDA toolkit updates.

Install
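The runtime libraries are distributed as a wheel on PyPI; a typical install (using the package name given in the Quickstart below) is:

```shell
# Install the cuDNN runtime wheel for CUDA 13.x from PyPI
pip install nvidia-cudnn-cu13
```

Frameworks built against CUDA 13.x generally pin this package themselves, so a manual install is only needed for custom setups.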

Imports
`nvidia-cudnn-cu13` ships shared libraries rather than an importable Python API, so there is nothing to import directly. Frameworks such as PyTorch and TensorFlow locate and load the cuDNN libraries at runtime.

Quickstart

Since `nvidia-cudnn-cu13` provides low-level runtime libraries rather than a direct Python API, it is used indirectly through deep learning frameworks. This quickstart shows how to check whether cuDNN is detected and used by PyTorch, assuming a compatible NVIDIA GPU and driver are installed.

import torch

# This package itself has no direct Python API.
# To verify cuDNN is available and used, check a framework that depends on it.
# Example: PyTorch

if torch.cuda.is_available():
    print(f"CUDA is available. Device: {torch.cuda.get_device_name(0)}")
    if torch.backends.cudnn.is_available():
        print(f"cuDNN is available and version: {torch.backends.cudnn.version()}")
        print(f"cuDNN enabled: {torch.backends.cudnn.enabled}")
        # Optional: run a convolution, which PyTorch dispatches to cuDNN on GPU
        x = torch.randn(8, 128, 32, 32).cuda()   # NCHW input batch
        w = torch.randn(256, 128, 3, 3).cuda()   # (out_channels, in_channels, kH, kW)
        y = torch.nn.functional.conv2d(x, w)
        print(f"Conv2d output shape: {tuple(y.shape)}")  # (8, 256, 30, 30)
        print("Successfully performed a CUDA/cuDNN operation with PyTorch.")
    else:
        print("CUDA is available, but cuDNN is NOT detected by PyTorch.")
else:
    print("CUDA is not available. cuDNN requires an NVIDIA GPU.")
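If a framework or custom build cannot find the bundled libraries, one common workaround is to add the wheel's library directory to the dynamic loader path. The `site-packages/nvidia/cudnn/lib` layout below is an assumption based on how other `nvidia-*` wheels are packaged; verify it against your environment:

```shell
# Assumed layout: the wheel installs shared libraries under
# site-packages/nvidia/cudnn/lib (check your own site-packages to confirm)
CUDNN_LIB=$(python -c "import os, nvidia.cudnn; print(os.path.join(os.path.dirname(nvidia.cudnn.__file__), 'lib'))")
export LD_LIBRARY_PATH="$CUDNN_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

This is rarely necessary with PyTorch or TensorFlow wheels, which resolve their bundled CUDA dependencies on their own.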
