NVIDIA cuDNN for CUDA 13.x
This package provides the cuDNN runtime libraries for CUDA 13.x, which accelerate deep learning operations on NVIDIA GPUs. It is a low-level library consumed primarily by deep learning frameworks such as TensorFlow and PyTorch rather than imported directly. The current version is 9.20.0.48; new releases typically track cuDNN and CUDA Toolkit updates.
Warnings
- gotcha This package is a runtime dependency and does NOT expose a direct Python API for import or use. It installs shared libraries (`.so`, `.dll`) that deep learning frameworks dynamically link against.
- breaking The `cu13` suffix indicates compatibility with CUDA Toolkit 13.x. Installing this version requires a matching CUDA Toolkit and NVIDIA GPU driver on your system. Mismatched versions can lead to runtime errors or performance issues.
- gotcha This package requires an NVIDIA GPU and a compatible NVIDIA driver. It will not provide any benefit or function correctly on systems without NVIDIA hardware.
- gotcha While `pip install nvidia-cudnn-cu13` installs the cuDNN runtime libraries, it does not install the full CUDA Toolkit or its development headers. Deep learning frameworks usually bring their own CUDA dependencies or assume a system-wide CUDA installation.
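Since the wheel only drops shared libraries into your environment, you can confirm what was actually installed by looking for them on disk. A minimal sketch, assuming the wheel follows the same layout convention as the `cu11`/`cu12` wheels and places its libraries under `site-packages/nvidia/cudnn/lib`:

```python
import importlib.util
import pathlib

def find_cudnn_libs():
    """Return paths of cuDNN shared libraries installed by the wheel, if any.

    Assumption: the nvidia-cudnn-cu13 wheel places its libraries under
    site-packages/nvidia/cudnn/lib, as the cu11/cu12 wheels conventionally do.
    """
    spec = importlib.util.find_spec("nvidia")
    if not spec or not spec.submodule_search_locations:
        return []  # namespace package not installed in this environment
    lib_dir = pathlib.Path(list(spec.submodule_search_locations)[0]) / "cudnn" / "lib"
    if not lib_dir.is_dir():
        return []
    return sorted(str(p) for p in lib_dir.glob("libcudnn*"))

print(find_cudnn_libs())
```

An empty list simply means the wheel (or the `nvidia` namespace package) is not present in the current environment.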
Install
```shell
pip install nvidia-cudnn-cu13
```
Imports
- No direct Python import
This package does not expose a direct Python API for import.
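Because the package exposes only C shared libraries, the closest thing to "importing" it is loading a library and calling the C API. As a sketch, cuDNN's `cudnnGetVersion()` entry point can be reached via `ctypes`, assuming `libcudnn` is discoverable on the loader path (e.g. `LD_LIBRARY_PATH` pointing at the wheel's `nvidia/cudnn/lib` directory):

```python
import ctypes
import ctypes.util

def cudnn_version_via_ctypes():
    """Query cuDNN's version through its C API, or return None if not found.

    Assumption: libcudnn is discoverable by the system loader; with pip
    wheels this usually requires pointing LD_LIBRARY_PATH at the wheel's
    library directory first.
    """
    name = ctypes.util.find_library("cudnn")
    if name is None:
        return None  # library not on the loader path
    lib = ctypes.CDLL(name)
    lib.cudnnGetVersion.restype = ctypes.c_size_t
    return lib.cudnnGetVersion()  # integer such as 90200 for cuDNN 9.2

print(cudnn_version_via_ctypes())
```

In practice you rarely need this; frameworks locate and load the library themselves, as the Quickstart below shows.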
Quickstart
```python
import torch

# This package itself has no direct Python API.
# To verify cuDNN is available and used, check a framework that depends on it.
# Example: PyTorch
if torch.cuda.is_available():
    print(f"CUDA is available. Device: {torch.cuda.get_device_name(0)}")
    if torch.backends.cudnn.is_available():
        print(f"cuDNN is available and version: {torch.backends.cudnn.version()}")
        print(f"cuDNN enabled: {torch.backends.cudnn.enabled}")
        # Optional: Run a simple operation to ensure it uses cuDNN
        x = torch.randn(128, 128, 3, 3).cuda()
        w = torch.randn(256, 128, 3, 3).cuda()
        y = torch.nn.functional.conv2d(x, w)
        print("Successfully performed a CUDA/cuDNN operation with PyTorch.")
    else:
        print("CUDA is available, but cuDNN is NOT detected by PyTorch.")
else:
    print("CUDA is not available. cuDNN requires an NVIDIA GPU.")
```
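The `cu13` suffix warning above can also be checked programmatically: frameworks report the CUDA version they were built against (PyTorch exposes it as the `torch.version.cuda` string), and its major component should match the suffix. A sketch with a hypothetical helper, `cuda_major_matches`, introduced here for illustration:

```python
import re

def cuda_major_matches(torch_cuda_version, expected_major=13):
    """Return True if an 'X.Y' CUDA version string has the expected major.

    Hypothetical helper for the cu13 suffix check; torch.version.cuda
    supplies such a string when PyTorch was built with CUDA support.
    """
    if not torch_cuda_version:
        return False  # CPU-only builds report None
    m = re.match(r"(\d+)\.", torch_cuda_version)
    return bool(m) and int(m.group(1)) == expected_major

print(cuda_major_matches("13.0"))  # True: matches the cu13 suffix
print(cuda_major_matches("12.4"))  # False: would need nvidia-cudnn-cu12
```

A mismatch here is the usual cause of the runtime errors mentioned in the Warnings section.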