NVIDIA cuDNN for CUDA 13.x
This package provides the cuDNN runtime libraries for CUDA 13.x, essential for accelerating deep learning operations on NVIDIA GPUs. It's a low-level library primarily consumed by deep learning frameworks like TensorFlow and PyTorch. The current version is 9.20.0.48, with new releases typically tied to cuDNN and CUDA toolkit updates.
Common errors
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
cause: The cuDNN library failed to initialize, often due to an incorrect installation, incompatible CUDA/cuDNN versions, or GPU driver or memory issues.
fix: Verify that your NVIDIA GPU driver, CUDA Toolkit, and `nvidia-cudnn-cu13` package are compatible and correctly installed, and that `LD_LIBRARY_PATH` (Linux) or `PATH` (Windows) points to the cuDNN and CUDA libraries. If using PyTorch, ensure `torch.backends.cudnn.enabled = True` and try setting `torch.backends.cudnn.benchmark = False` while debugging.
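The PyTorch flags mentioned above can be toggled in one place while debugging. This is a minimal sketch, assuming PyTorch is installed (it degrades gracefully if not); `cudnn_debug_settings` is a hypothetical helper name, not part of this package:

```python
# Sketch: set PyTorch's cuDNN backend flags to conservative values while
# debugging CUDNN_STATUS_NOT_INITIALIZED. Assumes PyTorch is installed;
# if it is not, the helper just reports that instead of failing.
def cudnn_debug_settings():
    try:
        import torch
    except ImportError:
        return "PyTorch not installed"
    torch.backends.cudnn.enabled = True
    # Disabling benchmark mode skips cuDNN's algorithm autotuning,
    # which removes one source of startup-time failures.
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True  # optional: reproducible kernels
    return f"cuDNN available: {torch.backends.cudnn.is_available()}"

print(cudnn_debug_settings())
```

If the error persists with these settings, the problem is more likely an installation or version mismatch than an autotuning issue.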
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
cause: An internal error occurred inside the cuDNN library during an operation, frequently caused by out-of-memory conditions on the GPU, corrupted data, or subtle incompatibilities between the deep learning framework, CUDA, and cuDNN versions.
fix: Reduce the batch size, check for memory leaks in your code, ensure your inputs are valid, and verify compatibility between your framework (e.g., PyTorch, TensorFlow), the CUDA Toolkit, and `nvidia-cudnn-cu13`. Consider updating GPU drivers. Switching to CPU for debugging often produces more explicit error messages.
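The CPU-fallback debugging step can be wrapped in a small helper. This is a sketch, assuming PyTorch; `run_with_cpu_fallback` and its arguments are illustrative names, not an API of this package:

```python
# Sketch: retry a failing GPU operation on CPU, where kernels validate
# shapes and values explicitly and raise readable Python exceptions
# instead of opaque cuDNN status codes. Assumes PyTorch is installed.
def run_with_cpu_fallback(op, *tensors):
    try:
        import torch  # noqa: F401
    except ImportError:
        return "PyTorch not installed"
    try:
        return op(*[t.cuda() for t in tensors])
    except RuntimeError as gpu_err:
        print(f"GPU run failed ({gpu_err}); retrying on CPU for diagnosis")
        return op(*[t.cpu() for t in tensors])
```

Usage would look like `run_with_cpu_fallback(torch.nn.functional.conv2d, x, w)`; the CPU path either succeeds (pointing at a GPU/memory problem) or raises a more descriptive error (pointing at bad inputs).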
Could not load library libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
cause: The system cannot find the required cuDNN shared library, typically due to incorrect installation paths, a missing `LD_LIBRARY_PATH` (Linux) or `PATH` (Windows) entry, or the library file not existing where expected.
fix: Ensure the cuDNN library files (e.g., `libcudnn.so` and related files on Linux, or `cudnn*.dll` on Windows) are placed in your CUDA installation directories (e.g., `/usr/local/cuda/lib64` or `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vX.Y\bin`) and that `LD_LIBRARY_PATH` or `PATH` includes those directories.
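When the libraries came from pip rather than a system install, they typically land under `site-packages/nvidia/cudnn/lib` (an assumption based on how NVIDIA's CUDA wheels are laid out). A stdlib-only sketch to locate them and print a suitable `LD_LIBRARY_PATH` export:

```python
# Sketch: search sys.path for cuDNN shared libraries unpacked by the
# nvidia-cudnn-cu13 wheel (assumed layout: nvidia/cudnn/lib/libcudnn*).
import glob
import os
import sys

def find_pip_cudnn_libs():
    hits = []
    for root in sys.path:
        pattern = os.path.join(root, "nvidia", "cudnn", "lib", "libcudnn*")
        hits.extend(glob.glob(pattern))
    return hits

libs = find_pip_cudnn_libs()
if libs:
    lib_dir = os.path.dirname(libs[0])
    # Append this directory so the dynamic loader can resolve libcudnn.
    print(f"export LD_LIBRARY_PATH={lib_dir}:$LD_LIBRARY_PATH")
else:
    print("No pip-installed cuDNN libraries found on sys.path")
```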
The detected CUDA version (X.X) mismatches the version that was used to compile PyTorch (Y.Y). Please make sure to use the same CUDA versions.
cause: The PyTorch (or other framework) binary was compiled against a different CUDA version than the CUDA runtime and `nvidia-cudnn-cu13` package installed on your system.
fix: Install a framework build that explicitly matches your installed CUDA Toolkit version (e.g., a CUDA 13.x build for `nvidia-cudnn-cu13`), or align your CUDA Toolkit installation with the framework's compiled version. Environment managers such as Conda or virtual environments help keep compatible packages together.
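Both versions involved in this mismatch can be read programmatically. A hedged sketch (assumes `nvcc` may or may not be on `PATH`, and PyTorch may or may not be installed; `report_cuda_versions` is an illustrative helper):

```python
# Sketch: report the CUDA version PyTorch was built against alongside
# the toolkit version nvcc reports, so a mismatch is easy to spot.
import re
import shutil
import subprocess

def report_cuda_versions():
    versions = {}
    try:
        import torch
        versions["torch_built_with"] = torch.version.cuda
    except ImportError:
        versions["torch_built_with"] = None
    nvcc = shutil.which("nvcc")
    if nvcc:
        out = subprocess.run([nvcc, "--version"],
                             capture_output=True, text=True).stdout
        m = re.search(r"release (\d+\.\d+)", out)
        versions["nvcc_toolkit"] = m.group(1) if m else None
    else:
        versions["nvcc_toolkit"] = None
    return versions

print(report_cuda_versions())
```

If the two values disagree on the major version, reinstall the framework build that matches the toolkit (or vice versa).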
Loaded runtime CuDNN library: X.Y.Z but source was compiled with: A.B.C. CuDNN library needs to have matching major version and equal or higher minor version.
cause: The cuDNN library loaded at runtime (provided by `nvidia-cudnn-cu13`) differs in version from the one the deep learning framework (e.g., TensorFlow) was compiled against.
fix: Install the `nvidia-cudnn-cu13` version that matches what your framework requires: same major version, equal or higher minor version. Consult the framework's official documentation for its tested build configurations (CUDA and cuDNN compatibility matrices).
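To see which cuDNN build the dynamic loader actually resolves (as opposed to what pip installed somewhere), you can call `cudnnGetVersion` through ctypes. A sketch assuming the Linux cuDNN 9.x soname `libcudnn.so.9`; it returns `None` when no cuDNN library can be loaded:

```python
# Sketch: query the version of whichever cuDNN library the dynamic
# loader resolves. cudnnGetVersion() returns the version as an integer
# (for cuDNN 9.x, roughly major*10000 + minor*100 + patch).
import ctypes

def loaded_cudnn_version():
    for name in ("libcudnn.so.9", "libcudnn.so"):
        try:
            lib = ctypes.CDLL(name)
        except OSError:
            continue  # this soname is not resolvable; try the next
        lib.cudnnGetVersion.restype = ctypes.c_size_t
        return int(lib.cudnnGetVersion())
    return None

print(loaded_cudnn_version())
```

Comparing this value against the version your framework reports it was compiled with pinpoints which side of the mismatch to fix.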
Warnings
- gotcha: This package is a runtime dependency and does NOT expose a direct Python API for import or use. It installs shared libraries (`.so`, `.dll`) that deep learning frameworks dynamically link against.
- breaking: The `cu13` suffix indicates compatibility with CUDA Toolkit 13.x. Installing this version requires a matching CUDA Toolkit and NVIDIA GPU driver on your system. Mismatched versions can lead to runtime errors or performance issues.
- gotcha: This package requires an NVIDIA GPU and a compatible NVIDIA driver. It provides no benefit and will not function correctly on systems without NVIDIA hardware.
- gotcha: While `pip install nvidia-cudnn-cu13` installs the cuDNN runtime libraries, it does not install the full CUDA Toolkit or its development headers. Deep learning frameworks usually bring their own CUDA dependencies or assume a system-wide CUDA installation.
Install
- pip install nvidia-cudnn-cu13
Imports
- No direct Python import
This package does not expose a direct Python API for import.
Quickstart
import torch

# This package itself has no direct Python API.
# To verify cuDNN is available and used, check a framework that depends on it.
# Example: PyTorch
if torch.cuda.is_available():
    print(f"CUDA is available. Device: {torch.cuda.get_device_name(0)}")
    if torch.backends.cudnn.is_available():
        print(f"cuDNN is available, version: {torch.backends.cudnn.version()}")
        print(f"cuDNN enabled: {torch.backends.cudnn.enabled}")
        # Optional: run a simple operation to ensure it uses cuDNN
        x = torch.randn(128, 128, 3, 3).cuda()
        w = torch.randn(256, 128, 3, 3).cuda()
        y = torch.nn.functional.conv2d(x, w)
        print("Successfully performed a CUDA/cuDNN operation with PyTorch.")
    else:
        print("CUDA is available, but cuDNN is NOT detected by PyTorch.")
else:
    print("CUDA is not available. cuDNN requires an NVIDIA GPU.")