{"id":1589,"library":"nvidia-cudnn-cu13","title":"NVIDIA cuDNN for CUDA 13.x","description":"This package provides the cuDNN runtime libraries for CUDA 13.x, essential for accelerating deep learning operations on NVIDIA GPUs. It's a low-level library primarily consumed by deep learning frameworks like TensorFlow and PyTorch. The current version is 9.20.0.48, with new releases typically tied to cuDNN and CUDA toolkit updates.","status":"active","version":"9.20.0.48","language":"en","source_language":"en","source_url":"https://github.com/NVIDIA/python-packaging","tags":["deep learning","gpu","cuda","cudnn","nvidia","runtime library"],"install":[{"cmd":"pip install nvidia-cudnn-cu13","lang":"bash","label":"Install cuDNN runtime"}],"dependencies":[],"imports":[{"note":"nvidia-cudnn-cu13 installs shared libraries (e.g., .so, .dll) into your environment. Deep learning frameworks (like PyTorch, TensorFlow) then discover and link against these shared libraries at runtime via their C++ backends, rather than through Python's import mechanism. Attempting 'import cudnn' or similar will fail.","symbol":"No direct Python import","correct":"This package does not expose a direct Python API for import."}],"quickstart":{"code":"import torch\n\n# This package itself has no direct Python API.\n# To verify cuDNN is available and used, check a framework that depends on it.\n# Example: PyTorch\n\nif torch.cuda.is_available():\n    print(f\"CUDA is available. 
Device: {torch.cuda.get_device_name(0)}\")\n    if torch.backends.cudnn.is_available():\n        print(f\"cuDNN is available and version: {torch.backends.cudnn.version()}\")\n        print(f\"cuDNN enabled: {torch.backends.cudnn.enabled}\")\n        # Optional: Run a simple operation to ensure it uses cuDNN\n        x = torch.randn(128, 128, 3, 3).cuda()\n        w = torch.randn(256, 128, 3, 3).cuda()\n        y = torch.nn.functional.conv2d(x, w)\n        print(\"Successfully performed a CUDA/cuDNN operation with PyTorch.\")\n    else:\n        print(\"CUDA is available, but cuDNN is NOT detected by PyTorch.\")\nelse:\n    print(\"CUDA is not available. cuDNN requires an NVIDIA GPU.\")\n","lang":"python","description":"Since `nvidia-cudnn-cu13` provides low-level runtime libraries and not a direct Python API, its usage is implicitly managed by deep learning frameworks. This quickstart demonstrates how to check if cuDNN is detected and utilized by PyTorch, assuming a compatible NVIDIA GPU and driver are installed."},"warnings":[{"fix":"Do not attempt `import nvidia-cudnn-cu13` or similar. Verify its presence via framework-specific checks (e.g., `torch.backends.cudnn.is_available()` for PyTorch).","message":"This package is a runtime dependency and does NOT expose a direct Python API for import or use. It installs shared libraries (`.so`, `.dll`) that deep learning frameworks dynamically link against.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Ensure your installed NVIDIA GPU driver and CUDA Toolkit version (if manually installed) are compatible with CUDA 13.x. For optimal compatibility, use the `pip install nvidia-cudnn-cu13` along with other `nvidia-*cu13` packages (e.g., `nvidia-cuda-runtime-cu13`, `nvidia-cublas-cu13`).","message":"The `cu13` suffix indicates compatibility with CUDA Toolkit 13.x. Installing this version requires a matching CUDA Toolkit and NVIDIA GPU driver on your system. 
Mismatched versions can lead to runtime errors or performance issues.","severity":"breaking","affected_versions":"All versions with `cuXY` suffix"},{"fix":"Ensure you have an NVIDIA GPU and the latest compatible drivers installed for your operating system. For Linux, ensure `libnvidia-compute`, `libnvidia-decode`, etc., are correctly installed.","message":"This package requires an NVIDIA GPU and a compatible NVIDIA driver. It will not provide any benefit or function correctly on systems without NVIDIA hardware.","severity":"gotcha","affected_versions":"All versions"},{"fix":"For development or compiling custom CUDA extensions, you might still need to install the full CUDA Toolkit from NVIDIA. For most users, `pip install`ing the `nvidia-*cuXX` packages (runtime, cublas, cudnn) is sufficient for frameworks.","message":"While `pip install nvidia-cudnn-cu13` installs the cuDNN runtime libraries, it does not install the full CUDA Toolkit or its development headers. Deep learning frameworks usually bring their own CUDA dependencies or assume a system-wide CUDA installation.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-09T00:00:00.000Z","next_check":"2026-07-08T00:00:00.000Z"}
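The record's warnings note that the wheel only drops shared libraries into site-packages and exposes no importable API. A minimal, framework-free sketch of how one might confirm the wheel is installed and locate those libraries is shown below; it uses only the standard library, and the `nvidia/cudnn/lib` directory layout is an assumption based on how NVIDIA's other `nvidia-*-cuXX` wheels are packaged, not a documented guarantee.

```python
# Hypothetical check: confirm the nvidia-cudnn-cu13 wheel is installed and
# glob for the cuDNN shared libraries it drops into site-packages, without
# importing any deep learning framework. The "nvidia/cudnn/lib" path is an
# assumption based on the layout of other nvidia-*-cuXX wheels.
import glob
import os
import site
from importlib import metadata


def find_cudnn_libraries():
    """Return (version, [library paths]) for the installed wheel, or (None, [])."""
    try:
        version = metadata.version("nvidia-cudnn-cu13")
    except metadata.PackageNotFoundError:
        # Wheel not installed in this environment.
        return None, []
    libs = []
    for root in site.getsitepackages():
        # Assumed layout: shared objects under <site-packages>/nvidia/cudnn/lib
        pattern = os.path.join(root, "nvidia", "cudnn", "lib", "libcudnn*")
        libs.extend(glob.glob(pattern))
    return version, libs


version, libs = find_cudnn_libraries()
if version is None:
    print("nvidia-cudnn-cu13 is not installed in this environment.")
else:
    print(f"nvidia-cudnn-cu13 {version}: found {len(libs)} shared libraries.")
```

This complements the framework-level check in the quickstart: the quickstart verifies that PyTorch can actually use cuDNN at runtime, while this sketch only verifies that the wheel's files are present on disk.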