{"id":547,"library":"nvidia-cudnn-cu12","title":"NVIDIA cuDNN Runtime Libraries for CUDA 12","description":"The `nvidia-cudnn-cu12` package provides the NVIDIA CUDA Deep Neural Network (cuDNN) runtime libraries, a set of GPU-accelerated primitives for deep neural network operations such as convolution, attention, and matrix multiplication. It is a critical low-level dependency that enables deep learning frameworks like TensorFlow and PyTorch to leverage NVIDIA GPUs efficiently. This package targets CUDA 12.x environments. The current version is 9.20.0.48, with releases frequently updated to align with new CUDA Toolkit versions and cuDNN backend enhancements.","status":"active","version":"9.20.0.48","language":"python","source_language":"en","source_url":"https://developer.nvidia.com/cudnn","tags":["deep learning","GPU","CUDA","cuDNN","NVIDIA","runtime","backend","machine learning"],"install":[{"cmd":"pip install nvidia-cudnn-cu12","lang":"bash","label":"Install latest version"},{"cmd":"pip install nvidia-cudnn-cu12==9.20.0.48","lang":"bash","label":"Install specific version"}],"dependencies":[{"reason":"Required CUDA BLAS library for GPU computations.","package":"nvidia-cublas-cu12","optional":false}],"imports":[{"note":"The `nvidia-cudnn-cu12` package provides the underlying cuDNN runtime libraries, not a direct Python API for computation. 
Python interaction for defining and executing operations is typically achieved through the `nvidia-cudnn-frontend` package (installed separately via `pip install nvidia-cudnn-frontend`), which is imported as `cudnn`.","wrong":"import nvidia_cudnn_cu12","symbol":"cudnn","correct":"import cudnn"}],"quickstart":{"code":"import os\n\n# Set before importing TensorFlow so it doesn't pre-allocate all GPU memory\nos.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'\n\nimport tensorflow as tf\n\n# Check if TensorFlow can detect and use GPUs\ngpus = tf.config.list_physical_devices('GPU')\n\nif gpus:\n    print(f\"TensorFlow detected the following GPUs: {gpus}\")\n    try:\n        # Limit GPU memory growth to avoid allocating all memory at once (alternative to env var)\n        for gpu in gpus:\n            tf.config.experimental.set_memory_growth(gpu, True)\n        print(\"GPU memory growth set to True.\")\n    except RuntimeError as e:\n        # Memory growth must be set before GPUs have been initialized\n        print(f\"Error setting memory growth: {e}\")\n    print(f\"TensorFlow is built with CUDA: {tf.test.is_built_with_cuda()}\")\n    # cuDNN version TensorFlow was compiled against (build_info keys are lowercase)\n    print(f\"TensorFlow's built-in cuDNN version: {tf.sysconfig.get_build_info().get('cudnn_version', 'N/A')}\")\n\n    # A small operation to trigger GPU usage if available\n    try:\n        with tf.device('/GPU:0'):\n            a = tf.constant([[1.0, 2.0], [3.0, 4.0]])\n            b = tf.constant([[1.0, 1.0], [1.0, 1.0]])\n            c = tf.matmul(a, b)\n            print(f\"Simple matrix multiplication on GPU: {c.numpy()}\")\n    except RuntimeError as e:\n        print(f\"Could not run on GPU: {e}. Running on CPU instead.\")\n        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])\n        b = tf.constant([[1.0, 1.0], [1.0, 1.0]])\n        c = tf.matmul(a, b)\n        print(f\"Simple matrix multiplication on CPU: {c.numpy()}\")\nelse:\n    print(\"TensorFlow did not detect any GPUs. 
Please ensure CUDA and cuDNN are correctly installed and configured.\")","lang":"python","description":"This quickstart demonstrates how to verify that a deep learning framework like TensorFlow can detect and utilize your GPU and, implicitly, the underlying cuDNN runtime. Successful execution of this code indicates that `nvidia-cudnn-cu12` is correctly installed and accessible to TensorFlow. Note that the cuDNN version reported by TensorFlow is the version it was *built with*, not necessarily the exact version dynamically loaded, though the two should be compatible."},"warnings":[{"fix":"Ensure you are installing `nvidia-cudnn-cu12` (or the variant matching your CUDA version) via pip, or manage separate cuDNN archives manually if not using Python wheels, and verify compatibility with your CUDA Toolkit version.","message":"Starting with CUDA 12.5, cuDNN is no longer bundled directly within the CUDA Toolkit installer. This change requires users (especially C++ toolchain developers) to manage cuDNN installation and versioning separately, although `pip install nvidia-cudnn-cu12` simplifies this for Python environments.","severity":"breaking","affected_versions":"CUDA Toolkit 12.5 and higher, cuDNN 9.x series"},{"fix":"If you intend to use cuDNN's API directly in Python, install `nvidia-cudnn-frontend` via `pip install nvidia-cudnn-frontend` and import it as `cudnn` in your Python code.","message":"Direct Python API calls for `nvidia-cudnn-cu12` are not available. This package provides the low-level runtime binaries. To programmatically interact with cuDNN functionality in Python (e.g., to build computation graphs), you must install the `nvidia-cudnn-frontend` package separately and import it as `cudnn`.","severity":"gotcha","affected_versions":"All versions of `nvidia-cudnn-cu12`"},{"fix":"Always refer to the official documentation of your deep learning framework for recommended or required CUDA and cuDNN versions. 
When possible, allow the framework's installation (e.g., `pip install tensorflow[and-cuda]`) to manage cuDNN dependencies, or carefully match versions if installing separately.","message":"Version compatibility between `nvidia-cudnn-cu12`, the installed NVIDIA CUDA Toolkit, and your deep learning framework (e.g., TensorFlow, PyTorch) is crucial. Frameworks are often built against specific cuDNN versions, so a standalone `nvidia-cudnn-cu12` install might not match the version your framework expects, leading to runtime errors (e.g., 'DLL load failed' or 'cuDNN initialization error').","severity":"gotcha","affected_versions":"All versions"},{"fix":"Ensure your installed NVIDIA CUDA Toolkit version is 12.x and that your GPU drivers are up-to-date and compatible with CUDA 12.x.","message":"`nvidia-cudnn-cu12` implies compatibility with CUDA Toolkit 12.x. Using it with an older or incompatible CUDA Toolkit version installed on your system can lead to runtime issues or failures in GPU acceleration.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Install the package by explicitly adding NVIDIA's PyPI as an extra index URL: `pip install --extra-index-url https://pypi.nvidia.com nvidia-cudnn-cu12`.","message":"The `nvidia-cudnn-cu12` package is a placeholder that requires downloading the actual wheel from NVIDIA's PyPI index. If `https://pypi.nvidia.com` is not specified as an extra index URL, `pip` may fail to find or download the actual package, leading to a 'Didn't find wheel' error during metadata preparation.","severity":"breaking","affected_versions":"All versions of `nvidia-cudnn-cu12`"}],"env_vars":null,"last_verified":"2026-05-12T14:58:22.305Z","next_check":"2026-06-26T00:00:00.000Z","problems":[{"fix":"Ensure CUDA Toolkit and `nvidia-cudnn-cu12` are installed. 
For manual cuDNN installations (less common when using the wheel): on Windows, add the CUDA and cuDNN `bin` directories (e.g., `C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.x\\bin` and `C:\\Program Files\\NVIDIA\\CUDNN\\v9.x\\bin`) to your system's PATH environment variable; on Linux, add the corresponding `lib64` directories to `LD_LIBRARY_PATH`. If using the `nvidia-cudnn-cu12` wheel, verify that the package installed its components in discoverable locations or that the framework is correctly configured to use pip-installed NVIDIA libraries.","cause":"The system or deep learning framework cannot find the necessary cuDNN shared libraries because they are not installed correctly, their location is not in the system's PATH (Windows) or LD_LIBRARY_PATH (Linux) environment variable, or there is a file permissions issue.","error":"Could not load dynamic library 'cudnn64_9.dll'"},{"fix":"Update your GPU drivers to the latest version compatible with your CUDA Toolkit. Ensure enough GPU memory is available before running operations that use cuDNN, for example by clearing the PyTorch cache (`torch.cuda.empty_cache()`) or reducing batch sizes. Verify that CUDA and cuDNN are correctly installed and compatible with your deep learning framework.","cause":"cuDNN failed to initialize, often due to insufficient GPU memory, an outdated or incompatible GPU driver, or a problem during the initial setup of the cuDNN context within the deep learning framework.","error":"RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED"},{"fix":"Ensure that the installed `nvidia-cudnn-cu12` version is compatible with your deep learning framework's specific CUDA and cuDNN requirements. If using pip, try reinstalling the framework so it picks up the correct cuDNN dependencies, or specify the exact `nvidia-cudnn-cu12` version required by your framework. 
Use virtual environments to isolate different CUDA/cuDNN configurations.","cause":"The version of the cuDNN library loaded at runtime (either from the system or an installed wheel) does not match the version that the deep learning framework (e.g., TensorFlow, JAX, or PyTorch) was compiled against. This can occur when different installation methods are mixed or multiple cuDNN versions are accessible.","error":"Loaded runtime CuDNN library: X but source was compiled with: Y. CuDNN library needs to have matching major version and equal or higher minor version."},{"fix":"Verify that the NVIDIA CUDA repository for your distribution is correctly added and updated. Use `apt-cache search cudnn` (Ubuntu/Debian) or a similar command to find the exact package names available (names vary by cuDNN version, e.g., `libcudnn8` for cuDNN 8 versus `cudnn9-cuda-12` for cuDNN 9). Follow the official NVIDIA cuDNN installation guide for your Linux distribution and CUDA version.","cause":"The Linux package manager (apt/dnf/zypper) cannot find the specified cuDNN package for CUDA 12 in its configured repositories. This can happen if the NVIDIA repositories are not correctly added or enabled, or if the package name is incorrect for the specific cuDNN and CUDA version combination.","error":"E: Unable to locate package cudnn9-cuda-12"},{"fix":"Carefully review the code performing the operation that triggers the error. Check tensor shapes, data types (e.g., float vs. long), and convolution parameters (padding, stride, dilation) to ensure they are valid and consistent with cuDNN's requirements and the neural network architecture. Switching to CPU for debugging can sometimes provide more detailed error messages.","cause":"A deep learning operation using cuDNN was called with invalid input parameters (e.g., incorrect tensor dimensions, incompatible data types, or invalid convolution settings). 
This error often indicates a logical issue in the application code rather than an installation problem.","error":"cuDNN Error: CUDNN_STATUS_BAD_PARAM"}],"ecosystem":"pypi","meta_description":null,"install_score":0,"install_tag":"stale","quickstart_score":0,"quickstart_tag":"stale","pypi_latest":null,"install_checks":{"last_tested":"2026-05-12","tag":"stale","tag_description":"widespread failures or data too old to trust","results":[{"runtime":"python:3.10-alpine","python_version":"3.10","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.10-alpine","python_version":"3.10","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.10-slim","python_version":"3.10","os_libc":"slim (glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.10-slim","python_version":"3.10","os_libc":"slim (glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.11-alpine","python_version":"3.11","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.11-alpine","python_version":"3.11","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.11-slim","python_version":"3.11","os_libc":"slim 
(glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.11-slim","python_version":"3.11","os_libc":"slim (glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.12-alpine","python_version":"3.12","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.12-alpine","python_version":"3.12","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.12-slim","python_version":"3.12","os_libc":"slim (glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.12-slim","python_version":"3.12","os_libc":"slim (glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.13-alpine","python_version":"3.13","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.13-alpine","python_version":"3.13","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.13-slim","python_version":"3.13","os_libc":"slim 
(glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.13-slim","python_version":"3.13","os_libc":"slim (glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.9-alpine","python_version":"3.9","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.9-alpine","python_version":"3.9","os_libc":"alpine (musl)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.9-slim","python_version":"3.9","os_libc":"slim (glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null},{"runtime":"python:3.9-slim","python_version":"3.9","os_libc":"slim (glibc)","variant":"default","exit_code":1,"wheel_type":null,"failure_reason":null,"install_time_s":null,"import_time_s":null,"mem_mb":null,"disk_size":null}]},"quickstart_checks":{"last_tested":"2026-04-23","tag":"stale","tag_description":"widespread failures or data too old to trust","results":[{"runtime":"python:3.10-alpine","exit_code":1},{"runtime":"python:3.10-slim","exit_code":-1},{"runtime":"python:3.11-alpine","exit_code":1},{"runtime":"python:3.11-slim","exit_code":-1},{"runtime":"python:3.12-alpine","exit_code":1},{"runtime":"python:3.12-slim","exit_code":-1},{"runtime":"python:3.13-alpine","exit_code":1},{"runtime":"python:3.13-slim","exit_code":-1},{"runtime":"python:3.9-alpine","exit_code":1},{"runtime":"python:3.9-slim","exit_code":-1}]}}