TensorRT Libraries for CUDA 13


NVIDIA TensorRT is a high-performance deep learning inference optimizer and runtime. This package provides the TensorRT libraries for CUDA 13.x. Version 10.16.1.11 is the latest, with major releases every few months. Note that this is the libraries-only package (not the full Python bindings).

pip install tensorrt-cu13-libs
error: ImportError: libnvinfer.so.10: cannot open shared object file: No such file or directory
cause: The shared library is not installed, or its directory is not in LD_LIBRARY_PATH.
fix: Install tensorrt-cu13-libs and ensure LD_LIBRARY_PATH includes the package's site-packages directory (e.g., ~/.local/lib/python3.10/site-packages/tensorrt/).
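When the loader cannot find libnvinfer.so.10, a short script can search site-packages for it and print the export line to run. This is a minimal sketch: the on-disk directory name (`tensorrt` vs `tensorrt_libs`) varies between releases, so the glob pattern here is an assumption.

```python
import glob
import os
import site

def find_libnvinfer(pattern="libnvinfer.so.10*"):
    """Return the directory containing libnvinfer, or None if absent."""
    # getsitepackages is missing in some older virtualenvs, hence getattr.
    dirs = list(getattr(site, "getsitepackages", lambda: [])())
    dirs.append(site.getusersitepackages())
    for sp in dirs:
        # Directory name is an assumption: "tensorrt" or "tensorrt_libs".
        hits = glob.glob(os.path.join(sp, "tensorrt*", pattern))
        if hits:
            return os.path.dirname(hits[0])
    return None

lib_dir = find_libnvinfer()
if lib_dir:
    print(f'export LD_LIBRARY_PATH="{lib_dir}:$LD_LIBRARY_PATH"')
else:
    print("libnvinfer.so.10 not found; install tensorrt-cu13-libs first")
```

Run it once after installing and paste the printed export line into your shell profile.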
error: AttributeError: module 'tensorrt' has no attribute 'init_libnvinfer'
cause: tensorrt-cu13-libs provides only the native libraries; the Python module is a separate package.
fix: Install tensorrt-cu13-bindings or the full tensorrt package to get the Python API.
breaking: CUDA 13 only. tensorrt-cu13-libs is built for CUDA 13.x; installing it on a system with an older CUDA version causes runtime errors.
fix: Use tensorrt-cu12-libs (or an older variant) on CUDA 12.x systems.
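To pick the right package for a machine, the driver-reported CUDA major version can be mapped to a package name. A sketch, assuming `nvidia-smi` is on PATH and prints a "CUDA Version: N.M" header; `detect_cuda_major` and `trt_libs_package` are hypothetical helpers, not part of any NVIDIA tooling.

```python
import re
import shutil
import subprocess

def trt_libs_package(cuda_major):
    """Map a CUDA major version to the matching TensorRT libs package."""
    return f"tensorrt-cu{cuda_major}-libs"

def detect_cuda_major():
    """Parse the CUDA major version from nvidia-smi, or None if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
    m = re.search(r"CUDA Version:\s*(\d+)", out)
    return int(m.group(1)) if m else None

major = detect_cuda_major()
print(trt_libs_package(major) if major else "No NVIDIA driver detected")
```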
gotcha: This package contains only the native libraries (libnvinfer.so, etc.). You also need tensorrt-cu13-bindings (or the full tensorrt package) to use the Python API.
fix: If you need the Python API, install tensorrt-cu13-bindings or tensorrt.
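A quick way to check whether the Python API is actually importable (as opposed to only the shared libraries being installed) is to look the module up without importing it:

```python
import importlib.util

# tensorrt-cu13-libs ships only shared objects; the importable
# "tensorrt" module comes from the bindings or the full package.
def has_tensorrt_bindings():
    return importlib.util.find_spec("tensorrt") is not None

print("tensorrt Python API available:", has_tensorrt_bindings())
```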
deprecated: TensorRT 10.16 drops support for Python < 3.10 and Ubuntu 20.04.
fix: Upgrade to Python 3.10+ and use Ubuntu 22.04 or newer.
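A guard for the interpreter requirement can go at the top of setup scripts. `python_ok` is a hypothetical helper; the (3, 10) floor comes from the deprecation note above.

```python
import sys

MIN_PY = (3, 10)  # TensorRT 10.16 drops Python < 3.10

def python_ok(version_info=sys.version_info):
    """True when the interpreter meets the TensorRT 10.16 floor."""
    return tuple(version_info[:2]) >= MIN_PY

print("Python OK for TensorRT 10.16:", python_ok())
```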

Initialize and print TensorRT library version.

import tensorrt as trt

# Importing tensorrt loads libnvinfer; creating a Logger initializes
# the runtime. (init_libnvinfer/getVersionString are not part of the
# Python API; trt.__version__ reports the library version.)
logger = trt.Logger(trt.Logger.WARNING)
print(trt.__version__)