TensorRT cu13
Version 10.16.1.11 (verified Mon Apr 27)
NVIDIA TensorRT is a high-performance deep learning inference library. The cu13 variant targets CUDA 13.x. Current version: 10.16.1.11; the release cadence is approximately monthly.
pip install tensorrt-cu13

Common errors
error ModuleNotFoundError: No module named 'tensorrt'
cause The package is not installed, or the distribution name was confused with the module name: the CUDA 13 build is published as tensorrt-cu13 but still imports as 'tensorrt'.
fix Run 'pip install tensorrt-cu13'.
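To tell "not installed" apart from other import problems, a quick check can be sketched as follows; the helper name check_tensorrt_installed is my own, not part of TensorRT:

```python
import importlib.util

def check_tensorrt_installed():
    """Return True if the 'tensorrt' module is importable.

    Note the asymmetry: the pip distribution is named tensorrt-cu13,
    but the importable module is always plain 'tensorrt'.
    """
    if importlib.util.find_spec("tensorrt") is None:
        print("tensorrt not found; run: pip install tensorrt-cu13")
        return False
    return True
```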
error ImportError: libcudart.so.xx: cannot open shared object file: No such file or directory
cause The CUDA runtime library cannot be found, or its version does not match the installed TensorRT variant.
fix Install the matching CUDA toolkit. For tensorrt-cu13, ensure CUDA 13.x is installed and that LD_LIBRARY_PATH includes its library directory.
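One way to diagnose this before importing TensorRT is to ask the dynamic loader for the CUDA runtime; a minimal sketch, assuming Linux and the standard 'cudart' library name:

```python
import ctypes.util

def find_cuda_runtime():
    """Ask the dynamic loader to resolve the CUDA runtime library.

    Returns the resolved library name as a string, or None, which usually
    means CUDA is not installed or LD_LIBRARY_PATH does not include the
    CUDA lib directory.
    """
    return ctypes.util.find_library("cudart")
```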
Warnings
breaking Starting with TensorRT 10.13, CUDA 11.x support was dropped. This cu13 variant requires CUDA 13.x; mixing it with an older CUDA will fail at import or at runtime.
fix Ensure your environment has CUDA 13.2+ for TensorRT 10.16, or the matching CUDA version for your TensorRT release.
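The minimum-version requirement can be checked mechanically; a small sketch, where cuda_meets_minimum is a hypothetical helper and the version string would come from your own environment (e.g. nvcc output):

```python
def cuda_meets_minimum(version, minimum=(13, 2)):
    """Compare a 'major.minor' CUDA version string against a required minimum."""
    # Tuple comparison handles e.g. (13, 10) >= (13, 2) correctly,
    # where naive string comparison would not.
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= minimum

# cuda_meets_minimum("13.2") -> True
# cuda_meets_minimum("12.6") -> False
```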
breaking In TensorRT 10.14, pycuda was dropped in favor of cuda-python. Code that relies on pycuda will break.
fix Replace pycuda imports with their cuda-python equivalents, e.g. 'import pycuda.driver as cuda' becomes 'from cuda import cuda' (driver API) or 'from cuda import cudart' (runtime API).
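As an illustration of the migration, here is a hedged sketch of a device allocation using cuda-python's runtime bindings in place of pycuda; gpu_malloc is my own name, and the function degrades to None when cuda-python is not installed:

```python
def gpu_malloc(nbytes):
    """Allocate device memory via cuda-python instead of pycuda.

    cuda-python mirrors the CUDA C API: calls return an (error, result)
    tuple rather than raising exceptions the way pycuda did.
    Returns None if cuda-python is not installed (hedged fallback).
    """
    try:
        from cuda import cudart  # replaces 'import pycuda.driver as cuda'
    except ImportError:
        return None
    err, dev_ptr = cudart.cudaMalloc(nbytes)
    if err != cudart.cudaError_t.cudaSuccess:
        raise RuntimeError(f"cudaMalloc failed: {err}")
    return dev_ptr
```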
deprecated Plugin version 1 of several plugins (e.g., cropAndResizeDynamic, modulatedDeformConv) is deprecated starting in 10.12 and will be removed in a future version.
fix Migrate to the IPluginV3-based version 2 plugins.
gotcha The package tensorrt-cu13 is CUDA-specific. Installing it on a system without a compatible CUDA runtime can cause import errors such as 'Could not find cudart64_13.dll'.
fix Install the TensorRT variant that matches your CUDA version (e.g., tensorrt-cu12 for CUDA 12.x, tensorrt-cu11 for CUDA 11.x).
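Picking the wheel reduces to a lookup on the CUDA major version; a sketch where tensorrt_package_for is a hypothetical helper (the package names themselves are the real PyPI ones):

```python
def tensorrt_package_for(cuda_major):
    """Map an installed CUDA major version to the matching TensorRT wheel name."""
    packages = {11: "tensorrt-cu11", 12: "tensorrt-cu12", 13: "tensorrt-cu13"}
    if cuda_major not in packages:
        raise ValueError(f"no known TensorRT wheel for CUDA {cuda_major}.x")
    return packages[cuda_major]
```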
Install
pip install tensorrt-cu13==10.16.1.11

Imports
- tensorrt: import tensorrt as trt
- Builder: import tensorrt; tensorrt.Builder
Quickstart
import tensorrt as trt

def get_engine():
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # TensorRT 10 networks are always explicit-batch; the EXPLICIT_BATCH
    # creation flag is deprecated, so pass 0 (no flags) here.
    network = builder.create_network(0)
    print(f"TensorRT version: {trt.__version__}")
    return builder, network

if __name__ == '__main__':
    get_engine()
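Going a step beyond the quickstart, the usual TensorRT 10 build flow (parse an ONNX model, configure the builder, serialize the engine) can be sketched as below; build_serialized_engine and the guarded import are my own additions, and the ONNX path is an assumed input file:

```python
def build_serialized_engine(onnx_path):
    """Sketch of the TensorRT 10 build flow: parse ONNX, configure, build.

    Returns the serialized engine (an IHostMemory), or None when the
    tensorrt module is unavailable in this environment (hedged fallback).
    """
    try:
        import tensorrt as trt
    except ImportError:
        return None
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(0)  # explicit batch is the default in TRT 10
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parse failed")
    config = builder.create_builder_config()
    # Cap the builder's scratch memory at 1 GiB (tunable per workload).
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
    return builder.build_serialized_network(network, config)
```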