ONNX Runtime ROCm

Version 1.22.2.post1 (verified Fri May 01)

ONNX Runtime is a cross-platform machine-learning model accelerator. The onnxruntime-rocm package provides the ROCm execution provider (EP) for AMD GPUs. The current version is 1.22.2.post1, derived from upstream onnxruntime 1.22.x; the release cadence follows upstream but may lag.

pip install onnxruntime-rocm
error ModuleNotFoundError: No module named 'onnxruntime'
cause The package is not installed in the active environment. Note that onnxruntime-rocm installs the onnxruntime import name; if the import still fails after installing, you most likely installed into a different interpreter (e.g., the wrong virtualenv) or the environment is broken.
fix
Run pip install onnxruntime-rocm, then verify with python -c "import onnxruntime; print(onnxruntime.__version__)"
error onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : No execution provider 'ROCMExecutionProvider' is registered.
cause You have onnxruntime (CPU) installed, not onnxruntime-rocm. The ROCm provider is not available.
fix
Uninstall onnxruntime (CPU) and install onnxruntime-rocm.
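After reinstalling, you can confirm which build is active before creating a session. A minimal sketch (the helper name is hypothetical) that degrades gracefully when onnxruntime is not importable:

```python
import importlib.util

def rocm_provider_available():
    """Return True only if onnxruntime is importable and exposes the ROCm EP.

    Hypothetical helper: checks for the package first so the probe itself
    never raises ModuleNotFoundError.
    """
    if importlib.util.find_spec("onnxruntime") is None:
        return False
    import onnxruntime
    return "ROCMExecutionProvider" in onnxruntime.get_available_providers()

print(rocm_provider_available())
```

If this prints False with onnxruntime installed, you are still on the CPU or CUDA wheel.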
error ImportError: libamdhip64.so.5: cannot open shared object file: No such file or directory
cause ROCm runtime libraries (including amdhip64.so) are not installed or not in LD_LIBRARY_PATH. The onnxruntime-rocm wheel depends on system ROCm libraries.
fix
Install ROCm: follow instructions at https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html and set LD_LIBRARY_PATH to /opt/rocm/lib.
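To check whether the dynamic loader can actually see the HIP runtime before importing onnxruntime, a rough heuristic (the helper name is hypothetical; /opt/rocm/lib is the default ROCm install path, yours may differ):

```python
import ctypes.util
import os

def rocm_libs_visible(rocm_lib_dir="/opt/rocm/lib"):
    """Heuristic: is libamdhip64 discoverable, or is the ROCm lib dir
    on LD_LIBRARY_PATH? Hypothetical helper; a True result does not
    guarantee a working ROCm install, but False explains the ImportError.
    """
    on_ld_path = rocm_lib_dir in os.environ.get("LD_LIBRARY_PATH", "").split(":")
    return ctypes.util.find_library("amdhip64") is not None or on_ld_path

print(rocm_libs_visible())
```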
error onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: DNN Library load failed: Cannot find DNN library at ...
cause MIOpen (AMD's DNN library) is not found. ROCm installation incomplete.
fix
Install MIOpen via ROCm packages: sudo apt install miopen-hip or ensure full ROCm stack is installed.
breaking Version 1.22.2.post1 is based on ORT 1.22.x; there is no 1.22.2 release upstream, so expect features from newer ORT releases (e.g., CUDA 12.0+ support in 1.25) to be missing.
fix Monitor https://github.com/Looong01/onnxruntime-rocm-build for newer builds.
gotcha Install onnxruntime-rocm, but import onnxruntime. The package registers the ROCm provider upon import of onnxruntime.
fix Use `import onnxruntime` as usual.
gotcha ROCMExecutionProvider is only available when onnxruntime-rocm is installed. The CPU onnxruntime and onnxruntime-gpu wheels install the same onnxruntime import name and can shadow it. Uninstall other variants first.
fix Run `pip uninstall onnxruntime onnxruntime-gpu onnxruntime-rocm` and then `pip install onnxruntime-rocm`.
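Before reinstalling, it can help to see which variants are actually present. A sketch using the standard-library importlib.metadata (the function name is hypothetical):

```python
from importlib import metadata

def installed_ort_variants():
    """Return the onnxruntime distributions present in this environment.

    Hypothetical helper: more than one entry usually indicates a
    conflicting install that should be cleaned up with pip uninstall.
    """
    names = ("onnxruntime", "onnxruntime-gpu", "onnxruntime-rocm")
    found = []
    for name in names:
        try:
            metadata.version(name)
            found.append(name)
        except metadata.PackageNotFoundError:
            pass
    return found

print(installed_ort_variants())
```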
deprecated This build targets ROCm 5.6/6.0, not the latest release. Newer ROCm versions (6.1+) are not supported; check the build notes for compatibility.
fix Use a Docker image with matching ROCm version or build from source.
breaking ONNX Runtime 1.25 (upstream) dropped CUDA 11.x; however onnxruntime-rocm is based on earlier ORT so may retain support but will not receive updates.
fix Stick to 1.22.x or use official AMD builds from ROCm repository.
pip install onnxruntime-rocm==1.22.2.post1

Verify installation and run a basic inference with ROCm EP.

import numpy as np
import onnxruntime

print(onnxruntime.__version__)
print('Available providers:', onnxruntime.get_available_providers())

# Create a session on the ROCm EP; listing CPUExecutionProvider as well
# lets ORT fall back to CPU if the ROCm provider is unavailable.
sess = onnxruntime.InferenceSession(
    'model.onnx',
    providers=['ROCMExecutionProvider', 'CPUExecutionProvider']
)
input_name = sess.get_inputs()[0].name
result = sess.run(None, {input_name: np.random.randn(1, 3, 224, 224).astype(np.float32)})
print(result[0].shape)