{"id":27202,"library":"onnxruntime-rocm","title":"ONNX Runtime ROCm","description":"ONNX Runtime is a cross-platform machine-learning model accelerator. The onnxruntime-rocm package provides the ROCm execution provider for AMD GPUs. The current version is 1.22.2.post1, derived from upstream onnxruntime 1.22.x. The release cadence follows upstream but may lag.","status":"active","version":"1.22.2.post1","language":"python","source_language":"en","source_url":"https://github.com/Looong01/onnxruntime-rocm-build","tags":["onnx","rocm","amd","machine-learning","deep-learning","inference"],"install":[{"cmd":"pip install onnxruntime-rocm","lang":"bash","label":"Install onnxruntime-rocm from PyPI"},{"cmd":"pip install onnxruntime-rocm==1.22.2.post1","lang":"bash","label":"Install specific version"}],"dependencies":[{"reason":"Commonly used together for model export; not required at runtime","package":"torch","optional":true},{"reason":"Needed for model format conversions if not using ORT format","package":"onnx","optional":true}],"imports":[{"note":"InferenceSession is in the onnxruntime module, not onnxruntime_rocm. Install the ROCm variant but import onnxruntime.","wrong":"import onnxruntime_rocm","symbol":"InferenceSession","correct":"import onnxruntime"},{"note":"Providers are registered at import; use onnxruntime to list them.","wrong":"from onnxruntime_rocm import ...","symbol":"get_available_providers","correct":"onnxruntime.get_available_providers()"}],"quickstart":{"code":"import onnxruntime\nprint(onnxruntime.__version__)\nprint('Available providers:', onnxruntime.get_available_providers())\n# Example inference with the ROCm execution provider\nimport numpy as np\nsess = onnxruntime.InferenceSession(\n    'model.onnx',\n    providers=['ROCMExecutionProvider']\n)\ninput_name = sess.get_inputs()[0].name\nresult = sess.run(None, {input_name: np.random.randn(1, 3, 224, 224).astype(np.float32)})\nprint(result[0].shape)","lang":"python","description":"Verify installation and run a basic inference with the ROCm EP."},"warnings":[{"fix":"Monitor https://github.com/Looong01/onnxruntime-rocm-build for newer builds.","message":"Version 1.22.2.post1 is based on ORT 1.22.x; there is no 1.22.2 release upstream. Expect missing features from newer ORT releases (e.g., CUDA 12.0+ in 1.25).","severity":"breaking","affected_versions":">=1.22.2"},{"fix":"Use `import onnxruntime` as usual.","message":"Install onnxruntime-rocm, but import onnxruntime. The package registers the ROCm provider when onnxruntime is imported.","severity":"gotcha","affected_versions":"all"},{"fix":"Run `pip uninstall onnxruntime onnxruntime-gpu onnxruntime-rocm` and then `pip install onnxruntime-rocm`.","message":"ROCMExecutionProvider is only available when onnxruntime-rocm is installed. If onnxruntime (CPU) or onnxruntime-gpu is also present, the variants may conflict; uninstall the others first.","severity":"gotcha","affected_versions":"all"},{"fix":"Use a Docker image with a matching ROCm version or build from source.","message":"This build targets ROCm 5.6/6.0, not the latest release. Newer ROCm versions (6.1+) are not supported; check the build notes for compatibility.","severity":"deprecated","affected_versions":"1.22.2.post1"},{"fix":"Stick to 1.22.x or use official AMD builds from the ROCm repository.","message":"ONNX Runtime 1.25 (upstream) dropped CUDA 11.x support; onnxruntime-rocm is based on an earlier ORT and may retain it, but it will not receive upstream updates.","severity":"breaking","affected_versions":"upstream >=1.25"}],"env_vars":null,"last_verified":"2026-05-01T00:00:00.000Z","next_check":"2026-07-30T00:00:00.000Z","problems":[{"fix":"Run `pip install onnxruntime-rocm` then `python -c \"import onnxruntime; print(onnxruntime.__version__)\"`","cause":"The package is not installed, or the environment is broken. The onnxruntime-rocm wheel provides the top-level onnxruntime package, so a failing import means the wheel is missing or the environment is misconfigured.","error":"ModuleNotFoundError: No module named 'onnxruntime'"},{"fix":"Uninstall onnxruntime (CPU) and install onnxruntime-rocm.","cause":"You have onnxruntime (CPU) installed, not onnxruntime-rocm, so the ROCm provider is not available.","error":"onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : No execution provider 'ROCMExecutionProvider' is registered."},{"fix":"Install ROCm: follow the instructions at https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html and add /opt/rocm/lib to LD_LIBRARY_PATH.","cause":"The ROCm runtime libraries (including libamdhip64.so) are not installed or not on LD_LIBRARY_PATH. The onnxruntime-rocm wheel depends on system ROCm libraries.","error":"ImportError: libamdhip64.so.5: cannot open shared object file: No such file or directory"},{"fix":"Install MIOpen via ROCm packages: `sudo apt install miopen-hip`, or ensure the full ROCm stack is installed.","cause":"MIOpen (AMD's DNN library) is not found; the ROCm installation is incomplete.","error":"onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: DNN Library load failed: Cannot find DNN library at ..."}],"ecosystem":"pypi","meta_description":null,"install_score":null,"install_tag":null,"quickstart_score":null,"quickstart_tag":null}