{"library":"onnx","title":"ONNX (Open Neural Network Exchange)","description":"ONNX (Open Neural Network Exchange) is an open standard format designed to represent machine learning models, facilitating interoperability between different deep learning frameworks. The library is actively maintained with a regular release cadence, typically seeing major versions every few months interspersed with patch releases. It currently requires Python 3.10 or newer.","status":"active","version":"1.21.0","language":"en","source_language":"en","source_url":"https://github.com/onnx/onnx","tags":["machine learning","neural networks","model exchange","deep learning","inference","AI"],"install":[{"cmd":"pip install onnx","lang":"bash","label":"Install latest stable version"}],"dependencies":[{"reason":"Minimum required Python version for recent ONNX releases.","package":"python","optional":false},{"reason":"Introduced around v1.19.1 to support additional machine learning data types (e.g., bfloat16, int4) in NumPy arrays for the reference evaluator and helper functions, improving interoperability.","package":"ml_dtypes","optional":true}],"imports":[{"symbol":"onnx","correct":"import onnx"},{"symbol":"onnx.helper","correct":"from onnx import helper"},{"symbol":"onnx.checker","correct":"from onnx import checker"},{"note":"TensorProto is directly available from the top-level 'onnx' package in recent versions; old imports might point to internal proto structures which can change.","wrong":"from onnx.onnx_pb import TensorProto","symbol":"TensorProto","correct":"from onnx import TensorProto"}],"quickstart":{"code":"import onnx\nfrom onnx import helper, checker, TensorProto\nimport numpy as np\nimport os\n\n# Create a simple ONNX model: Y = X + A\n# Define model inputs and outputs\nX = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])\nA = helper.make_tensor_value_info('A', TensorProto.FLOAT, [2])\nY = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])\n\n# Create 
a node for the Add operation\nnode_def = helper.make_node(\n    'Add',\n    inputs=['X', 'A'],\n    outputs=['Y'],\n)\n\n# Create the graph\ngraph_def = helper.make_graph(\n    [node_def],\n    'simple-add-model',\n    [X, A],\n    [Y],\n)\n\n# Create the model with specified opset_imports (e.g., opset 13)\n# Opset 13 is commonly used and widely supported.\nmodel_def = helper.make_model(\n    graph_def,\n    producer_name='onnx-example',\n    opset_imports=[helper.make_opsetid('', 13)]\n)\n\n# Check the model for validity\nchecker.check_model(model_def)\nprint('Model is valid!')\n\n# Save the model to a file\nmodel_path = 'simple_add_model.onnx'\nonnx.save(model_def, model_path)\nprint(f'Model saved to {model_path}')\n\n# Optional: Load the model back and print its structure\nloaded_model = onnx.load(model_path)\nprint('\\nLoaded model:\\n', loaded_model.graph.node)\n\n# Clean up the created file\n# os.remove(model_path)\n","lang":"python","description":"This quickstart demonstrates how to programmatically create a simple ONNX model (Y = X + A), validate it using `onnx.checker`, and then save it to a `.onnx` file using the ONNX Python API. It also shows how to load the model back for inspection."},"warnings":[{"fix":"Remove any code referencing the defunct model hub integration. Consult ONNX documentation for alternative model management or sharing methods.","message":"The model hub integration feature was removed in ONNX v1.21.0. If your workflow relied on this integration, you will need to update your code to remove references to it.","severity":"breaking","affected_versions":">=1.21.0"},{"fix":"Always verify that the ONNX opset version used for exporting or creating the model aligns with the supported versions of your target runtime or framework. 
Use `helper.make_opsetid('', <version_number>)` to explicitly set the opset.","message":"Incompatible ONNX Opset Versions: Models created with a specific ONNX opset version (e.g., opset 13) may not be compatible with older runtimes or frameworks that do not support that opset. This can lead to conversion failures or unexpected behavior during inference.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Reimplement unsupported operations using ONNX-compatible alternatives or define custom ONNX operators if the functionality is critical and cannot be approximated. Consult the ONNX operator schema for available operations.","message":"Unsupported Operations or Custom Layers: When converting models from deep learning frameworks (e.g., PyTorch, TensorFlow) to ONNX, certain custom layers or framework-specific operations may not have direct equivalents in the ONNX operator set. This will cause conversion to fail or produce incorrect models.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Explicitly define input shapes, and consider using fixed dimensions when possible during export. For dynamic shapes, ensure they are correctly specified in the ONNX graph using symbolic dimensions.","message":"Shape and Dimension Mismatches: ONNX graphs carry explicit tensor shape and type information. Models relying on dynamic input shapes or having inconsistent batch size definitions between frameworks can lead to validation or runtime errors.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Explicitly install the `ml_dtypes` package: `pip install ml_dtypes`.","message":"Missing 'ml_dtypes' dependency: Starting around ONNX v1.19.0, `ml_dtypes` became a crucial dependency for handling advanced data types. 
If you are using `onnx` with `onnxruntime` (especially versions like 1.24) and encounter a `ModuleNotFoundError` for `ml_dtypes`, it means this dependency is missing.","severity":"gotcha","affected_versions":">=1.19.0 (especially when paired with onnxruntime >=1.24)"}],"env_vars":null,"last_verified":"2026-04-05T00:00:00.000Z","next_check":"2026-07-04T00:00:00.000Z"}