Open Neural Network Exchange (ONNX) - Weekly Builds
ONNX (Open Neural Network Exchange) is an open ecosystem for AI developers that defines an open standard format for machine learning models, covering both deep learning and traditional ML. It specifies an extensible computation graph model, built-in operators, and standard data types so that models can interoperate across frameworks. The `onnx-weekly` package publishes continuous-integration builds, giving early access to experimental features and letting users test upcoming changes ahead of official stable releases. The current version is 1.22.0.dev20260330, reflecting the rapid release cadence of the development channel.
Warnings
- breaking The model hub integration feature was removed in ONNX v1.21.0. If your workflow relied on this integration, you will need to update your code.
- gotcha The `ml_dtypes` package became a direct dependency of ONNX as of version 1.19.0. In manually managed environments, or when tools like `onnxruntime` are combined with older `onnx` installations, a missing `ml_dtypes` raises `ModuleNotFoundError`. This is particularly relevant when working with extended data types such as FLOAT8.
- gotcha There's a common confusion between the `onnx` package (which defines the model format and provides utilities to build/manipulate ONNX graphs) and `onnxruntime` (which is the inference engine used to execute ONNX models efficiently). The `onnx-weekly` package only provides the `onnx` library.
- gotcha ONNX models are versioned by an IR (intermediate representation) version and one or more operator set (opset) versions. Breaking changes to the IR format or to operator semantics require version increments. Make sure the ONNX Runtime or target inference environment you use supports the opset version(s) of your model, or loading and execution may fail.
- gotcha ONNX models are serialized using Google's Protocol Buffers, which imposes a 2GB size limit on individual model files. Very large models may fail to serialize or load correctly.
Install
pip install onnx-weekly
Imports
- onnx
import onnx
- helper
from onnx import helper
- checker
from onnx import checker
- TensorProto
from onnx import TensorProto
- parser
import onnx.parser
- shape_inference
import onnx.shape_inference
Quickstart
import onnx
from onnx import helper, checker, TensorProto
# Create a simple ONNX graph for Y = X * A + B
# Define inputs and outputs
X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
A = helper.make_tensor_value_info('A', TensorProto.FLOAT, [2, 3])
B = helper.make_tensor_value_info('B', TensorProto.FLOAT, [3])
Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 3])
# Create nodes (operators)
# MatMul operation: C = X * A
node_matmul = helper.make_node(
    'MatMul',
    inputs=['X', 'A'],
    outputs=['C'],
)
# Add operation: Y = C + B
node_add = helper.make_node(
    'Add',
    inputs=['C', 'B'],
    outputs=['Y'],
)
# Create the graph
graph_def = helper.make_graph(
    [node_matmul, node_add],     # Nodes in the graph
    'simple-linear-regression',  # Graph name
    [X, A, B],                   # Graph inputs
    [Y],                         # Graph outputs
)
# Create the model
model_def = helper.make_model(graph_def, producer_name='onnx-example')
# Check the model for validity
try:
    checker.check_model(model_def)
    print("Model is valid!")
except checker.ValidationError as e:
    print(f"Model is invalid: {e}")
# Optionally, save the model
# onnx.save(model_def, "simple_model.onnx")