ONNX Converter and Optimization Tools

1.16.0 · active · verified Sun Apr 12

The `onnxconverter-common` package provides common functions and utilities for use in converters from various AI frameworks to ONNX. It also enables different converters to work together, such as converting a scikit-learn pipeline embedding an XGBoost model. It is actively maintained by Microsoft with frequent releases, often tied to ONNX and ONNX Runtime updates, focusing on compatibility and optimization.

Install
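The package is published on PyPI under the name `onnxconverter-common`; a standard pip install pulls in `onnx` as a dependency:

```shell
pip install onnxconverter-common
```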

Imports

Quickstart

This quickstart demonstrates how to use `onnxconverter-common` to convert an ONNX model from float32 to float16 precision. This is a common optimization to reduce model size and potentially improve inference performance on compatible hardware. It creates a dummy ONNX model, converts it, and saves the result.

import onnx
from onnxconverter_common import float16
import os

# Create a dummy ONNX model for demonstration
# In a real scenario, you would load your model: model = onnx.load("path/to/model.onnx")

# Example: A simple Add operation
nodes = [onnx.helper.make_node("Add", ["input1", "input2"], ["output"])]
graph = onnx.helper.make_graph(
    nodes,
    "simple-graph",
    [
        onnx.helper.make_tensor_value_info("input1", onnx.TensorProto.FLOAT, [None, 2]),
        onnx.helper.make_tensor_value_info("input2", onnx.TensorProto.FLOAT, [None, 2]),
    ],
    [
        onnx.helper.make_tensor_value_info("output", onnx.TensorProto.FLOAT, [None, 2]),
    ],
)
model_fp32 = onnx.helper.make_model(graph, producer_name="onnx-example")

# Convert the model to float16
model_fp16 = float16.convert_float_to_float16(model_fp32)

# Save the converted model
output_path = "dummy_model_fp16.onnx"
onnx.save(model_fp16, output_path)
print(f"FP32 model converted to FP16 and saved to {output_path}")

# Clean up the dummy file
os.remove(output_path)
