{"id":2623,"library":"onnxscript","title":"ONNX Script","description":"ONNX Script is a Python library that enables developers to naturally author ONNX functions and models using a subset of Python. It provides tools to translate Python functions into serialized ONNX graphs, offering an expressive, simple, and debuggable way to define ONNX models. The library is actively maintained, with frequent patch releases delivering bug fixes and minor improvements.","status":"active","version":"0.6.2","language":"en","source_language":"en","source_url":"https://github.com/microsoft/onnxscript","tags":["ONNX","Deep Learning","MLOps","Model Export","Python to ONNX","AI/ML"],"install":[{"cmd":"pip install onnxscript","lang":"bash","label":"Install stable version"}],"dependencies":[{"reason":"Core dependency for ONNX graph representation and manipulation. ONNX Script builds upon the ONNX standard.","package":"onnx","optional":false},{"reason":"Commonly used for array operations in Python functions that are then converted to ONNX.","package":"numpy","optional":false},{"reason":"Utilized for the Abstract Syntax Tree (AST) conversion and intermediate representation in newer versions.","package":"onnx-ir","optional":false},{"reason":"Handles machine-learning-specific data types.","package":"ml-dtypes","optional":true}],"imports":[{"note":"The primary decorator to mark a Python function for ONNX conversion.","symbol":"script","correct":"from onnxscript import script"},{"note":"Import specific ONNX opsets (e.g., opset15, opset17) to access ONNX operators as Python functions.","symbol":"opsetXX","correct":"from onnxscript import opset15 as op"},{"note":"Used for type annotations to specify ONNX tensor types. Other types like INT64 and BOOL are also available.","symbol":"FLOAT","correct":"from onnxscript.onnx_types import FLOAT"}],"quickstart":{"code":"import onnx\nfrom onnxscript import script\nfrom onnxscript import opset15 as op\nfrom onnxscript.onnx_types import FLOAT\nimport numpy as np\n\n# Define an ONNX function using the @script decorator\n@script()\ndef MatmulAdd(X: FLOAT['N', 'K'], Wt: FLOAT['K', 'M'], Bias: FLOAT['M']) -> FLOAT['N', 'M']:\n    return op.MatMul(X, Wt) + Bias\n\n# Create some dummy input data\nx_data = np.random.rand(64, 128).astype(np.float32)\nwt_data = np.random.rand(128, 10).astype(np.float32)\nbias_data = np.random.rand(10).astype(np.float32)\n\n# Evaluate the ONNX Script function in eager mode (for debugging/testing)\nresult_eager = MatmulAdd(x_data, wt_data, bias_data)\nprint(f\"Eager mode output shape: {result_eager.shape}\")\n\n# Convert the ONNX Script function to an ONNX ModelProto\nmodel_proto = MatmulAdd.to_model_proto()\n\n# Save the ONNX model\nonnx_file_path = \"matmul_add_model.onnx\"\nonnx.save(model_proto, onnx_file_path)\nprint(f\"ONNX model saved to {onnx_file_path}\")\n\n# Optionally, check the model for validity\ntry:\n    onnx.checker.check_model(model_proto)\n    print(\"ONNX model is valid!\")\nexcept onnx.checker.ValidationError as e:\n    print(f\"ONNX model validation error: {e}\")","lang":"python","description":"This quickstart demonstrates defining a simple ONNX function `MatmulAdd` using the `@script` decorator and ONNX operators from `opset15`. It shows how to use type annotations for inputs and outputs, evaluate the function in eager mode, convert it to an ONNX ModelProto with `to_model_proto()`, and save it to a file. The example includes basic input data generation and ONNX model validation."},"warnings":[{"fix":"Migrate code to use `OnnxFunction.op_signature` for accessing operator signatures.","message":"In v0.6.0, the `.param_schemas` and `schema` properties of `OnnxFunction` were removed. They are replaced by the more flexible `.op_signature` property.","severity":"breaking","affected_versions":">=0.6.0"},{"fix":"Review models optimized with constant folding to ensure compatibility with tools expecting Constant nodes. Adapt parsing logic if necessary.","message":"In v0.5.5, a change to the constant-folding pass made it create initializers instead of Constant nodes. This may affect downstream tools or expectations about the ONNX graph structure.","severity":"breaking","affected_versions":">=0.5.5"},{"fix":"Refer to the official documentation for the supported Python subset. Design ONNX functions with the ONNX operator set in mind, focusing on numerical and tensor operations.","message":"ONNX Script only supports a *subset* of Python. Not all Python language constructs (e.g., complex control flow, arbitrary data structures) can be translated into valid ONNX graphs, which can lead to unexpected errors during scripting.","severity":"gotcha","affected_versions":"All"},{"fix":"For production inference, always export the ONNX Script function to an ONNX model and use a high-performance ONNX runtime (e.g., ONNX Runtime).","message":"Eager-mode evaluation of ONNX Script functions is primarily intended for debugging and understanding a function's behavior within Python. It is not optimized for performance and should not be used for high-performance inference.","severity":"gotcha","affected_versions":"All"},{"fix":"Always provide clear and correct type annotations, leveraging `onnxscript.onnx_types` for tensors and standard Python types for attributes, matching the expected ONNX operator signatures.","message":"Explicit type annotations for inputs, outputs, and attributes are crucial when defining functions with `@script()`. Missing or incorrect annotations (e.g., for tensor types, shapes, or attribute types like `int` or `float`) can lead to conversion errors or incorrect ONNX graph generation.","severity":"gotcha","affected_versions":"All"}],"env_vars":null,"last_verified":"2026-04-10T00:00:00.000Z","next_check":"2026-07-09T00:00:00.000Z"}