{"id":6914,"library":"tf2onnx","title":"TensorFlow to ONNX Converter","description":"tf2onnx is a Python library that enables the conversion of TensorFlow models into the ONNX (Open Neural Network Exchange) format. This allows users to deploy models trained in TensorFlow to various ONNX-compatible runtimes and hardware accelerators. The current version is 1.17.0, with minor releases typically occurring every 1-3 months to keep pace with TensorFlow and ONNX updates.","status":"active","version":"1.17.0","language":"en","source_language":"en","source_url":"https://github.com/onnx/tensorflow-onnx","tags":["tensorflow","onnx","model conversion","deep learning","ai"],"install":[{"cmd":"pip install tf2onnx","lang":"bash","label":"Basic installation"},{"cmd":"pip install tf2onnx tensorflow onnx onnxruntime","lang":"bash","label":"With common dependencies"}],"dependencies":[{"reason":"Required for loading and converting TensorFlow models.","package":"tensorflow","optional":false},{"reason":"Required for building and validating ONNX models.","package":"onnx","optional":false},{"reason":"Commonly used for inference and verification of converted ONNX models.","package":"onnxruntime","optional":true},{"reason":"Strict version requirements often lead to conflicts; specific versions are required by both TensorFlow and ONNX.","package":"protobuf","optional":false}],"imports":[{"note":"The 'convert' module contains the main conversion functions like from_keras, from_saved_model.","symbol":"convert","correct":"import tf2onnx.convert"}],"quickstart":{"code":"import tensorflow as tf\nimport tf2onnx\nfrom onnx.checker import check_model\nimport shutil\nimport os\n\n# 1. Create a simple Keras model\nmodel = tf.keras.models.Sequential([\n    tf.keras.layers.Dense(10, input_shape=(784,), activation='relu'),\n    tf.keras.layers.Dense(10, activation='softmax')\n])\nmodel.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n\n# 2. 
Save the Keras model in the TensorFlow SavedModel format\n# This is the recommended approach for converting TF2 models.\ntf.saved_model.save(model, \"my_keras_model\")\n\n# 3. Convert the SavedModel to ONNX\n# from_saved_model reads input and output names from the SavedModel's serving\n# signature, so they need not be passed for a standard Keras export. (Note:\n# the input_signature keyword belongs to from_keras/from_function, not here.)\nonnx_model_proto, _ = tf2onnx.convert.from_saved_model(\"my_keras_model\", opset=13)\n\n# 4. Save the ONNX model to a file\nwith open(\"model.onnx\", \"wb\") as f:\n    f.write(onnx_model_proto.SerializeToString())\n\nprint(\"ONNX model converted and saved as model.onnx\")\n\n# Optional: Verify the ONNX model structure\ntry:\n    check_model(onnx_model_proto)\n    print(\"ONNX model is valid.\")\nexcept Exception as e:\n    print(f\"ONNX model validation failed: {e}\")\n\n# Clean up created files/directories\nshutil.rmtree(\"my_keras_model\")\nos.remove(\"model.onnx\")","lang":"python","description":"This quickstart demonstrates converting a simple Keras sequential model (saved as a TensorFlow SavedModel) to the ONNX format using `tf2onnx.convert.from_saved_model`. It saves the Keras model, performs the conversion, writes the ONNX output to disk, and optionally verifies the converted model's validity with `onnx.checker.check_model`."},"warnings":[{"fix":"Ensure your `tf2onnx` version is compatible with your installed `tensorflow` version. Check the `tf2onnx` release notes for specific TensorFlow version support. Upgrade both `tf2onnx` and `tensorflow` in tandem if encountering issues.","message":"TensorFlow Version Compatibility: `tf2onnx` closely tracks TensorFlow releases. Older `tf2onnx` versions may not support newer TensorFlow features, and newer `tf2onnx` versions might drop support for older TensorFlow versions. 
This often leads to conversion failures or incorrect model behavior if the versions are not aligned.","severity":"breaking","affected_versions":"<1.16.1 (for TF 2.14/2.15 support); general issue across all versions"},{"fix":"After installing `tensorflow` and `onnx`, let `pip` resolve `protobuf` dependencies. If conflicts arise, consider using a virtual environment or explicitly pinning `protobuf` to a version known to work with your specific `tensorflow` and `onnx` versions (e.g., `pip install protobuf==3.20.3`).","message":"Protobuf Version Conflicts: `tf2onnx` and its dependencies (notably `onnx` and `tensorflow`) have strict and often conflicting requirements for the `protobuf` library. Installing an incompatible `protobuf` version can lead to 'module not found' errors, serialization issues, or obscure runtime crashes.","severity":"gotcha","affected_versions":"All versions; particularly problematic around `protobuf` major version bumps (e.g., the 3.x-to-4.x transition)."},{"fix":"Simplify your TensorFlow model to use standard operations where possible. For truly custom ops, you may need to implement a custom ONNX operator, or perform graph surgery (e.g., convert separable parts of the model independently).","message":"Custom TensorFlow Operations: TensorFlow models containing custom operations, layers, or complex control flow that do not have direct equivalents in the ONNX operator set will fail conversion. `tf2onnx` provides some fallback mechanisms, but they are not exhaustive.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Always verify input and output names before converting, and pass the keyword the entry point actually accepts: `input_signature` for `from_keras`/`from_function`, `input_names`/`output_names` for `from_saved_model` and `from_graph_def`. 
Use `saved_model_cli show --dir <model_dir> --all` to inspect your TensorFlow SavedModel's input/output signatures and ensure they match the conversion parameters exactly.","message":"Input/Output Signature Mismatch: Incorrectly specifying input names or signatures (`input_names`, `input_signature`) or output names (`output_names`) during conversion, especially with `from_saved_model` or `from_graph_def`, can lead to truncated models, missing outputs, or graph extraction errors.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-15T00:00:00.000Z","next_check":"2026-07-14T00:00:00.000Z","problems":[]}