PNNX
PNNX (PyTorch Neural Network eXchange) is a command-line tool for PyTorch model interoperability. It primarily converts PyTorch models into the NCNN deep learning inference framework format, and can also emit ONNX. It is maintained by Tencent as part of the larger NCNN project, and its PyPI releases use date-based version numbers (e.g., 20260409) with frequent updates tied to the NCNN project's release cycle.
Common errors
- `ModuleNotFoundError: No module named 'torch'`
  - cause: PyTorch is not installed in the environment where pnnx is run.
  - fix: Install PyTorch: `pip install torch` (or the appropriate command for your system/CUDA version).
- `pnnx: command not found`
  - cause: The `pnnx` executable is not on your system's PATH, or pnnx was not installed correctly.
  - fix: Ensure `pip install pnnx` completed successfully, and verify that your environment's Python scripts directory (e.g., `~/.local/bin` or `venv/bin`) is included in your PATH.
- `Failed to parse input arguments`
  - cause: Incorrect command-line arguments were passed to pnnx. Common mistakes include a missing `inputshape` or a malformed option.
  - fix: Consult `pnnx --help` for the correct syntax and available options. Ensure `inputshape` is correctly formatted, e.g., `inputshape=[1,3,224,224]`.
- `RuntimeError: Tracing a graph failed! Ensure the input to the trace is a Python function or a 'torch.nn.Module' and that the trace is a valid 'torch.jit.ScriptModule'.`
  - cause: The PyTorch model passed to `torch.jit.trace` (or to pnnx directly) cannot be traced into a TorchScript graph. This often happens with models that use dynamic control flow or unsupported operations.
  - fix: Simplify the model, remove dynamic elements, or ensure all operations are TorchScript-compatible. For models with control flow, consider `torch.jit.script`, or debug the tracing process in PyTorch directly.
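For the tracing failure above, switching to `torch.jit.script` often helps because scripting compiles the Python source rather than recording one execution path. A minimal sketch of the difference (the `Gate` module here is a hypothetical example, not part of pnnx):

```python
import torch

class Gate(torch.nn.Module):
    # Data-dependent control flow: tracing would bake in whichever
    # branch the example input happened to take; scripting keeps
    # the if/else in the compiled graph.
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return x - 1

scripted = torch.jit.script(Gate())
print(scripted(torch.ones(2)))   # positive-sum branch
print(scripted(-torch.ones(2)))  # negative-sum branch
```

A `torch.jit.trace` of the same module would emit a TracerWarning and hard-code one branch, which is exactly the failure mode pnnx then inherits.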
Warnings
- gotcha PNNX is primarily a command-line tool. While it exposes internal Python modules (e.g., `pnnx.converter`), most users will interact with it via the `pnnx` shell command for model conversion.
- gotcha PNNX relies on `torch.jit.trace` or `torch.jit.script` for model ingestion. Models with dynamic control flow (e.g., `if` statements, dynamic loops) or shape-dependent operations may not be correctly traced and will fail conversion. `torch.jit.trace` records a single execution path.
- gotcha The `pnnx` PyPI package does not explicitly list `torch` as a dependency, meaning `pip install pnnx` will not automatically install PyTorch. However, PyTorch is absolutely essential for PNNX to function, as it processes PyTorch models.
- gotcha PNNX's versioning scheme is date-based (e.g., `20260409`), which is different from semantic versioning (e.g., 1.0.0). This can make tracking breaking changes or specific feature availability less intuitive.
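Because `pip install pnnx` does not pull in PyTorch, a pre-flight check like the following (a hypothetical helper, not part of pnnx) gives a clearer message than a mid-conversion `ModuleNotFoundError`:

```python
import importlib.util

def torch_available() -> bool:
    # pnnx needs PyTorch at runtime but does not declare it as a pip
    # dependency, so verify it is importable before invoking the converter.
    return importlib.util.find_spec("torch") is not None

if not torch_available():
    print("PyTorch is missing; install it first, e.g.: pip install torch")
```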
Install
- pip install pnnx
Imports
- pnnx
import pnnx
Quickstart
import torch

# 1. Define a simple PyTorch model
class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 32, 3, padding=1)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = MyModel()
model.eval()
dummy_input = torch.rand(1, 3, 64, 64)

# 2. Trace the model with torch.jit.trace.
# This creates a TorchScript module which pnnx can convert.
traced_model = torch.jit.trace(model, dummy_input)

# 3. Save the traced model to a file
model_path = "my_model.pt"
traced_model.save(model_path)
print(f"PyTorch model saved to {model_path}")

print("\nNow, open your terminal and run the following command to convert the model:")
print(f"pnnx {model_path} inputshape=[{','.join(map(str, dummy_input.shape))}]")
print("\nThis generates NCNN model files (my_model.ncnn.param, my_model.ncnn.bin)")
print("next to the input file, along with pnnx-format files; recent versions also")
print("emit an ONNX export (my_model.pnnx.onnx).")
# Output paths can be overridden with key=value options such as
# ncnnparam=..., ncnnbin=..., and pnnxonnx=...; run pnnx without
# arguments to see the full option list.
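Before handing the `.pt` file to pnnx, it is worth confirming the traced model round-trips through disk. A self-contained sanity check mirroring the quickstart (the `Sequential` stand-in approximates `MyModel` above):

```python
import torch

# Rebuild an equivalent traced module and confirm the saved .pt file
# reproduces the original outputs before converting it with pnnx.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, 3, padding=1),
    torch.nn.ReLU(),
).eval()
dummy_input = torch.rand(1, 3, 64, 64)

traced = torch.jit.trace(model, dummy_input)
traced.save("my_model.pt")

reloaded = torch.jit.load("my_model.pt")
assert torch.allclose(model(dummy_input), reloaded(dummy_input), atol=1e-6)
print("traced model round-trips correctly")
```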