PNNX

20260409 · active · verified Thu Apr 16

PNNX (PyTorch Neural Network eXchange) is a command-line tool for PyTorch model interoperability. It converts traced PyTorch (TorchScript) models into its own intermediate format and, primarily, into the NCNN deep learning inference framework format, with ONNX output also supported in recent versions. It is maintained by Tencent as part of the larger NCNN project. The current PyPI version is 20260409; updates are frequent and often tied to the NCNN project's release cycle.

Common errors

Warnings

Install
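PNNX publishes prebuilt wheels on PyPI, so a plain pip install should be sufficient on supported platforms:

```shell
pip install pnnx
```

This installs both the `pnnx` command-line binary and the `pnnx` Python module used below.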

Imports
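The converter is primarily driven from the command line, so the quickstart below only needs PyTorch on the Python side; importing `pnnx` itself is optional and only required for its Python API:

```python
import torch  # required for defining, tracing, and saving the model

# The conversion itself is performed by the `pnnx` command-line binary;
# recent wheels also expose an optional Python API via `import pnnx`.
```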

Quickstart

This quickstart demonstrates how to define a simple PyTorch model, trace it using `torch.jit.trace`, save it, and then use the `pnnx` command-line tool to convert it to the NCNN format. Ensure you have `torch` installed alongside `pnnx`.

import torch
import os

# 1. Define a simple PyTorch model
class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 32, 3, padding=1)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = MyModel()
dummy_input = torch.rand(1, 3, 64, 64)

# 2. Trace the model using torch.jit.trace
# This creates a TorchScript module which pnnx can convert.
traced_model = torch.jit.trace(model, dummy_input)

# 3. Save the traced model to a file
model_path = "my_model.pt"
traced_model.save(model_path)

print(f"PyTorch model saved to {model_path}")
print("\nNow, open your terminal and run the following command to convert the model:")
print(f"pnnx {model_path} inputshape=[{','.join(map(str, dummy_input.shape))}]")
print("\nThis generates NCNN model files (my_model.ncnn.param, my_model.ncnn.bin) alongside PNNX intermediate files (my_model.pnnx.param, my_model.pnnx.bin) in the current directory.")

# Output file names can be overridden with explicit options, e.g.:
# print("To customize the NCNN output file names:")
# print(f"pnnx {model_path} inputshape=[{','.join(map(str, dummy_input.shape))}] ncnnparam=converted.param ncnnbin=converted.bin")
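Recent pnnx wheels also wrap the converter in a one-call Python API, so the trace-save-convert steps above can be collapsed into a single call. A minimal sketch, assuming `pnnx.export` is available in your installed version:

```python
import torch
import pnnx  # assumes a pnnx wheel recent enough to ship the Python API

# Same toy model as the quickstart, defined inline to keep this self-contained.
class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 32, 3, padding=1)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = MyModel().eval()
dummy_input = torch.rand(1, 3, 64, 64)

# Traces the model, saves my_model.pt, and runs the pnnx converter on it,
# producing the .pnnx.* and .ncnn.* files in the current directory.
pnnx.export(model, "my_model.pt", dummy_input)
```

This is equivalent to running the CLI command shown above, with pnnx handling the `torch.jit.trace` step internally.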
