Torch Model Archiver

0.12.0 · active · verified Sat Apr 11

Torch Model Archiver is a dedicated command-line tool that packages trained PyTorch model artifacts (typically a `.pth` state dict or a TorchScript file) into a `.mar` (Model ARchive) file. These `.mar` files are specifically designed to be consumed and served by TorchServe for inference. The tool is part of the larger PyTorch/Serve ecosystem and is released in conjunction with TorchServe; the current version is 0.12.0.

Warnings

Install
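
The archiver is published on PyPI as `torch-model-archiver`; a typical install (usually alongside a compatible TorchServe version) looks like:

```shell
pip install torch-model-archiver
```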

Imports
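
`torch-model-archiver` is invoked as a CLI rather than imported, so the quickstart below only needs the standard library (`torch` is imported later solely to write a dummy checkpoint):

```python
import os          # create the model_store directory
import subprocess  # invoke the torch-model-archiver CLI
```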

Quickstart

The primary use of `torch-model-archiver` is via its command-line interface, which packages model artifacts into a `.mar` file. This example demonstrates the typical command structure. A real execution requires an actual model architecture file (`.py`) and a serialized model (`.pth` state dict, TorchScript, etc.), plus either a custom handler (`.py`) or one of the built-in handlers (e.g. `image_classifier`). The `-f` flag forces overwriting an existing archive.

# A model archive needs three inputs: a model definition file ('model.py'),
# a serialized model ('model.pth' state_dict or a TorchScript file), and a
# handler -- either a custom handler module or one of TorchServe's built-in
# handlers (e.g. 'image_classifier').
#
# A minimal custom handler subclasses BaseHandler:
#
# from ts.torch_handler.base_handler import BaseHandler
# class MyHandler(BaseHandler):
#     def preprocess(self, data):
#         # Implement your data preprocessing logic
#         return data
#     def postprocess(self, data):
#         # Implement your data postprocessing logic
#         return data

# Create a directory to hold generated .mar archives
import os
os.makedirs('model_store', exist_ok=True)

# The commands below are illustrative: for a real run, replace the dummy
# model file and state dict with your own artifacts and pick a handler that
# matches your model. The built-in 'image_classifier' handler is used here
# for simplicity.

print("To create a model archive (.mar) file:")
print("torch-model-archiver --model-name mymodel --version 1.0 --model-file path/to/my_model.py --serialized-file path/to/my_model_state.pth --handler image_classifier --export-path model_store -f")
print("\nThis command will create 'model_store/mymodel.mar'")

# Example: invoking torch-model-archiver from Python via subprocess
import subprocess

model_name = "densenet161"
model_version = "1.0"
model_file_path = "./dummy_model.py"        # placeholder model definition
serialized_file_path = "./dummy_state.pth"  # placeholder serialized weights
export_path = "model_store"
handler_name = "image_classifier"           # built-in handler, for simplicity

# Create dummy files so the archiver command has something to package
with open(model_file_path, "w") as f:
    f.write(
        "import torch.nn as nn\n"
        "class MyModel(nn.Module):\n"
        "    def __init__(self):\n"
        "        super().__init__()\n"
        "        self.linear = nn.Linear(10, 1)\n"
        "    def forward(self, x):\n"
        "        return self.linear(x)\n"
    )

# Create a minimal serialized checkpoint
import torch
torch.save({'state_dict': {}}, serialized_file_path)


cmd = [
    "torch-model-archiver",
    "--model-name", model_name,
    "--version", model_version,
    "--model-file", model_file_path,
    "--serialized-file", serialized_file_path,
    "--handler", handler_name,
    "--export-path", export_path,
    "-f" # Force overwrite if file exists
]

try:
    subprocess.run(cmd, check=True, capture_output=True)
    print(f"Successfully created {export_path}/{model_name}.mar")
except subprocess.CalledProcessError as e:
    print(f"Error archiving model: {e.stderr.decode()}")
except FileNotFoundError:
    print("Error: 'torch-model-archiver' command not found. Please ensure the library is installed and in your PATH.")

# Clean up the dummy inputs (keep model_store so any generated .mar remains available)
os.remove(model_file_path)
os.remove(serialized_file_path)
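
If the archiver ran successfully, the resulting archive can be inspected with the standard library: a `.mar` file is, by default, an ordinary zip archive, with the model metadata recorded in its `MAR-INF/MANIFEST.json` entry. A small sketch, assuming the `densenet161` example above was archived into `model_store`:

```python
import os
import zipfile

def list_mar_contents(mar_path):
    """Return the file names inside a .mar archive, or None if it does not exist."""
    if not os.path.exists(mar_path):
        return None
    # A .mar file is a standard zip archive; MAR-INF/MANIFEST.json holds the metadata
    with zipfile.ZipFile(mar_path) as mar:
        return mar.namelist()

contents = list_mar_contents(os.path.join("model_store", "densenet161.mar"))
print(contents if contents else "archive not found; run the archiver first")
```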
