{"id":4285,"library":"torch-model-archiver","title":"Torch Model Archiver","description":"Torch Model Archiver is a dedicated command-line tool that packages trained PyTorch models (typically `.pth` state_dicts or TorchScript files) into the `.mar` (Model ARchive) format. These `.mar` files are designed to be consumed and served by TorchServe for inference. The library is part of the larger PyTorch/Serve ecosystem and is released in lockstep with TorchServe, with the current version being 0.12.0.","status":"active","version":"0.12.0","language":"en","source_language":"en","source_url":"https://github.com/pytorch/serve/blob/master/model-archiver","tags":["pytorch","model deployment","machine learning","cli tool","torchserve","model archiving"],"install":[{"cmd":"pip install torch-model-archiver","lang":"bash","label":"Install via pip"}],"dependencies":[{"reason":"Used for working with PyTorch models that are to be archived.","package":"torch","optional":false}],"imports":[{"note":"The primary interface is the `torch-model-archiver` command-line tool. Python programs typically execute it via `subprocess` when programmatic archiving is required, rather than importing classes or functions.","symbol":"torch-model-archiver","correct":"This library is primarily a command-line interface (CLI) tool and is not typically imported for direct Python usage. 
It's executed as a shell command."}],"quickstart":{"code":"# torch-model-archiver is a CLI tool; this quickstart builds the archiving command\n# and runs it via subprocess. Minimal dummy artifacts are created so the command\n# can execute end to end; replace them with your real model files and handler.\nimport os\nimport subprocess\n\nimport torch\n\nos.makedirs('model_store', exist_ok=True)\n\n# --model-file expects the model architecture as a Python source file.\nmodel_file_path = 'dummy_model.py'\nwith open(model_file_path, 'w') as f:\n    f.write(\n        'import torch.nn as nn\\n'\n        'class MyModel(nn.Module):\\n'\n        '    def __init__(self):\\n'\n        '        super().__init__()\\n'\n        '        self.linear = nn.Linear(10, 1)\\n'\n        '    def forward(self, x):\\n'\n        '        return self.linear(x)\\n'\n    )\n\n# --serialized-file expects a state_dict (.pth) or a TorchScript file.\nserialized_file_path = 'dummy_state.pth'\ntorch.save({'state_dict': {}}, serialized_file_path)\n\ncmd = [\n    'torch-model-archiver',\n    '--model-name', 'mymodel',\n    '--version', '1.0',\n    '--model-file', model_file_path,\n    '--serialized-file', serialized_file_path,\n    '--handler', 'image_classifier',  # built-in handler; pass your own handler.py for custom logic\n    '--export-path', 'model_store',\n    '-f',  # force overwrite if the archive already exists\n]\n\ntry:\n    subprocess.run(cmd, check=True, capture_output=True)\n    print('Created model_store/mymodel.mar')\nexcept subprocess.CalledProcessError as e:\n    print(f'Error archiving model: {e.stderr.decode()}')\nexcept FileNotFoundError:\n    print(\"'torch-model-archiver' not found; ensure it is installed and on your PATH.\")\nfinally:\n    os.remove(model_file_path)\n    os.remove(serialized_file_path)\n","lang":"python","description":"The primary use of `torch-model-archiver` is via its command-line interface, which packages model artifacts into a `.mar` file. This example demonstrates the typical command structure: a real run requires an actual model architecture file (`.py`), a serialized model (`.pth`, TorchScript, etc.), and either a custom handler (`.py`) or a built-in one (e.g., `image_classifier`). The `-f` flag forces overwriting an existing archive."},"warnings":[{"fix":"Users must provide the correct authorization token when making API calls to TorchServe. 
To disable it, start TorchServe with the `--disable-token-auth` flag, though this is not recommended for production.","message":"Starting with TorchServe v0.11.1 (and consequently `torch-model-archiver` as part of this ecosystem), token authorization is enabled by default for all HTTP/S and gRPC APIs.","severity":"breaking","affected_versions":">=0.11.1"},{"fix":"For metrics that are not counters (e.g., gauges), explicitly specify the metric type in the `add_metric` call: `metrics.add_metric(name='MyGauge', value=10, type='gauge')`.","message":"When archiving a model that uses custom metrics with `add_metric`, the default inferred metric type changed to `COUNTER` in v0.8.2.","severity":"gotcha","affected_versions":">=0.8.2"},{"fix":"Adjust import statements in your handler or model code to reflect a flat directory structure (e.g., `from . import my_utility` might need to become `import my_utility` if `my_utility.py` was in `--extra-files`).","message":"When using `--extra-files` to include additional Python modules or configuration files, remember that all files are flattened into a single folder within the `.mar` archive. This might require adjusting relative import paths in your handler or model code.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Always install `torch-model-archiver` and `torchserve` at matching versions to ensure compatibility. For example, if using TorchServe v0.12.0, use `torch-model-archiver` v0.12.0.","message":"Compatibility between `torch-model-archiver` and `TorchServe` versions is critical. 
Using mismatched versions can lead to unexpected behavior or failure to serve models correctly.","severity":"gotcha","affected_versions":"All versions"},{"fix":"For models trained with PyTorch 2.x or later, ensure you are using `torch-model-archiver` version 0.10.0 or newer to leverage extended support and optimizations.","message":"Older versions of `torch-model-archiver` might not fully support or be optimized for newer PyTorch versions, especially PyTorch 2.x features.","severity":"gotcha","affected_versions":"<0.10.0"}],"env_vars":null,"last_verified":"2026-04-11T00:00:00.000Z","next_check":"2026-07-10T00:00:00.000Z"}