{"id":7637,"library":"pytorch-ignite","title":"PyTorch-Ignite","description":"PyTorch-Ignite is a lightweight and user-friendly library designed to simplify training and evaluating neural networks with PyTorch. It provides a high-level API for setting up training loops, handling events, and integrating various experiment tracking tools. Currently at version 0.5.4, it maintains an active release cadence with frequent bug fixes and feature enhancements.","status":"active","version":"0.5.4","language":"en","source_language":"en","source_url":"https://github.com/pytorch/ignite","tags":["pytorch","deep learning","training","machine learning","utility","distributed"],"install":[{"cmd":"pip install pytorch-ignite","lang":"bash","label":"Install stable version"}],"dependencies":[{"reason":"PyTorch-Ignite is built on top of PyTorch and requires a compatible version of PyTorch to function. It is usually assumed to be pre-installed.","package":"torch","optional":false}],"imports":[{"symbol":"Engine","correct":"from ignite.engine import Engine"},{"symbol":"Events","correct":"from ignite.engine import Events"},{"symbol":"create_supervised_trainer","correct":"from ignite.engine import create_supervised_trainer"},{"note":"As of v0.5.0, `ignite.contrib.metrics` was moved to `ignite.metrics`.","wrong":"from ignite.contrib.metrics import Accuracy","symbol":"Accuracy","correct":"from ignite.metrics import Accuracy"},{"note":"As of v0.5.0, `ignite.contrib.handlers` was moved to `ignite.handlers`.","wrong":"from ignite.contrib.handlers import ModelCheckpoint","symbol":"ModelCheckpoint","correct":"from ignite.handlers import ModelCheckpoint"}],"quickstart":{"code":"import torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.utils.data import DataLoader, TensorDataset\n\nfrom ignite.engine import Engine, Events, create_supervised_trainer, create_supervised_evaluator\nfrom ignite.metrics import Accuracy, Loss\n\n# 1. 
Define a simple model, optimizer, and loss function\nclass SimpleModel(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.fc = nn.Linear(10, 2)\n    def forward(self, x):\n        return self.fc(x)\n\nmodel = SimpleModel()\noptimizer = optim.SGD(model.parameters(), lr=0.01)\ncriterion = nn.CrossEntropyLoss()\n\n# 2. Create dummy data\nX = torch.randn(100, 10)\ny = torch.randint(0, 2, (100,))\ndataset = TensorDataset(X, y)\ndataloader = DataLoader(dataset, batch_size=10)\n\n# 3. Create trainer and evaluator\ntrainer = create_supervised_trainer(model, optimizer, criterion)\nevaluator = create_supervised_evaluator(model, metrics={'accuracy': Accuracy(), 'loss': Loss(criterion)})\n\n# 4. Define handlers for events\n@trainer.on(Events.EPOCH_COMPLETED)\ndef log_training_results(engine):\n    evaluator.run(dataloader)\n    metrics = evaluator.state.metrics\n    print(f\"Epoch {engine.state.epoch}/{engine.state.max_epochs} - Avg accuracy: {metrics['accuracy']:.2f}, Avg loss: {metrics['loss']:.2f}\")\n\n# 5. Run the training\ntrainer.run(dataloader, max_epochs=2)\n\nprint(\"\\nTraining complete.\")","lang":"python","description":"This quickstart demonstrates setting up a basic training loop with PyTorch-Ignite. It defines a simple PyTorch model, creates a trainer and evaluator using `create_supervised_trainer` and `create_supervised_evaluator`, attaches a handler to log results after each epoch, and runs the training process. 
The example includes dummy data for immediate execution."},"warnings":[{"fix":"Update all `from ignite.contrib.metrics import ...` to `from ignite.metrics import ...` and `from ignite.contrib.handlers import ...` to `from ignite.handlers import ...`.","message":"All modules under `ignite.contrib.metrics` and `ignite.contrib.handlers` were moved directly to `ignite.metrics` and `ignite.handlers` respectively.","severity":"breaking","affected_versions":">=0.5.0"},{"fix":"Review the official documentation for `ignite.handlers.param_scheduler` to adapt to the new API. Instead of calling the scheduler yourself, attach it to the `trainer` engine, e.g., `scheduler = LRScheduler(torch.optim.lr_scheduler.StepLR(optimizer, step_size=100)); trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)`. Ignite's own parameter schedulers take the optimizer directly, e.g., `CosineAnnealingScheduler(optimizer, 'lr', start_value=1e-1, end_value=1e-3, cycle_size=100)`, and are attached the same way.","message":"The `LRScheduler` handler was refactored into an attachable event handler (typically attached to `Events.ITERATION_STARTED`), changing its usage pattern significantly.","severity":"breaking","affected_versions":">=0.4.9"},{"fix":"Before any `idist` calls, `import ignite.distributed as idist` and call `idist.initialize(backend='nccl')` (or `'gloo'`, or whichever backend suits your distributed setup); call `idist.finalize()` when done.","message":"When using `ignite.distributed` (idist) for distributed training, ensure the distributed backend is properly initialized (e.g., `idist.initialize()`) before using `idist` utilities or distributed engines.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Always test event handler logic with simple examples. Refer to the official documentation on 'Event Filtering' for detailed explanations and examples of how `Events.X(every=N, once=M, before=Y, after=Z)` combinations work.","message":"Event filtering with `every`, `once`, `before`, `after` can be powerful but also complex. 
Misunderstanding their interaction can lead to handlers not being triggered as expected.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Change import statements. For example, `from ignite.contrib.metrics import ROC_AUC` should become `from ignite.metrics import ROC_AUC`.","cause":"Attempting to import metrics or handlers from the removed `ignite.contrib` package after PyTorch-Ignite v0.5.0.","error":"ModuleNotFoundError: No module named 'ignite.contrib'"},{"fix":"Instead of calling it, attach the `LRScheduler` instance to the trainer. Example: `scheduler = LRScheduler(torch_lr_scheduler); trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)`, where `torch_lr_scheduler` is a PyTorch learning-rate scheduler built on your optimizer.","cause":"Trying to call an `LRScheduler` instance directly (e.g., `lr_scheduler(engine)`) after PyTorch-Ignite v0.4.9, where its API changed to an attachable handler.","error":"TypeError: 'LRScheduler' object is not callable"},{"fix":"Call `ignite.distributed.initialize(backend='nccl')` (or another supported backend), or a similar initialization routine relevant to your setup (e.g., `torch.distributed.init_process_group` if managing manually), at the start of your script before any distributed operations.","cause":"Using `ignite.distributed` functionalities (e.g., `idist.spawn`, `idist.get_rank()`) without properly initializing the distributed backend first.","error":"ValueError: Distributed environment is not initialized."},{"fix":"Ensure your model and input data are consistently on the same device. 
Use `.to(device)` on models and tensors, where `device = 'cuda' if torch.cuda.is_available() else 'cpu'`, or pass `device=device` to `create_supervised_trainer` and `create_supervised_evaluator` so Ignite moves each batch for you.","cause":"A common PyTorch error that can occur in Ignite if models or data are not explicitly moved to the correct device (CPU/GPU) or if different parts of the pipeline are on mixed devices.","error":"RuntimeError: Expected all tensors to be on the same device, but found tensors on cuda:0 and cpu"}]}