{"library":"pytorch-lightning","title":"PyTorch Lightning","description":"PyTorch Lightning is a lightweight PyTorch wrapper designed to simplify the training and evaluation of deep learning models. It abstracts away common boilerplate code, allowing researchers and engineers to focus on model architecture and experimental logic. The library is actively maintained, currently at version 2.6.1, and follows a release cadence in which minor versions may introduce backwards-incompatible changes accompanied by deprecation warnings, while major versions may remove previously deprecated behavior outright.","status":"active","version":"2.6.1","language":"en","source_language":"en","source_url":"https://github.com/Lightning-AI/lightning","tags":["Machine Learning","Deep Learning","PyTorch","AI Training"],"install":[{"cmd":"pip install pytorch-lightning","lang":"bash","label":"Original package name (backward compatibility)"},{"cmd":"pip install lightning","lang":"bash","label":"Recommended modern installation"}],"dependencies":[{"reason":"Core deep learning framework dependency, usually installed separately to select CUDA version.","package":"torch","optional":false},{"reason":"Requires Python 3.10 or higher.","package":"python","optional":false}],"imports":[{"note":"The primary import path changed significantly in version 2.0. The new canonical import uses `lightning` as the top-level package name.","wrong":"from pytorch_lightning.core.lightning import LightningModule","symbol":"LightningModule","correct":"from lightning import LightningModule"},{"note":"Similar to LightningModule, the Trainer class's import path was refactored in version 2.0 to use the new `lightning` package.","wrong":"from pytorch_lightning import Trainer","symbol":"Trainer","correct":"from lightning import Trainer"}],"quickstart":{"code":"import os\nimport torch\nfrom torch import optim, nn, utils\nfrom torchvision.datasets import MNIST\nfrom torchvision.transforms import ToTensor\nimport lightning as L\n\n# 1. Define any number of nn.Modules (or use your current ones)\nencoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))\ndecoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))\n\n# 2. Define the LightningModule\nclass LitAutoEncoder(L.LightningModule):\n    def __init__(self, encoder, decoder):\n        super().__init__()\n        self.encoder = encoder\n        self.decoder = decoder\n\n    def training_step(self, batch, batch_idx):\n        x, _ = batch\n        x = x.view(x.size(0), -1)\n        z = self.encoder(x)\n        x_hat = self.decoder(z)\n        loss = nn.functional.mse_loss(x_hat, x)\n        self.log('train_loss', loss)\n        return loss\n\n    def configure_optimizers(self):\n        optimizer = optim.Adam(self.parameters(), lr=1e-3)\n        return optimizer\n\n# 3. Define a dataset\ndataset = MNIST(os.environ.get('DATASET_PATH', os.getcwd()), download=True, transform=ToTensor())\ntrain_dataloader = utils.data.DataLoader(dataset, batch_size=128)\n\n# 4. Train the model\nmodel = LitAutoEncoder(encoder, decoder)\ntrainer = L.Trainer(limit_train_batches=100, max_epochs=1)\ntrainer.fit(model, train_dataloader)\n\n# 5. Use the model (optional, example prediction step)\n# For inference, set the model to eval mode and disable gradients\nmodel.eval()\nwith torch.no_grad():\n    sample_input, _ = dataset[0]\n    sample_input = sample_input.view(1, -1)\n    encoded_output = model.encoder(sample_input)\n    decoded_output = model.decoder(encoded_output)\n    print(f\"Original shape: {sample_input.shape}, Encoded shape: {encoded_output.shape}, Decoded shape: {decoded_output.shape}\")","lang":"python","description":"This quickstart demonstrates a minimal autoencoder training loop using `lightning`. It covers defining a `LightningModule`, setting up data loaders, and training with the `Trainer`. The code shows how Lightning automatically handles the training loop, backward passes, and optimizer steps, reducing boilerplate. A simple inference step is included to show how to use the trained model."},"warnings":[{"fix":"Update your `pip install` command to `pip install lightning`, change all `import pytorch_lightning` statements to `import lightning as L`, and migrate `Trainer` arguments to the new unified accelerator API. Consult the official migration guide for a detailed overview.","message":"Major API and package renaming in version 2.0. The primary package name for installation changed from `pytorch-lightning` to `lightning`, and imports moved from `pytorch_lightning` (e.g., `pytorch_lightning.Trainer`) to `lightning` (e.g., `lightning.Trainer`). Additionally, many `Trainer` arguments, such as `gpus`, `tpu_cores`, etc., were deprecated in 1.x and removed/refactored in 2.0 in favor of accelerator configurations (e.g., `accelerator='gpu', devices=4`).","severity":"breaking","affected_versions":"2.0.0 and later"},{"fix":"Use alternative methods for TorchScript export or refer to the latest Lightning documentation for recommended export patterns.","message":"The `to_torchscript` method on `LightningModule` was deprecated in version 2.6.1.","severity":"deprecated","affected_versions":"2.6.1 and later"},{"fix":"Remove explicit `.cuda()` or `.to(device)` calls for your model and tensors that are part of the training loop. Lightning will place them on the correct device. If initializing new tensors inside a `LightningModule`, create them with `device=self.device` (e.g., `torch.zeros(n, device=self.device)`) or move them with `tensor.type_as(existing_tensor)` to ensure correct device placement.","message":"Manual device placement (e.g., `.cuda()`, `.to(device)`) is generally not needed within a `LightningModule` and can cause issues. Lightning's `Trainer` handles device management automatically.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Do not manually instantiate `torch.utils.data.DistributedSampler` for your data loaders when using `lightning.Trainer` with a distributed strategy. Simply pass your standard `DataLoader` to `trainer.fit()`, and Lightning will handle the distributed sampling.","message":"For distributed training, a `DistributedSampler` is automatically applied to `DataLoader`s by the `Trainer` when a distributed strategy is used. Manually wrapping your `DataLoader` with `DistributedSampler` can lead to incorrect behavior or errors.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-05T00:00:00.000Z","next_check":"2026-07-04T00:00:00.000Z"}