PyTorch Ranger Optimizer

0.1.1 · maintenance · verified Thu Apr 16

Ranger is a synergistic PyTorch optimizer that combines Rectified Adam (RAdam) with the LookAhead mechanism to improve training stability and convergence in deep learning models. The PyPI package, `pytorch-ranger`, provides an implementation of this optimizer, though it was last updated in March 2020; more recent development and features live in the original author's GitHub repository and the `Ranger21` project.
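To make the LookAhead half of that combination concrete, here is a minimal, framework-free sketch of its slow-weight/fast-weight update. The names (`ALPHA`, `K`) and the stand-in inner step are illustrative assumptions, not the `pytorch-ranger` API; in Ranger the inner optimizer is RAdam and `k=6`, `alpha=0.5` are common defaults.

```python
K = 6        # assumed sync interval: interpolate every k inner steps
ALPHA = 0.5  # assumed slow-weight interpolation factor

def inner_step(fast):
    # Stand-in for one RAdam update; here each weight just decays toward 0.
    return [w - 0.1 * w for w in fast]

slow = [1.0, -2.0]   # slow (LookAhead) weights
fast = list(slow)    # fast (inner-optimizer) weights start at the slow weights

for step in range(1, 2 * K + 1):
    fast = inner_step(fast)
    if step % K == 0:
        # LookAhead sync: slow <- slow + alpha * (fast - slow),
        # then the fast weights restart from the updated slow weights.
        slow = [s + ALPHA * (f - s) for s, f in zip(slow, fast)]
        fast = list(slow)

print(slow)
```

The interpolation averages out the oscillations of the fast weights, which is the source of the stability improvement the package description refers to.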

Common errors

Warnings

Install

pip install pytorch-ranger

Imports

from pytorch_ranger import Ranger

Quickstart

This quickstart demonstrates how to initialize a `Ranger` optimizer with a simple PyTorch model's parameters and perform a single forward and backward pass, followed by an optimization step.

import torch
import torch.nn as nn
from pytorch_ranger import Ranger

# 1. Define a simple PyTorch model
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 1)

    def forward(self, x):
        return self.linear(x)

model = SimpleModel()

# 2. Define dummy data and target
inputs = torch.randn(32, 10) # Example batch of 32 samples, 10 features
targets = torch.randn(32, 1) # Example batch of 32 targets

# 3. Instantiate the Ranger optimizer
# Pass model.parameters() to the optimizer
optimizer = Ranger(model.parameters(), lr=0.001)

# 4. Define a loss function
criterion = nn.MSELoss()

# 5. Perform a single training step (in a real scenario, this would be in a loop)
optimizer.zero_grad() # Zero the gradients
outputs = model(inputs) # Forward pass
loss = criterion(outputs, targets) # Compute loss
loss.backward() # Backward pass (compute gradients)
optimizer.step() # Update model parameters

print(f"Loss after one step: {loss.item():.4f}")

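The RAdam half of Ranger gates the adaptive learning rate on a variance-tractability term, usually written ρ_t, which gives the optimizer its built-in warmup: while ρ_t is small in the first few steps, the adaptive term is skipped in favor of a plain momentum step. A quick, framework-free computation of ρ_t for the common β₂ = 0.999 illustrates this (the formulas follow the RAdam paper; the numbers are illustrative, not output of `pytorch-ranger`):

```python
# rho_t: the approximated simple-moving-average length used by RAdam's
# rectification. beta2 = 0.999 is a typical default, assumed here.
beta2 = 0.999
rho_inf = 2.0 / (1.0 - beta2) - 1.0   # limit of rho_t (~1999 for beta2=0.999)

def rho(t):
    b = beta2 ** t
    return rho_inf - 2.0 * t * b / (1.0 - b)

# Early steps: rho_t <= 4, so the adaptive term is considered untrustworthy
# and RAdam takes an un-rectified (SGD-with-momentum-style) step.
# Later steps: rho_t grows toward rho_inf and the rectified adaptive step is used.
print(rho(1), rho(10), rho_inf)
```

This implicit warmup is why Ranger is typically run without an external learning-rate warmup schedule.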