PyTorch

2.2.2 · active · verified Thu Apr 16

PyTorch is an open-source machine learning framework that accelerates the path from research prototyping to production deployment. It provides two high-level features: tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. The core library is published on PyPI as `torch` (current version 2.2.2) and is typically installed together with its companion packages `torchvision` and `torchaudio`; note that the PyPI name `pytorch` is an unrelated placeholder package, not PyTorch itself. PyTorch is updated frequently, with several minor stable releases per year and patch releases in between.
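As a minimal illustration of the tape-based autograd system mentioned above (a sketch, not part of the official quickstart below):

```python
import torch

# Create a tensor that records operations for automatic differentiation
x = torch.tensor([3.0], requires_grad=True)

# Build a computation: y = x^2 + 2x
y = x ** 2 + 2 * x

# Backpropagate: autograd replays the recorded "tape" to compute dy/dx = 2x + 2
y.backward()

print(x.grad)  # tensor([8.])
```

At x = 3, the gradient is 2·3 + 2 = 8, which is what `x.grad` holds after `backward()`.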

Common errors

Warnings

Install
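A typical installation via pip; the exact command varies by platform and CUDA version, so consult the selector at pytorch.org for GPU-specific wheels:

```shell
pip install torch torchvision torchaudio
```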

Imports
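The import aliases used conventionally throughout the PyTorch ecosystem (and in the quickstart below):

```python
import torch                      # core tensor library
import torch.nn as nn             # neural network modules and layers
import torch.nn.functional as F   # stateless functional ops (relu, softmax, ...)
import torch.optim as optim       # optimizers (SGD, Adam, ...)
from torch.utils.data import DataLoader, TensorDataset  # data utilities

print(torch.__version__)
```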

Quickstart

This quickstart demonstrates a simple linear regression model in PyTorch: it defines a dataset and dataloader, creates a neural network module, sets up a loss function and optimizer, and runs a basic training loop on randomly generated data following y = 2x + 1 (plus noise).

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

# 1. Prepare Data
x_data = torch.randn(100, 1)
y_data = 2 * x_data + 1 + torch.randn(100, 1) * 0.1 # y = 2x + 1 + noise

# Create a Dataset and DataLoader
dataset = TensorDataset(x_data, y_data)
dataloader = DataLoader(dataset, batch_size=10, shuffle=True)

# 2. Define Model
class LinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1) # One input feature, one output feature

    def forward(self, x):
        return self.linear(x)

model = LinearRegression()

# 3. Define Loss and Optimizer
criterion = nn.MSELoss() # Mean Squared Error Loss
optimizer = optim.SGD(model.parameters(), lr=0.01) # Stochastic Gradient Descent

# 4. Train the Model
num_epochs = 100
for epoch in range(num_epochs):
    for batch_x, batch_y in dataloader:
        # Forward pass
        outputs = model(batch_x)
        loss = criterion(outputs, batch_y)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if (epoch+1) % 10 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

# 5. Make Predictions
with torch.no_grad():  # disable gradient tracking for inference
    predicted_value = model(torch.tensor([[5.0]]))
print(f"\nPredicted value for x=5.0: {predicted_value.item():.4f}")
print(f"Learned parameters: Weight={model.linear.weight.item():.4f}, Bias={model.linear.bias.item():.4f}")
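The quickstart runs entirely on the CPU. To use a GPU when one is available, the standard pattern is to pick a device and move both model and data onto it; a minimal sketch (the device-selection idiom is standard PyTorch, but this snippet is not part of the original quickstart):

```python
import torch
import torch.nn as nn

# Select a device: CUDA if available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move the model's parameters to the chosen device
model = nn.Linear(1, 1).to(device)

# Tensors must live on the same device as the model
x = torch.randn(8, 1, device=device)
y = model(x)
print(y.shape)  # torch.Size([8, 1])
```

In a training loop, each `batch_x` and `batch_y` from the dataloader would likewise be moved with `.to(device)` before the forward pass.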
