BoTorch: Bayesian Optimization in PyTorch

BoTorch (pronounced "bow-torch") is a library for Bayesian Optimization research built on top of PyTorch, leveraging auto-differentiation, GPU support, and a dynamic computation graph. It provides a modular and extensible interface for composing Bayesian Optimization primitives like models, acquisition functions, and optimizers. Currently at version 0.17.2, it is under active development with frequent maintenance and feature releases.

Install

pip install botorch

Quickstart

This quickstart demonstrates a basic Bayesian Optimization loop with BoTorch: initializing training data, fitting a Gaussian Process model, constructing an acquisition function (LogExpectedImprovement for numerical stability), and optimizing it to propose the next best candidate.

import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import LogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.optim import optimize_acqf
from botorch.models.transforms import Normalize, Standardize

# 1. Define objective function (e.g., a simple 2D function)
def objective_function(x):
    return 1 - (x - 0.5).norm(dim=-1, keepdim=True)

# 2. Generate initial training data
train_X = torch.rand(10, 2, dtype=torch.double)  # in the unit square, matching the bounds below
train_Y = objective_function(train_X)
train_Y += 0.1 * torch.randn_like(train_Y) # Add some noise

# 3. Fit a Gaussian Process model
gp = SingleTaskGP(
    train_X=train_X,
    train_Y=train_Y,
    input_transform=Normalize(d=2),
    outcome_transform=Standardize(m=1),
)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_mll(mll)

# 4. Construct an acquisition function
# Use LogExpectedImprovement for better numerical stability
log_ei = LogExpectedImprovement(model=gp, best_f=train_Y.max())

# 5. Optimize the acquisition function to get the next candidate
bounds = torch.stack([torch.zeros(2), torch.ones(2)]).to(torch.double)
candidate, acq_value = optimize_acqf(
    acq_function=log_ei,
    bounds=bounds,
    q=1,
    num_restarts=5,
    raw_samples=20,
)

print(f"Next candidate: {candidate}")
print(f"Acquisition function value at candidate: {acq_value}")

# (Optional) Evaluate the new candidate and update the model in a loop
# new_X = candidate
# new_Y = objective_function(new_X)
# train_X = torch.cat([train_X, new_X])
# train_Y = torch.cat([train_Y, new_Y])
# ... refit model ...
