BoTorch: Bayesian Optimization in PyTorch
BoTorch (pronounced "bow-torch") is a library for Bayesian Optimization research built on top of PyTorch, leveraging auto-differentiation, GPU support, and a dynamic computation graph. It provides a modular and extensible interface for composing Bayesian Optimization primitives like models, acquisition functions, and optimizers. Currently at version 0.17.2, it is under active development with frequent maintenance and feature releases.
Warnings
- breaking BoTorch v0.17.0+ requires Python >=3.11 and PyTorch >=2.2. Ensure your environment meets these minimum versions before upgrading.
- breaking BoTorch v0.17.1+ requires GPyTorch >=1.15.2 and `linear_operator >=0.6.1`. Older versions of these dependencies will cause compatibility issues.
- deprecated The `qExpectedImprovement` acquisition function has known numerical issues; replace it with `qLogExpectedImprovement` for improved stability and performance.
- breaking Several APIs were removed in BoTorch v0.17. These include `get_fitted_map_saas_ensemble`, `qMultiObjectiveMaxValueEntropy`, `FullyBayesianPosterior`, the `task_feature` parameter from `SingleTaskGP.construct_inputs`, and the `fixed_features` argument from `optimize_acqf_homotopy`.
- gotcha BoTorch is a low-level API for Bayesian Optimization research. Users who are not actively doing BO research should consider Ax instead, which provides a user-friendly interface for general-purpose Bayesian Optimization and experiment management on top of BoTorch.
- breaking The default hyperparameter priors for most models were updated in v0.12.0 to use dimension-scaled log-normal priors. This significantly improves robustness to dimensionality but might alter results for models fitted with older versions.
Install
-
pip install botorch
Imports
- SingleTaskGP
from botorch.models import SingleTaskGP
- LogExpectedImprovement
from botorch.acquisition import LogExpectedImprovement
- fit_gpytorch_mll
from botorch.fit import fit_gpytorch_mll
- ExactMarginalLogLikelihood
from gpytorch.mlls import ExactMarginalLogLikelihood
- optimize_acqf
from botorch.optim import optimize_acqf
Quickstart
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import LogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.optim import optimize_acqf
from botorch.models.transforms import Normalize, Standardize
# 1. Define objective function (e.g., a simple 2D function)
def objective_function(x):
    return 1 - (x - 0.5).norm(dim=-1, keepdim=True)
# 2. Generate initial training data
train_X = torch.rand(10, 2, dtype=torch.double)  # points in [0, 1]^2, matching the bounds below
train_Y = objective_function(train_X)
train_Y += 0.1 * torch.randn_like(train_Y) # Add some noise
# 3. Fit a Gaussian Process model
gp = SingleTaskGP(
    train_X=train_X,
    train_Y=train_Y,
    input_transform=Normalize(d=2),
    outcome_transform=Standardize(m=1),
)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_mll(mll)
# 4. Construct an acquisition function
# Use LogExpectedImprovement for better numerical stability
log_ei = LogExpectedImprovement(model=gp, best_f=train_Y.max())
# 5. Optimize the acquisition function to get the next candidate
bounds = torch.stack([torch.zeros(2), torch.ones(2)]).to(torch.double)
candidate, acq_value = optimize_acqf(
    acq_function=log_ei,
    bounds=bounds,
    q=1,
    num_restarts=5,
    raw_samples=20,
)
print(f"Next candidate: {candidate}")
print(f"Acquisition function value at candidate: {acq_value}")
# (Optional) Evaluate the new candidate and update the model in a loop
# new_X = candidate
# new_Y = objective_function(new_X)
# train_X = torch.cat([train_X, new_X])
# train_Y = torch.cat([train_Y, new_Y])
# ... refit model ...