NLopt Python Bindings
NLopt is a free/open-source library providing a common interface to a variety of nonlinear optimization algorithms, encompassing both global and local, constrained and unconstrained problems. The `nlopt` Python package offers bindings to this library, enabling Python users to leverage its extensive suite of optimization routines. The current version is 2.10.0, and the project actively maintains and releases new versions, often aligning with updates to the underlying C library.
Warnings
- gotcha Objective and constraint functions must modify the `grad` array in place rather than rebinding it. An assignment like `grad = 2*x` creates a new array and silently discards the gradient; use `grad[:] = 2*x` to overwrite the contents of the array NLopt passed in.
- breaking Python version compatibility has changed across releases. As of version 2.10.0, `nlopt` officially supports Python 3.9 and above. Older Python versions (e.g., Python 3.8) were explicitly deprecated in NLopt 2.8.0.
- gotcha Incorrect or missing gradient information can lead to non-convergence or suboptimal results, especially when using gradient-based algorithms. Many NLopt algorithms expect analytically derived gradients for efficiency and accuracy.
- gotcha NLopt expects nonlinear inequality constraints to be formulated in the form `h(x) <= 0`. Incorrectly formulating these constraints (e.g., as `h(x) >= 0`) will lead to optimization issues or incorrect results.
- deprecated Algorithm constants, particularly those for sub-algorithms, may be removed or renamed across underlying NLopt library versions. For instance, `NLOPT_LD_LBFGS_NOCEDAL` was removed in the 2.9.x releases of the C library (breaking the R bindings `nloptr`) before being reintroduced in 2.10.0. The Python bindings were not affected in the same way, but the episode shows that algorithm identifiers can change between releases.
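The in-place gradient gotcha above can be demonstrated without running an optimizer: rebinding the `grad` name inside the callback leaves the caller's array untouched, while slice assignment overwrites it. A minimal sketch using only NumPy (the function names are illustrative, not part of the NLopt API):

```python
import numpy as np

def bad_gradient(x, grad):
    # WRONG: rebinds the local name `grad` to a new array;
    # the array the caller passed in is never modified.
    grad = 2 * x
    return float(np.dot(x, x))

def good_gradient(x, grad):
    # RIGHT: slice assignment writes into the existing array in place.
    grad[:] = 2 * x
    return float(np.dot(x, x))

x = np.array([1.0, 3.0])
g = np.zeros(2)

bad_gradient(x, g)
print(g)   # still [0. 0.] -- NLopt would see a zero gradient

good_gradient(x, g)
print(g)   # [2. 6.] -- the caller's array now holds the gradient
```

This is exactly what happens inside NLopt's objective callbacks: the library inspects the array it handed you after the call, so only in-place writes are visible to it.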
Install
- pip
pip install nlopt
Imports
- nlopt
import nlopt
- numpy
import numpy as np
Quickstart
import nlopt
import numpy as np

# Minimize f(x) = sqrt(x[1])
# subject to x[1] >= (a*x[0] + b)**3 for (a, b) = (2, 0) and (-1, 1),
# and x[1] >= 0 (the classic NLopt tutorial problem)
def myfunc(x, grad):
    if grad.size > 0:
        grad[0] = 0.0
        grad[1] = 0.5 / np.sqrt(x[1])
    return np.sqrt(x[1])

def myconstraint(x, grad, a, b):
    if grad.size > 0:
        grad[0] = 3 * a * (a * x[0] + b)**2
        grad[1] = -1.0
    return (a * x[0] + b)**3 - x[1]

# Problem dimension
n = 2

# Create an optimizer object (MMA: local, gradient-based)
opt = nlopt.opt(nlopt.LD_MMA, n)

# Set minimization objective
opt.set_min_objective(myfunc)

# Set bounds
opt.set_lower_bounds([-float('inf'), 0.0])  # x[1] >= 0

# Add nonlinear inequality constraints (h(x) <= 0)
# Constraint 1: x[1] >= (2*x[0])**3  =>  (2*x[0])**3 - x[1] <= 0
opt.add_inequality_constraint(lambda x, grad: myconstraint(x, grad, 2.0, 0.0), 1e-8)
# Constraint 2: x[1] >= (-x[0] + 1)**3  =>  (-x[0] + 1)**3 - x[1] <= 0
opt.add_inequality_constraint(lambda x, grad: myconstraint(x, grad, -1.0, 1.0), 1e-8)

# Set stopping criteria
opt.set_xtol_rel(1e-4)
opt.set_maxeval(1000)

# Initial guess
x0 = np.array([1.234, 5.678])

try:
    x_opt = opt.optimize(x0)
    minf = opt.last_optimum_value()
    result_code = opt.last_optimize_result()
    print(f"Optimized result: x = {x_opt}, f(x) = {minf}, return code: {result_code}")
except RuntimeError as e:  # NLopt raises the standard RuntimeError on generic failure
    print(f"NLopt failed: {e}")

# Example of using a derivative-free algorithm
opt_df = nlopt.opt(nlopt.LN_COBYLA, n)
opt_df.set_min_objective(myfunc)  # grad is an empty array for derivative-free algorithms
opt_df.set_lower_bounds([-float('inf'), 0.0])
opt_df.add_inequality_constraint(lambda x, grad: myconstraint(x, grad, 2.0, 0.0), 1e-8)
opt_df.add_inequality_constraint(lambda x, grad: myconstraint(x, grad, -1.0, 1.0), 1e-8)
opt_df.set_xtol_rel(1e-4)

try:
    x_opt_df = opt_df.optimize(x0)
    minf_df = opt_df.last_optimum_value()
    print(f"Derivative-free result: x = {x_opt_df}, f(x) = {minf_df}")
except RuntimeError as e:
    print(f"NLopt (derivative-free) failed: {e}")