PyLBFGS
v0.2.0.16 · verified Fri May 01 · Python · maintenance mode
PyLBFGS provides L-BFGS and OWL-QN optimization algorithms for large-scale unconstrained and bound-constrained optimization. Current version is 0.2.0.16, but development appears to be in maintenance mode with infrequent releases.
pip install pylbfgs

Common errors
error ImportError: No module named pylbfgs ↓
cause pylbfgs is not installed or the package name is confused with similar libraries (e.g., pyLBFGS).
fix
Run: pip install pylbfgs
error OSError: [WinError 126] The specified module could not be found ↓
cause Windows DLL missing; the C extension may not be compiled correctly for your Python version.
fix
Use a conda environment or install from a precompiled wheel if available. Alternatively, use scipy.
error TypeError: 'numpy.float64' object is not callable ↓
cause The return value of evaluate (a numpy.float64) was passed to the optimizer where a callable is expected — e.g., calling minimize(evaluate(x0, g), x0) instead of minimize(evaluate, x0).
fix
Pass the function object itself, not its result, to the optimizer, and ensure evaluate returns a float or NumPy scalar.
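A minimal sketch of how this error arises, using a plain function call to stand in for the optimizer's internal call (pylbfgs itself is not needed to reproduce the pattern):

```python
import numpy as np

def evaluate(x, g):
    # Objective f(x) = sum((x - 2)^2); gradient written in-place.
    g[:] = 2.0 * (x - 2.0)
    return np.sum((x - 2.0) ** 2)  # numpy.float64 scalar

x0 = np.zeros(2)
g = np.zeros_like(x0)

# Wrong: this passes the *result* (a numpy.float64) where the optimizer
# expects the function object, so the optimizer's attempt to call it raises
# TypeError: 'numpy.float64' object is not callable.
fval = evaluate(x0, g)
try:
    fval(x0, g)  # simulates the optimizer calling its "function" argument
except TypeError as err:
    print("TypeError:", err)
```

The fix is simply to hand the optimizer the name evaluate, never an invocation evaluate(...).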
Warnings
gotcha The library does not support Python 3.10+ out of the box due to missing C extension compilation. Users may need to install from source or use a conda-forge build. ↓
fix Prefer using scipy.optimize.minimize with method='L-BFGS-B' as a more maintained alternative.
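For reference, the equivalent call with the maintained SciPy alternative looks like this (a sketch; scipy is assumed to be installed, and jac=True tells the solver the objective returns both value and gradient):

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # f(x) = sum((x - 2)^2); return (value, gradient) since jac=True
    f = np.sum((x - 2.0) ** 2)
    grad = 2.0 * (x - 2.0)
    return f, grad

res = minimize(objective, x0=np.zeros(2), method="L-BFGS-B", jac=True)
print(res.x)  # converges to [2. 2.]
```

Unlike pylbfgs, SciPy's interface returns the gradient rather than writing it into a caller-provided buffer.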
deprecated The library is effectively unmaintained; the last release was in 2017. No updates for compatibility with modern Python or NumPy. ↓
fix Consider switching to scipy.optimize (L-BFGS-B) or pyLBFGS (note different casing: pylbfgs vs pyLBFGS).
gotcha The evaluate function must modify the gradient array in-place; failing to do so leads to incorrect optimization. ↓
fix Always assign g[:] = ... or use numpy's in-place operations like np.copyto(g, ...).
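The distinction matters because the optimizer reads back the very array it passed in; rebinding the local name g leaves that array untouched. A quick illustration with plain NumPy (no pylbfgs needed):

```python
import numpy as np

def grad_wrong(x, g):
    # Rebinds the local name only; the caller's array is never updated.
    g = 2.0 * (x - 2.0)

def grad_right(x, g):
    # Writes through the existing buffer, as pylbfgs requires.
    g[:] = 2.0 * (x - 2.0)

x = np.zeros(2)
buf = np.zeros(2)

grad_wrong(x, buf)
print(buf)  # [0. 0.] — the optimizer would see a zero gradient

grad_right(x, buf)
print(buf)  # [-4. -4.] — gradient visible to the caller
```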
Imports
- LBFGS
  from pylbfgs import LBFGS
- owl_qn
  from pylbfgs import owl_qn
Quickstart
import numpy as np
from pylbfgs import LBFGS
def evaluate(x, g):
    # f(x) = sum((x - 2)^2); gradient = 2 * (x - 2)
    f = np.sum((x - 2.0) ** 2)
    g[:] = 2.0 * (x - 2.0)  # gradient must be written in-place
    return f
x0 = np.array([0.0, 0.0])
optimizer = LBFGS()
result = optimizer.minimize(evaluate, x0)
print("Optimal x:", result)