{"id":3176,"library":"nlopt","title":"NLopt Python Bindings","description":"NLopt is a free/open-source library providing a common interface to a variety of nonlinear optimization algorithms, encompassing both global and local, constrained and unconstrained problems. The `nlopt` Python package offers bindings to this library, enabling Python users to leverage its extensive suite of optimization routines. The current version is 2.10.0, and the project actively maintains and releases new versions, often aligning with updates to the underlying C library.","status":"active","version":"2.10.0","language":"en","source_language":"en","source_url":"https://github.com/DanielBok/nlopt-python","tags":["optimization","nonlinear","global optimization","local optimization","constrained optimization","unconstrained optimization","numerical methods","scientific computing"],"install":[{"cmd":"pip install nlopt","lang":"bash","label":"Install nlopt"}],"dependencies":[{"reason":"Required for array data types used to communicate with NLopt's Python interface for objective functions, gradients, and optimization parameters.","package":"numpy","optional":false},{"reason":"The library explicitly requires Python versions 3.9 and above.","package":"python","optional":false}],"imports":[{"symbol":"nlopt","correct":"import nlopt"},{"note":"While older documentation or examples might use 'from numpy import *', 'import numpy as np' is the modern and recommended Python idiom to avoid namespace pollution and improve code readability.","wrong":"from numpy import *","symbol":"numpy","correct":"import numpy as np"}],"quickstart":{"code":"import nlopt\nimport numpy as np\n\n# Objective function to minimize: f(x) = sqrt(x[1])\n# subject to x[1] >= (a*x[0] + b)**3 and x[1] >= 0\n# for a1=2, b1=0, a2=-1, b2=1\n\ndef myfunc(x, grad):\n    if grad.size > 0:\n        grad[0] = 0.0\n        grad[1] = 0.5 / np.sqrt(x[1])\n    return np.sqrt(x[1])\n\ndef myconstraint(x, grad, a, b):\n    if grad.size > 0:\n   
     grad[0] = 3 * a * (a * x[0] + b)**2\n        grad[1] = -1.0\n    return (a * x[0] + b)**3 - x[1]\n\n# Problem dimension\nn = 2\n\n# Create an optimizer object\nopt = nlopt.opt(nlopt.LD_MMA, n)\n\n# Set minimization objective\nopt.set_min_objective(myfunc)\n\n# Set bounds\nopt.set_lower_bounds([-float('inf'), 0.0]) # x[1] >= 0\n\n# Add nonlinear inequality constraints (h(x) <= 0)\n# Constraint 1: x[1] >= (2*x[0])**3  =>  (2*x[0])**3 - x[1] <= 0\nopt.add_inequality_constraint(lambda x, grad: myconstraint(x, grad, 2.0, 0.0), 1e-8)\n# Constraint 2: x[1] >= (-1*x[0] + 1)**3 => (-1*x[0] + 1)**3 - x[1] <= 0\nopt.add_inequality_constraint(lambda x, grad: myconstraint(x, grad, -1.0, 1.0), 1e-8)\n\n# Set stopping criteria\nopt.set_xtol_rel(1e-4)\nopt.set_maxeval(1000)\n\n# Initial guess\nx0 = np.array([1.234, 5.678])\n\ntry:\n    x_opt = opt.optimize(x0)\n    minf = opt.last_optimum_value()\n    result_code = opt.last_optimize_result()\n    print(f\"Optimized result: x = {x_opt}, f(x) = {minf}, return code: {result_code}\")\nexcept RuntimeError as e:  # NLopt signals generic failure via Python's built-in RuntimeError\n    print(f\"NLopt failed: {e}\")\n\n# Example of using a derivative-free algorithm\nopt_df = nlopt.opt(nlopt.LN_COBYLA, n)\nopt_df.set_min_objective(myfunc) # Note: grad argument will be empty for derivative-free\nopt_df.set_lower_bounds([-float('inf'), 0.0])\nopt_df.add_inequality_constraint(lambda x, grad: myconstraint(x, grad, 2.0, 0.0), 1e-8)\nopt_df.add_inequality_constraint(lambda x, grad: myconstraint(x, grad, -1.0, 1.0), 1e-8)\nopt_df.set_xtol_rel(1e-4)\n\ntry:\n    x_opt_df = opt_df.optimize(x0)\n    minf_df = opt_df.last_optimum_value()\n    print(f\"Derivative-free result: x = {x_opt_df}, f(x) = {minf_df}\")\nexcept RuntimeError as e:\n    print(f\"NLopt (derivative-free) failed: {e}\")","lang":"python","description":"This quickstart demonstrates how to set up and run a constrained minimization problem using NLopt. 
It defines an objective function and nonlinear inequality constraints, both capable of providing gradients. It then initializes an `nlopt.opt` object with a gradient-based algorithm (`LD_MMA`), sets the objective, bounds, and constraints, and performs the optimization. An example using a derivative-free algorithm (`LN_COBYLA`) is also included to show the common pattern for algorithms that do not utilize gradients."},"warnings":[{"fix":"Ensure that any gradient calculations within your objective or constraint functions modify the `grad` parameter's contents directly using slice assignment (e.g., `grad[:] = ...`) or in-place operations.","message":"Objective and constraint functions must modify the `grad` array in-place, rather than reassigning it. Operations like `grad = 2*x` will not work as they create a new array; instead, use `grad[:] = 2*x` to overwrite the contents of the existing `grad` array.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Upgrade your Python environment to version 3.9 or newer to ensure compatibility and access to the latest `nlopt` features and bug fixes. Check the `nlopt` PyPI page or GitHub releases for specific `requires_python` information for the version you intend to use.","message":"Python version compatibility has changed across releases. As of version 2.10.0, `nlopt` officially supports Python 3.9 and above. Older Python versions (e.g., Python 3.8) were explicitly deprecated in NLopt 2.8.0.","severity":"breaking","affected_versions":">=2.8.0"},{"fix":"If possible, always provide accurate analytical gradients to gradient-based algorithms (e.g., `LD_MMA`, `LD_LBFGS`). If gradients are difficult or impossible to obtain, explicitly choose a derivative-free algorithm (e.g., `LN_COBYLA`, `LN_BOBYQA`). 
Test your gradient implementations by comparing them against finite-difference approximations.","message":"Incorrect or missing gradient information can lead to non-convergence or suboptimal results, especially when using gradient-based algorithms. Many NLopt algorithms expect analytically derived gradients for efficiency and accuracy.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Always rewrite your inequality constraints to fit the `h(x) <= 0` format. For example, a constraint `g(x) >= C` should be written as `C - g(x) <= 0` in your constraint function.","message":"NLopt expects nonlinear inequality constraints to be formulated in the form `h(x) <= 0`. Incorrectly formulating these constraints (e.g., as `h(x) >= 0`) will lead to optimization issues or incorrect results.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Always consult the official NLopt documentation or the `nlopt` Python module's attributes (`dir(nlopt)`) for the exact algorithm constants available in your installed version. When upgrading, review the changelog for any mentions of algorithm name changes or removals.","message":"Specific algorithm constants, particularly those for sub-algorithms, may be removed or renamed across underlying NLopt library versions. For instance, `NLOPT_LD_LBFGS_NOCEDAL` was temporarily removed in the 2.9.x releases of the underlying NLopt library (notably affecting the R bindings `nloptr`) before being reintroduced in 2.10.0. Although that particular removal did not directly break the Python bindings, it illustrates that algorithm identifiers can change between releases.","severity":"deprecated","affected_versions":"Potentially between major NLopt C library versions (e.g., 2.9.x vs 2.10.0)"}],"env_vars":null,"last_verified":"2026-04-11T00:00:00.000Z","next_check":"2026-07-10T00:00:00.000Z"}