Autograd
Autograd is a Python library that computes derivatives of native Python and NumPy code using automatic differentiation. The current version, 1.8.0, supports Python 3.9-3.13; releases track new Python and NumPy versions and include bug fixes and minor features.
Warnings
- breaking Autograd v1.7.0 (and subsequent versions) dropped support for Python 2.x. Python 3.9 or newer is now required.
- breaking While Autograd v1.7.0 added compatibility with NumPy 2.0, the latest v1.8.0 pins its NumPy dependency to < 3 to ensure stability; future NumPy 3.x releases may introduce breaking changes.
- gotcha Autograd does not properly handle in-place array operations (e.g., `A[0,0] = x`, `a += b`) or method-style dot products (`A.dot(B)`). Always use non-in-place assignments (`a = a + b`) and function-style operations (`np.dot(A, B)`).
- gotcha Avoid implicit casting of Python lists to NumPy arrays within functions that require differentiation (e.g., `np.sum([x, y])`). Explicitly convert lists to NumPy arrays: `np.sum(np.array([x, y]))`.
- gotcha The `grad` function expects the function being differentiated to return a scalar value. To get the derivative at each element of an array input (a vectorized, elementwise derivative), use `autograd.elementwise_grad` instead.
- gotcha Performance can be significantly impacted by extensive pure Python logic (e.g., complex loops, conditional statements) within the function being differentiated, as Python overheads are amplified during automatic differentiation.
Install
- pip install autograd
- pip install "autograd[scipy]"
Imports
- numpy
import autograd.numpy as np
- grad
from autograd import grad
- elementwise_grad
from autograd import elementwise_grad as egrad
Quickstart
import autograd.numpy as np
from autograd import grad
def tanh(x):
    return (1.0 - np.exp(-2 * x)) / (1.0 + np.exp(-2 * x))
grad_tanh = grad(tanh)
x_val = np.float64(1.0)
print(f"tanh({x_val}) = {tanh(x_val)}")
print(f"Gradient of tanh at {x_val} = {grad_tanh(x_val)}")