DyNet
DyNet is the Dynamic Neural Network Toolkit, a C++ library with Python bindings for training neural networks, with a particular focus on natural language processing. The current version, 2.1.2, casts parameters to expressions implicitly, supports Python 3.8+, and adds advanced slicing. The release cadence is irregular.
Install: pip install dynet

Common errors

error: ImportError: No module named dynet
cause: DyNet is not installed, or the wrong Python environment is active.
fix: Run pip install dynet and make sure you are using the correct Python interpreter.

error: TypeError: 'Parameter' object is not callable
cause: Using `p()` instead of `p.expr()` in DyNet <2.1, or calling `p.expr()` in DyNet >=2.1, where parameters are already expressions.
fix: In DyNet >=2.1 just use p (not p() or p.expr()). In older versions use p.expr().

error: ValueError: The model has already been trained, or the computation graph has already been built.
cause: Attempting to reuse a computation graph without calling `dy.renew_cg()`.
fix: Call dy.renew_cg() before each training iteration or when building a new graph.

Warnings
gotcha: In DyNet >=2.1, parameters are implicitly cast to expressions. Do not call `dy.parameter(p)` or `p.expr()`; just use `p` directly.
fix: Use `p` instead of `p.expr()` or `dy.parameter(p)`.
breaking: DyNet 2.0 removed the dependency on Boost. Model files saved with v1.x are incompatible; you must re-train or convert models.
fix: Retrain models with v2.0+ or use a conversion script (if available).
gotcha: Always call `dy.renew_cg()` before creating new computation graph nodes in a loop; otherwise memory usage grows unbounded.
fix: Call `dy.renew_cg()` at the start of each training iteration.
Imports
- dynet
  wrong: from dynet import *
  correct: import dynet as dy
- Model
  wrong: import dynet.Model
  correct: from dynet import Model
Quickstart
import dynet as dy

model = dy.Model()
trainer = dy.SimpleSGDTrainer(model)
pW = model.add_parameters((2, 2))
pb = model.add_parameters(2)

for epoch in range(5):
    dy.renew_cg()              # fresh computation graph each iteration
    x = dy.vecInput(2)
    x.set([1.0, 0.5])
    loss = dy.pickneglogsoftmax(pW * x + pb, 0)  # gold class index 0
    loss_value = loss.value()  # forward pass
    loss.backward()
    trainer.update()
    print(f"Epoch {epoch}: loss = {loss_value}")