DSPy
Stanford's framework for programming—not prompting—language models. Core primitives: Signatures (typed input/output specs), Modules (dspy.Predict, dspy.ChainOfThought, dspy.ReAct, etc.), and Optimizers (MIPROv2, GEPA, SIMBA, BootstrapFewShot, GRPO). Instead of hand-crafted prompts, you write compositional Python programs and let optimizers automatically tune instructions and few-shot demos against a metric. Supports any LiteLLM-compatible model.
Warnings
- breaking Python 3.9 support dropped in DSPy 3.0.0 (August 2025). DSPy now requires Python >=3.10.
- breaking dspy.Program alias removed. Code subclassing dspy.Program instead of dspy.Module will raise AttributeError.
- breaking Legacy provider-specific LM clients removed (dspy.OpenAI, dspy.Cohere, dspy.HFClientTGI, dspy.Anyscale, etc.). These old per-vendor client classes were removed across the 2.6→3.0 transition.
- breaking Community retriever integrations removed (ColBERTv2 hosted endpoint, YouRM, various third-party retrievers). These were removed in 3.0.0b1 (#8073) as unmaintained.
- breaking dspy.BaseType renamed to dspy.Type in 3.0b2. Code importing or subclassing dspy.BaseType will fail.
- breaking Old functional/ and dsp/ client paths (legacy from DSP era) and legacy cache/examples/tests removed in 3.0. Any imports from dspy.functional, dspy.dsp, or related legacy submodules will fail.
- gotcha Both 'dspy' and 'dspy-ai' on PyPI resolve to the same package at the same version (3.1.3), so either 'pip install dspy' or 'pip install dspy-ai' works. Many old tutorials and LLM-generated code use 'dspy-ai'; the GitHub repo and official docs standardize on 'dspy'.
- gotcha dspy.settings.configure(lm=...) still works but dspy.configure(lm=...) is the shorter canonical form in 3.x docs. Both are valid. LLM-generated code often uses dspy.settings.configure from pre-3.0 examples — it won't break but is the older pattern.
- gotcha MIPROv2 is the correct optimizer name. BayesianSignatureOptimizer (an older name used in many tutorials) was renamed to MIPROv2 in DSPy 2.x and is not available in 3.x. LLM-generated code frequently uses the old name.
- gotcha LM model strings use LiteLLM provider/model format: 'openai/gpt-4o', 'anthropic/claude-sonnet-4-5', 'ollama/llama3.2'. Many older examples use bare model names like 'gpt-4o' — this still works for OpenAI but fails for other providers.
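All of the provider-prefixed strings above go through the same dspy.LM entry point. A minimal configuration sketch, using the model names listed in the gotcha (pick one provider; the commented lines are alternatives, not required):

```python
import dspy

# LiteLLM 'provider/model' strings select the backend:
dspy.configure(lm=dspy.LM('openai/gpt-4o'))                  # OpenAI
# dspy.configure(lm=dspy.LM('anthropic/claude-sonnet-4-5'))  # Anthropic
# dspy.configure(lm=dspy.LM('ollama/llama3.2'))              # local Ollama
```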
Install
-
pip install dspy
-
pip install dspy-ai
Imports
- configure LM
dspy.configure(lm=dspy.LM('openai/gpt-4o'))
- Signature fields
class MySig(dspy.Signature):
    question: str = dspy.InputField()
    answer: str = dspy.OutputField()
- dspy.Program
class MyProgram(dspy.Module): ...
- BaseType
dspy.Type
- Optimizers
from dspy.teleprompt import MIPROv2, BootstrapFewShot
Quickstart
import dspy

# Configure LM (uses LiteLLM-style model strings)
dspy.configure(lm=dspy.LM('openai/gpt-4o-mini'))

# Define a Signature
class QA(dspy.Signature):
    """Answer questions with short factual answers."""
    question: str = dspy.InputField()
    answer: str = dspy.OutputField(desc='often a few words')

# Define a Module
class CoTQA(dspy.Module):
    def __init__(self):
        super().__init__()
        self.generate = dspy.ChainOfThought(QA)

    def forward(self, question):
        return self.generate(question=question)

# Run the module
program = CoTQA()
result = program(question='What is the capital of France?')
print(result.answer)
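Once a program like this runs, the optimizers listed at the top tune it against a metric. A minimal sketch of the metric side, assuming DSPy's (example, pred, trace=None) metric calling convention; exact_match and the SimpleNamespace stand-ins are illustrative, not part of the DSPy API, and the compile call is shown commented because it requires a configured LM and a trainset:

```python
from types import SimpleNamespace

# DSPy metrics take (example, pred, trace=None) and return a bool or float.
def exact_match(example, pred, trace=None):
    return example.answer.strip().lower() == pred.answer.strip().lower()

# Stand-ins for a gold example and a module prediction, for illustration only:
gold = SimpleNamespace(answer='Paris')
pred = SimpleNamespace(answer=' paris ')
score = exact_match(gold, pred)  # True

# With a trainset of dspy.Example objects, compilation would look like:
# from dspy.teleprompt import BootstrapFewShot
# optimized = BootstrapFewShot(metric=exact_match).compile(program, trainset=trainset)
```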