Instructor

1.14.5 · active · verified Sat Feb 28

Structured data extraction from LLMs via Pydantic models. Patches or wraps provider clients (OpenAI, Anthropic, Gemini, Cohere, Mistral, Groq, Ollama, and 15+ others) to add response_model, automatic validation, and retry logic. Uses tool-calling or JSON mode depending on the provider. Core interface: client.chat.completions.create(response_model=MyModel, ...) returns a validated Pydantic instance. Maintained by Jason Liu / jxnl.

Warnings

Install

pip install instructor
Imports

import instructor
from pydantic import BaseModel
Quickstart

from_provider() is the 1.x unified interface. For per-provider clients use instructor.from_openai(), instructor.from_anthropic(), etc.

import instructor
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

# Unified provider interface (1.x recommended)
client = instructor.from_provider('openai/gpt-4o-mini')

user = client.chat.completions.create(
    response_model=User,
    messages=[{'role': 'user', 'content': 'John is 25 years old'}],
)
print(user)  # User(name='John', age=25)
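Retries are driven by Pydantic validation: pass max_retries to create, and on a failed parse instructor feeds the ValidationError back to the model and re-asks. A minimal sketch of the validator side, using plain Pydantic with no API call (the age constraint is a hypothetical example, not part of instructor):

```python
from pydantic import BaseModel, ValidationError, field_validator

class User(BaseModel):
    name: str
    age: int

    @field_validator('age')
    @classmethod
    def check_age(cls, v: int) -> int:
        # Hypothetical constraint for illustration.
        if v < 0:
            raise ValueError('age must be non-negative')
        return v

# On a failed parse, instructor appends errors like this to the
# conversation and re-asks the model (up to max_retries attempts).
try:
    User.model_validate({'name': 'John', 'age': -5})
    errors = None
except ValidationError as e:
    errors = e.errors()

print(errors[0]['loc'])  # ('age',)
```

With such a model, client.chat.completions.create(response_model=User, max_retries=3, ...) will re-prompt until the output validates or the retry budget is exhausted.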
