Trustcall
Trustcall is a Python library that provides tenacious and trustworthy tool calling capabilities built on LangGraph. It addresses common challenges with Large Language Models (LLMs) in generating and updating complex, nested JSON schemas by employing a 'patch-don't-post' methodology: the model emits targeted patches against an existing document instead of regenerating it wholesale. This approach enables faster, cheaper, and more resilient structured output generation, as well as accurate updates to existing schemas without information loss. The library is under active development; as of this writing the current release is 0.0.39, and new releases appear regularly.
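To make 'patch-don't-post' concrete: instead of asking the model to re-emit an entire JSON document on every correction, Trustcall asks it for small JSON Patch (RFC 6902) operations against the existing document. The following dependency-free sketch applies a tiny subset of such operations; it is illustrative only and is not Trustcall's internal patch machinery.

```python
import copy
from typing import Any, Dict, List

def apply_patch(doc: Dict[str, Any], ops: List[Dict[str, Any]]) -> Dict[str, Any]:
    """Apply a minimal subset of RFC 6902 ops (add/replace on dicts and lists)."""
    result = copy.deepcopy(doc)
    for op in ops:
        parts = op["path"].lstrip("/").split("/")
        target = result
        for key in parts[:-1]:
            target = target[int(key)] if isinstance(target, list) else target[key]
        last = parts[-1]
        if isinstance(target, list):
            if last == "-":  # "-" means append to the end of a list (RFC 6902)
                target.append(op["value"])
            else:
                target[int(last)] = op["value"]
        else:
            target[last] = op["value"]  # add and replace collapse for dicts here
    return result

profile = {"name": "Alice", "hobbies": ["reading", "hiking"]}
patched = apply_patch(profile, [{"op": "add", "path": "/hobbies/-", "value": "painting"}])
print(patched)  # {'name': 'Alice', 'hobbies': ['reading', 'hiking', 'painting']}
```

The key economy: the model only has to produce the one-line patch, not the whole profile, so existing fields cannot be dropped or mistyped during an update.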
Warnings
- gotcha Trustcall relies on Pydantic models for schema definition, and its examples often use `pydantic.v1` imports. While Pydantic V2 is backward compatible, users might encounter unexpected behavior if mixing V1 and V2 syntax without careful consideration, especially for complex custom validators or field definitions.
- gotcha Specific LLM integrations within the LangChain ecosystem may not work seamlessly with Trustcall out-of-the-box, even if they generally support tool calling. Known issues have been reported with `ChatLlamaCpp` and certain Gemini models (e.g., `gemini-1.5-pro-002` via `ChatVertexAI`).
- breaking Under certain conditions, particularly with complex nested schemas or during iterative patching, internal errors related to 'patch application failure' or 'errors in _ExtractUpdates' might occur, and these may not propagate clearly, making debugging challenging. This could lead to an incomplete or incorrect final extracted schema.
- deprecated As Trustcall is built on LangGraph, it can inherit deprecation warnings from LangGraph itself, such as `LangGraphDeprecatedSinceV10: Importing Send from langgraph.constants is deprecated`. While these might not directly break Trustcall's functionality, they indicate underlying library changes.
Install
- Core library:
pip install -U trustcall
- With Fireworks support (used in the quickstart below):
pip install -U trustcall langchain-fireworks
Imports
- create_extractor
from trustcall import create_extractor
Quickstart
import os
from typing import List
from langchain_fireworks import ChatFireworks
from pydantic.v1 import BaseModel, Field, validator
from trustcall import create_extractor
# Ensure FIREWORKS_API_KEY is set in your environment before running:
# os.environ["FIREWORKS_API_KEY"] = "YOUR_FIREWORKS_API_KEY"
class Preferences(BaseModel):
    foods: List[str] = Field(description="Favorite foods")

    @validator("foods")
    def at_least_three_foods(cls, v):
        # This validator demonstrates error recovery: validation failures
        # are fed back to the model, which patches its output.
        if len(v) < 3:
            raise ValueError("Must have at least three favorite foods")
        return v
llm = ChatFireworks(model="accounts/fireworks/models/firefunction-v2")
extractor = create_extractor(llm, tools=[Preferences], tool_choice="Preferences")
# Example 1: Initial extraction with validation error, Trustcall recovers
res = extractor.invoke({"messages": [("user", "I like apple pie and ice cream.")]})
msg = res["messages"][-1]
print("Initial extraction (recovered):", msg.tool_calls)
# Expected output for foods list is now >= 3 items due to recovery
# Example 2: Updating an existing schema
class UserProfile(BaseModel):
    name: str
    hobbies: List[str]
existing_profile = UserProfile(name="Alice", hobbies=["reading", "hiking"])
update_extractor = create_extractor(
    llm,
    tools=[UserProfile],
    tool_choice="UserProfile",
)
updated_res = update_extractor.invoke({
    "messages": [("user", "My new hobby is painting.")],
    # Pass the existing record keyed by tool name; a plain dict
    # (pydantic.v1's .dict()) keeps the payload JSON-serializable.
    "existing": {"UserProfile": existing_profile.dict()},
})
updated_msg = updated_res["messages"][-1]
print("\nUpdated profile:", updated_msg.tool_calls)
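The `Preferences` validator in the quickstart works because Trustcall feeds validation errors back to the model and retries with a patch instead of failing outright. The loop below is a hand-rolled conceptual sketch of that retry behavior, not Trustcall's actual implementation; `fake_llm` is a stand-in for a real model call.

```python
from typing import Callable, List

def extract_with_retries(
    llm: Callable[[str], List[str]], prompt: str, max_retries: int = 3
) -> List[str]:
    """Call the model, validate the result, and re-prompt with the error on failure."""
    for _ in range(max_retries):
        foods = llm(prompt)
        if len(foods) >= 3:  # same rule as the Preferences validator above
            return foods
        # Append the validation error so the next attempt can correct itself.
        prompt += " (error: must have at least three favorite foods)"
    raise ValueError("validation failed after retries")

# Toy model: produces a third item only after "seeing" the error feedback.
def fake_llm(prompt: str) -> List[str]:
    if "error" in prompt:
        return ["apple pie", "ice cream", "tacos"]
    return ["apple pie", "ice cream"]

print(extract_with_retries(fake_llm, "I like apple pie and ice cream."))
# ['apple pie', 'ice cream', 'tacos']
```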