LangWatch Python SDK

0.18.0 · active · verified Thu Apr 16

LangWatch is a Python SDK for monitoring, evaluating, and testing LLM-powered applications and AI agents. It provides end-to-end observability by capturing traces and spans for LLM calls, RAG retrievals, and other pipeline steps, helping developers debug issues, prevent regressions, and optimize their AI systems. The library is under active development with regular releases.

Install

pip install langwatch

Quickstart

This quickstart demonstrates how to initialize LangWatch, enable automatic OpenAI instrumentation, and trace an asynchronous function that calls the OpenAI API. It highlights `langwatch.setup()` and the `@langwatch.trace()` decorator, along with reading the `LANGWATCH_API_KEY` environment variable.

import asyncio
import os

import langwatch
from langwatch.instrumentors import OpenAIInstrumentor
from openai import AsyncOpenAI

# Ensure your API key is set as an environment variable or pass it directly
# os.environ["LANGWATCH_API_KEY"] = "YOUR_LANGWATCH_API_KEY"
api_key = os.environ.get("LANGWATCH_API_KEY", "")

if not api_key:
    print("Warning: LANGWATCH_API_KEY environment variable not set. Traces will not be sent.")
    # For demonstration we proceed anyway; in a real application you might
    # raise an error or configure LangWatch to log locally only.

# Initialize LangWatch early in your application.
# The instrumentor automatically captures OpenAI calls made inside traced functions.
langwatch.setup(
    api_key=api_key,
    instrumentors=[OpenAIInstrumentor()],
)

# The async client matches the async function below; OPENAI_API_KEY is read
# from the environment by default.
client = AsyncOpenAI()

@langwatch.trace(name="UserInteraction")
async def handle_user_query(query: str):
    print(f"Processing query: {query}")
    try:
        response = await client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": query},
            ],
        )
        result = response.choices[0].message.content
        print(f"Assistant's response: {result}")
        return result
    except Exception as e:
        langwatch.get_current_trace().error(str(e))  # Record the error on the trace
        print(f"An error occurred: {e}")
        raise

async def main():
    await handle_user_query("Tell me a fun fact about Python.")
    # Ensure traces are flushed before the application exits
    await langwatch.shutdown()

if __name__ == "__main__":
    asyncio.run(main())
