Literal AI SDK

0.1.300 · active · verified Wed Apr 15

The Literal AI Python SDK provides observability for large language model (LLM) applications. It enables developers to trace, monitor, and debug their LLM interactions and application flows directly from their Python code. As of version 0.1.300, it is actively developed with frequent releases in the 0.1.x series, so interfaces may still be refined between minor versions.

Warnings

Install

Imports

Quickstart

This quickstart initializes the Literal AI client and traces a simple function standing in for an LLM call using the `@client.trace` decorator. It checks for the `LITERAL_API_KEY` environment variable so the example skips gracefully when no key is configured.

import os
from literalai import LiteralClient

# Initialize the client with your API key
# Ensure LITERAL_API_KEY environment variable is set or pass it directly
client = LiteralClient(api_key=os.environ.get("LITERAL_API_KEY", ""))

@client.trace(name="my_simple_llm_call_trace")
def my_simple_llm_call(prompt: str):
    """Simulates an LLM call and returns a response."""
    print(f"Processing prompt: {prompt}")
    response = f"Simulated response to: {prompt}"
    # In a real scenario, this would involve calling an actual LLM
    # e.g., OpenAI, Anthropic, etc., and potentially logging its output
    return {"output": response, "model": "simulated-model-v1"}

# Run the traced function
if os.environ.get("LITERAL_API_KEY"):
    result = my_simple_llm_call("Tell me a short story about a brave knight.")
    print(f"Trace result: {result}")
else:
    print("LITERAL_API_KEY not set. Skipping trace execution. Please set it to run the example.")
