Literal AI SDK
The Literal AI Python SDK provides observability for large language model (LLM) applications. It enables developers to trace, monitor, and debug their LLM interactions and application flows directly from their Python code. As of version 0.1.300, it's an actively developed library with frequent releases in its 0.1.x series, indicating ongoing feature development and potential API refinements.
Warnings
- gotcha The `LITERAL_API_KEY` environment variable is required for authentication with the Literal AI platform. While older versions (specifically around `0.1.139`) experimented with `LITERAL_CLIENT_ID` and `LITERAL_CLIENT_SECRET`, the current recommended and supported method for the `0.1.x` series uses a single `LITERAL_API_KEY`.
- breaking As the library is in its `0.1.x` release series, API contracts can change between minor versions (e.g., `0.1.X` to `0.1.Y`) without strict adherence to semantic versioning. Frequent updates may introduce breaking changes to method signatures, class names, or expected parameters.
- gotcha When working with asynchronous code (e.g., `async def` functions), make sure every async Literal AI call is awaited. Calling a coroutine without `await` creates a coroutine object that never runs, which can produce `RuntimeWarning: coroutine ... was never awaited`, unhandled exceptions, or silently missing traces.
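The async gotcha above can be demonstrated without the SDK at all. The sketch below uses a hypothetical `send_trace` coroutine standing in for any async SDK call; the name is illustrative and not part of the literalai API:

```python
import asyncio

# Hypothetical async call standing in for any SDK coroutine;
# the name is illustrative, not part of the literalai API.
async def send_trace(name: str) -> str:
    return f"traced:{name}"

async def main() -> list[str]:
    results = []
    # Wrong (shown as a comment): send_trace("step-1") with no await
    # would create a coroutine object that never executes and trigger
    # "RuntimeWarning: coroutine 'send_trace' was never awaited".
    # Right: await each async call so it actually runs to completion.
    results.append(await send_trace("step-1"))
    results.append(await send_trace("step-2"))
    return results

results = asyncio.run(main())
print(results)
```

The same rule applies at the top level: use `asyncio.run(...)` (or an existing event loop) to drive async entry points, never a bare call.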
Install
-
pip install literalai
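As noted in the warnings, authentication relies on the `LITERAL_API_KEY` environment variable. A typical shell setup, with a placeholder value:

```shell
# Set the API key for the current shell session (placeholder value).
export LITERAL_API_KEY="your_api_key_here"

# Persist across sessions by appending the export line to your shell
# profile, e.g. ~/.bashrc or ~/.zshrc.
```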
Imports
- LiteralClient
from literalai import LiteralClient
- get_literal_context
from literalai import get_literal_context
Quickstart
import os

from literalai import LiteralClient

# Initialize the client with your API key.
# Ensure the LITERAL_API_KEY environment variable is set, or pass the key directly.
client = LiteralClient(api_key=os.environ.get("LITERAL_API_KEY", ""))

@client.step(type="run", name="my_simple_llm_call")
def my_simple_llm_call(prompt: str):
    """Simulates an LLM call and returns a response."""
    print(f"Processing prompt: {prompt}")
    response = f"Simulated response to: {prompt}"
    # In a real scenario, this would call an actual LLM
    # (e.g., OpenAI, Anthropic) and log its output.
    return {"output": response, "model": "simulated-model-v1"}

# Run the instrumented function only when credentials are available.
if os.environ.get("LITERAL_API_KEY"):
    result = my_simple_llm_call("Tell me a short story about a brave knight.")
    print(f"Result: {result}")
    client.flush_and_stop()  # flush queued events before the process exits
else:
    print("LITERAL_API_KEY not set. Skipping execution. Please set it to run the example.")
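Conceptually, decorator-based instrumentation like the quickstart above wraps a function, records its inputs, output, and timing, and forwards that record to a backend. A minimal, self-contained sketch of the pattern follows; this is not literalai's actual implementation, and every name in it is hypothetical:

```python
import functools
import time

# Minimal sketch of decorator-based tracing. All names here are
# hypothetical illustrations, not part of the literalai API.
collected_steps = []

def trace_step(name: str):
    """Decorator factory: records each call as a step dict."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            output = fn(*args, **kwargs)
            # A real SDK would ship this record to its backend;
            # here we just append it to an in-memory list.
            collected_steps.append({
                "name": name,
                "input": {"args": args, "kwargs": kwargs},
                "output": output,
                "duration_s": time.perf_counter() - start,
            })
            return output
        return wrapper
    return decorator

@trace_step(name="simulated_llm_call")
def simulated_llm_call(prompt: str) -> str:
    return f"Simulated response to: {prompt}"

reply = simulated_llm_call("Hello")
print(reply, len(collected_steps))
```

The decorated function behaves exactly as before for callers; the wrapper only observes the call, which is why this style of instrumentation can be added to existing code with minimal changes.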