Prompt Flow Tracing
The `promptflow-tracing` package provides tracing capabilities for Prompt Flow, enabling the capture and visualization of internal execution processes for both DAG and Flex flows. It's designed to be compatible with OpenTelemetry, offering comprehensive observability for LLM-based applications, including those using frameworks like Langchain or OpenAI. The current version is 1.18.4, and the library is actively developed with frequent releases.
Warnings
- **Breaking:** Tracing is disabled by default as of Prompt Flow version 1.17.0/1.17.1. Flows no longer generate traces automatically unless tracing is explicitly enabled.
- **Deprecated:** Python 3.8 support was dropped in Prompt Flow version 1.17.0 for security reasons. Users on Python 3.8 will need to upgrade their Python environment.
- **Gotcha:** When deploying Prompt Flow applications with tracing enabled, a `TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'` may occur in `promptflow/tracing/_trace.py`. This issue is related to incomplete token-usage telemetry data during deployment.
- **Gotcha:** Users have reported instances where `promptflow-tracing` emits multiple unexpected traces (e.g., 4 traces instead of 1), especially when interacting with other frameworks such as AutoGen. This may indicate default instrumentation beyond explicit calls.
- **Gotcha:** Since Prompt Flow version 1.8.0, the main `promptflow` package has been split into several sub-packages, including `promptflow-tracing`, `promptflow-core`, and `promptflow-devkit`. While `pip install promptflow` still installs these sub-packages, direct dependencies on `promptflow-tracing` or other specific sub-packages may require explicit installation or careful version management.
Install
```shell
pip install promptflow-tracing
```

Or install the full `promptflow` meta-package, which includes it:

```shell
pip install promptflow
```
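After installing either package, you can probe whether the tracing module is importable (the distribution is `promptflow-tracing`, but the module path is `promptflow.tracing`). A minimal check:

```python
import importlib.util

# find_spec on a dotted name imports the parent package first,
# so guard against promptflow itself being absent.
try:
    spec = importlib.util.find_spec("promptflow.tracing")
except ModuleNotFoundError:
    spec = None

print("promptflow-tracing importable:", spec is not None)
```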
Imports
- `start_trace`

```python
from promptflow.tracing import start_trace
```

- `trace`

```python
from promptflow.tracing import trace
```
Quickstart
```python
import os

from openai import OpenAI

from promptflow.tracing import start_trace, trace

# Ensure OPENAI_API_KEY is set in your environment,
# or pass it explicitly via OpenAI(api_key=...).
# For Azure OpenAI, set AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT,
# AZURE_OPENAI_API_VERSION, and AZURE_OPENAI_DEPLOYMENT_NAME instead.

# Start tracing. This instruments supported libraries such as OpenAI.
start_trace()

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", ""))

@trace
def poetic_explanation(concept: str) -> str:
    try:
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
                {"role": "user", "content": f"Compose a short poem that explains the concept of {concept} in programming."},
            ],
        )
        return completion.choices[0].message.content
    except Exception as e:
        print(f"Error calling OpenAI: {e}")
        return "Failed to get a poetic explanation."

if os.environ.get("OPENAI_API_KEY"):  # Only run if an API key is present
    print("--- Tracing LLM call with start_trace() ---")
    poem = poetic_explanation("recursion")
    print(poem)
    print("\nCheck your console for a URL to the trace UI (requires promptflow-devkit).")
else:
    print("Skipping quickstart: OPENAI_API_KEY environment variable not set.")
```
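The `@trace` decorator is not limited to LLM calls: any decorated function becomes a span, and a traced function that calls another traced function nests the callee's span under its own. A minimal sketch (the `tokenize`/`word_count` names are illustrative; it falls back to a no-op decorator so it also runs where `promptflow-tracing` is not installed, and with the real package you would call `start_trace()` first to collect and view the spans):

```python
try:
    from promptflow.tracing import trace
except ImportError:
    def trace(func):  # no-op stand-in so the sketch runs anywhere
        return func

@trace
def tokenize(text: str) -> list:
    return text.split()

@trace
def word_count(text: str) -> int:
    # Calling another traced function nests its span under this one.
    return len(tokenize(text))

print(word_count("to be or not to be"))
```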