OpenTelemetry Instrumentation for CrewAI
This library provides automatic distributed tracing and observability for agentic workflows built with the CrewAI framework. It captures performance and operational data, including LLM calls and agent steps, using the OpenTelemetry semantic conventions for generative AI.
Warnings
- breaking The OpenTelemetry Generative AI Semantic Conventions are under active development and frequently change. Recent library versions (0.53.0 onwards) have updated to support these evolving conventions, which may alter span and attribute names or structures in your traces.
- gotcha By default, this instrumentation logs prompts, completions, and embeddings to span attributes. This may expose highly sensitive data in your traces.
- gotcha For comprehensive tracing that includes the underlying Large Language Model (LLM) calls made by CrewAI agents, you must instrument *both* `opentelemetry-instrumentation-crewai` and the specific OpenTelemetry instrumentation for your chosen LLM provider (e.g., `opentelemetry-instrumentation-openai`, `opentelemetry-instrumentation-anthropic`).
- gotcha OpenTelemetry instrumentation must be enabled *before* any CrewAI agents or tasks are defined or executed within your application. If not, early operations might not be traced.
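If capturing prompts and completions is a concern, content logging can typically be switched off before instrumenting. A minimal sketch, assuming this instrumentation honors the `TRACELOOP_TRACE_CONTENT` environment variable used across the OpenLLMetry family of instrumentations (verify against your installed version):

```python
import os

# Assumption: the instrumentor reads TRACELOOP_TRACE_CONTENT when it is set up,
# so this must run before calling .instrument(). With "false", prompts,
# completions, and embeddings are omitted from span attributes.
os.environ["TRACELOOP_TRACE_CONTENT"] = "false"
```

Set the variable in the process environment (or deployment config) rather than in code if you want the behavior to apply uniformly across services.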
Install
pip install opentelemetry-instrumentation-crewai crewai opentelemetry-sdk opentelemetry-instrumentation-openai
Imports
- CrewAIInstrumentor
from opentelemetry.instrumentation.crewai import CrewAIInstrumentor
- OpenAIInstrumentor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
Quickstart
import os
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.crewai import CrewAIInstrumentor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor  # For LLM calls within CrewAI

# Configure OpenTelemetry to export traces to the console
resource = Resource.create({"service.name": "crewai-otel-example"})
tracer_provider = TracerProvider(resource=resource)
# SimpleSpanProcessor exports each span as it ends; prefer BatchSpanProcessor in production
span_processor = SimpleSpanProcessor(ConsoleSpanExporter())
tracer_provider.add_span_processor(span_processor)
trace.set_tracer_provider(tracer_provider)
# Instrument CrewAI and the underlying LLM (e.g., OpenAI)
# It's crucial to instrument *both* for full visibility across agentic workflow and LLM calls.
CrewAIInstrumentor().instrument()
OpenAIInstrumentor().instrument() # Instrument your LLM provider as well
# Ensure API key is set (e.g., from environment variable)
# Replace with your actual LLM API key if not using OpenAI or different env var
if not os.environ.get("OPENAI_API_KEY"):
print("Please set the OPENAI_API_KEY environment variable to run this example.")
exit(1)
# Minimal CrewAI example
from crewai import Agent, Task, Crew, Process
# Define your agents
researcher = Agent(
role='Senior Research Analyst',
goal='Uncover groundbreaking insights about AI advancements',
backstory='A meticulous analyst dedicated to revealing the next big thing in AI.',
verbose=True,
allow_delegation=False
)
writer = Agent(
role='Content Strategist',
goal='Craft compelling narratives about AI innovations',
backstory='A skilled writer who transforms complex technical topics into engaging stories.',
verbose=True,
allow_delegation=False
)
# Define your tasks
task1 = Task(
    description='Research the latest trends in generative AI for the year 2026.',
    expected_output='A bullet-point list of the most significant generative AI trends.',
    agent=researcher
)
task2 = Task(
    description='Write a blog post summary (approx 200 words) based on the research findings.',
    expected_output='A blog post summary of roughly 200 words.',
    agent=writer
)
# Form the crew
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
process=Process.sequential,
    verbose=True
)
# Kickoff the crew
print("\n### CrewAI Workflow Started (traces will be printed to console) ###")
result = crew.kickoff()
print("\n### CrewAI Workflow Finished ###")
print(f"\nFinal Result: {result}")
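To ship traces to a backend instead of the console, the standard OpenTelemetry SDK environment variables can replace the manual exporter setup above. This sketch assumes you run the app under the `opentelemetry-instrument` CLI (from `opentelemetry-instrumentation`) with `opentelemetry-exporter-otlp` installed; the endpoint shown is a placeholder for your collector's OTLP/gRPC address:

```shell
# Standard OTel SDK configuration via environment variables.
# http://localhost:4317 is a placeholder; point it at your OTLP collector.
export OTEL_SERVICE_NAME="crewai-otel-example"
export OTEL_TRACES_EXPORTER="otlp"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
```

Note that these variables take effect when the SDK is configured by the auto-instrumentation entry point; a manually constructed `TracerProvider` like the one in the quickstart overrides the exporter choice.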