OpenTelemetry Instrumentation for CrewAI

0.58.0 · active · verified Thu Apr 09

OpenTelemetry CrewAI instrumentation. This library provides automatic distributed tracing and observability for agentic workflows built with the CrewAI framework. It captures performance and operational telemetry, including LLM calls and agent steps, following the OpenTelemetry semantic conventions for generative AI.

Warnings

Install
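
The exact distribution names are an assumption inferred from this library's title and the import paths used in the quickstart below; adjust them if your registry names differ. The OpenAI instrumentation is only needed because the example uses an OpenAI LLM:

```shell
# Package names inferred from the quickstart's import paths (assumption)
pip install opentelemetry-instrumentation-crewai opentelemetry-instrumentation-openai
```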

Imports

Quickstart

This quickstart demonstrates how to set up OpenTelemetry for `opentelemetry-instrumentation-crewai` using a `ConsoleSpanExporter`. It initializes the OpenTelemetry `TracerProvider`, instruments `CrewAI` and `OpenAI` (assuming an OpenAI LLM is used by CrewAI), and then runs a basic CrewAI multi-agent workflow. Traces for agent steps and LLM calls will be printed to the console.

import os
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.crewai import CrewAIInstrumentor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor # For LLM calls within CrewAI

# Configure OpenTelemetry to export traces to the console
resource = Resource.create({"service.name": "crewai-otel-example"})
tracer_provider = TracerProvider(resource=resource)
span_processor = SimpleSpanProcessor(ConsoleSpanExporter())
tracer_provider.add_span_processor(span_processor)
trace.set_tracer_provider(tracer_provider)

# Instrument CrewAI and the underlying LLM (e.g., OpenAI)
# It's crucial to instrument *both* for full visibility across agentic workflow and LLM calls.
CrewAIInstrumentor().instrument()
OpenAIInstrumentor().instrument() # Instrument your LLM provider as well

# Ensure API key is set (e.g., from environment variable)
# Replace with your actual LLM API key if not using OpenAI or different env var
if not os.environ.get("OPENAI_API_KEY"):
    print("Please set the OPENAI_API_KEY environment variable to run this example.")
    exit(1)

# Minimal CrewAI example
from crewai import Agent, Task, Crew, Process

# Define your agents
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover groundbreaking insights about AI advancements',
    backstory='A meticulous analyst dedicated to revealing the next big thing in AI.',
    verbose=True,
    allow_delegation=False
)

writer = Agent(
    role='Content Strategist',
    goal='Craft compelling narratives about AI innovations',
    backstory='A skilled writer who transforms complex technical topics into engaging stories.',
    verbose=True,
    allow_delegation=False
)

# Define your tasks (expected_output is required by recent CrewAI versions)
task1 = Task(
    description='Research the latest trends in generative AI for the year 2026.',
    expected_output='A bullet-point list of the most significant generative AI trends.',
    agent=researcher
)

task2 = Task(
    description='Write a blog post summary (approx 200 words) based on the research findings.',
    expected_output='An engaging blog post summary of roughly 200 words.',
    agent=writer
)

# Form the crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    process=Process.sequential,
    verbose=True  # recent CrewAI versions expect a boolean here, not an int level
)

# Kickoff the crew
print("\n### CrewAI Workflow Started (traces will be printed to console) ###")
result = crew.kickoff()
print("\n### CrewAI Workflow Finished ###")
print(f"\nFinal Result: {result}")
