{"id":2157,"library":"opentelemetry-instrumentation-crewai","title":"OpenTelemetry Instrumentation for CrewAI","description":"OpenTelemetry instrumentation for CrewAI. This library provides automatic distributed tracing and observability for agentic workflows built with the CrewAI framework. It captures performance and operational data, including LLM calls and agent steps, using the OpenTelemetry semantic conventions for generative AI.","status":"active","version":"0.58.0","language":"en","source_language":"en","source_url":"https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-crewai","tags":["opentelemetry","observability","tracing","crewai","llm","ai-agents","instrumentation"],"install":[{"cmd":"pip install opentelemetry-instrumentation-crewai crewai opentelemetry-sdk opentelemetry-instrumentation-openai","lang":"bash","label":"Install with the OpenTelemetry SDK (which includes the console exporter) and the OpenAI instrumentation"}],"dependencies":[{"reason":"The core framework being instrumented, required for the instrumentation to function.","package":"crewai"},{"reason":"Provides the OpenTelemetry API interfaces.","package":"opentelemetry-api"},{"reason":"Provides the OpenTelemetry SDK implementation.","package":"opentelemetry-sdk"},{"reason":"Base package for OpenTelemetry instrumentations.","package":"opentelemetry-instrumentation"},{"reason":"Provides semantic conventions specific to AI and LLM operations, crucial for meaningful traces.","package":"opentelemetry-semantic-conventions-ai"}],"imports":[{"note":"The main class to enable CrewAI-specific tracing.","symbol":"CrewAIInstrumentor","correct":"from opentelemetry.instrumentation.crewai import CrewAIInstrumentor"},{"note":"Often needed alongside CrewAI instrumentation for comprehensive tracing of LLM calls made by agents.","symbol":"OpenAIInstrumentor","correct":"from opentelemetry.instrumentation.openai import OpenAIInstrumentor"}],"quickstart":{"code":"import os\nfrom 
opentelemetry import trace\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor\nfrom opentelemetry.instrumentation.crewai import CrewAIInstrumentor\nfrom opentelemetry.instrumentation.openai import OpenAIInstrumentor  # For LLM calls within CrewAI\n\n# Configure OpenTelemetry to export traces to the console\nresource = Resource.create({\"service.name\": \"crewai-otel-example\"})\ntracer_provider = TracerProvider(resource=resource)\nspan_processor = SimpleSpanProcessor(ConsoleSpanExporter())\ntracer_provider.add_span_processor(span_processor)\ntrace.set_tracer_provider(tracer_provider)\n\n# Instrument CrewAI and the underlying LLM (e.g., OpenAI).\n# Instrument *both* for full visibility across the agentic workflow and its LLM calls.\nCrewAIInstrumentor().instrument()\nOpenAIInstrumentor().instrument()  # Instrument your LLM provider as well\n\n# Ensure the API key is set (adjust the variable name for other LLM providers)\nif not os.environ.get(\"OPENAI_API_KEY\"):\n    print(\"Please set the OPENAI_API_KEY environment variable to run this example.\")\n    raise SystemExit(1)\n\n# Minimal CrewAI example\nfrom crewai import Agent, Task, Crew, Process\n\n# Define your agents\nresearcher = Agent(\n    role='Senior Research Analyst',\n    goal='Uncover groundbreaking insights about AI advancements',\n    backstory='A meticulous analyst dedicated to revealing the next big thing in AI.',\n    verbose=True,\n    allow_delegation=False\n)\n\nwriter = Agent(\n    role='Content Strategist',\n    goal='Craft compelling narratives about AI innovations',\n    backstory='A skilled writer who transforms complex technical topics into engaging stories.',\n    verbose=True,\n    allow_delegation=False\n)\n\n# Define your tasks\ntask1 = Task(\n    description='Research the latest trends in 
generative AI for the year 2026.',\n    expected_output='A bullet-point summary of the most significant trends.',\n    agent=researcher\n)\n\ntask2 = Task(\n    description='Write a blog post summary (approx 200 words) based on the research findings.',\n    expected_output='A blog post of roughly 200 words.',\n    agent=writer\n)\n\n# Form the crew\ncrew = Crew(\n    agents=[researcher, writer],\n    tasks=[task1, task2],\n    process=Process.sequential,\n    verbose=True\n)\n\n# Kick off the crew\nprint(\"\\n### CrewAI Workflow Started (traces will be printed to console) ###\")\nresult = crew.kickoff()\nprint(\"\\n### CrewAI Workflow Finished ###\")\nprint(f\"\\nFinal Result: {result}\")\n","lang":"python","description":"This quickstart configures OpenTelemetry for `opentelemetry-instrumentation-crewai` with a `ConsoleSpanExporter` (shipped with `opentelemetry-sdk`). It initializes the OpenTelemetry `TracerProvider`, instruments `CrewAI` and `OpenAI` (assuming the CrewAI agents use an OpenAI LLM), and then runs a basic CrewAI multi-agent workflow. Traces for agent steps and LLM calls are printed to the console. Note that recent CrewAI versions require an `expected_output` on each `Task` and a boolean `verbose` flag on `Crew`."},"warnings":[{"fix":"Monitor the OpenTelemetry GenAI Semantic Conventions documentation for changes. To control which version of the conventions is emitted, set the `OTEL_SEMCONV_STABILITY_OPT_IN` environment variable (e.g., `OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_latest_experimental`) if your instrumentation supports it.","message":"The OpenTelemetry Generative AI Semantic Conventions are under active development and change frequently. Recent library versions (0.53.0 onwards) have been updated to support these evolving conventions, which may alter span and attribute names or structures in your traces.","severity":"breaking","affected_versions":">=0.53.0"},{"fix":"To disable logging of potentially sensitive content, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false` (e.g., `TRACELOOP_TRACE_CONTENT=false`).","message":"By default, this instrumentation logs prompts, completions, and embeddings to span attributes. 
This may expose highly sensitive data in your traces.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Ensure you install and call `instrument()` on the instrumentor for your LLM provider in addition to `CrewAIInstrumentor`. For example: `OpenAIInstrumentor().instrument()`.","message":"For comprehensive tracing that includes the underlying Large Language Model (LLM) calls made by CrewAI agents, you must instrument *both* `opentelemetry-instrumentation-crewai` and the specific OpenTelemetry instrumentation for your chosen LLM provider (e.g., `opentelemetry-instrumentation-openai`, `opentelemetry-instrumentation-anthropic`).","severity":"gotcha","affected_versions":"All versions"},{"fix":"Place all OpenTelemetry setup and instrumentation calls at the very beginning of your application's entry point, before importing or instantiating CrewAI components.","message":"OpenTelemetry instrumentation must be enabled *before* any CrewAI agents or tasks are defined or executed within your application. If not, early operations might not be traced.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-09T00:00:00.000Z","next_check":"2026-07-08T00:00:00.000Z"}