{"id":2579,"library":"llama-index-workflows","title":"LlamaIndex Workflows","description":"LlamaIndex Workflows is an event-driven, async-first, step-based framework designed to control the execution flow of AI applications, especially agents. It allows developers to build complex, multi-step processes by orchestrating various components, including Large Language Models (LLMs) and external APIs, and to maintain state across different steps. The library is currently at version 2.17.3 and undergoes frequent updates to enhance features and improve performance. [1, 7, 25, 26]","status":"active","version":"2.17.3","language":"en","source_language":"en","source_url":"https://github.com/run-llama/workflows-py","tags":["AI","LLM","workflow","agent","orchestration","async","event-driven","LlamaIndex"],"install":[{"cmd":"pip install llama-index-workflows","lang":"bash","label":"Install LlamaIndex Workflows"},{"cmd":"pip install 'llama-index-workflows[openai]'  # Install with OpenAI dependencies","lang":"bash","label":"Install with OpenAI support"}],"dependencies":[{"reason":"Used for defining typed workflow events and state models.","package":"pydantic","optional":false},{"reason":"Provides core LlamaIndex functionalities like LLM abstractions. Workflows can be used standalone but often integrate with LlamaIndex components.","package":"llama-index-core","optional":true},{"reason":"Commonly used LLM integration for examples and agentic workflows.","package":"llama-index-llms-openai","optional":true},{"reason":"Optional integration for observability with tools like OpenTelemetry and Arize Phoenix.","package":"llama-index-instrumentation","optional":true}],"imports":[{"note":"When installing `llama-index-workflows` directly (the standalone package), core classes are imported from the top-level `workflows` package. 
If using `llama-index-core`, which re-exports Workflows, use `from llama_index.core.workflow import Workflow`.","wrong":"from llama_index.workflows import Workflow","symbol":"Workflow","correct":"from workflows import Workflow"},{"note":"Decorator that marks an async method of a `Workflow` subclass as a workflow step.","symbol":"step","correct":"from workflows import step"},{"symbol":"Context","correct":"from workflows import Context"},{"note":"There is no `WorkflowEvent` class; custom events subclass `Event`, which the standalone package exports from `workflows.events`.","wrong":"from workflows.events import WorkflowEvent","symbol":"Event","correct":"from workflows.events import Event"},{"note":"Special event class to initiate a workflow run.","symbol":"StartEvent","correct":"from workflows.events import StartEvent"},{"note":"Special event class to signal the completion of a workflow run.","symbol":"StopEvent","correct":"from workflows.events import StopEvent"}],"quickstart":{"code":"import asyncio\nimport os\nfrom pydantic import Field\nfrom workflows import Workflow, step\nfrom workflows.events import Event, StartEvent, StopEvent\nfrom llama_index.llms.openai import OpenAI  # Ensure llama-index-llms-openai is installed\n\n# Custom events subclass `Event` and carry typed payloads between steps\nclass JokeEvent(Event):\n    joke: str = Field(description=\"The generated joke.\")\n\nclass CritiqueEvent(Event):\n    joke: str = Field(description=\"The original joke.\")\n    critique: str = Field(description=\"The critique of the joke.\")\n\n# A workflow subclasses `Workflow`; its steps are async methods marked with @step.\n# Each step's input and output event types define the control flow.\nclass JokeFlow(Workflow):\n    # Requires the OPENAI_API_KEY environment variable for real LLM calls\n    llm = OpenAI(model=\"gpt-4o-mini\", api_key=os.environ.get(\"OPENAI_API_KEY\", \"\"))\n\n    @step\n    async def generate_joke(self, ev: StartEvent) -> JokeEvent:\n        # `StartEvent` exposes the keyword arguments passed to `run()`\n        print(f\"[Step] Generating joke about: {ev.topic}\")\n        response = await self.llm.acomplete(f\"Tell me a short joke about {ev.topic}.\")\n        return JokeEvent(joke=response.text)\n\n    @step\n    async def critique_joke(self, ev: JokeEvent) -> CritiqueEvent:\n        print(f\"[Step] Critiquing joke: {ev.joke}\")\n        response = await self.llm.acomplete(f\"Critique this joke and suggest an improvement: '{ev.joke}'\")\n        return CritiqueEvent(joke=ev.joke, critique=response.text)\n\n    @step\n    async def refine_joke(self, ev: CritiqueEvent) -> StopEvent:\n        print(f\"[Step] Refining joke based on critique: {ev.critique}\")\n        response = await self.llm.acomplete(f\"Original joke: '{ev.joke}'\\nCritique: '{ev.critique}'\\nRefine the joke based on the critique to make it funnier or clearer.\")\n        # Returning `StopEvent` ends the run; `result` becomes the run's return value\n        return StopEvent(result=response.text)\n\nasync def main():\n    workflow = JokeFlow(timeout=60, verbose=False)\n    print(\"Starting workflow to generate and refine a joke...\")\n    # Keyword arguments to `run()` populate the StartEvent\n    result = await workflow.run(topic=\"dogs\")\n    print(f\"\\n[Final Result] {result}\")\n\nif __name__ == \"__main__\":\n    # Without a valid OPENAI_API_KEY the workflow will start but LLM calls will fail\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        print(\"WARNING: OPENAI_API_KEY environment variable not set. LLM calls will fail.\")\n    asyncio.run(main())","lang":"python","description":"This quickstart defines a `Workflow` subclass whose async methods, marked with the `@step` decorator, are chained by their input and output event types: a `StartEvent` carries the joke topic, custom `Event` subclasses pass intermediate results between steps, and a `StopEvent` ends the run and returns the final joke. The example uses OpenAI as the LLM provider, requiring `llama-index-llms-openai` and an `OPENAI_API_KEY` environment variable."},"warnings":[{"fix":"Install `llama-index-workflows` and update imports to `from workflows import ...`, or `from llama_index.core.workflow import ...` if using the `llama-index-core` re-export. Consult the migration guide from `Query Pipelines` to Workflows.","message":"LlamaIndex Workflows became a standalone package (v1.0), deprecating older in-tree workflow implementations within `llama-index` itself (pre-0.11). Users migrating from `Query Pipelines` or the older internal `AgentRunner`/`AgentWorker` classes in `llama_index.core.agent` must refactor their code to use the new `workflows` package structure.","severity":"breaking","affected_versions":"<1.0.0 (of `llama-index-workflows`) and <0.11.0 (of `llama-index`)"},{"fix":"Ensure your workflow execution is within an `async` function and called with `asyncio.run()`: `async def main(): ...; if __name__ == \"__main__\": asyncio.run(main())`.","message":"LlamaIndex Workflows are designed as 'async-first': all steps and the `run` method are asynchronous. Users running workflows in non-async environments (e.g., top-level scripts) need to wrap their execution with `asyncio.run()` or similar. 
In Jupyter/Colab an event loop is already running, so the workflow can usually be awaited directly; plain script execution requires explicit async handling.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Implement `try-except` blocks around external calls, use retry libraries (e.g., `tenacity`) or built-in step retry policies, perform input validation, and integrate with external databases (e.g., Redis) for manual state snapshotting and recovery for long-running processes.","message":"Error handling is critical for robust workflows. External API calls (like to LLMs), data validation, and network issues can cause `WorkflowRuntimeError`, `WorkflowTimeoutError`, or other exceptions. Workflows do not automatically snapshot state for recovery without explicit integration.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Refactor `Query Pipelines` logic into a `Workflow` using `@step`-decorated methods and custom `Event` subclasses.","message":"The `Query Pipelines` feature in `llama-index` was deprecated in favor of Workflows in LlamaIndex version 0.11. Existing code using `Query Pipelines` should be migrated.","severity":"deprecated","affected_versions":"LlamaIndex <0.11.0"},{"fix":"Implement strategies like asynchronous operations, batch processing, pre-computing and caching embeddings, optimizing retrieval and reranking steps, and profiling code to identify and address bottlenecks. Leverage `asyncio` fully within workflow steps.","message":"Complex workflows, especially those involving multiple LLM calls or external services, can experience high latency. Default configurations might not be optimized for performance.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-10T00:00:00.000Z","next_check":"2026-07-09T00:00:00.000Z"}