{"id":8174,"library":"flowllm","title":"FlowLLM: Simplifying LLM-based HTTP/MCP Service Development","description":"FlowLLM is a Python library that simplifies the development of LLM-based HTTP/MCP (Model Context Protocol) services. It provides a structured way to define and manage LLM workflows using `Flow` and `Step` components, allowing developers to quickly build and deploy AI-powered APIs. The library is actively maintained with frequent minor releases in its `0.2.x` series.","status":"active","version":"0.2.0.10","language":"en","source_language":"en","source_url":"https://github.com/flowllm-ai/flowllm","tags":["LLM","AI","API","microservices","workflow","fastapi"],"install":[{"cmd":"pip install flowllm","lang":"bash","label":"Install latest version"}],"dependencies":[{"reason":"Powers the HTTP server functionality.","package":"fastapi"},{"reason":"ASGI server for running FastAPI applications.","package":"uvicorn"},{"reason":"Default LLM provider integration.","package":"openai"},{"reason":"Data validation and settings management for models and configurations.","package":"pydantic"},{"reason":"Core components for LLM interactions and abstractions.","package":"langchain-core"},{"reason":"Manages environment variables and settings loading.","package":"pydantic-settings"},{"reason":"Loads environment variables from .env files.","package":"python-dotenv"},{"reason":"OpenAI-specific integration with LangChain.","package":"langchain-openai"}],"imports":[{"symbol":"Flow","correct":"from flowllm import Flow"},{"symbol":"Step","correct":"from flowllm import Step"},{"symbol":"run_flow_server","correct":"from flowllm import run_flow_server"},{"symbol":"ChatInput","correct":"from flowllm.models import ChatInput"},{"symbol":"ChatResponse","correct":"from flowllm.models import ChatResponse"}],
"quickstart":{"code":"import os\nfrom flowllm import Flow, Step, run_flow_server\nfrom flowllm.models import ChatInput, ChatResponse\n\n# Define your LLM flow\nclass MyChatFlow(Flow):\n    def __init__(self):\n        super().__init__(\n            name=\"my_chat_flow\",\n            version=\"1.0.0\",\n            description=\"A simple chat flow.\",\n            input_model=ChatInput,\n            output_model=ChatResponse,\n        )\n        self.add_step(\n            Step(\n                name=\"chat_step\",\n                prompt=\"You are a helpful AI assistant. User message: {{input.message}}\",\n                output_key=\"response\",\n            )\n        )\n\n    def process(self, input_data: ChatInput, context: dict) -> ChatResponse:\n        response_text = context[\"response\"].choices[0].message.content\n        return ChatResponse(response=response_text)\n\n# Initialize and run the server\nif __name__ == \"__main__\":\n    # FlowLLM reads OPENAI_API_KEY from the environment; warn early if it is missing.\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        print(\"Warning: OPENAI_API_KEY not set. LLM calls may fail.\")\n\n    flow = MyChatFlow()\n    print(\"Starting FlowLLM server on http://0.0.0.0:8000\")\n    run_flow_server(flow, host=\"0.0.0.0\", port=8000)","lang":"python","description":"This quickstart defines a simple LLM chat flow using `Flow` and `Step` components, then runs it as an HTTP service with `run_flow_server`. Ensure `OPENAI_API_KEY` is set in your environment before starting the server."},
"warnings":[{"fix":"Set the `OPENAI_API_KEY` environment variable (e.g., `export OPENAI_API_KEY='sk-...'`) or configure another LLM provider as described in the documentation before running your flow.","message":"FlowLLM typically requires an external LLM API key (e.g., OpenAI) to function. Without a valid key, LLM calls will fail with authentication errors.","severity":"gotcha","affected_versions":">=0.2.0.1"},{"fix":"Pin an exact version (`flowllm==0.2.0.10`) in your `requirements.txt` and review the GitHub changelog before each upgrade.","message":"As a rapidly developing library in its `0.2.x` series, FlowLLM may introduce minor API or behavior changes between patch versions.","severity":"gotcha","affected_versions":"0.2.0.1 - 0.2.0.10"},{"fix":"Specify a different port via the `port` argument, e.g. `run_flow_server(flow, host=\"0.0.0.0\", port=8001)`, for each additional server.","message":"Running multiple FlowLLM servers locally on the default port (8000) will result in port conflicts.","severity":"gotcha","affected_versions":">=0.2.0.1"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Run `pip install flowllm` to install the library.","cause":"The FlowLLM package is not installed in the current Python environment.","error":"ModuleNotFoundError: No module named 'flowllm'"},{"fix":"Ensure the `OPENAI_API_KEY` environment variable is set correctly with a valid OpenAI API key (e.g., `export OPENAI_API_KEY='sk-...'`) before launching your application.","cause":"The OpenAI API key configured (via environment variable or explicit setting) is missing or invalid.","error":"openai.AuthenticationError: Incorrect API key provided"},{"fix":"Stop the conflicting process, or run your FlowLLM server on an alternative port by specifying it in `run_flow_server(flow, port=8001)`.","cause":"Another process is already listening on the port (default 8000) that FlowLLM attempts to bind to.","error":"RuntimeError: Address already in use"}]}
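For context, here is a minimal client-side sketch for talking to a server started with the quickstart above, using only the Python standard library. The route `/my_chat_flow` and the `{"message": ...}` / `{"response": ...}` body shapes are assumptions inferred from the flow's `name` and its `ChatInput`/`ChatResponse` models; FlowLLM may expose a different path, so check the server's interactive docs (FastAPI serves OpenAPI docs at `/docs` by default) for the actual route before relying on it.

```python
import json
from urllib import request

# Assumed endpoint, derived from the flow name in the quickstart.
# Verify the real route via the server's /docs page.
SERVER_URL = "http://localhost:8000/my_chat_flow"


def build_payload(message: str) -> bytes:
    """Serialize a ChatInput-shaped request body (assumed shape: {"message": ...})."""
    return json.dumps({"message": message}).encode("utf-8")


def chat(message: str) -> str:
    """POST a message to the running flow server and return the response text."""
    req = request.Request(
        SERVER_URL,
        data=build_payload(message),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        # Assumed ChatResponse-shaped body: {"response": ...}
        return json.loads(resp.read())["response"]


# Payload construction can be checked without a running server:
payload = build_payload("Hello!")
assert json.loads(payload) == {"message": "Hello!"}
```

Calling `chat("Hello!")` requires the quickstart server to be running on port 8000; only the payload-building step is exercised above.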