FlowLLM: Simplifying LLM-based HTTP/MCP Service Development
FlowLLM is a Python library that simplifies the development of LLM-based HTTP/MCP (Model Context Protocol) services. It provides a structured way to define and manage LLM workflows using `Flow` and `Step` components, so developers can quickly build and deploy AI-powered APIs. The library is actively maintained, with frequent minor releases in its `0.2.x` series.
Common errors

- `ModuleNotFoundError: No module named 'flowllm'`
  - Cause: The FlowLLM package is not installed in the current Python environment.
  - Fix: Run `pip install flowllm` to install the library.
- `openai.AuthenticationError: Incorrect API key provided`
  - Cause: The configured OpenAI API key (via environment variable or explicit setting) is missing or invalid.
  - Fix: Ensure the `OPENAI_API_KEY` environment variable is set to a valid OpenAI API key (e.g., `export OPENAI_API_KEY='sk-...'`) before launching your application.
- `RuntimeError: Address already in use`
  - Cause: Another process is already listening on the port (default 8000) that FlowLLM attempts to bind to.
  - Fix: Stop the conflicting process, or run your FlowLLM server on an alternative port by specifying it in `run_flow_server(flow, port=8001)`.
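For the `Address already in use` case, it can help to probe for an open port before binding. A minimal sketch using only the standard library (the `find_free_port` helper is ours, not part of FlowLLM):

```python
import socket


def find_free_port(start: int = 8000, limit: int = 20) -> int:
    """Return the first TCP port >= start that is free on localhost."""
    for port in range(start, start + limit):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                sock.bind(("127.0.0.1", port))
                return port  # Bind succeeded, so the port is currently free.
            except OSError:
                continue  # Port taken; try the next one.
    raise RuntimeError(f"No free port found in range {start}-{start + limit - 1}")


# Usage sketch:
# port = find_free_port()
# run_flow_server(flow, port=port)
```

Note the usual race caveat: the port is only guaranteed free at probe time, so another process could still grab it between the check and the server start.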
Warnings

- FlowLLM typically requires an external LLM API key (e.g., OpenAI) to function. Without a valid key, LLM calls will fail with authentication errors.
- As a rapidly developing library in its `0.2.x` series, minor API adjustments or behavior changes may occur between patch versions. Always review the release notes before upgrading.
- Running multiple FlowLLM servers locally on the default port (8000) will cause port conflicts.
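Given the `0.2.x` churn, a startup guard that compares the installed version against the series your code targets can surface drift early. A standard-library sketch (`series_status` is a hypothetical helper, not a FlowLLM API):

```python
from importlib import metadata


def series_status(package: str = "flowllm", expected: str = "0.2") -> str:
    """Return 'ok', 'mismatch', or 'missing' for the installed minor series."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return "missing"
    return "ok" if installed.startswith(expected + ".") else "mismatch"


# Example: warn instead of crashing when the installed series drifts.
if series_status() == "mismatch":
    print("Warning: installed flowllm is outside the tested 0.2.x series.")
```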
Install

```bash
pip install flowllm
```
Imports

- `Flow`: `from flowllm import Flow`
- `Step`: `from flowllm import Step`
- `run_flow_server`: `from flowllm import run_flow_server`
- `ChatInput`: `from flowllm.models import ChatInput`
- `ChatResponse`: `from flowllm.models import ChatResponse`
Quickstart

```python
import os

from flowllm import Flow, Step, run_flow_server
from flowllm.models import ChatInput, ChatResponse


# Define your LLM flow
class MyChatFlow(Flow):
    def __init__(self):
        super().__init__(
            name="my_chat_flow",
            version="1.0.0",
            description="A simple chat flow.",
            input_model=ChatInput,
            output_model=ChatResponse,
        )
        self.add_step(
            Step(
                name="chat_step",
                prompt="You are a helpful AI assistant. User message: {{input.message}}",
                output_key="response",
            )
        )

    def process(self, input_data: ChatInput, context: dict) -> ChatResponse:
        # The step's LLM result is stored in the context under its output_key.
        response_text = context["response"].choices[0].message.content
        return ChatResponse(response=response_text)


# Initialize and run the server
if __name__ == "__main__":
    # Read the OpenAI API key from the environment; never hard-code secrets.
    if not os.environ.get("OPENAI_API_KEY"):
        print("Warning: OPENAI_API_KEY not set. LLM calls may fail.")

    flow = MyChatFlow()
    print("Starting FlowLLM server on http://0.0.0.0:8000")
    run_flow_server(flow, host="0.0.0.0", port=8000)
```
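Once the server is up, you can exercise the flow over plain HTTP. The route `/flows/my_chat_flow` below is an assumption for illustration (check your running server's logs or docs for the actual path); the payload mirrors the `message` field the quickstart's prompt reads from `ChatInput`. A minimal stdlib client sketch:

```python
import json
from urllib import request


def build_payload(message: str) -> bytes:
    """Encode a chat message as a JSON request body."""
    return json.dumps({"message": message}).encode("utf-8")


def chat(message: str, base_url: str = "http://127.0.0.1:8000") -> dict:
    # Hypothetical route; FlowLLM's real routing may differ.
    req = request.Request(
        f"{base_url}/flows/my_chat_flow",
        data=build_payload(message),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

With the quickstart server running, `chat("Hello!")` would return the parsed JSON response body as a dict.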