Arize Phoenix (arize-phoenix)
Open-source AI observability and evaluation platform from Arize AI. Phoenix provides tracing, evaluation, prompt management, datasets, and experiments for LLM applications. Built on OpenTelemetry and OpenInference. Can be run locally, self-hosted (Docker/K8s), or accessed via cloud at app.phoenix.arize.com. CRITICAL: Phoenix is a separate product from Arize AX (the enterprise cloud platform). They share the Arize brand but use different packages, credentials, and endpoints: Phoenix uses phoenix.otel + PHOENIX_API_KEY; Arize AX uses arize.otel + ARIZE_SPACE_ID + ARIZE_API_KEY. arize-phoenix is the full-platform bundle. For production deployments, the modular sub-packages (arize-phoenix-otel, arize-phoenix-client, arize-phoenix-evals) are preferred. LICENSE: Elastic License 2.0 (ELv2) — NOT MIT/Apache. Restrictions apply to providing Phoenix as a managed service to third parties.
Warnings
- breaking Phoenix (arize-phoenix) and Arize AX (arize) are two completely different products that share the Arize brand. They have different packages, different credentials, different endpoints, and different import paths. The most dangerous confusion: each has a register() function (phoenix.otel.register vs arize.otel.register) that looks identical but exports to a completely different backend. Using the wrong one silently sends traces to the wrong service, with no error.
- breaking auto_instrument=True in register() does NOT work out of the box. It only instruments providers for which you have separately installed the corresponding openinference-instrumentation-* package. For example, to auto-trace OpenAI calls, you must install openinference-instrumentation-openai separately. If the instrumentation package is absent, auto_instrument silently skips that provider.
- gotcha arize-phoenix is licensed under Elastic License 2.0 (ELv2), NOT MIT or Apache. ELv2 prohibits providing Phoenix as a managed service to third parties (i.e., you cannot resell or host Phoenix as a product feature that your own customers access directly). Internal use and self-hosting for your own team are fully permitted.
- gotcha pip install arize-phoenix installs the full platform bundle including a FastAPI server, SQLite/PostgreSQL ORM, and a bundled React frontend — it's a heavy install (~100MB+). In production app code (your LLM service), you almost never want the full bundle. You only need arize-phoenix-otel (for tracing) and/or arize-phoenix-client (for querying).
- gotcha PHOENIX_CLIENT_HEADERS uses equals-sign-separated key=value format (matching OTel conventions), not HTTP colon-separated format. PHOENIX_CLIENT_HEADERS='Authorization=Bearer token' is correct. Using 'Authorization: Bearer token' (colon) silently fails.
- gotcha By default, the hosted Phoenix server collects basic web analytics (page views, UI interactions) from the Phoenix UI. This applies only to UI usage, not to trace data or LLM inputs/outputs. Recent versions disable analytics by default on self-hosted instances.
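The Phoenix/Arize AX credential split above (PHOENIX_API_KEY vs ARIZE_SPACE_ID + ARIZE_API_KEY) suggests a cheap startup guard. A minimal sketch, assuming only the env-var names listed above; the helper name and heuristic are illustrative, not part of any Phoenix API:

```python
import os

def detect_arize_product(env=None):
    """Heuristic: infer which Arize product the environment targets.

    Phoenix uses PHOENIX_API_KEY; Arize AX uses ARIZE_SPACE_ID + ARIZE_API_KEY.
    """
    env = os.environ if env is None else env
    if 'ARIZE_SPACE_ID' in env and 'ARIZE_API_KEY' in env:
        return 'arize-ax'
    if 'PHOENIX_API_KEY' in env:
        return 'phoenix'
    return 'unknown'
```

Failing fast on 'unknown' (or on the product you did not intend) is one way to catch the silent wrong-backend export before any traces are sent.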
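Because auto_instrument=True silently skips providers whose openinference-instrumentation-* package is missing, it can help to verify availability at startup. A sketch using importlib; the function is illustrative, not Phoenix API:

```python
import importlib.util

def has_instrumentation(provider: str) -> bool:
    """True if openinference-instrumentation-<provider> is importable,
    i.e. auto_instrument=True would actually cover this provider."""
    try:
        spec = importlib.util.find_spec(f'openinference.instrumentation.{provider}')
    except ModuleNotFoundError:
        # Parent package 'openinference' itself is not installed
        return False
    return spec is not None
```

For example, assert has_instrumentation('openai') in your service's startup checks if you rely on auto-traced OpenAI calls.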
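The equals-sign rule for PHOENIX_CLIENT_HEADERS follows the OTel OTEL_EXPORTER_OTLP_HEADERS convention ('key1=value1,key2=value2'). A small parser sketch that makes the failure mode explicit (the helper is illustrative, not how Phoenix parses internally):

```python
def parse_otel_headers(raw: str) -> dict:
    """Parse OTel-style 'key1=value1,key2=value2' into a dict.

    Splits each pair on the FIRST '=' so values like 'Bearer token' survive.
    """
    headers = {}
    for pair in raw.split(','):
        key, sep, value = pair.partition('=')
        if not sep:
            raise ValueError(f'expected key=value, got {pair!r} (HTTP colon format is not valid here)')
        headers[key.strip()] = value.strip()
    return headers
```

parse_otel_headers('Authorization=Bearer token') yields {'Authorization': 'Bearer token'}; the colon form 'Authorization: Bearer token' raises here, whereas Phoenix itself would just drop the header silently.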
Install
- pip install arize-phoenix  # full platform bundle (server + UI); heavy
- pip install arize-phoenix-otel arize-phoenix-client arize-phoenix-evals  # modular packages, preferred for production
- pip install 'arize-phoenix[evals]'  # bundle plus evals extras
- pip install 'arize-phoenix[pg]'  # bundle plus PostgreSQL support
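Following the modular-packages note above, a requirements sketch for a production LLM service that only traces and queries (pin versions in real use; the comments are assumptions about what each package provides based on the description above):

```text
# requirements.txt sketch: production app code, no bundled Phoenix server
arize-phoenix-otel                    # phoenix.otel.register (tracing)
arize-phoenix-client                  # phoenix.client.Client (REST queries)
openinference-instrumentation-openai  # needed for OpenAI auto-instrumentation
```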
Imports
- register (phoenix.otel)
from phoenix.otel import register
- Client (phoenix.client)
from phoenix.client import Client
- llm_classify (evals)
from phoenix.evals import llm_classify, OpenAIModel
Quickstart
# === OPTION A: Local dev (run Phoenix server + instrument your app) ===
# 1. Launch Phoenix server locally
import phoenix as px
px.launch_app() # Opens UI at http://localhost:6006
# 2. Instrument with OTel
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor
tracer_provider = register(
    project_name='my-app',
    # auto_instrument=True  # Only works if openinference-instrumentation-* pkgs are installed
)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
    model='gpt-4o',
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
# Trace visible in Phoenix UI at http://localhost:6006
# === OPTION B: Production (app code → deployed Phoenix server) ===
import os
os.environ['PHOENIX_COLLECTOR_ENDPOINT'] = 'https://app.phoenix.arize.com/s/my-space'
os.environ['PHOENIX_API_KEY'] = 'your-phoenix-api-key'
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor
tracer_provider = register(project_name='prod-app', batch=True)  # batch=True: batched span export for production
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
# === Phoenix Client (REST API) ===
from phoenix.client import Client
ph = Client() # reads PHOENIX_BASE_URL and PHOENIX_API_KEY from env
# Query traces
spans_df = ph.spans.query(project_name='my-app')
# === Evaluations ===
from phoenix.evals import (
    llm_classify,
    OpenAIModel,
    HALLUCINATION_PROMPT_TEMPLATE,
    HALLUCINATION_PROMPT_RAILS_MAP,
)
model = OpenAIModel(model='gpt-4o')
results = llm_classify(
    dataframe=spans_df,
    model=model,
    template=HALLUCINATION_PROMPT_TEMPLATE,  # built-in hallucination template
    rails=list(HALLUCINATION_PROMPT_RAILS_MAP.values()),  # 'factual' / 'hallucinated'
)