MLflow Tracing SDK

3.10.1 · active · verified Sun Apr 05

MLflow Tracing SDK (mlflow-tracing) is an open-source, lightweight Python package that provides a minimum set of dependencies and functionality to instrument your code, models, or agents with MLflow Tracing. It is designed for production environments to enable faster deployment, simplified dependency management, enhanced portability, and reduced security risks compared to the full MLflow package. It supports LLM and AI agent observability, capturing inputs, outputs, and metadata for each step of a request.

Install
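
The lightweight package is named `mlflow-tracing` (as noted in the overview above), so a typical install looks like:

```shell
# Install the tracing-only SDK, not the full MLflow distribution
pip install mlflow-tracing
```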

Quickstart

This quickstart demonstrates how to set up MLflow Tracing for an OpenAI call. It configures the MLflow tracking URI, sets an experiment, enables autologging for OpenAI, and then performs a simple API call. The trace for this call will be automatically logged and viewable in the MLflow UI. Make sure an MLflow server is running and `OPENAI_API_KEY` is set in your environment.

import os
import mlflow
from openai import OpenAI

# Set your MLflow Tracking URI (replace with your server, e.g., 'http://localhost:5000')
# For Databricks, use 'databricks' and ensure DATABRICKS_HOST/TOKEN are set.
mlflow.set_tracking_uri(os.environ.get('MLFLOW_TRACKING_URI', 'http://127.0.0.1:5000'))

# Set a new MLflow experiment to log traces to
mlflow.set_experiment("my_genai_app_traces")

# Ensure OpenAI API key is set for the example
if not os.environ.get("OPENAI_API_KEY"):
    # In a real app, load keys securely (environment variable or secret manager);
    # never hard-code them. Without a key, skip the OpenAI example entirely.
    print("WARNING: OPENAI_API_KEY environment variable not set. Skipping OpenAI example.")
    openai_client = None
else:
    # Enable auto-tracing for OpenAI calls
    mlflow.openai.autolog()
    
    # Initialize OpenAI client
    openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

    # Make an OpenAI call - this will be automatically traced
    print("Invoking OpenAI completion...")
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful AI assistant."},
            {"role": "user", "content": "Tell me a fun fact about Python programming."}
        ],
        max_tokens=50
    )
    print("OpenAI Response:", response.choices[0].message.content)
    print("Trace should now be visible in MLflow UI under 'my_genai_app_traces' experiment.")
