{"library":"mlflow-tracing","title":"MLflow Tracing SDK","description":"MLflow Tracing SDK (mlflow-tracing) is an open-source, lightweight Python package that provides a minimal set of dependencies and the functionality needed to instrument your code, models, or agents with MLflow Tracing. It is designed for production environments to enable faster deployment, simplified dependency management, enhanced portability, and reduced security risks compared to the full MLflow package. It supports LLM and AI agent observability, capturing inputs, outputs, and metadata for each step of a request.","status":"active","version":"3.10.1","language":"en","source_language":"en","source_url":"https://github.com/mlflow/mlflow/tree/master/libs/tracing","tags":["ml","observability","tracing","llm","ai-agents","genai","production-monitoring"],"install":[{"cmd":"pip install mlflow-tracing","lang":"bash","label":"Install MLflow Tracing SDK"}],"dependencies":[{"reason":"This package is designed as a lightweight alternative to the full 'mlflow' package. Co-installing with 'mlflow' is explicitly discouraged and can lead to version conflicts and namespace resolution issues.","package":"mlflow","optional":true},{"reason":"Requires Python 3.10 or newer.","package":"Python","optional":false}],"imports":[{"note":"While `mlflow.tracing` contains utility functions, the main entry points for autologging, manual tracing (e.g., `@mlflow.trace`), and configuration are directly under the `mlflow` namespace.","wrong":"from mlflow import tracing","symbol":"mlflow","correct":"import mlflow"},{"note":"Used for advanced configuration like span processors.","symbol":"mlflow.tracing.configure","correct":"from mlflow import tracing\ntracing.configure(...)"},{"note":"Used to temporarily disable tracing.","symbol":"mlflow.tracing.disable","correct":"from mlflow import tracing\ntracing.disable()"},{"note":"Used to re-enable tracing if previously disabled.","symbol":"mlflow.tracing.enable","correct":"from mlflow import tracing\ntracing.enable()"},{"note":"Decorator for manual function instrumentation.","symbol":"@mlflow.trace","correct":"import mlflow\n\n@mlflow.trace\ndef my_traced_function():\n    pass"},{"note":"Context manager for manual code block instrumentation.","symbol":"mlflow.start_span","correct":"import mlflow\n\nwith mlflow.start_span('my_span_name'):\n    # ... code to trace ..."}],"quickstart":{"code":"import os\nimport mlflow\nfrom openai import OpenAI\n\n# Set your MLflow Tracking URI (replace with your server, e.g., 'http://localhost:5000')\n# For Databricks, use 'databricks' and ensure DATABRICKS_HOST/TOKEN are set.\nmlflow.set_tracking_uri(os.environ.get('MLFLOW_TRACKING_URI', 'http://127.0.0.1:5000'))\n\n# Set the MLflow experiment to log traces to\nmlflow.set_experiment(\"my_genai_app_traces\")\n\n# Ensure the OpenAI API key is set for the example\nif not os.environ.get(\"OPENAI_API_KEY\"):\n    # In a real app, load keys securely (e.g., environment variable, secret manager);\n    # hard-coding them is not recommended for production.\n    print(\"WARNING: OPENAI_API_KEY environment variable not set. Skipping OpenAI example.\")\n    openai_client = None\nelse:\n    # Enable auto-tracing for OpenAI calls\n    mlflow.openai.autolog()\n\n    # Initialize the OpenAI client\n    openai_client = OpenAI(api_key=os.environ.get(\"OPENAI_API_KEY\"))\n\n    # Make an OpenAI call - this will be automatically traced\n    print(\"Invoking OpenAI completion...\")\n    response = openai_client.chat.completions.create(\n        model=\"gpt-3.5-turbo\",\n        messages=[\n            {\"role\": \"system\", \"content\": \"You are a helpful AI assistant.\"},\n            {\"role\": \"user\", \"content\": \"Tell me a fun fact about Python programming.\"}\n        ],\n        max_tokens=50\n    )\n    print(\"OpenAI Response:\", response.choices[0].message.content)\n    print(\"Trace should now be visible in the MLflow UI under the 'my_genai_app_traces' experiment.\")","lang":"python","description":"This quickstart demonstrates how to set up MLflow Tracing for an OpenAI call. It configures the MLflow tracking URI, sets an experiment, enables autologging for OpenAI, and then performs a simple API call. The trace for this call will be automatically logged and viewable in the MLflow UI. Make sure an MLflow server is running and `OPENAI_API_KEY` is set in your environment."},"warnings":[{"fix":"If you need the full MLflow feature set, install `mlflow`. If you only need tracing in production, use `mlflow-tracing`. Uninstall one before installing the other (`pip uninstall mlflow && pip install mlflow-tracing`, or vice versa).","message":"Do NOT co-install `mlflow-tracing` with the full `mlflow` package.","severity":"breaking","affected_versions":"All versions where both packages exist."},{"fix":"To avoid silent data loss or breaking other telemetry, ensure `mlflow.set_tracking_uri()` is called early in your application's lifecycle, preferably before other OpenTelemetry instrumentation is initialized. This issue mainly applies when combining MLflow tracing with external OTel auto-instrumentation.","message":"Calling `mlflow.set_tracking_uri()` after OpenTelemetry auto-instrumentation can reset the global `TracerProvider`.","severity":"gotcha","affected_versions":"All versions where OpenTelemetry integration is used."},{"fix":"For better performance in production or with significant trace volume, configure your MLflow server to use a database-backed store (e.g., PostgreSQL, MySQL, or SQLite with `--backend-store-uri sqlite:///mlflow.db`).","message":"Using a file-based backend store for the MLflow server can lead to poor UI/SDK performance.","severity":"gotcha","affected_versions":"All versions when using default local file storage."},{"fix":"Set the `MLFLOW_TRACE_TIMEOUT_SECONDS` environment variable to automatically halt and export traces that exceed a specified duration. This allows analysis even for stuck traces.","message":"Traces might get stuck in 'in progress' and not be viewable if a process hangs or runs too long.","severity":"gotcha","affected_versions":"All versions."},{"fix":"While the `mlflow-tracing` SDK and the MLflow server are generally compatible within the same major version, it is recommended to keep both client and server up to date to ensure new tracing features (introduced in MLflow 2.14.0 and enhanced in 3.x) are available and function correctly.","message":"MLflow client and server version compatibility is important for new features.","severity":"gotcha","affected_versions":"All versions."}],"env_vars":null,"last_verified":"2026-04-05T00:00:00.000Z","next_check":"2026-07-04T00:00:00.000Z"}