{"id":9121,"library":"mlserver-mlflow","title":"MLServer MLflow Runtime","description":"mlserver-mlflow provides an MLflow inference runtime for MLServer, allowing models logged with MLflow to be served through the MLServer inference server. It is currently at version 1.7.1 and follows a release cadence aligned with MLServer's development, receiving bug fixes and compatibility updates for new MLflow/MLServer versions.","status":"active","version":"1.7.1","language":"en","source_language":"en","source_url":"https://github.com/SeldonIO/mlserver-mlflow","tags":["mlserver","mlflow","machine learning","serving","inference","model deployment"],"install":[{"cmd":"pip install mlserver-mlflow","lang":"bash","label":"Install mlserver-mlflow"}],"dependencies":[{"reason":"Core MLServer library, required for runtime functionality.","package":"mlserver","optional":false},{"reason":"MLflow library, required for loading and interpreting MLflow models.","package":"mlflow","optional":false}],"imports":[{"symbol":"MLflowRuntime","correct":"from mlserver_mlflow import MLflowRuntime"}],"quickstart":{"code":"import asyncio\nimport os\nimport tempfile\n\nimport mlflow\nimport mlflow.sklearn\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\nfrom mlserver.settings import ModelParameters, ModelSettings\nfrom mlserver.types import InferenceRequest, RequestInput\nfrom mlserver_mlflow import MLflowRuntime\n\n# 1. Create a dummy MLflow model and log it locally\n#    (in a real scenario, this model would already be logged)\ntemp_dir = tempfile.TemporaryDirectory()\nmlflow.set_tracking_uri(f\"file://{os.path.join(temp_dir.name, 'mlruns')}\")\nwith mlflow.start_run():\n    model = LogisticRegression()\n    model.fit(np.array([[0, 0], [1, 1]]), np.array([0, 1]))\n    # log_model returns a ModelInfo whose model_uri can be reused directly,\n    # avoiding manual (and error-prone) artifact URI construction\n    model_info = mlflow.sklearn.log_model(model, \"model_artifact\")\n    model_uri = model_info.model_uri\n\n# 2. Instantiate and load MLflowRuntime\nasync def main():\n    model_settings = ModelSettings(\n        name=\"my-mlflow-model\",\n        implementation=MLflowRuntime,\n        parameters=ModelParameters(uri=model_uri),\n    )\n    mlflow_runtime = MLflowRuntime(model_settings)\n    await mlflow_runtime.load()\n\n    # 3. Prepare and send an inference request (data is a flat list\n    #    in row-major order, matching the declared shape)\n    request_input = RequestInput(\n        name=\"predict\",\n        shape=[1, 2],\n        datatype=\"FP32\",\n        data=[0.5, 0.5],\n    )\n    inference_request = InferenceRequest(inputs=[request_input])\n\n    response = await mlflow_runtime.predict(inference_request)\n    print(\"Prediction:\", response.outputs[0].data)\n\n    await mlflow_runtime.unload()\n    temp_dir.cleanup()  # clean up temporary model files\n\nasyncio.run(main())\n","lang":"python","description":"This quickstart demonstrates how to use `MLflowRuntime` programmatically to load an MLflow model and run inference. It first creates a dummy MLflow model and logs it locally, then passes the URI returned by `log_model` to `MLflowRuntime` via MLServer's `ModelSettings`, loads the model, and makes a prediction. The `asyncio.run(main())` call executes the asynchronous model loading and inference."},"warnings":[{"fix":"Refer to the MLServer 1.x documentation for the updated `ModelSettings`, `InferenceRequest`, and `InferenceResponse` formats. Ensure `mlserver` itself is `^1.0.0`.","message":"MLServer 0.x to 1.x API changes directly impact `mlserver-mlflow` users. If migrating from older MLServer versions, you will need to update your `model-settings.json` configuration, `ModelSettings` objects, and client inference request/response structures to align with MLServer 1.x's API.","severity":"breaking","affected_versions":"<1.0.0"},{"fix":"Explicitly install all required model dependencies in your MLServer environment (e.g., `pip install xgboost`). For complex environments, consider building a custom Docker image for your `mlserver-mlflow` deployment that includes all necessary packages.","message":"Missing dependencies for MLflow models are a common source of errors. MLflow models, especially `pyfunc` types, often declare a `conda_env` or `pip_requirements`. If those dependencies (e.g., `xgboost`, `tensorflow`, custom packages) are not installed in the environment where `mlserver-mlflow` is running, model loading will fail with `ModuleNotFoundError` or a similar error.","severity":"gotcha","affected_versions":"All"},{"fix":"Carefully verify the `uri` parameter in your `ModelSettings` (or `model-settings.json`). Ensure it is a valid MLflow model URI (e.g., `models:/my_model/Production`, `runs:/<run_id>/path/to/artifact`, or `file:///absolute/path/to/model_dir`). If using the `models:/` or `runs:/` schemes, ensure your MLflow tracking server or registry is running and reachable.","message":"Incorrect or inaccessible MLflow model URIs lead to 'model not found' errors. Users sometimes confuse the different URI formats (run-relative artifact URIs, MLflow Model Registry URIs, and local file paths).","severity":"gotcha","affected_versions":"All"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Install the package: `pip install mlserver-mlflow`","cause":"The `mlserver-mlflow` package is not installed in the Python environment where MLServer is being run, or the environment is not correctly activated.","error":"ModuleNotFoundError: No module named 'mlserver_mlflow'"},{"fix":"Identify and install the missing dependency, for example `pip install scikit-learn`. For comprehensive dependency management, ensure your deployment environment matches the MLflow model's `conda_env` or `pip_requirements`.","cause":"The MLflow model being loaded requires a specific Python package (e.g., `scikit-learn`, `xgboost`, `tensorflow`) that is not installed in the `mlserver-mlflow` serving environment.","error":"mlserver.errors.ModelLoadingError: Failed to load model 'my-model': No module named 'scikit-learn'"},{"fix":"Double-check the `uri` in your `ModelSettings` (or `model-settings.json`). Verify the model's existence in your MLflow Tracking Server or file system, and ensure network connectivity to the MLflow Tracking Server if using remote URIs.","cause":"The specified MLflow model URI is incorrect, the model does not exist at the given URI, or the MLflow tracking server/registry is not accessible from the MLServer instance.","error":"mlserver.errors.ModelLoadingError: Failed to load model 'my-model': No MLflow model found at URI: 'models:/my-model/Production'"}]}