Azure ML Telemetry
The `azureml-telemetry` package, currently at version 1.62.0, is a core component within the Azure Machine Learning Python SDK. It is primarily used to collect various telemetry data, including log messages, metrics, events, and activity messages, generated by Azure Machine Learning processes. This data is often routed to Azure Application Insights for monitoring and analysis. The package underpins the telemetry collection mechanisms used by other Azure ML SDK components, rather than providing a direct-use client API for end-users to send custom telemetry. Microsoft is actively transitioning towards OpenTelemetry for broader Azure Monitor integration.
Warnings
- deprecated Azure Machine Learning SDK v1 (which uses `azureml.core.Run` for direct logging) is deprecated as of March 31, 2025, with support ending on June 30, 2026. It is strongly recommended to migrate to Azure Machine Learning SDK v2 and use MLflow for experiment tracking and telemetry logging.
- gotcha While `azureml-telemetry` officially supports Python >=3.7, newer versions of related Azure ML SDKs (like `azure-ai-ml` and `azureml-core`) have deprecated support for Python 3.7 and 3.8. Relying on older Python versions within the broader Azure ML ecosystem may lead to compatibility issues or lack of support in the future.
- breaking Microsoft is actively migrating towards OpenTelemetry as the recommended standard for instrumenting applications for Azure Monitor. While `azureml-telemetry` continues to function, future enhancements and direct control over telemetry might increasingly rely on `azure-monitor-opentelemetry-exporter` and the OpenTelemetry standard.
- gotcha When using Azure Functions with the Azure Monitor OpenTelemetry Distro (for explicit OpenTelemetry instrumentation), enabling Azure Functions' native logging can result in duplicate telemetry entries in Application Insights.
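The duplicate-telemetry gotcha above is the classic double-handler problem: two log handlers end up forwarding the same records to the same sink. A stdlib-only illustration (no Azure services involved; `ListHandler` is a hypothetical stand-in for an exporter):

```python
import logging

records = []

class ListHandler(logging.Handler):
    # Hypothetical exporter stand-in: collects formatted records in a list.
    def emit(self, record):
        records.append(self.format(record))

logger = logging.getLogger("duplicate_demo")
logger.setLevel(logging.INFO)
logger.addHandler(ListHandler())  # platform-native handler
logger.addHandler(ListHandler())  # explicit handler added on top

logger.info("model scored")
print(len(records))  # prints 2: the same message was exported twice
```

The fix is the same in either world: disable one of the two pipelines, or check `logger.handlers` before attaching a second handler.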
Install
pip install azureml-telemetry
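The quickstart below logs through MLflow against an Azure ML workspace, which pulls in more than `azureml-telemetry` itself. A plausible install line for that path (package names as published on PyPI; pin versions to taste):

```shell
pip install azure-ai-ml azure-identity mlflow azureml-mlflow
```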
Imports
- Run
from azureml.core import Run
- mlflow
import mlflow
Quickstart
import os
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
# NOTE: This quickstart assumes you have an Azure Machine Learning workspace configured.
# Replace with your actual subscription, resource group, and workspace name.
# Set these environment variables or replace directly in the code for actual execution.
subscription_id = os.environ.get('AZURE_SUBSCRIPTION_ID', 'your-subscription-id')
resource_group = os.environ.get('AZURE_RESOURCE_GROUP', 'your-resource-group')
workspace_name = os.environ.get('AZURE_ML_WORKSPACE_NAME', 'your-ml-workspace')
# Authenticate and get MLClient
try:
    ml_client = MLClient(
        DefaultAzureCredential(), subscription_id, resource_group, workspace_name
    )
    print(f"Connected to Azure ML workspace: {ml_client.workspace_name}")
except Exception as e:
    print(f"Could not connect to Azure ML workspace. Please ensure your credentials and workspace details are correct: {e}")
    print("Skipping MLflow example as workspace connection failed.")
    exit()
# Set the MLflow tracking URI to point to the Azure Machine Learning backend
# This ensures metrics and artifacts are logged to your workspace.
mlflow.set_tracking_uri(ml_client.workspaces.get(name=workspace_name).mlflow_tracking_uri)
# Start an MLflow run to log custom metrics
with mlflow.start_run() as run:
    print(f"MLflow run ID: {run.info.run_id}")
    # Log simple custom metrics
    mlflow.log_metric("custom_accuracy", 0.95)
    mlflow.log_metric("custom_loss", 0.05)
    print("Logged custom_accuracy and custom_loss metrics.")
    # Log a parameter
    mlflow.log_param("model_type", "linear_regression")
    print("Logged model_type parameter.")
    # Simulate a training loop and log a metric value at each step
    for i in range(5):
        mlflow.log_metric("iteration_accuracy", 0.95 + i * 0.001, step=i)
    print("Logged iteration_accuracy over steps.")
print("Telemetry logged successfully via MLflow to Azure ML.")
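Logging a metric on every step of a long loop multiplies telemetry volume. One common mitigation is to aggregate locally and log only a summary per run; a stdlib-only sketch (the `summarize` helper is hypothetical, not part of MLflow):

```python
from statistics import mean

def summarize(values):
    # Hypothetical helper: reduce a per-step series to a few summary
    # values before handing them to mlflow.log_metric.
    return {"min": min(values), "max": max(values), "mean": mean(values)}

# Same series the quickstart loop logs step by step
accuracies = [0.95 + i * 0.001 for i in range(5)]
summary = summarize(accuracies)
print(summary)
# Inside a live MLflow run you would then log each entry once, e.g.:
# for name, value in summary.items():
#     mlflow.log_metric(f"iteration_accuracy_{name}", value)
```

Per-step logging (as in the quickstart) keeps the full history for charts; summaries keep telemetry cheap when only the aggregate matters.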