Azure Machine Learning SDK (v1)
The Azure Machine Learning SDK v1 (the `azureml-sdk` meta-package, centered on `azureml-core`) is used to build and run machine learning workflows on the Azure Machine Learning service. It enables managing cloud resources, training models, and deploying them as web services. As of March 31, 2025, SDK v1 is deprecated, with support ending on June 30, 2026. Users are strongly advised to migrate to the Azure Machine Learning Python SDK v2 (`azure-ai-ml`) for continued support and new features.
Warnings
- breaking Azure Machine Learning SDK v1 is officially deprecated as of March 31, 2025, with end of support on June 30, 2026. After this date, existing workflows may still run but will not receive technical support or updates, potentially exposing them to security risks or breaking changes.
- gotcha SDK v1 (`azureml-sdk`) and SDK v2 (`azure-ai-ml`) are incompatible and should generally not be installed in the same Python environment to avoid package clashes and confusion.
- gotcha Authentication to an Azure ML Workspace in v1 often relies on a `config.json` file (containing subscription ID, resource group, and workspace name) placed in a `.azureml` subdirectory or explicitly passed parameters. Without proper configuration, connection attempts will fail. Interactive authentication is also common for initial setup.
- deprecated The `Estimator` classes, a common way to define training jobs in earlier v1 releases, are superseded by `ScriptRunConfig` in v1 and by `command` jobs in v2. While `Estimator` still functions in v1, `ScriptRunConfig` offers more flexibility.
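As noted in the warnings, v1 typically reads workspace details from a `config.json` file. A minimal example of that file, with placeholder values, placed at `.azureml/config.json` (this layout is also what `Workspace.write_config()` produces):

```json
{
    "subscription_id": "YOUR_SUBSCRIPTION_ID",
    "resource_group": "YOUR_RESOURCE_GROUP",
    "workspace_name": "YOUR_WORKSPACE_NAME"
}
```

You can download this file directly from the workspace's overview page in the Azure portal.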
Install
-
pip install azureml-sdk
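Because the v1 and v2 packages clash (see Warnings), it can help to check whether both meta-packages ended up in the same environment. A stdlib-only sketch; the package names are the published PyPI distribution names:

```python
from importlib import metadata

def installed_azure_ml_sdks():
    """Return which Azure ML SDK meta-packages (v1/v2) are installed."""
    found = []
    for pkg in ("azureml-core", "azure-ai-ml"):
        try:
            metadata.version(pkg)  # raises if the distribution is absent
            found.append(pkg)
        except metadata.PackageNotFoundError:
            pass
    return found

if __name__ == "__main__":
    sdks = installed_azure_ml_sdks()
    if len(sdks) > 1:
        print("Warning: both SDK generations are installed:", sdks)
    else:
        print("Installed Azure ML SDKs:", sdks or "none")
```

If both show up, create a fresh virtual environment per SDK generation rather than uninstalling in place.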
Imports
- Workspace
from azureml.core import Workspace
- Experiment
from azureml.core import Experiment
- Environment
from azureml.core import Environment
- ScriptRunConfig
from azureml.core import ScriptRunConfig
- ComputeTarget
from azureml.core import ComputeTarget
from azureml.core.compute import ComputeTarget  # equivalent; azureml.core re-exports it
Quickstart
import os
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.conda_dependencies import CondaDependencies
# Create a dummy script file
with open('train_script.py', 'w') as f:
    f.write("""\
import argparse
import time

print("Hello from Azure ML v1 training script!")
parser = argparse.ArgumentParser()
parser.add_argument('--arg1', type=str, default='default_value')
args = parser.parse_args()
print(f"Argument 1: {args.arg1}")
time.sleep(5)  # Simulate work
print("Script finished.")
""")
# Create a dummy conda environment file
with open('conda_env.yml', 'w') as f:
    f.write("""\
name: my_env
dependencies:
  - python=3.8
  - pip
  - pip:
    - azureml-defaults
""")
# NOTE: For a real scenario, replace placeholder values and ensure a config.json is available
# Authenticate and connect to your workspace
try:
    # Reads config.json from the ./.azureml directory
    ws = Workspace.from_config(path='./.azureml')
    print(f"Connected to workspace {ws.name}")
except Exception as e:
    print(f"Could not load workspace from config. Ensure .azureml/config.json exists or provide details manually: {e}")
    # Fallback to manual connection (replace with your actual details)
    subscription_id = os.environ.get('AZURE_SUBSCRIPTION_ID', 'YOUR_SUBSCRIPTION_ID')
    resource_group = os.environ.get('AZURE_RESOURCE_GROUP', 'YOUR_RESOURCE_GROUP')
    workspace_name = os.environ.get('AZURE_WORKSPACE_NAME', 'YOUR_WORKSPACE_NAME')
    ws = Workspace(subscription_id, resource_group, workspace_name)
    print(f"Connected to workspace {ws.name} via manual details.")
experiment_name = "my-first-v1-experiment"
experiment = Experiment(workspace=ws, name=experiment_name)
# Choose a name for your CPU cluster (or use 'local')
compute_name = "cpu-cluster"
compute_target = None
try:
    compute_target = ComputeTarget(workspace=ws, name=compute_name)
    print(f"Found existing compute target: {compute_name}")
except ComputeTargetException:
    print(f"Creating a new compute target: {compute_name}")
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS1_V2', max_nodes=1)
    compute_target = ComputeTarget.create(ws, compute_name, compute_config)
    compute_target.wait_for_completion(show_output=True)
# Define the environment
env = Environment.from_conda_specification(name='my-custom-env', file_path='conda_env.yml')
# Create a ScriptRunConfig
src = ScriptRunConfig(
    source_directory='.',
    script='train_script.py',
    compute_target=compute_target,
    environment=env,
    arguments=['--arg1', 'hello_from_run']
)
# Submit the run
run = experiment.submit(src)
print(f"Submitted run: {run.get_portal_url()}")
run.wait_for_completion(show_output=True)
print(f"Run completed with status: {run.status}")
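After the run finishes, you typically pull its status and logged metrics. A small helper sketch built on the v1 `Run` API surface (`get_details()` and `get_metrics()` both exist on `azureml.core.Run`; the `runId`/`status` keys of the details dict are assumed here), written so it only depends on those two methods:

```python
def summarize_run(run):
    """Summarize a completed run via the v1 Run API surface.

    Assumes `run` exposes get_details() -> dict and get_metrics() -> dict,
    as azureml.core.Run does.
    """
    details = run.get_details()
    return {
        "run_id": details.get("runId"),
        "status": details.get("status"),
        "metrics": run.get_metrics(),
    }
```

Usage after the Quickstart above would be `summary = summarize_run(run)` once `run.wait_for_completion(...)` returns.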