Comet ML

3.57.3 · active · verified Sat Apr 11

Comet ML is an MLOps platform for tracking, comparing, debugging, and optimizing machine learning models. Its Python SDK logs code, metrics, hyperparameters, and artifacts, and its dashboard visualizes and compares experiments. The current version is 3.57.3, and it receives frequent updates with new features and bug fixes.

Warnings

Install

Imports

Quickstart

This quickstart demonstrates how to create an `Experiment` using a context manager, log hyperparameters, and track metrics during a simulated training loop. It reads the API key and workspace from environment variables, which is the recommended way to handle credentials.

import os
from comet_ml import Experiment

# Ensure COMET_API_KEY and COMET_WORKSPACE are set as environment variables
# or use comet_ml.login() if you prefer interactive login.
# For example: os.environ['COMET_API_KEY'] = 'YOUR_API_KEY'
# os.environ['COMET_WORKSPACE'] = 'YOUR_WORKSPACE'

# Or, for local-only testing without an API key, use OfflineExperiment,
# which writes results to a local archive instead of the Comet servers:
# from comet_ml import OfflineExperiment
# experiment = OfflineExperiment(project_name="my-test-project")

# Initialize an experiment
# It's best practice to use a context manager to ensure the experiment terminates correctly
with Experiment(project_name="my-quickstart-project",
                api_key=os.environ.get('COMET_API_KEY'),      # also read automatically if not passed
                workspace=os.environ.get('COMET_WORKSPACE'),
                auto_output_logging='simple',  # capture stdout/stderr output
                auto_metric_logging=True,      # capture metrics from supported ML frameworks
                log_code=True                  # log the source of the running script
               ) as experiment:
    # Log hyperparameters
    hyper_params = {"learning_rate": 0.001, "epochs": 10, "batch_size": 32}
    experiment.log_parameters(hyper_params)

    # Simulate a training loop
    for epoch in range(hyper_params["epochs"]):
        # Simulate metric calculation (clamped to stay in a plausible range)
        accuracy = min(0.99, 0.5 + (epoch * 0.05) + (hyper_params["learning_rate"] * 100))
        loss = max(0.01, 1.0 - (epoch * 0.08) - (hyper_params["learning_rate"] * 50))

        # Log metrics for each epoch
        experiment.log_metric("accuracy", accuracy, step=epoch)
        experiment.log_metric("loss", loss, step=epoch)

    # Log a final metric or result
    final_accuracy = accuracy # from the last epoch
    experiment.log_metric("final_accuracy", final_accuracy)

    print(f"Experiment URL: {experiment.url}")

print("Experiment finished.")
