Dataloop Metrics (dtlpymetrics)
Dataloop Metrics (dtlpymetrics) is a Python library that provides scoring and metrics functionality for Dataloop AI projects. It lets users define, calculate, and report custom metrics on datasets and annotation sets within the Dataloop platform. The current version is 1.2.32; releases track the Dataloop platform and `dtlpy` SDK, so expect frequent minor updates.
Warnings
- breaking The `dtlpymetrics` library is tightly coupled with the `dtlpy` SDK. Breaking changes or version mismatches in `dtlpy` (especially across major versions, or installs outside a pinned `~=` range) can lead to unexpected behavior or API errors in `dtlpymetrics`.
- gotcha Before reporting scores, the specified `metric_id` must be pre-defined within the Dataloop platform (associated with your project/dataset). Attempting to report to a non-existent metric will result in a `DLServerError`.
- gotcha All operations in `dtlpymetrics` require active authentication with the Dataloop platform via the `dtlpy` SDK. Additionally, providing correct Dataloop entity IDs (project_id, dataset_id, item_id, annotation_id, etc.) is crucial for successful metric operations.
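Because every operation needs valid credentials and correct entity IDs, it can help to fail fast on missing configuration before touching the API. A minimal sketch (the helper name `require_env` is illustrative, not part of the library; the environment variable names match those used in the quickstart below):

```python
import os

def require_env(*names: str) -> list[str]:
    """Return the values of the named environment variables, failing fast if any is unset."""
    missing = [name for name in names if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
    return [os.environ[name] for name in names]

# Example: resolve credentials and target IDs up front
# token, project_id, dataset_id = require_env(
#     'DATALOOP_API_TOKEN', 'DATALOOP_PROJECT_ID', 'DATALOOP_DATASET_ID')
```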
Install
pip install dtlpymetrics
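Given the tight `dtlpy` coupling noted in the warnings, pinning both packages in `requirements.txt` can prevent version drift. A sketch (the `dtlpy` range below is illustrative; check which SDK version your Dataloop deployment supports):

```text
dtlpymetrics==1.2.32
dtlpy~=1.90.0  # illustrative pin; match your platform's supported SDK version
```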
Imports
- DataloopMetrics
from dtlpymetrics.dtlpm import DataloopMetrics
Quickstart
import dtlpy as dl
from dtlpymetrics.dtlpm import DataloopMetrics
import os
# Ensure Dataloop SDK is logged in or configured
try:
    if not dl.token_expired():
        print("Already logged into Dataloop SDK.")
    else:
        # Replace with your actual Dataloop token or ensure `dl.login()` is called elsewhere
        dl.login_token(os.environ.get('DATALOOP_API_TOKEN', ''))
except Exception as e:
    print(f"Failed to login to Dataloop SDK. Please ensure 'dtlpy' is installed and you are logged in or DATALOOP_API_TOKEN is set. Error: {e}")
    exit(1)
# 1. Initialize the metrics client
metrics_client = DataloopMetrics()
# 2. Define your target Dataloop Project and Dataset IDs
# Replace 'YOUR_PROJECT_ID' and 'YOUR_DATASET_ID' with actual IDs or set environment variables.
project_id = os.environ.get('DATALOOP_PROJECT_ID', 'YOUR_PROJECT_ID')
dataset_id = os.environ.get('DATALOOP_DATASET_ID', 'YOUR_DATASET_ID')
# Check if placeholder IDs are still present to guide the user
if project_id == 'YOUR_PROJECT_ID' or dataset_id == 'YOUR_DATASET_ID':
    print("Warning: Please replace 'YOUR_PROJECT_ID' and 'YOUR_DATASET_ID' with actual Dataloop IDs or set the DATALOOP_PROJECT_ID/DATALOOP_DATASET_ID environment variables.")
    exit(1)
# For demonstration, use a placeholder metric_id and score
metric_id = "example_accuracy_score"
score_value = 0.85
# 3. Report a score
try:
    # Before reporting, ensure the metric_id exists in your Dataloop project/dataset.
    # You can create metrics using metrics_client.metrics_create() if needed.
    metrics_client.metrics_report_score(
        metric_id=metric_id,
        score=score_value,
        project_id=project_id,
        dataset_id=dataset_id,
        # Optional: link to specific entities with filters or entity_id/entity_type
        # filters=dl.Filters(resource=dl.FiltersResource.ITEM).add(field='filename', values=['my_image.jpg']),
        # entity_id='your-item-id',
        # entity_type='item'
    )
    print(f"Successfully reported score {score_value} for metric '{metric_id}' to Dataloop.")
except Exception as e:
    print(f"Failed to report score: {e}. Make sure the metric_id exists and you have correct permissions for project '{project_id}' and dataset '{dataset_id}'.")