TorchEval

0.0.7 · active · verified Thu Apr 16

TorchEval is a PyTorch library that provides a rich collection of performant, out-of-the-box metric computations, a simple interface for creating new metrics, and a toolkit to facilitate metric computation in distributed training. Metric calculations leverage PyTorch's vectorization and GPU acceleration. Currently at version 0.0.7, the library maintains an active release schedule with regular updates and new metric additions.

Install

pip install torcheval

Imports

import torch
from torcheval.metrics import BinaryAccuracy

Quickstart

This example demonstrates how to initialize a `BinaryAccuracy` metric, update it with predictions and targets, compute the current accuracy, and reset its internal state. Metrics accumulate data, so remember to call `reset()` for new evaluation runs (e.g., per epoch).

import torch
from torcheval.metrics import BinaryAccuracy

# Initialize the metric
metric = BinaryAccuracy()

# Simulate model predictions and ground truth labels
# Ensure inputs are tensors and on the correct device
predictions = torch.tensor([0.9, 0.1, 0.8, 0.2, 0.95])
targets = torch.tensor([1, 0, 1, 0, 1])

# Update the metric with a batch of data
metric.update(predictions, targets)

# Get the computed result
accuracy = metric.compute()
print(f"Binary Accuracy: {accuracy.item():.4f}")

# Example with another batch
predictions2 = torch.tensor([0.4, 0.6, 0.7])
targets2 = torch.tensor([0, 1, 0])
metric.update(predictions2, targets2)

# Compute cumulative accuracy
cumulative_accuracy = metric.compute()
print(f"Cumulative Binary Accuracy: {cumulative_accuracy.item():.4f}")

# Reset the metric's internal state before a new evaluation run;
# compute() is only meaningful again after fresh update() calls
metric.reset()
