Faster COCO Evaluation

1.7.2 · active · verified Tue Apr 14

Faster-COCO-Eval is a Python library built on a highly optimized C++ implementation of COCO evaluation, offering a 3-4x speedup over the standard pycocotools. It acts as a drop-in replacement and adds extended metrics, support for new IoU types, compatibility with additional datasets (e.g., CrowdPose, LVIS), and advanced visualization tools. The library is actively maintained, currently at version 1.7.2.

Warnings

Install
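The library is distributed on PyPI; assuming `pip` is available, a typical installation looks like:

```shell
# Install faster-coco-eval from PyPI (ships prebuilt wheels for common platforms)
pip install faster-coco-eval
```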

Imports

Quickstart

This quickstart demonstrates two ways to use `faster-coco-eval`. The first calls `faster_coco_eval.init_as_pycocotools()` to patch the `pycocotools` imports with the faster implementation, so existing code runs unchanged. The second uses `faster_coco_eval`'s `COCO` and `COCOeval_faster` classes directly. You will need COCO-formatted ground truth annotation and prediction JSON files.

import os
import faster_coco_eval

# Option 1: Use faster_coco_eval as a drop-in replacement for pycocotools
faster_coco_eval.init_as_pycocotools()
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Create dummy COCO JSON files (replace with your actual paths)
# Example structure based on common COCO format expectations
# In a real scenario, you'd load these from actual files.
anno_json_path = "annotations_val2017.json" # Path to your ground truth annotations
pred_json_path = "results_predictions.json" # Path to your model's predictions

# Simulate creating dummy JSON files for demonstration
# In practice, these files would already exist.
if not os.path.exists(anno_json_path):
    with open(anno_json_path, 'w') as f:
        f.write('{"images": [], "annotations": [], "categories": []}')
if not os.path.exists(pred_json_path):
    with open(pred_json_path, 'w') as f:
        f.write('[]')

# Load annotations and predictions
# For a real run, ensure your JSON files contain actual data.
try:
    coco_gt = COCO(anno_json_path)
    coco_dt = coco_gt.loadRes(pred_json_path)

    # Evaluate bounding boxes
    coco_eval = COCOeval(coco_gt, coco_dt, "bbox")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()
    print("COCO evaluation (bbox) summarized.")

    # Option 2: Directly use faster_coco_eval classes (alternative to init_as_pycocotools)
    from faster_coco_eval import COCO as FasterCOCO, COCOeval_faster

    # Load annotations and predictions
    coco_gt_f = FasterCOCO(anno_json_path)
    coco_dt_f = coco_gt_f.loadRes(pred_json_path)

    # Evaluate segmentation masks
    coco_eval_f = COCOeval_faster(coco_gt_f, coco_dt_f, "segm")
    coco_eval_f.evaluate()
    coco_eval_f.accumulate()
    coco_eval_f.summarize()
    print("Faster COCO evaluation (segm) summarized.")

except Exception as e:
    print(f"An error occurred during COCO evaluation: {e}")
    print("Please ensure your annotation and prediction JSON files are valid and contain data.")

# Clean up the dummy files, but only if they still hold the placeholder
# content written above -- never delete a user's real annotation files.
for path, dummy in [(anno_json_path, '{"images": [], "annotations": [], "categories": []}'),
                    (pred_json_path, '[]')]:
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == dummy:
                os.remove(path)
