Faster COCO Evaluation
Faster-COCO-Eval is a Python library with a highly optimized C++ backend for COCO evaluation, delivering roughly a 3-4x speedup over the standard pycocotools. It acts as a drop-in replacement while adding extended metrics, support for new IoU types, compatibility with additional datasets (e.g., CrowdPose, LVIS), and visualization tools. The library is actively maintained and continuously updated with new features and bug fixes; the current release is 1.7.2.
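The drop-in replacement works by aliasing modules at import time so that existing `pycocotools` imports resolve to the faster implementation. A minimal, hypothetical sketch of the general technique (not the library's actual implementation; `fast_impl` and `legacy_impl` are made-up names for illustration):

```python
import sys
import types

# Build a stand-in "fast" module (in the real library this wraps compiled C++).
fast_impl = types.ModuleType("fast_impl")
fast_impl.evaluate = lambda: "fast result"

# Register it under the legacy name so later imports pick it up:
# `import` consults sys.modules before searching the filesystem.
sys.modules["legacy_impl"] = fast_impl

import legacy_impl  # resolves to fast_impl via sys.modules

print(legacy_impl.evaluate())  # -> fast result
```

`init_as_pycocotools()` applies the same idea to `pycocotools.coco` and `pycocotools.cocoeval`, which is why downstream code can keep its original import statements unchanged.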
Warnings
- breaking In version 1.2.2, the library removed its own precision-recall calculation and switched to COCOeval's method, breaking backward compatibility for code that relied on the previous results.
- breaking Version 1.4.2 introduced several breaking changes including `COCO.load_json` becoming a static function, the `in_percent` argument in `display_matrix` being replaced by `normalize`, and a rework of drawing functions.
- gotcha As of version 1.5.6, the `COCOevalEvaluateAccumulate` function was introduced to combine `COCOevalEvaluateImages` and `COCOevalAccumulate`. The `separate_eval` parameter, which defaults to `False`, controls this behavior. This changes the default evaluation flow.
- gotcha Support for `numpy>=2` was explicitly added in version 1.6.4. Older versions of `numpy` (e.g., `numpy<2`) might lead to compatibility issues, especially with Python 3.9+.
- gotcha Prior to version 1.7.2, the `extended_metrics` functionality could raise a `ValueError` if an `IoU` threshold of `0.50` was not explicitly present in the `iouThrs` list.
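Given the last gotcha above, it can be worth guaranteeing that `0.50` is present before passing a custom threshold list to the evaluator. A small defensive sketch (pure Python; the helper name `ensure_iou_threshold` is made up for illustration):

```python
def ensure_iou_threshold(iou_thrs, required=0.50, tol=1e-9):
    """Return iou_thrs as a sorted list that is guaranteed to contain `required`."""
    thrs = sorted(float(t) for t in iou_thrs)
    if not any(abs(t - required) < tol for t in thrs):
        thrs = sorted(thrs + [required])
    return thrs

# A custom sweep that skips 0.50 gets it added back.
print(ensure_iou_threshold([0.6, 0.75, 0.9]))  # [0.5, 0.6, 0.75, 0.9]

# The standard COCO range 0.50:0.05:0.95 already contains it and is left alone.
print(ensure_iou_threshold([0.5 + 0.05 * i for i in range(10)]))
```

The tolerance comparison avoids float-equality surprises when thresholds are generated arithmetically (e.g., via `np.linspace`).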
Install
- pip install faster-coco-eval
- pip install faster-coco-eval[extra]
Imports
- COCO
from faster_coco_eval import COCO
- COCOeval
from faster_coco_eval import COCOeval
- COCOeval_faster
from faster_coco_eval import COCOeval_faster
- init_as_pycocotools
import faster_coco_eval
faster_coco_eval.init_as_pycocotools()
- Curves
from faster_coco_eval.extra import Curves
Quickstart
import os
import faster_coco_eval

# Option 1: Use faster_coco_eval as a drop-in replacement for pycocotools
faster_coco_eval.init_as_pycocotools()
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Paths to your COCO-format JSON files (replace with your actual paths)
anno_json_path = "annotations_val2017.json"  # ground-truth annotations
pred_json_path = "results_predictions.json"  # model predictions

# Create empty placeholder files for demonstration only;
# in a real run these files already exist and contain actual data.
anno_created = pred_created = False
if not os.path.exists(anno_json_path):
    with open(anno_json_path, "w") as f:
        f.write('{"images": [], "annotations": [], "categories": []}')
    anno_created = True
if not os.path.exists(pred_json_path):
    with open(pred_json_path, "w") as f:
        f.write("[]")
    pred_created = True

try:
    # Load annotations and predictions
    coco_gt = COCO(anno_json_path)
    coco_dt = coco_gt.loadRes(pred_json_path)

    # Evaluate bounding boxes
    coco_eval = COCOeval(coco_gt, coco_dt, "bbox")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()
    print("COCO evaluation (bbox) summarized.")

    # Option 2: Use faster_coco_eval classes directly (alternative to init_as_pycocotools)
    from faster_coco_eval import COCO as FasterCOCO, COCOeval_faster

    coco_gt_f = FasterCOCO(anno_json_path)
    coco_dt_f = coco_gt_f.loadRes(pred_json_path)

    # Evaluate segmentation masks
    coco_eval_f = COCOeval_faster(coco_gt_f, coco_dt_f, "segm")
    coco_eval_f.evaluate()
    coco_eval_f.accumulate()
    coco_eval_f.summarize()
    print("Faster COCO evaluation (segm) summarized.")
except Exception as e:
    print(f"An error occurred during COCO evaluation: {e}")
    print("Please ensure your annotation and prediction JSON files are valid and contain data.")

# Clean up only the placeholder files created above, never pre-existing data
if anno_created:
    os.remove(anno_json_path)
if pred_created:
    os.remove(pred_json_path)
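The placeholder files in the quickstart are empty shells, so the evaluation has nothing to score. For a meaningful run, the JSON files need real content. A minimal example of the standard COCO ground-truth and detection-result layouts (field names follow the COCO data format; the file names and concrete values here are illustrative only):

```python
import json

# Ground truth: one image, one annotation, one category.
ground_truth = {
    "images": [{"id": 1, "width": 640, "height": 480, "file_name": "000000000001.jpg"}],
    "annotations": [{
        "id": 1, "image_id": 1, "category_id": 1,
        "bbox": [100.0, 100.0, 50.0, 80.0],  # [x, y, width, height]
        "area": 4000.0,  # required by the evaluator for area-range metrics
        "iscrowd": 0,
    }],
    "categories": [{"id": 1, "name": "person"}],
}

# Detection results: a flat list keyed to the image/category ids above.
results = [{
    "image_id": 1, "category_id": 1,
    "bbox": [102.0, 98.0, 49.0, 82.0],
    "score": 0.92,
}]

with open("annotations_val.json", "w") as f:
    json.dump(ground_truth, f)
with open("predictions.json", "w") as f:
    json.dump(results, f)
```

Files written this way can be passed to `COCO(...)` and `loadRes(...)` exactly as in the quickstart.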