{"id":5926,"library":"faster-coco-eval","title":"Faster COCO Evaluation","description":"Faster-COCO-Eval is a Python library that provides a highly optimized C++ implementation for COCO evaluation, offering significantly faster performance (3-4x speedup) compared to the standard pycocotools. It acts as a drop-in replacement, providing extended metrics, support for new IoU types, compatibility with various datasets (e.g., CrowdPose, LVIS), and advanced visualization tools. The library is actively maintained and continuously updated with new features and bug fixes, currently at version 1.7.2.","status":"active","version":"1.7.2","language":"en","source_language":"en","source_url":"https://github.com/MiXaiLL76/faster_coco_eval","tags":["computer vision","object detection","segmentation","keypoint detection","COCO","evaluation","metrics","performance"],"install":[{"cmd":"pip install faster-coco-eval","lang":"bash","label":"Basic installation (core functionality)"},{"cmd":"pip install faster-coco-eval[extra]","lang":"bash","label":"Full installation (includes visualization tools)"}],"dependencies":[{"reason":"Fundamental package for scientific computing in Python, used throughout for array operations.","package":"numpy"},{"reason":"Although faster-coco-eval is a replacement, pycocotools is listed as a required dependency on PyPI, suggesting it provides underlying structures or format compatibility.","package":"pycocotools"},{"reason":"Required for advanced visualization features like metric curves. Included with '[extra]' installation.","package":"plotly","optional":true},{"reason":"Potentially used for mask API backends and other image processing utilities. 
Included with '[extra]' installation.","package":"opencv-python-headless","optional":true}],"imports":[{"note":"Main class for loading COCO annotations.","symbol":"COCO","correct":"from faster_coco_eval import COCO"},{"note":"Standard COCO evaluation class (faster implementation).","symbol":"COCOeval","correct":"from faster_coco_eval import COCOeval"},{"note":"Explicitly use the faster evaluation class.","symbol":"COCOeval_faster","correct":"from faster_coco_eval import COCOeval_faster"},{"note":"Activates faster-coco-eval as a drop-in replacement for pycocotools, allowing existing pycocotools import statements to use the faster backend.","symbol":"init_as_pycocotools","correct":"import faster_coco_eval\nfaster_coco_eval.init_as_pycocotools()"},{"note":"Utility for plotting precision-recall and other metric curves.","symbol":"Curves","correct":"from faster_coco_eval.extra import Curves"}],"quickstart":{"code":"import os\nimport faster_coco_eval\n\n# Option 1: Use faster_coco_eval as a drop-in replacement for pycocotools\nfaster_coco_eval.init_as_pycocotools()\nfrom pycocotools.coco import COCO\nfrom pycocotools.cocoeval import COCOeval\n\n# Paths to COCO-format JSON files (replace with your actual paths)\nanno_json_path = \"annotations_val2017.json\"  # Ground truth annotations\npred_json_path = \"results_predictions.json\"  # Model predictions\n\n# Create empty dummy files for demonstration only; track what we create\n# so the cleanup step never deletes a user's real files.\ncreated_files = []\nif not os.path.exists(anno_json_path):\n    with open(anno_json_path, 'w') as f:\n        f.write('{\"images\": [], \"annotations\": [], \"categories\": []}')\n    created_files.append(anno_json_path)\nif not os.path.exists(pred_json_path):\n    with open(pred_json_path, 'w') as f:\n        f.write('[]')\n    created_files.append(pred_json_path)\n\n# Load annotations and predictions.\n# Note: the empty dummy files above will trigger the except branch;\n# real data is required for a meaningful run.\ntry:\n    coco_gt = COCO(anno_json_path)\n    coco_dt = coco_gt.loadRes(pred_json_path)\n\n    # Evaluate bounding boxes\n    coco_eval = COCOeval(coco_gt, coco_dt, \"bbox\")\n    coco_eval.evaluate()\n    coco_eval.accumulate()\n    coco_eval.summarize()\n    print(\"COCO evaluation (bbox) summarized.\")\n\n    # Option 2: Directly use faster_coco_eval classes (alternative to init_as_pycocotools)\n    from faster_coco_eval import COCO as FasterCOCO, COCOeval_faster\n\n    # Load annotations and predictions\n    coco_gt_f = FasterCOCO(anno_json_path)\n    coco_dt_f = coco_gt_f.loadRes(pred_json_path)\n\n    # Evaluate segmentation masks\n    coco_eval_f = COCOeval_faster(coco_gt_f, coco_dt_f, \"segm\")\n    coco_eval_f.evaluate()\n    coco_eval_f.accumulate()\n    coco_eval_f.summarize()\n    print(\"Faster COCO evaluation (segm) summarized.\")\n\nexcept Exception as e:\n    print(f\"An error occurred during COCO evaluation: {e}\")\n    print(\"Please ensure your annotation and prediction JSON files are valid and contain data.\")\n\n# Clean up only the dummy files created above\nfor path in created_files:\n    os.remove(path)\n","lang":"python","description":"This quickstart demonstrates two ways to use `faster-coco-eval`. The first calls `faster_coco_eval.init_as_pycocotools()` to patch `pycocotools` with the faster implementation, so existing `pycocotools` imports pick up the faster backend unchanged. The second uses `faster_coco_eval`'s `COCO` and `COCOeval_faster` classes directly. 
You will need COCO-formatted ground truth annotation and prediction JSON files."},"warnings":[{"fix":"Review any custom logic relying on older precision-recall calculation methods and update it to align with the standard COCO eval approach.","message":"In version 1.2.2, the library removed its own precision-recall calculation and switched to the COCO eval method, breaking backward compatibility for code that relied on the previous behavior.","severity":"breaking","affected_versions":">=1.2.2"},{"fix":"Update calls to `COCO.load_json` to use it as a static method, adjust `display_matrix` arguments, and review usage of drawing functions for API changes.","message":"Version 1.4.2 introduced several breaking changes: `COCO.load_json` became a static function, the `in_percent` argument of `display_matrix` was replaced by `normalize`, and the drawing functions were reworked.","severity":"breaking","affected_versions":">=1.4.2"},{"fix":"Be aware of the combined evaluation flow. If you require separate evaluation steps, explicitly set `separate_eval=True` when initializing `COCOeval_faster` or `COCOeval`.","message":"As of version 1.5.6, the `COCOevalEvaluateAccumulate` function combines `COCOevalEvaluateImages` and `COCOevalAccumulate`. The `separate_eval` parameter, which defaults to `False`, controls this behavior and changes the default evaluation flow.","severity":"gotcha","affected_versions":">=1.5.6"},{"fix":"Upgrade faster-coco-eval to version 1.6.4 or later before moving to `numpy>=2`; on older releases, pin `numpy<2`.","message":"Support for `numpy>=2` was only added in version 1.6.4. Releases prior to 1.6.4 may fail when installed alongside `numpy>=2`, especially on Python 3.9+.","severity":"gotcha","affected_versions":"<1.6.4"},{"fix":"Ensure that `0.50` is included in your `iouThrs` list if using `extended_metrics` on older versions, or update to version 1.7.2 or later.","message":"Prior to version 1.7.2, `extended_metrics` could raise a `ValueError` if an IoU threshold of `0.50` was not explicitly present in the `iouThrs` list.","severity":"gotcha","affected_versions":"<1.7.2"}],"env_vars":null,"last_verified":"2026-04-14T00:00:00.000Z","next_check":"2026-07-13T00:00:00.000Z"}