Beta Calibration

1.1.0 · active · verified Tue Apr 14

BetaCal provides a Python implementation of Beta Calibration, a method for calibrating the predicted probabilities of binary classifiers. It is a well-founded, easily implemented improvement over traditional logistic (Platt) calibration: it is particularly effective when a classifier's scores are pushed too far toward the extremes, and, because its model family contains the identity map, it can leave an already well-calibrated model unchanged where logistic calibration would distort it. The current version is 1.1.0, last released in April 2021. While functional and widely cited, the package's slow release cadence suggests it is in a maintenance rather than active development phase.
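The calibration map behind the method has a simple closed form, mu(s) = 1 / (1 + 1 / (exp(c) * s**a / (1 - s)**b)), as given in the Beta Calibration paper. A minimal NumPy sketch, assuming the parameters a, b, c have already been fitted (the package's 'abm' variant re-expresses c through a midpoint m, which this sketch omits):

```python
import numpy as np

def beta_map(s, a, b, c):
    """Beta calibration map: mu(s) = 1 / (1 + 1 / (exp(c) * s**a / (1 - s)**b)).

    `s` must lie strictly in (0, 1); a, b, c are fitted parameters
    (here assumed given, not estimated).
    """
    s = np.asarray(s, dtype=float)
    return 1.0 / (1.0 + 1.0 / (np.exp(c) * s**a / (1.0 - s)**b))

# With a = b = 1 and c = 0 the map is the identity, which is why
# beta calibration can leave a well-calibrated model untouched.
print(beta_map([0.2, 0.5, 0.7], a=1.0, b=1.0, c=0.0))
```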

Warnings

Install

pip install betacal

Imports

from betacal import BetaCalibration

Quickstart

Demonstrates how to initialize, fit, and apply BetaCalibration to transform raw classifier scores into calibrated probabilities, using synthetic data to illustrate the basic workflow.

import numpy as np
from betacal import BetaCalibration

# Generate some dummy data
np.random.seed(42)
scores = np.random.rand(100) # Raw classifier scores, strictly in (0, 1)
labels = np.random.randint(0, 2, 100) # True binary labels

# Initialize BetaCalibration with default 'abm' parameters
# 'abm' (alpha, beta, mu) is a three-parameter model
bc = BetaCalibration(parameters='abm')

# Fit the calibrator to the scores and true labels
bc.fit(scores, labels)

# Predict calibrated probabilities
calibrated_scores = bc.predict(scores)

print(f"Original scores (first 5): {scores[:5].round(3)}")
print(f"Calibrated scores (first 5): {calibrated_scores[:5].round(3)}")
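Under the hood, the full beta calibration model can be fitted as a logistic regression on log-transformed scores, as described in the Kull et al. paper. A sketch of that equivalence using scikit-learn (this is not the package's own fitting code, which adds safeguards such as refitting when a or b comes out negative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

np.random.seed(0)
# Synthetic scores that are already calibrated: labels drawn with p = s,
# so the fitted map should come out close to the identity (a≈1, b≈1, c≈0).
s = np.clip(np.random.rand(500), 1e-6, 1 - 1e-6)
y = (np.random.rand(500) < s).astype(int)

# Beta calibration == logistic regression on features (ln s, -ln(1 - s)).
# Weak regularization (large C) approximates a plain maximum-likelihood fit.
X = np.column_stack([np.log(s), -np.log1p(-s)])
lr = LogisticRegression(C=1e6).fit(X, y)
a, b = lr.coef_[0]
c = lr.intercept_[0]

# Apply the resulting beta map to recover calibrated probabilities.
calibrated = 1.0 / (1.0 + 1.0 / (np.exp(c) * s**a / (1.0 - s)**b))
print(f"a={a:.3f}, b={b:.3f}, c={c:.3f}")
```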
