InterpretML

0.7.8 · active · verified Thu Apr 16

InterpretML is an open-source Python library for training inherently interpretable ("glassbox") models and for explaining black-box machine learning systems. It provides a unified API across interpretability techniques, including Explainable Boosting Machines (EBMs), LIME, and SHAP, along with interactive visualizations for understanding model behavior both globally and locally. The library is actively maintained with frequent minor releases and is currently at version 0.7.8.

Common errors

Warnings

Install
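
InterpretML is published on PyPI under the name `interpret`; a minimal install looks like this (the `interpret-core` variant is the same package with fewer optional dependencies):

```shell
pip install interpret
# Lighter dependency footprint (no bundled visualization extras):
# pip install interpret-core
```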

Imports

Quickstart

This quickstart demonstrates how to train a glassbox model, specifically an Explainable Boosting Machine (EBM) classifier, and generate a global explanation with the `interpret` library. It loads the Iris dataset, performs a train-test split, trains the EBM, and calls `explain_global()` to summarize which features drive the model's predictions. The `show()` function renders an interactive visualization, typically within a Jupyter notebook.

import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_iris

# Load data
data = load_iris()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)

# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)

# Train an Explainable Boosting Machine (EBM) classifier
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)

# Get a global explanation for the model
ebm_global = ebm.explain_global()

# Display the global explanation (typically in a Jupyter environment)
# show(ebm_global)

print("Model training complete. To view explanations, uncomment 'show(ebm_global)' in a Jupyter environment.")
