InterpretML
InterpretML is an open-source Python library designed for training inherently interpretable models (glassbox models) and explaining black-box machine learning systems. It provides a unified API for various interpretability techniques, including Explainable Boosting Machines (EBMs), LIME, and SHAP, along with interactive visualizations to help users understand model behavior globally and locally. The library is actively maintained, with frequent minor releases (e.g., several in early 2026), and is currently at version 0.7.8.
Common errors
- TypeError: 'numpy.ufunc' object is not callable
  Cause: Older versions of `interpret` (e.g., <0.7.3) can raise a UFuncTypeError in `ShapKernel.explain_local` with certain data types.
  Fix: Upgrade `interpret` to version 0.7.3 or later. If the issue persists, ensure your data types are compatible with the NumPy operations the explainer expects.
- TypeError: is_classifier or is_regressor only accepting valid estimators
  Cause: scikit-learn 1.8+ tightened its validation functions, breaking older `interpret` releases.
  Fix: Update `interpret` to version 0.7.4 or newer, which adapts to the scikit-learn changes.
- AttributeError: module 'interpret.ebm.ebm_utils' has no attribute 'merge_models'
  Cause: The `merge_models` function in `EBMUtils` was renamed to `merge_ebms` in a past breaking change.
  Fix: Replace calls to `merge_models` with `merge_ebms`, importable as `from interpret.glassbox import merge_ebms`.
- NameError: name 'ComputeProvider' is not defined
  Cause: The `ComputeProvider` abstraction was removed in version 0.7.3 to simplify the API.
  Fix: Remove explicit references to `ComputeProvider`; it is no longer part of the public interface, and the functionality is now handled internally.
Warnings
- breaking The shape of the `bags` parameter in EBM fitting functions changed from `(n_outer_bags, n_samples)` to `(n_samples, n_outer_bags)`.
- breaking The `EBMUtils.merge_models` function was renamed to `merge_ebms`.
- gotcha Incompatible with scikit-learn 1.8+, where `is_classifier` and `is_regressor` were tightened to accept only valid estimators; fixed in `interpret` 0.7.4.
- gotcha The `ComputeProvider` abstraction was removed, simplifying the interface.
- gotcha When passing NumPy arrays to explainers, feature names are not automatically inferred. Visualizations might lack descriptive labels.
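The last gotcha can be avoided by wrapping arrays in a pandas DataFrame before fitting or explaining (EBMs also accept a `feature_names` constructor argument); a minimal sketch with hypothetical column names:

```python
import numpy as np
import pandas as pd

# Raw NumPy arrays carry no column names, so explanation plots fall back
# to generic labels like feature_0001.
X_raw = np.random.RandomState(0).rand(100, 3)

# Wrapping in a DataFrame attaches descriptive names that explainers pick
# up automatically. ("age", "income", "tenure" are illustrative names.)
X_named = pd.DataFrame(X_raw, columns=["age", "income", "tenure"])
print(list(X_named.columns))  # ['age', 'income', 'tenure']
```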
Install
- pip install interpret
- pip install interpret-core
- pip install interpret[shap]
- pip install interpret[lime]
Imports
- ExplainableBoostingClassifier
from interpret.glassbox import ExplainableBoostingClassifier
- ExplainableBoostingRegressor
from interpret.glassbox import ExplainableBoostingRegressor
- show
from interpret import show
- ShapKernel
from interpret.blackbox import ShapKernel
- PartialDependence
from interpret.blackbox import PartialDependence
Quickstart
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_iris
# Load data
data = load_iris()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)
# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
# Train an Explainable Boosting Machine (EBM) classifier
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)
# Get a global explanation for the model
ebm_global = ebm.explain_global()
# Display the global explanation (typically in a Jupyter environment)
# show(ebm_global)
print("Model training complete. To view explanations, uncomment 'show(ebm_global)' in a Jupyter environment.")