Interpret-Core

0.7.8 · active · verified Sun Apr 12

Interpret-Core is the minimal-dependency core of the InterpretML library. It provides tools to fit inherently interpretable ("glassbox") models such as Explainable Boosting Machines (EBMs), as well as tools to explain black-box machine learning models. The current release is 0.7.8, and the project maintains an active release cadence with frequent updates.

Warnings

Install
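
Interpret-Core is distributed on PyPI as `interpret-core`; a standard pip installation looks like the following (the full `interpret` package additionally bundles the visualization dependencies used by `show()`):

```shell
# Minimal core package (no visualization extras)
pip install interpret-core

# Or, for the full package including interactive visualization support:
# pip install interpret
```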

Imports

Quickstart

This quickstart trains an Explainable Boosting Machine (EBM) on a classification task and generates both global and local explanations. It covers data preparation, model fitting, and programmatic access to the explanation data. For interactive visualizations, call `show(explanation_object)` in a compatible environment such as a Jupyter notebook.

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Generate some synthetic data
np.random.seed(0)
X = pd.DataFrame({
    'feature_a': np.random.rand(100) * 10,
    'feature_b': np.random.randint(0, 3, 100).astype(str),
    'feature_c': np.random.randn(100)
})
y = (X['feature_a'] + (X['feature_b'].astype(int) * 2) + np.random.randn(100) > 10).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize and fit an Explainable Boosting Machine (EBM)
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)

# Get global explanations (feature importances and shapes)
ebm_global = ebm.explain_global()

# In a notebook environment, call show(ebm_global) to render the interactive visualization
# In a script, the underlying explanation data can be inspected directly
print(f"Global Explanation for EBM:\n{ebm_global.data()}")

# Get local explanations for a specific sample
ebm_local = ebm.explain_local(X_test.iloc[:1], y_test.iloc[:1])
print(f"\nLocal Explanation for first test sample:\n{ebm_local.data()}")
