{"id":4580,"library":"interpret-core","title":"Interpret-Core","description":"Interpret-Core is the minimal-dependency core of the InterpretML library, providing tools to fit inherently interpretable models such as Explainable Boosting Machines (EBMs) and to explain black-box machine learning models. It is currently at version 0.7.8 and maintains an active release cadence with frequent updates.","status":"active","version":"0.7.8","language":"en","source_language":"en","source_url":"https://github.com/interpretml/interpret","tags":["machine learning","interpretability","XAI","explainable AI","EBM","glassbox models","model explanation"],"install":[{"cmd":"pip install interpret-core","lang":"bash","label":"Basic installation"},{"cmd":"pip install interpret-core[required,ebm,plotly,dash]","lang":"bash","label":"Installation with common optional dependencies (EBM, visualizations)"}],"dependencies":[{"reason":"Fundamental for numerical operations and data handling.","package":"numpy"},{"reason":"Provides scientific computing routines used alongside numpy.","package":"scipy"},{"reason":"Provides common ML estimators and utilities, ensuring compatibility with the broader ecosystem.","package":"scikit-learn"},{"reason":"Provides flexible data structures for input data; optimized handling is used when it is not installed.","package":"pandas"},{"reason":"Used for efficient parallel computing and caching.","package":"joblib"},{"reason":"Optional, for interactive visualizations in notebooks/web apps.","package":"dash","optional":true},{"reason":"Optional, for rich interactive data visualizations.","package":"plotly","optional":true},{"reason":"Optional, for LIME explainer integration.","package":"lime","optional":true},{"reason":"Optional, for SHAP explainer integration.","package":"shap","optional":true}],"imports":[{"symbol":"ExplainableBoostingClassifier","correct":"from interpret.glassbox import ExplainableBoostingClassifier"},{"symbol":"ExplainableBoostingRegressor","correct":"from interpret.glassbox import ExplainableBoostingRegressor"},{"note":"The `show` function for visualizations is exposed from the top-level `interpret` package, not from a submodule such as `visualize`.","wrong":"from interpret.visualize import show","symbol":"show","correct":"from interpret import show"}],"quickstart":{"code":"import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom interpret.glassbox import ExplainableBoostingClassifier\nfrom interpret import show\n\n# Generate some synthetic data\nnp.random.seed(0)\nX = pd.DataFrame({\n    'feature_a': np.random.rand(100) * 10,\n    'feature_b': np.random.randint(0, 3, 100).astype(str),\n    'feature_c': np.random.randn(100)\n})\ny = (X['feature_a'] + (X['feature_b'].astype(int) * 2) + np.random.randn(100) > 10).astype(int)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n\n# Initialize and fit an Explainable Boosting Machine (EBM)\nebm = ExplainableBoostingClassifier(random_state=42)\nebm.fit(X_train, y_train)\n\n# Get global explanations (feature importances and shapes)\nebm_global = ebm.explain_global()\n\n# In a notebook environment, you would call show(ebm_global) to visualize\n# For script execution, we print the underlying explanation data instead\nprint(f\"Global Explanation for EBM:\\n{ebm_global.data()}\")\n\n# Get local explanations for a specific sample; data(0) selects the first instance\nebm_local = ebm.explain_local(X_test[:1], y_test[:1])\nprint(f\"\\nLocal Explanation for first test sample:\\n{ebm_local.data(0)}\")\n","lang":"python","description":"This quickstart demonstrates how to train an Explainable Boosting Machine (EBM) for a classification task and generate both global and local explanations. It covers data preparation, model fitting, and accessing explanation data programmatically. For interactive visualizations, `show(explanation_object)` would be used in a compatible environment such as a Jupyter notebook."},"warnings":[{"fix":"Ensure the `bags` parameter is passed with shape (n_samples, n_outer_bags), so its first dimension matches the number of samples in `X`.","message":"The shape of the `bags` parameter in EBMs was changed from (n_outer_bags, n_samples) to (n_samples, n_outer_bags). In v0.7.0, the old format issued a warning and was accepted, but this behavior may be fully deprecated or removed in future major versions.","severity":"breaking","affected_versions":"v0.7.0 and later"},{"fix":"Refactor code to remove dependencies on `ComputeProvider`. Refer to the latest documentation for the updated simplified interface.","message":"The `ComputeProvider` abstraction was removed in v0.7.3, simplifying the interface. Code relying on this abstraction will break.","severity":"breaking","affected_versions":"v0.7.3 and later"},{"fix":"Upgrade `interpret-core` to v0.7.4 or newer to resolve `scikit-learn` compatibility issues. Alternatively, pin `scikit-learn` to a version below 1.8 if an upgrade is not possible.","message":"Older versions of `interpret-core` (prior to v0.7.4) had incompatibilities with `scikit-learn` versions 1.8 and above, specifically because `is_classifier` and `is_regressor` began accepting only valid estimators.","severity":"gotcha","affected_versions":"< v0.7.4"},{"fix":"Pass a Pandas DataFrame with meaningful column names, or explicitly set the `feature_names` property when initializing or calling explain functions for array inputs.","message":"When using NumPy arrays as input to explainers, feature names might not appear in visualizations. This occurs because NumPy arrays lack inherent column names.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Remember that the y-axis represents log-odds (logits). Positive values push towards the positive class; apply the logistic (sigmoid) function to convert logits to probabilities when a direct comparison in probability space is desired.","message":"For `ExplainableBoostingClassifier`, the y-axis values in the generated global explanation graphs are in 'logit' space, not direct probabilities. This requires careful interpretation for classification tasks.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-12T00:00:00.000Z","next_check":"2026-07-11T00:00:00.000Z"}