Saliency

python · v0.2.1 · verified Fri May 01

Framework-agnostic library for computing saliency maps (e.g., integrated gradients, SmoothGrad, XRAI) for deep learning models. Current version: 0.2.1. Release cadence is low, with updates driven by research contributions.

pip install saliency
error ModuleNotFoundError: No module named 'saliency.core'
cause Older version installed (<0.2.0) where core module didn't exist.
fix
Upgrade to latest: pip install --upgrade saliency
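To confirm which version is actually installed before debugging further, the standard library can query the distribution metadata. A minimal sketch (the helper name `installed_version` is illustrative, not part of the saliency API):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string, or None if not installed."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Anything below 0.2.0 predates the saliency.core module
print(installed_version("saliency"))
```

If this prints a version below 0.2.0, the upgrade above is the fix.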
error AttributeError: module 'saliency' has no attribute 'IntegratedGradients'
cause Importing directly from top-level saliency instead of submodule.
fix
Use: from saliency.core import IntegratedGradients
error TypeError: GetMask() missing 1 required positional argument: 'x_steps'
cause x_steps parameter is required since version 0.2.0 (old default removed).
fix
Provide x_steps explicitly, e.g., GetMask(..., x_steps=25)
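To see what `x_steps` controls: integrated gradients averages the gradient at `x_steps` points interpolated between the baseline and the input, so more steps means a closer Riemann approximation of the path integral. A toy 1-D sketch of the idea (not the library's implementation):

```python
def integrated_gradients_1d(grad_fn, x, baseline=0.0, x_steps=25):
    """Right-endpoint Riemann approximation of integrated gradients in 1-D."""
    total = 0.0
    for k in range(1, x_steps + 1):
        # Point along the straight-line path from baseline to x
        point = baseline + (k / x_steps) * (x - baseline)
        total += grad_fn(point)
    avg_grad = total / x_steps
    # Attribution = (input - baseline) * average gradient along the path
    return (x - baseline) * avg_grad

# For f(x) = x**2 (gradient 2x), the exact value from 0 to 3 is
# f(3) - f(0) = 9; the approximation tightens as x_steps grows
print(integrated_gradients_1d(lambda v: 2.0 * v, 3.0, x_steps=200))
```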
breaking In version 0.2.0, the API was overhauled: previous methods like `saliency.IntegratedGradients` moved to `saliency.core`. Code using old import paths will break.
fix Update imports: from saliency.core import IntegratedGradients (and likewise for other methods). Also update call signatures: in the core API, GetMask takes a model-call function rather than the old graph/session-style arguments.
deprecated The `GetMask` method on IntegratedGradients may be deprecated in a future release in favor of `compute_saliency` or similar naming. Check the CHANGELOG.
fix Monitor repository for updated API; aim to use any newer method names once released.
gotcha XRAI method requires both positive and negative attributions; using only positive attributions will produce incorrect masks.
fix Ensure you pass the full attributions tensor (including negative values) to XRAI, not a rectified version.
gotcha The library expects models to output logits (pre-softmax) for gradient calculations. Using softmax outputs may lead to vanishing gradients.
fix Always use a model that returns logits, or modify the model function to return logits before activation.

Compute integrated gradients on a simple Keras model.

import numpy as np
import tensorflow as tf
from saliency.core import IntegratedGradients
from saliency.core.base import INPUT_OUTPUT_GRADIENTS

# Build a simple model that outputs logits (pre-softmax; see gotcha above)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1)  # no sigmoid: gradients need raw logits
])

# Dummy input and baseline (a single example, no batch dimension)
x_input = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
baseline = np.zeros_like(x_input)

# The core API expects a call_model_function that computes input gradients
# and returns them under the INPUT_OUTPUT_GRADIENTS key
def call_model_function(x_value_batch, call_model_args=None, expected_keys=None):
    x = tf.convert_to_tensor(x_value_batch, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        output = model(x)
    gradients = tape.gradient(output, x)
    return {INPUT_OUTPUT_GRADIENTS: gradients.numpy()}

# Compute integrated gradients
ig = IntegratedGradients()
attributions = ig.GetMask(x_input, call_model_function,
                          x_baseline=baseline, x_steps=25)
print(attributions)