Explanation by Progressive Exaggeration

Sumedha Singla, Brian Pollack, Junxiang Chen, Kayhan Batmanghelich

Keywords: black box, gan, interpretability

Thurs Session 4 (17:00-19:00 GMT)
Thurs Session 5 (20:00-22:00 GMT)
Thursday: Fairness, Interpretability and Deployment

Abstract: As machine learning methods see greater adoption in high-stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical. Classical approaches that assess feature importance (e.g., saliency maps) do not explain how and why a particular region of an image is relevant to the prediction. We propose a method that explains the outcome of a classification black box by gradually exaggerating the semantic effect of a given class. Given a query input to a classifier, our method produces a progressive set of plausible variations of that query, which gradually change the posterior probability from its original class to its negation. These counterfactually generated samples preserve features unrelated to the classification decision, so that a user can employ our method as a "tuning knob" to traverse the data manifold while crossing the decision boundary. Our method is model agnostic and only requires the output value and gradient of the predictor with respect to its input.
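The "tuning knob" idea rests on a simple primitive: nudging the input along the classifier's gradient so the posterior probability moves smoothly toward the opposite class. The paper realizes this with a conditional GAN so that the variations stay plausible; the sketch below omits the GAN entirely and only illustrates the model-agnostic gradient traversal on a toy differentiable classifier (logistic regression). All names (`exaggerate`, `predict`, the weights) are hypothetical and not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy differentiable "black box": a 2D logistic-regression classifier.
# The method only needs its output value and input gradient.
w = np.array([1.5, -2.0])
b = 0.1

def predict(x):
    return sigmoid(x @ w + b)

def grad_wrt_input(x):
    # d sigmoid(x.w + b) / dx = p(1-p) * w
    p = predict(x)
    return p * (1.0 - p) * w

def exaggerate(x, target=0.95, step=0.5, max_iters=200):
    """Walk x along the classifier gradient until its posterior
    reaches `target`, recording the progressive set of variations."""
    x = x.copy()
    path = [x.copy()]
    for _ in range(max_iters):
        if predict(x) >= target:
            break
        x = x + step * grad_wrt_input(x)
        path.append(x.copy())
    return path

x0 = np.array([-1.0, 1.0])       # query with a low posterior
path = exaggerate(x0)            # progressive variations of the query
```

Each element of `path` plays the role of one "knob position": the posterior increases monotonically along the sequence, while in the full method a generator additionally keeps each intermediate sample on the data manifold.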

Similar Papers

BayesOpt Adversarial Attack
Binxin Ru, Adam Cobb, Arno Blaas, Yarin Gal
Transferable Perturbations of Deep Feature Distributions
Nathan Inkawhich, Kevin Liang, Lawrence Carin, Yiran Chen
Jacobian Adversarially Regularized Networks for Robustness
Alvin Chan, Yi Tay, Yew Soon Ong, Jie Fu