Concise Explanations of Neural Networks using Adversarial Training
Prasad Chalasani · Jiefeng Chen · Amrita Roy Chowdhury · Xi Wu · Somesh Jha

Thu Jul 16 06:00 AM -- 06:45 AM & Thu Jul 16 05:00 PM -- 05:45 PM (PDT)
We show new connections between adversarial learning and explainability for deep neural networks (DNNs). One form of explanation of a neural network's output in terms of its input features is a vector of feature attributions, which can be generated by various techniques such as Integrated Gradients (IG), DeepSHAP, LIME, and CXPlain. Two desirable characteristics of an attribution-based explanation are: (1) \textit{sparseness}: the attributions of irrelevant or weakly relevant features should be negligible, resulting in \textit{concise} explanations in terms of the significant features, and (2) \textit{stability}: the explanation should not vary significantly within a small local neighborhood of the input. Our first contribution is a theoretical exploration of how these two properties (when using IG-based attributions) relate to adversarial training, for a class of 1-layer networks (which includes logistic regression models for binary and multi-class classification); for these networks we show that (a) adversarial training using an $\ell_\infty$-bounded adversary produces models with sparse attribution vectors, and (b) natural model training that encourages stable explanations (via an extra term in the loss function) is equivalent to adversarial training. Our second contribution is an empirical verification of phenomenon (a), which we show, somewhat surprisingly, occurs \textit{not only in 1-layer networks, but also in DNNs trained on standard image datasets}, and extends beyond IG-based attributions to those based on DeepSHAP: adversarial training with $\ell_\infty$-bounded perturbations yields significantly sparser attribution vectors, with little degradation in performance on natural test data, compared to natural training. Moreover, the sparseness of the attribution vectors is significantly better than that achievable via $\ell_1$-regularized natural training.
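As a concrete illustration of the quantities in play, below is a minimal PyTorch sketch (not the authors' code) of (i) adversarial training against an $\ell_\infty$-bounded PGD adversary, (ii) the standard Riemann-sum approximation of Integrated Gradients, (iii) the Gini index of the absolute attributions as one possible sparseness measure, and (iv) a first-order surrogate for a stability-encouraging loss term. The model interface, the hyperparameters (eps, step sizes, 50 IG steps), the zero baseline, and the specific surrogate are illustrative assumptions, not values or formulas taken from the paper.

```python
# Minimal sketch, assuming a PyTorch classifier `model` mapping a batched
# input to logits of shape [batch, num_classes]. Hyperparameters are
# illustrative, not the paper's.
import torch
import torch.nn.functional as F


def pgd_linf(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """PGD attack, l_inf-bounded by eps (for sketch purposes; real code
    would also put the model in eval mode during the attack)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient-ascent step
            delta.clamp_(-eps, eps)              # project back into l_inf ball
        delta.grad.zero_()
    return (x + delta).detach()


def adversarial_train_step(model, optimizer, x, y, eps=0.1):
    """One step of adversarial training: fit the model on PGD examples."""
    x_adv = pgd_linf(model, x, y, eps=eps)
    optimizer.zero_grad()                        # clear grads from the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()


def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Riemann-sum approximation of IG attributions for class `target`."""
    if baseline is None:
        baseline = torch.zeros_like(x)           # common default baseline
    total_grad = torch.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        point.requires_grad_(True)
        score = model(point)[:, target].sum()
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    return (x - baseline) * total_grad / steps


def gini_sparseness(attrib):
    """Gini index of |attributions|: higher means more concentrated,
    i.e. a sparser (more concise) explanation."""
    a, _ = attrib.abs().flatten().sort()
    n = a.numel()
    idx = torch.arange(1, n + 1, dtype=a.dtype, device=a.device)
    return ((2 * idx - n - 1) * a).sum() / (n * a.sum() + 1e-12)


def stability_regularized_loss(model, x, y, lam=0.1):
    """Natural loss plus a stability-style penalty. As a rough first-order
    surrogate (an assumption, not the paper's exact term), we penalize the
    l1 norm of the loss gradient w.r.t. the input."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad.abs().flatten(start_dim=1).sum(dim=1).mean()
    return loss + lam * penalty
```

To probe phenomenon (a) qualitatively, one could train two copies of the same architecture, one naturally and one via `adversarial_train_step`, and compare `gini_sparseness(integrated_gradients(model, x, target))` on held-out inputs; the paper's claim is that the adversarially trained copy yields noticeably higher sparseness at little cost in natural test accuracy.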

Author Information

Prasad Chalasani (XaiPient)

Prasad Chalasani is CEO of XaiPient, whose mission is Explainable AI for Humans. He has a B.Tech in Computer Science from IIT Kharagpur and a PhD in ML from Carnegie Mellon University. His previous roles include Quant Researcher and Portfolio Manager at hedge funds (WorldQuant, HBK), and he has led quant research and data science teams at Goldman Sachs and Yahoo. Most recently he was Chief Scientist at MediaMath, leading ML for advertising.

Jiefeng Chen (University of Wisconsin-Madison)
Amrita Roy Chowdhury (University of Wisconsin-Madison)
Xi Wu (Google)

I completed my PhD in Computer Science at UW-Madison, advised by Jeffrey F. Naughton and Somesh Jha, and am now a software engineer at Google. [Google PhD Fellow 2016 in privacy and security](https://ai.googleblog.com/2016/03/announcing-2016-google-phd-fellows-for.html).

Somesh Jha (University of Wisconsin-Madison)
