Poster
Proper Network Interpretability Helps Adversarial Robustness in Classification
Akhilan Boopathy · Sijia Liu · Gaoyuan Zhang · Cynthia Liu · Pin-Yu Chen · Shiyu Chang · Luca Daniel

Wed Jul 15 08:00 AM -- 08:45 AM & Wed Jul 15 08:00 PM -- 08:45 PM (PDT)

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or that interpretability is itself susceptible to adversarial attacks. In this paper, we theoretically show that with a proper measurement of interpretation, it is actually difficult to prevent prediction-evasion adversarial attacks from causing interpretation discrepancy, as confirmed by experiments on MNIST, CIFAR-10, and Restricted ImageNet. Spurred by that, we develop an interpretability-aware defensive scheme built only on promoting robust interpretation (without the need to resort to adversarial loss minimization). We show that our defense achieves both robust classification and robust interpretation, outperforming state-of-the-art adversarial training methods, particularly against attacks with large perturbations.
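To make the notion of "interpretation discrepancy" concrete, here is a minimal, hedged sketch (not the paper's exact formulation): it compares input-gradient saliency maps of a clean input and a perturbed input for a toy two-layer network, using an l1 distance as one possible measurement of how much the interpretation changes under perturbation. All model sizes and the perturbation here are illustrative assumptions.

```python
import numpy as np

# Toy 2-layer network: logits(x) = W2 @ tanh(W1 @ x)
rng = np.random.default_rng(0)
D, H, C = 8, 16, 3          # input dim, hidden dim, number of classes
W1 = rng.normal(size=(H, D)) * 0.5
W2 = rng.normal(size=(C, H)) * 0.5

def logits(x):
    return W2 @ np.tanh(W1 @ x)

def saliency(x, cls):
    # Input-gradient interpretation: d(logit_cls)/dx.
    # Chain rule through tanh: W1^T @ ((1 - tanh(h)^2) * W2[cls]).
    h = W1 @ x
    return W1.T @ ((1.0 - np.tanh(h) ** 2) * W2[cls])

def interpretation_discrepancy(x, x_adv, cls):
    # l1 distance between the two saliency maps: one simple way to
    # quantify how much a perturbation changes the interpretation.
    return np.abs(saliency(x, cls) - saliency(x_adv, cls)).sum()

x = rng.normal(size=D)
delta = 0.1 * rng.normal(size=D)   # stand-in for an adversarial perturbation
cls = int(np.argmax(logits(x)))
d = interpretation_discrepancy(x, x + delta, cls)
print(f"interpretation discrepancy: {d:.4f}")
```

An interpretability-aware defense in the spirit of the abstract would add a penalty like `interpretation_discrepancy` to the training loss, encouraging saliency maps to stay stable under perturbation rather than minimizing an adversarial classification loss directly.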

Author Information

Akhilan Boopathy (MIT)
Sijia Liu (MIT-IBM Watson AI Lab)

Sijia Liu is a Research Staff Member at the MIT-IBM Watson AI Lab, IBM Research. Prior to joining IBM Research, he was a Postdoctoral Research Fellow at the University of Michigan, Ann Arbor. He received the Ph.D. degree (with All University Doctoral Prize) in electrical and computer engineering from Syracuse University, NY, USA, in 2016. His recent research interests include deep learning, adversarial machine learning, gradient-free optimization, nonconvex optimization, and graph data analytics. He received the Best Student Paper Finalist Award at the Asilomar Conference on Signals, Systems, and Computers (Asilomar'13), and the Best Student Paper Award at the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'17). He served as a general chair of the symposium "Signal Processing for Adversarial Machine Learning" at GlobalSIP 2018, and as co-chair of the workshop "Adversarial Learning Methods for Machine Learning and Data Mining" at KDD 2019.

Gaoyuan Zhang (IBM Research)
Cynthia Liu (Massachusetts Institute of Technology)
Pin-Yu Chen (IBM Research AI)
Shiyu Chang (MIT-IBM Watson AI Lab)
Luca Daniel (Massachusetts Institute of Technology)
