Over recent years, devising classification algorithms that are robust to adversarial perturbations has emerged as a challenging problem. In particular, deep neural networks (DNNs) seem to be susceptible to small, imperceptible changes to test instances. However, the line of work on provable robustness has so far focused on information-theoretic robustness, ruling out even the existence of any adversarial examples. In this work, we study whether there is hope of benefiting from the algorithmic nature of an attacker that searches for adversarial examples, and we ask whether there is any learning task for which it is possible to design classifiers that are robust only against polynomial-time adversaries. Indeed, numerous cryptographic tasks (e.g., encryption of long messages) can only be secure against computationally bounded adversaries, and are impossible to secure against computationally unbounded attackers. Thus, it is natural to ask whether the same strategy could help robust learning. We show that the computational limitations of attackers can indeed be useful in robust learning, by demonstrating a classifier for a learning task on which computational and information-theoretic adversaries with bounded perturbations have very different power. Namely, while computationally unbounded adversaries can attack successfully and find adversarial examples with small perturbations, polynomial-time adversaries are unable to do so unless they can break standard cryptographic hardness assumptions.
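The abstract states the separation only at a conceptual level; the construction itself is not spelled out here. As a purely illustrative, hypothetical sketch (not the work's actual construction), the Python snippet below conveys the cryptographic intuition: if a classifier's positive label is gated on verifying a keyed MAC tag, then turning a rejected input into an accepted one by perturbing it amounts to forging a tag, which a polynomial-time adversary cannot do under standard assumptions, whereas a computationally unbounded adversary could.

```python
import hmac
import hashlib
import secrets

# Hypothetical toy, not the construction from the talk: the classifier's
# positive label is gated on a keyed MAC tag carried alongside the payload.
KEY = secrets.token_bytes(32)  # secret known only to the classifier

def tag(payload: bytes) -> bytes:
    """Compute the HMAC-SHA256 tag of a payload under the classifier's key."""
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def classify(payload: bytes, t: bytes) -> int:
    """Label 1 iff the tag verifies for the payload, else 0."""
    return 1 if hmac.compare_digest(tag(payload), t) else 0

x = b"benign input"
t = tag(x)
print(classify(x, t))                # 1: the tag verifies
print(classify(b"benign inpuT", t))  # 0: a one-character perturbation breaks verification
# To flip a 0-labeled input to label 1, an attacker must produce a valid
# (payload, tag) pair, i.e., forge an HMAC tag -- infeasible in polynomial
# time under standard assumptions, though possible for an unbounded attacker
# that can brute-force KEY.
```

This toy only illustrates the hardness side for bounded adversaries; the actual separation claimed in the work requires a more careful construction than this sketch.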
Short Biography
Somesh Jha received his B.Tech from the Indian Institute of Technology, New Delhi in Electrical Engineering. He received his Ph.D. in Computer Science from Carnegie Mellon University under the supervision of Prof. Edmund Clarke (a Turing Award winner). Currently, Somesh Jha is the Lubar Professor in the Computer Sciences Department at the University of Wisconsin (Madison). His work focuses on analysis of security protocols, survivability analysis, intrusion detection, formal methods for security, and analyzing malicious code. Recently, he has focused his interest on privacy and adversarial ML (AML). Somesh Jha has published several articles in highly refereed conferences and prominent journals. He has won numerous best-paper and distinguished-paper awards. Prof. Jha is a fellow of the ACM, IEEE, and AAAS.
Author Information
Somesh Jha (University of Wisconsin, Madison)
More from the Same Authors
- 2021 : A Shuffling Framework For Local Differential Privacy »
  Casey M Meehan · Amrita Roy Chowdhury · Kamalika Chaudhuri · Somesh Jha
- 2022 : The Trade-off between Label Efficiency and Universality of Representations from Contrastive Learning »
  Zhenmei Shi · Jiefeng Chen · Kunyang Li · Jayaram Raghuram · Xi Wu · Yingyu Liang · Somesh Jha
- 2023 Poster: Concept-based Explanations for Out-of-Distribution Detectors »
  Jihye Choi · Jayaram Raghuram · Ryan Feng · Jiefeng Chen · Somesh Jha · Atul Prakash
- 2023 Poster: Stratified Adversarial Robustness with Rejection »
  Jiefeng Chen · Jayaram Raghuram · Jihye Choi · Xi Wu · Yingyu Liang · Somesh Jha
- 2021 Poster: A General Framework For Detecting Anomalous Inputs to DNN Classifiers »
  Jayaram Raghuram · Varun Chandrasekaran · Somesh Jha · Suman Banerjee
- 2021 Oral: A General Framework For Detecting Anomalous Inputs to DNN Classifiers »
  Jayaram Raghuram · Varun Chandrasekaran · Somesh Jha · Suman Banerjee
- 2021 Poster: Sample Complexity of Robust Linear Classification on Separated Data »
  Robi Bhattacharjee · Somesh Jha · Kamalika Chaudhuri
- 2021 Spotlight: Sample Complexity of Robust Linear Classification on Separated Data »
  Robi Bhattacharjee · Somesh Jha · Kamalika Chaudhuri
- 2020 Poster: Data-Dependent Differentially Private Parameter Learning for Directed Graphical Models »
  Amrita Roy Chowdhury · Theodoros Rekatsinas · Somesh Jha
- 2020 Poster: Concise Explanations of Neural Networks using Adversarial Training »
  Prasad Chalasani · Jiefeng Chen · Amrita Roy Chowdhury · Xi Wu · Somesh Jha
- 2020 Poster: CAUSE: Learning Granger Causality from Event Sequences using Attribution Methods »
  Wei Zhang · Thomas Panum · Somesh Jha · Prasad Chalasani · David Page
- 2019 Workshop: Workshop on the Security and Privacy of Machine Learning »
  Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song
- 2018 Poster: Analyzing the Robustness of Nearest Neighbors to Adversarial Examples »
  Yizhen Wang · Somesh Jha · Kamalika Chaudhuri
- 2018 Oral: Analyzing the Robustness of Nearest Neighbors to Adversarial Examples »
  Yizhen Wang · Somesh Jha · Kamalika Chaudhuri
- 2018 Poster: Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training »
  Xi Wu · Wooyeong Jang · Jiefeng Chen · Lingjiao Chen · Somesh Jha
- 2018 Oral: Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training »
  Xi Wu · Wooyeong Jang · Jiefeng Chen · Lingjiao Chen · Somesh Jha