Motivated by safety-critical applications, test-time attacks on classifiers via adversarial examples have recently received a great deal of attention. However, there is a general lack of understanding of why adversarial examples arise; whether they originate from inherent properties of the data or from a lack of training samples remains poorly understood. In this work, we introduce a theoretical framework, analogous to bias-variance theory, for understanding these effects. We use our framework to analyze the robustness of a canonical non-parametric classifier: the k-nearest neighbors. Our analysis shows that its robustness properties depend critically on the value of k: the classifier may be inherently non-robust for small k, but its robustness approaches that of the Bayes optimal classifier for fast-growing k. We propose a novel modified 1-nearest neighbor classifier and guarantee its robustness in the large-sample limit. Our experiments suggest that this classifier may have good robustness properties even for reasonable dataset sizes.
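The abstract's claim that k-NN robustness depends critically on k can be illustrated with a minimal toy sketch. Everything below (the data, the `knn_predict` and `robustness_radius` helpers, the grid search over perturbation sizes) is illustrative and assumed, not the paper's method; in particular, the paper's modified 1-NN classifier is not implemented here.

```python
import numpy as np

def knn_predict(X, y, x, k):
    """Majority vote among the k nearest training points (Euclidean).
    Binary labels in {0, 1}; use odd k to avoid vote ties."""
    dists = np.linalg.norm(X - x, axis=1)
    votes = y[np.argsort(dists)[:k]]
    return int(votes.mean() > 0.5)

def robustness_radius(X, y, x, k, eps_grid):
    """Smallest tested perturbation (along the first coordinate) that
    flips the k-NN prediction at x; np.inf if none on the grid does."""
    base = knn_predict(X, y, x, k)
    for eps in eps_grid:
        if knn_predict(X, y, x + np.array([eps]), k) != base:
            return eps
    return np.inf

# 1-D toy set: label-0 points on the left, label-1 points on the right,
# plus one noisy label-1 point (2.4) sitting inside the label-0 region.
X = np.array([[0.0], [1.0], [2.0], [3.0], [2.4], [5.0], [6.0], [7.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

x0 = np.array([2.1])               # test point, correctly classified as 0
eps_grid = np.arange(0.05, 3.0, 0.05)

r1 = robustness_radius(X, y, x0, 1, eps_grid)  # 1-NN: flipped quickly by the noisy point
r3 = robustness_radius(X, y, x0, 3, eps_grid)  # 3-NN: the vote smooths the noise out
```

In this sketch a tiny perturbation pushes `x0` past the noisy point and flips the 1-NN prediction, while 3-NN tolerates a much larger one (`r1 < r3`), matching the abstract's intuition that small k can be inherently non-robust while growing k improves robustness.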
Author Information
Yizhen Wang (UCSD)
Somesh Jha (University of Wisconsin, Madison)
Kamalika Chaudhuri (University of California at San Diego)
Related Events (a corresponding poster, oral, or spotlight)
-
2018 Oral: Analyzing the Robustness of Nearest Neighbors to Adversarial Examples »
Fri. Jul 13th 03:40 -- 03:50 PM Room A6
More from the Same Authors
-
2021 : Understanding Instance-based Interpretability of Variational Auto-Encoders »
· Zhifeng Kong · Kamalika Chaudhuri -
2021 : Privacy Amplification by Bernoulli Sampling »
Jacob Imola · Kamalika Chaudhuri -
2021 : A Shuffling Framework For Local Differential Privacy »
Casey M Meehan · Amrita Roy Chowdhury · Kamalika Chaudhuri · Somesh Jha -
2021 : Privacy Amplification by Subsampling in Time Domain »
Tatsuki Koga · Casey M Meehan · Kamalika Chaudhuri -
2022 : Understanding Rare Spurious Correlations in Neural Networks »
Yao-Yuan Yang · Chi-Ning Chou · Kamalika Chaudhuri -
2022 : The Trade-off between Label Efficiency and Universality of Representations from Contrastive Learning »
Zhenmei Shi · Jiefeng Chen · Kunyang Li · Jayaram Raghuram · Xi Wu · Yingyu Liang · Somesh Jha -
2023 : Theoretically Principled Trade-off for Stateful Defenses against Query-Based Black-Box Attacks »
Ashish Hooda · Neal Mangaokar · Ryan Feng · Kassem Fawaz · Somesh Jha · Atul Prakash -
2023 : Machine Learning with Feature Differential Privacy »
Saeed Mahloujifar · Chuan Guo · G. Edward Suh · Kamalika Chaudhuri -
2023 : Panel Discussion »
Peter Kairouz · Song Han · Kamalika Chaudhuri · Florian Tramer -
2023 : Kamalika Chaudhuri »
Kamalika Chaudhuri -
2023 Poster: Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design »
Chuan Guo · Kamalika Chaudhuri · Pierre Stock · Michael Rabbat -
2023 Poster: Concept-based Explanations for Out-of-Distribution Detectors »
Jihye Choi · Jayaram Raghuram · Ryan Feng · Jiefeng Chen · Somesh Jha · Atul Prakash -
2023 Poster: Stratified Adversarial Robustness with Rejection »
Jiefeng Chen · Jayaram Raghuram · Jihye Choi · Xi Wu · Yingyu Liang · Somesh Jha -
2023 Oral: Why does Throwing Away Data Improve Worst-Group Error? »
Kamalika Chaudhuri · Kartik Ahuja · Martin Arjovsky · David Lopez-Paz -
2023 Poster: Data-Copying in Generative Models: A Formal Framework »
Robi Bhattacharjee · Sanjoy Dasgupta · Kamalika Chaudhuri -
2023 Poster: A Two-Stage Active Learning Algorithm for k-Nearest Neighbors »
Nicholas Rittler · Kamalika Chaudhuri -
2023 Poster: Why does Throwing Away Data Improve Worst-Group Error? »
Kamalika Chaudhuri · Kartik Ahuja · Martin Arjovsky · David Lopez-Paz -
2022 : Adversarial Robustness and Cryptography »
Somesh Jha -
2022 Poster: Thompson Sampling for Robust Transfer in Multi-Task Bandits »
Zhi Wang · Chicheng Zhang · Kamalika Chaudhuri -
2022 Spotlight: Thompson Sampling for Robust Transfer in Multi-Task Bandits »
Zhi Wang · Chicheng Zhang · Kamalika Chaudhuri -
2022 Poster: Bounding Training Data Reconstruction in Private (Deep) Learning »
Chuan Guo · Brian Karrer · Kamalika Chaudhuri · Laurens van der Maaten -
2022 Oral: Bounding Training Data Reconstruction in Private (Deep) Learning »
Chuan Guo · Brian Karrer · Kamalika Chaudhuri · Laurens van der Maaten -
2021 : Discussion Panel #2 »
Bo Li · Nicholas Carlini · Andrzej Banburski · Kamalika Chaudhuri · Will Xiao · Cihang Xie -
2021 : Invited Talk #9 »
Kamalika Chaudhuri -
2021 : Invited Talk: Kamalika Chaudhuri »
Kamalika Chaudhuri -
2021 : Invited Talk: Kamalika Chaudhuri »
Kamalika Chaudhuri -
2021 : Live Panel Discussion »
Thomas Dietterich · Chelsea Finn · Kamalika Chaudhuri · Yarin Gal · Uri Shalit -
2021 Poster: A General Framework For Detecting Anomalous Inputs to DNN Classifiers »
Jayaram Raghuram · Varun Chandrasekaran · Somesh Jha · Suman Banerjee -
2021 Oral: A General Framework For Detecting Anomalous Inputs to DNN Classifiers »
Jayaram Raghuram · Varun Chandrasekaran · Somesh Jha · Suman Banerjee -
2021 Poster: Sample Complexity of Robust Linear Classification on Separated Data »
Robi Bhattacharjee · Somesh Jha · Kamalika Chaudhuri -
2021 Spotlight: Sample Complexity of Robust Linear Classification on Separated Data »
Robi Bhattacharjee · Somesh Jha · Kamalika Chaudhuri -
2021 Poster: Connecting Interpretability and Robustness in Decision Trees through Separation »
Michal Moshkovitz · Yao-Yuan Yang · Kamalika Chaudhuri -
2021 Spotlight: Connecting Interpretability and Robustness in Decision Trees through Separation »
Michal Moshkovitz · Yao-Yuan Yang · Kamalika Chaudhuri -
2020 Poster: Data-Dependent Differentially Private Parameter Learning for Directed Graphical Models »
Amrita Roy Chowdhury · Theodoros Rekatsinas · Somesh Jha -
2020 Poster: Concise Explanations of Neural Networks using Adversarial Training »
Prasad Chalasani · Jiefeng Chen · Amrita Roy Chowdhury · Xi Wu · Somesh Jha -
2020 Poster: When are Non-Parametric Methods Robust? »
Robi Bhattacharjee · Kamalika Chaudhuri -
2020 Poster: CAUSE: Learning Granger Causality from Event Sequences using Attribution Methods »
Wei Zhang · Thomas Panum · Somesh Jha · Prasad Chalasani · David Page -
2019 Workshop: Workshop on the Security and Privacy of Machine Learning »
Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song -
2019 Talk: Opening Remarks »
Kamalika Chaudhuri · Ruslan Salakhutdinov -
2018 Poster: Active Learning with Logged Data »
Songbai Yan · Kamalika Chaudhuri · Tara Javidi -
2018 Oral: Active Learning with Logged Data »
Songbai Yan · Kamalika Chaudhuri · Tara Javidi -
2018 Poster: Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training »
Xi Wu · Wooyeong Jang · Jiefeng Chen · Lingjiao Chen · Somesh Jha -
2018 Oral: Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training »
Xi Wu · Wooyeong Jang · Jiefeng Chen · Lingjiao Chen · Somesh Jha -
2017 Workshop: Picky Learners: Choosing Alternative Ways to Process Data. »
Corinna Cortes · Kamalika Chaudhuri · Giulia DeSalvo · Ningshan Zhang · Chicheng Zhang -
2017 Poster: Active Heteroscedastic Regression »
Kamalika Chaudhuri · Prateek Jain · Nagarajan Natarajan -
2017 Talk: Active Heteroscedastic Regression »
Kamalika Chaudhuri · Prateek Jain · Nagarajan Natarajan