Poster
Robust Learning for Data Poisoning Attacks
Yunjuan Wang · Poorya Mianjy · Raman Arora

Thu Jul 22 09:00 PM -- 11:00 PM (PDT) @ Virtual

We investigate the robustness of stochastic approximation approaches against data poisoning attacks. We focus on two-layer neural networks with ReLU activation and show that under a specific notion of separability in the RKHS induced by the infinite-width network, training (finite-width) networks with stochastic gradient descent is robust against data poisoning attacks. Interestingly, we find that in addition to a lower bound on the width of the network, which is standard in the literature, we also require a distribution-dependent upper bound on the width for robust generalization. We provide extensive empirical evaluations that support and validate our theoretical results.
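The setting in the abstract, training a finite-width two-layer ReLU network with SGD on a training set that an adversary has partially poisoned, can be illustrated with a minimal NumPy sketch. All specifics here (label-flipping as the poisoning model, hinge loss, a fixed random output layer, and the particular width and learning rate) are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy separable data: the clean label is the sign of the first coordinate.
n, d, width = 200, 5, 64  # width of the hidden layer (hypothetical choice)
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])

# Data poisoning via label flipping on a fraction eps of the training set.
eps = 0.1
flip = rng.choice(n, size=int(eps * n), replace=False)
y_poisoned = y.copy()
y_poisoned[flip] *= -1

# Two-layer ReLU network f(x) = a^T relu(W x). Only W is trained; the output
# layer a stays fixed at random signs, a common simplification in analyses
# of wide networks.
W = rng.normal(size=(width, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)

def predict(X, W):
    return np.maximum(X @ W.T, 0.0) @ a

lr = 0.1
for epoch in range(30):
    for i in rng.permutation(n):
        x, yi = X[i], y_poisoned[i]
        h = W @ x
        margin = yi * (a @ np.maximum(h, 0.0))
        if margin < 1.0:
            # Hinge-loss subgradient w.r.t. W: active ReLU units get -y * a_k * x.
            W -= lr * (-yi * np.outer(a * (h > 0), x))

# Robust generalization here means accuracy measured against the CLEAN labels,
# even though training used the poisoned ones.
clean_acc = np.mean(np.sign(predict(X, W)) == y)
print(f"clean accuracy after poisoned training: {clean_acc:.2f}")
```

Varying `width` in this sketch loosely mirrors the paper's finding: robustness requires the width to be neither too small nor too large relative to the data distribution.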

Author Information

Yunjuan Wang (Johns Hopkins University)
Poorya Mianjy (Johns Hopkins University)
Raman Arora (Johns Hopkins University)

Raman Arora received his M.S. and Ph.D. degrees in Electrical and Computer Engineering from the University of Wisconsin-Madison in 2005 and 2009, respectively. From 2009-2011, he was a Postdoctoral Research Associate at the University of Washington in Seattle and a Visiting Researcher at Microsoft Research Redmond. Since 2011, he has been with Toyota Technological Institute at Chicago (TTIC). His research interests include machine learning, speech recognition and statistical signal processing.
