Robust Learning for Data Poisoning Attacks

Yunjuan Wang · Poorya Mianjy · Raman Arora


Keywords: [ Adversarial Examples ] [ Generative Models ] [ Algorithms ] [ Adversarial Networks ] [ Algorithms -> Unsupervised Learning; Deep Learning ]

Thu 22 Jul 9 p.m. PDT — 11 p.m. PDT
Spotlight presentation: Adversarial Learning 3
Thu 22 Jul 5 p.m. PDT — 6 p.m. PDT


We investigate the robustness of stochastic approximation approaches against data poisoning attacks. We focus on two-layer neural networks with ReLU activation and show that under a specific notion of separability in the RKHS induced by the infinite-width network, training (finite-width) networks with stochastic gradient descent is robust against data poisoning attacks. Interestingly, we find that in addition to a lower bound on the width of the network, which is standard in the literature, we also require a distribution-dependent upper bound on the width for robust generalization. We provide extensive empirical evaluations that support and validate our theoretical results.
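As a minimal illustration of the setting studied in the abstract (not the paper's algorithm or guarantees), the sketch below trains a finite-width two-layer ReLU network on margin-separable 2-D data in which an adversary has flipped 10% of the training labels, then measures accuracy on clean test data. All quantities here (the toy distribution, width `m = 100`, the learning rate, and the use of full-batch rather than stochastic gradient descent) are illustrative assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # 2-D points separated by a margin of 0.3 along the first coordinate;
    # the clean label is the sign of that coordinate
    x0 = rng.uniform(0.3, 1.0, n) * rng.choice([-1.0, 1.0], n)
    x1 = rng.uniform(-1.0, 1.0, n)
    X = np.column_stack([x0, x1])
    return X, np.sign(X[:, 0])

Xtr, ytr = make_data(200)
Xte, yte = make_data(500)

# data poisoning: the adversary flips 10% of the training labels
ytr_poisoned = ytr.copy()
ytr_poisoned[:20] *= -1.0

# two-layer ReLU network of width m (an illustrative choice)
m = 100
W1 = rng.normal(0.0, 1.0, (2, m))
w2 = rng.normal(0.0, 1.0 / np.sqrt(m), m)

def forward(X):
    H = np.maximum(X @ W1, 0.0)   # hidden-layer ReLU features
    return H @ w2, H

# full-batch gradient descent on the logistic loss
# (the paper analyzes SGD; full-batch keeps the sketch short)
lr = 0.5
for _ in range(300):
    f, H = forward(Xtr)
    # d/df of mean log(1 + exp(-y f))
    g = -ytr_poisoned / (1.0 + np.exp(ytr_poisoned * f)) / len(ytr)
    grad_w2 = H.T @ g
    grad_W1 = Xtr.T @ (np.outer(g, w2) * (H > 0))
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

pred, _ = forward(Xte)
print(f"clean test accuracy: {np.mean(np.sign(pred) == yte):.3f}")
```

Because the clean distribution is separable with a margin, the network generalizes well on clean test data despite the flipped training labels, in line with the robustness phenomenon the abstract describes.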
