

Spotlight

Robust Learning for Data Poisoning Attacks

Yunjuan Wang · Poorya Mianjy · Raman Arora


Abstract:

We investigate the robustness of stochastic approximation approaches against data poisoning attacks. We focus on two-layer neural networks with ReLU activation and show that, under a specific notion of separability in the reproducing kernel Hilbert space (RKHS) induced by the infinite-width network, training (finite-width) networks with stochastic gradient descent is robust against data poisoning attacks. Interestingly, we find that in addition to a lower bound on the width of the network, which is standard in the literature, we also require a distribution-dependent upper bound on the width for robust generalization. We provide extensive empirical evaluations that support and validate our theoretical results.
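To make the setting concrete, here is a minimal, hedged sketch of the kind of training pipeline the abstract describes: a two-layer ReLU network trained with SGD on data subject to a simple label-flip poisoning model. Everything in this sketch (the synthetic data, the label-flip adversary, the fixed random outer layer, and all hyperparameters such as `width`, `poison_frac`, `lr`, and `epochs`) is an illustrative assumption, not the authors' experimental setup or theoretical construction.

```python
# Illustrative sketch only: trains a two-layer ReLU network with SGD on
# synthetic binary-classification data in which a fraction of training
# labels has been flipped (a simple label-flip poisoning model). All
# names and hyperparameters here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable data, then poison a fraction of labels.
n, d, width = 1000, 20, 128      # samples, input dim, hidden width (assumed)
poison_frac = 0.1                # fraction of flipped labels (assumed)

w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)

flip = rng.choice(n, size=int(poison_frac * n), replace=False)
y_poisoned = y.copy()
y_poisoned[flip] *= -1           # adversary flips these labels

# Two-layer ReLU network f(x) = a^T relu(W x); only W is trained, with
# the outer layer a fixed at random signs (a common analysis setup).
W = rng.normal(size=(width, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)

def forward(W, x):
    return a @ np.maximum(W @ x, 0.0)

# SGD on the logistic loss over the poisoned training set.
lr, epochs = 0.05, 5
for _ in range(epochs):
    for i in rng.permutation(n):
        x_i, y_i = X[i], y_poisoned[i]
        z = W @ x_i
        margin = y_i * (a @ np.maximum(z, 0.0))
        # Gradient of log(1 + exp(-margin)) w.r.t. W, via the ReLU mask.
        coef = -y_i / (1.0 + np.exp(margin))
        W -= lr * np.outer(coef * a * (z > 0), x_i)

# Evaluate against the *clean* labels to probe robust generalization.
preds = np.sign([forward(W, x) for x in X])
print("clean-label accuracy:", np.mean(preds == y))
```

In this toy setup, varying `width` and `poison_frac` gives a rough way to probe the abstract's claim that robustness depends on the network width; the paper's actual separability condition and width bounds are stated in the RKHS of the infinite-width network and are not captured by this sketch.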
