Poster
Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks
Pranjal Awasthi · Natalie Frank · Mehryar Mohri
Virtual
Keywords: [ Adversarial Examples ] [ Learning Theory ] [ Statistical Learning Theory ]
Abstract:
Adversarial, or test-time, robustness measures the susceptibility of a
classifier to perturbations of the test input. While there has been
a flurry of recent work on designing defenses against such
perturbations, the theory of adversarial robustness is not well
understood. To make progress, we focus on the
problem of understanding generalization in adversarial settings
through the lens of Rademacher complexity. We give upper and lower bounds on the adversarial empirical
Rademacher complexity of linear hypotheses with adversarial
perturbations measured in $l_r$-norm for an arbitrary $r \geq
1$.
We then extend our analysis to provide Rademacher complexity lower and
upper bounds for a single ReLU unit. Finally, we give adversarial
Rademacher complexity bounds for feed-forward neural networks with
one hidden layer.
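As a sketch of the central quantity, the adversarial empirical Rademacher complexity over a sample $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$ can be written following the standard definition (the notation below is illustrative, not taken from the paper):

$$
\widehat{\mathfrak{R}}_S(\tilde{\mathcal{F}}) \;=\; \frac{1}{m}\,\mathbb{E}_{\sigma}\!\left[\,\sup_{f \in \mathcal{F}} \sum_{i=1}^{m} \sigma_i \inf_{\|x_i' - x_i\|_r \,\le\, \epsilon} y_i\, f(x_i')\right],
$$

where the $\sigma_i$ are i.i.d. Rademacher variables and the inner infimum evaluates $f$ on the worst-case $l_r$-perturbation of each point. For a linear hypothesis $f(x) = \langle w, x \rangle$, this infimum admits the closed form $y_i \langle w, x_i \rangle - \epsilon \|w\|_{r^*}$, where $r^*$ satisfies $1/r + 1/r^* = 1$ (the dual norm), which is the usual starting point for bounding the complexity of the adversarial linear class.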