

Poster in Workshop: New Frontiers in Adversarial Machine Learning

Robust Empirical Risk Minimization with Tolerance

Robi Bhattacharjee · Max Hopkins · Akash Kumar · Hantao Yu · Kamalika Chaudhuri


Abstract: Developing simple, sample-efficient learning algorithms for robust classification is a pressing issue in today's tech-dominated world, and current theoretical techniques fall far short of answering the need, often requiring exponential sample complexity and (necessarily) complicated improper learning rules. Toward this end, one natural path is to study relaxations of the standard model such as Ashtiani et al. (2022)'s \textit{tolerant} learning, a recent notion in which the output classifier is compared to the best achievable error over slightly larger perturbation sets. In this work, we show that under weak niceness conditions, achieving simple, sample-efficient robust learning is indeed possible: a natural tolerant variant of robust empirical risk minimization suffices for learning over arbitrary perturbation sets of bounded diameter $D$ using only $O\left( \frac{vd\log \frac{dD}{\epsilon\gamma\delta}}{\epsilon^2}\right)$ samples for hypothesis classes of VC dimension $v$ over $\mathbb{R}^d$.
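As a rough illustration of the learning rule being analyzed, the sketch below implements a tolerant variant of robust ERM in a toy setup: a finite pool of random linear classifiers over $\mathbb{R}^d$ stands in for a VC class, the perturbation set around each point is a Euclidean ball of radius $r$, and the robust loss is estimated by Monte Carlo sampling from the inflated ball of radius $(1+\gamma)r$. All names (`tolerant_robust_erm`, `sample_ball`) and parameter choices are illustrative assumptions, not the paper's actual construction or proof technique.

```python
# Minimal sketch of tolerant robust ERM via perturbation sampling.
# Hypothetical setup: hypothesis class = random pool of linear classifiers;
# perturbation sets = Euclidean balls; robust loss estimated by sampling.
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(x, radius, k, rng):
    """Sample k points uniformly from the Euclidean ball of given radius around x."""
    d = x.shape[0]
    dirs = rng.normal(size=(k, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = radius * rng.random(k) ** (1.0 / d)  # uniform-in-ball radii
    return x + radii[:, None] * dirs

def robust_loss(w, b, X, y, radius, k, rng):
    """Empirical robust 0-1 loss: a point counts as an error if any sampled
    perturbation within `radius` is misclassified by the linear rule (w, b)."""
    errs = 0
    for x, label in zip(X, y):
        pts = sample_ball(x, radius, k, rng)
        preds = np.sign(pts @ w + b)
        if np.any(preds != label):
            errs += 1
    return errs / len(X)

def tolerant_robust_erm(X, y, r, gamma, n_hyps=200, k=50, rng=rng):
    """Tolerant relaxation: minimize the empirical robust loss measured over
    the *inflated* perturbation sets of radius (1 + gamma) * r."""
    d = X.shape[1]
    best, best_loss = None, np.inf
    for _ in range(n_hyps):
        w = rng.normal(size=d)
        w /= np.linalg.norm(w)
        b = rng.normal()
        loss = robust_loss(w, b, X, y, (1 + gamma) * r, k, rng)
        if loss < best_loss:
            best, best_loss = (w, b), loss
    return best, best_loss

# Toy usage: linearly separable data in R^2 with margin well above r.
X = np.vstack([rng.normal(loc=3, size=(30, 2)), rng.normal(loc=-3, size=(30, 2))])
y = np.array([1] * 30 + [-1] * 30)
(w, b), loss = tolerant_robust_erm(X, y, r=0.5, gamma=0.5)
print(f"tolerant robust training loss: {loss:.3f}")
```

The inflation by $(1+\gamma)$ is the heart of the tolerant comparison: the learner's robust error at radius $r$ is judged against the best achievable robust error at the slightly larger radius $(1+\gamma)r$, which is what makes proper rules like the one sketched above viable.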
