
Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification
Chen Dan · Yuting Wei · Pradeep Ravikumar

Thu Jul 16 06:00 AM -- 06:45 AM & Thu Jul 16 06:00 PM -- 06:45 PM (PDT) @ Virtual
Adversarial robustness has become a fundamental requirement in modern machine learning applications. Yet, there has been surprisingly little statistical understanding of it so far. In this paper, we provide the first result on the \emph{optimal} minimax guarantees for the excess risk of adversarially robust classification, under the Gaussian mixture model proposed by \cite{schmidt2018adversarially}. The results are stated in terms of the \emph{Adversarial Signal-to-Noise Ratio (AdvSNR)}, which generalizes a similar notion for standard linear classification to the adversarial setting. For Gaussian mixtures with an AdvSNR value of $r$, we prove an excess risk lower bound of order $\Theta(e^{-(\frac{1}{2}+o(1)) r^2} \frac{d}{n})$ and design a computationally efficient estimator that achieves this optimal rate. Our results rest on minimal assumptions while covering a wide spectrum of adversarial perturbations, including $\ell_p$ balls for any $p \ge 1$.
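As a concrete illustration of the setting (a sketch of ours, not code from the paper): in the Gaussian mixture model of \cite{schmidt2018adversarially}, labels are $y \sim \text{Unif}\{\pm 1\}$ and features are $x \sim \mathcal{N}(y\mu, \sigma^2 I)$. For a linear classifier $\mathrm{sign}(w \cdot x)$ under $\ell_\infty$ perturbations of radius $\varepsilon$, the worst-case perturbation shifts the margin by $\varepsilon \|w\|_1$ (the $\ell_1$ norm being dual to $\ell_\infty$), giving a closed-form robust error. The snippet below checks that closed form against Monte Carlo, using the natural plug-in estimator $\hat{\mu} = \frac{1}{n}\sum_i y_i x_i$; all function and variable names are ours.

```python
import numpy as np
from math import erf, sqrt

def gaussian_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def robust_error_linear(w, mu, sigma, eps):
    # Robust 0-1 error of sign(w . x) under l_inf perturbations of radius
    # eps, for the symmetric mixture y ~ +-1, x ~ N(y*mu, sigma^2 I).
    # The adversary's optimal perturbation reduces the margin by
    # eps * ||w||_1, since l_1 is the dual norm of l_inf.
    margin = w @ mu - eps * np.abs(w).sum()
    return gaussian_cdf(-margin / (sigma * np.linalg.norm(w)))

rng = np.random.default_rng(0)
d, n, sigma, eps = 20, 200_000, 1.0, 0.1
mu = np.full(d, 0.5)  # true mean (an illustrative choice)

# Plug-in estimator: the average of y_i * x_i estimates mu, and the
# learned linear classifier is sign(mu_hat . x).
y = rng.choice([-1.0, 1.0], size=n)
x = y[:, None] * mu + sigma * rng.standard_normal((n, d))
mu_hat = (y[:, None] * x).mean(axis=0)

# Monte Carlo estimate of the robust error of the learned classifier:
# a fresh test point is robustly misclassified iff its margin, minus the
# worst-case shift eps * ||mu_hat||_1, is nonpositive.
y_test = rng.choice([-1.0, 1.0], size=n)
x_test = y_test[:, None] * mu + sigma * rng.standard_normal((n, d))
margins = y_test * (x_test @ mu_hat) - eps * np.abs(mu_hat).sum()
mc_error = (margins <= 0).mean()

closed_form = robust_error_linear(mu_hat, mu, sigma, eps)
print(round(closed_form, 3), round(mc_error, 3))
```

The two printed numbers should agree up to Monte Carlo noise, confirming that the robust error of any fixed linear classifier in this model is governed by the $\ell_1$-penalized margin.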

Author Information

Chen Dan (Carnegie Mellon University)
Yuting Wei (CMU)
Pradeep Ravikumar (Carnegie Mellon University)
