Poster
in
Workshop: PAC-Bayes Meets Interactive Learning

Computing non-vacuous PAC-Bayes generalization bounds for Models under Adversarial Corruptions

Waleed Mustafa · Philipp Liznerski · Dennis Wagner · Puyu Wang · Marius Kloft


Abstract: PAC-Bayes generalization bounds have been shown to provide non-vacuous performance certificates for several machine learning models. Under adversarial corruptions, however, these bounds often become vacuous because of the increased empirical risk. In this work, we address this limitation by deriving and computing the first non-vacuous generalization bounds for models operating under adversarial conditions. Our approach combines the PAC-Bayes and adversarial smoothing frameworks to derive generalization bounds for randomly smoothed models. We empirically demonstrate the efficacy of our bounds in providing robust population risk certificates for stochastic Convolutional Neural Networks (CNNs) operating under $L_2$-bounded adversarial corruptions on both MNIST and CIFAR-10.
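To illustrate the kind of computation such a certificate involves, the sketch below evaluates the classical PAC-Bayes-kl (Langford–Seeger) bound: given an empirical risk, the KL divergence between posterior and prior, the sample size, and a confidence level, it inverts the binary KL divergence numerically to obtain an upper bound on the population risk. This is a generic illustration of PAC-Bayes bound computation, not the authors' specific bound for adversarially smoothed models; all function names are hypothetical.

```python
import math

def binary_kl(q, p):
    # kl(q || p) between Bernoulli(q) and Bernoulli(p), with 0 log 0 := 0
    def term(a, b):
        return 0.0 if a == 0.0 else a * math.log(a / b)
    return term(q, p) + term(1.0 - q, 1.0 - p)

def kl_inverse_upper(q_hat, eps, tol=1e-9):
    # Largest p >= q_hat with kl(q_hat || p) <= eps, found by bisection
    # (kl(q_hat || p) is increasing in p on [q_hat, 1)).
    lo, hi = q_hat, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if binary_kl(q_hat, mid) <= eps:
            lo = mid
        else:
            hi = mid
    return lo

def pac_bayes_kl_bound(emp_risk, kl_qp, n, delta=0.05):
    # Langford-Seeger bound: with probability at least 1 - delta,
    #   kl(emp_risk || true_risk) <= (KL(Q || P) + ln(2 sqrt(n) / delta)) / n,
    # so the population risk is at most the KL-inverse of that threshold.
    eps = (kl_qp + math.log(2.0 * math.sqrt(n) / delta)) / n
    return kl_inverse_upper(emp_risk, eps)
```

For example, with an empirical (adversarial) risk of 0.1, KL(Q||P) = 10, and n = 10,000 samples, the resulting certificate is only slightly above the empirical risk; the bound becomes vacuous (approaches 1) when the empirical risk or the KL term is large relative to n, which is exactly the regime adversarial corruptions push models into.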
