

Oral

Differentiable Abstract Interpretation for Provably Robust Neural Networks

Matthew Mirman · Timon Gehr · Martin Vechev

Abstract:

We introduce a scalable method for training neural networks based on abstract interpretation. We show how to successfully apply an approximate, end-to-end differentiable abstract interpreter to train large networks that (i) are certifiably more robust to adversarial perturbations and (ii) have improved accuracy.
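To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a differentiable abstract interpreter over the interval (box) domain: each layer is lifted to an abstract transformer on a center/radius pair, so the certified output bounds stay differentiable and can be combined with the standard loss during training. The network shape, perturbation radius `eps`, and loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Box:
    """Interval abstract element: a center and a non-negative radius per neuron."""
    def __init__(self, center: torch.Tensor, radius: torch.Tensor):
        self.center, self.radius = center, radius

    def linear(self, layer: nn.Linear) -> "Box":
        # Affine transformer: the center goes through the layer as usual;
        # the radius is propagated through the absolute values of the weights.
        c = layer(self.center)
        r = F.linear(self.radius, layer.weight.abs())
        return Box(c, r)

    def relu(self) -> "Box":
        # ReLU transformer: apply ReLU to the interval endpoints and
        # recover center/radius; every step remains differentiable.
        lo = F.relu(self.center - self.radius)
        hi = F.relu(self.center + self.radius)
        return Box((lo + hi) / 2, (hi - lo) / 2)


def certified_loss(model: nn.Sequential, x: torch.Tensor, y: torch.Tensor, eps: float):
    """Cross-entropy on the worst-case logits implied by the output box."""
    box = Box(x, torch.full_like(x, eps))
    for layer in model:
        box = box.linear(layer) if isinstance(layer, nn.Linear) else box.relu()
    lb, ub = box.center - box.radius, box.center + box.radius
    # Worst case: lower bound for the true class, upper bound for all others.
    true_class = F.one_hot(y, num_classes=ub.shape[1]).bool()
    worst = torch.where(true_class, lb, ub)
    return F.cross_entropy(worst, y)


if __name__ == "__main__":
    # Illustrative training step mixing standard and certified losses.
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
    loss = F.cross_entropy(model(x), y) + 0.5 * certified_loss(model, x, y, eps=0.1)
    loss.backward()  # gradients flow through the abstract interpreter
```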
