

Poster

Differentiable Abstract Interpretation for Provably Robust Neural Networks

Matthew Mirman · Timon Gehr · Martin Vechev

Hall B #74

Abstract:

We introduce a scalable method for training robust neural networks based on abstract interpretation. We present several abstract transformers that balance efficiency with precision, and we show that these transformers can be used to train large neural networks that are certifiably robust to adversarial perturbations.
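The core idea behind such certification is to propagate an abstract element (e.g. a box of possible inputs) through the network instead of a single point. The sketch below illustrates this with the simplest abstract domain, intervals (boxes), on a tiny hypothetical two-layer network; the weights, the `eps` perturbation radius, and the helper names `affine_box`/`relu_box` are illustrative assumptions, not the paper's actual transformers or parameters.

```python
import numpy as np

def affine_box(l, u, W, b):
    # Propagate a box [l, u] through an affine layer y = W x + b.
    # In center/radius form the output center is W c + b and the output
    # radius is |W| r, which is the tightest enclosing box for an affine map.
    c = (l + u) / 2.0
    r = (u - l) / 2.0
    c_out = W @ c + b
    r_out = np.abs(W) @ r
    return c_out - r_out, c_out + r_out

def relu_box(l, u):
    # Interval transformer for ReLU: since ReLU is monotone, applying it
    # elementwise to the bounds is exact for the box domain.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Hypothetical example: certify a single input under an L-infinity ball.
x = np.array([1.0, -0.5])
eps = 0.1                       # assumed perturbation radius
l, u = x - eps, x + eps

W1 = np.array([[1.0, -1.0], [0.5, 2.0]])   # illustrative weights
b1 = np.array([0.0, 0.1])
l, u = relu_box(*affine_box(l, u, W1, b1))

W2 = np.array([[1.0, -1.0]])
b2 = np.array([0.0])
l, u = affine_box(l, u, W2, b2)

# If the lower bound of the output score is positive, the property holds
# for every perturbation within the eps-ball, so the input is certified.
print(l, u)
```

Because every transformer here is differentiable in the network weights, a loss defined on the output bounds can be minimized with standard gradient-based training, which is what makes this style of abstract interpretation usable as a training objective rather than only as a post-hoc verifier.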
