Oral
Differentiable Abstract Interpretation for Provably Robust Neural Networks
Matthew Mirman · Timon Gehr · Martin Vechev

Wed Jul 11 08:00 AM -- 08:20 AM (PDT) @ A7

We introduce a scalable method for training neural networks based on abstract interpretation. We show how to successfully apply an approximate end-to-end differentiable abstract interpreter to train large networks that (i) are certifiably more robust to adversarial perturbations, and (ii) have improved accuracy.
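To make the idea concrete, below is a minimal sketch of propagating an interval (Box) abstract domain through a tiny network, one of the simpler domains used in this line of work. The weights, the two-layer architecture, and the final robustness check are purely illustrative, not taken from the paper; they only show how sound lower and upper bounds flow through affine and ReLU layers.

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    # Propagate the interval [lo, hi] through x -> W @ x + b.
    # Splitting W into positive and negative parts keeps the bounds sound.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_interval(lo, hi):
    # ReLU is monotone, so applying it to both endpoints is exact.
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Tiny two-layer network with illustrative, hand-picked weights.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[0.0, 2.0], [1.0, -1.0]]), np.array([0.0, 0.0])

x = np.array([0.5, 0.5])
eps = 0.1
lo, hi = x - eps, x + eps          # L-infinity ball around the input

lo, hi = affine_interval(lo, hi, W1, b1)
lo, hi = relu_interval(lo, hi)
lo, hi = affine_interval(lo, hi, W2, b2)

# Robustness check (illustrative): the network provably predicts class 0
# on the whole input ball if the lower bound of logit 0 exceeds the
# upper bound of logit 1.
certified = lo[0] > hi[1]
print(certified)
```

Because every step of the propagation is differentiable in the weights, bounds like these can be folded into a training loss, which is the core of the differentiable abstract interpretation approach described above.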

Author Information

Matthew Mirman (ETH Zürich)
Timon Gehr (ETH Zürich)
Martin Vechev (ETH Zürich)
