Poster
Differentiable Abstract Interpretation for Provably Robust Neural Networks
Matthew Mirman · Timon Gehr · Martin Vechev

Wed Jul 11 09:15 AM -- 12:00 PM (PDT) @ Hall B #74

We introduce a scalable method for training robust neural networks based on abstract interpretation. We present several abstract transformers that balance efficiency with precision, and show that they can be used to train large neural networks that are certifiably robust to adversarial perturbations.
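To illustrate the idea behind certification with abstract transformers, the sketch below propagates the simplest abstract domain from the paper's setting, an interval (Box) domain, through a small feed-forward ReLU network. This is a minimal illustration of the general technique, not the authors' implementation; the network, function names, and parameters are hypothetical.

```python
import numpy as np

def affine_transformer(lo, hi, W, b):
    # Box abstract transformer for an affine layer x -> Wx + b.
    # Splitting W into its positive and negative parts keeps the
    # resulting element-wise bounds sound.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_transformer(lo, hi):
    # ReLU is monotone, so applying it to both bounds is exact for Box.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def propagate(x, eps, layers):
    # Push the L-infinity ball [x - eps, x + eps] through the network,
    # returning sound lower/upper bounds on every output logit.
    lo, hi = x - eps, x + eps
    for W, b in layers:
        lo, hi = affine_transformer(lo, hi, W, b)
        lo, hi = relu_transformer(lo, hi)
    return lo, hi

# Hypothetical one-layer network with two output logits.
layers = [(np.array([[1.0, -1.0], [2.0, 1.0]]), np.zeros(2))]
lo, hi = propagate(np.array([1.0, 1.0]), 0.1, layers)
# Robustness is certified for class 1 if its lower bound exceeds
# every other class's upper bound over the whole input ball.
certified = lo[1] > hi[0]
```

During training, a robustness loss can penalize the gap `hi[other] - lo[target]` directly; because each transformer is built from differentiable operations, this bound propagation is itself differentiable.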

Author Information

Matthew Mirman (ETH Zürich)
Timon Gehr (ETH Zürich)
Martin Vechev (ETH Zürich)
