

Poster
in
Workshop: Next Generation of AI Safety

Distillation based Robustness Verification with PAC Guarantees

Patrick Indri · Peter Blohm · Anagha Athavale · Ezio Bartocci · Georg Weissenbacher · Matteo Maffei · Dejan Nickovic · Thomas Gärtner · Sagar Malhotra

Keywords: [ Formal Verification ] [ PAC-Verification ] [ robustness ] [ knowledge distillation ]


Abstract:

We present a statistical approach to verifying the robustness of a neural network (NN). Conventional formal verification methods cannot tractably assess the global robustness of real-world NNs. To address this, we take advantage of a gradient-aligned distillation framework to transfer robustness properties from a larger teacher network to a smaller student network, under the assumption that the smaller student NN can be formally verified for global robustness. We theoretically investigate how the robustness guarantees of the student network can be transferred to the teacher network. Drawing on ideas from learning theory, we derive a sample complexity for the distillation procedure that yields PAC guarantees on the global robustness of the teacher network.
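To give a feel for what gradient-aligned distillation does, here is a minimal sketch (an illustration under simplifying assumptions, not the authors' implementation): for linear models f(x) = Wx the input gradient of the output is W itself, so matching both teacher outputs and teacher input gradients reduces to driving the student weights toward the teacher's.

```python
import numpy as np

# Hypothetical sketch: gradient-aligned distillation for linear models.
# f(x) = W @ x, so the input gradient of each output coordinate is a row of W,
# and the gradient-alignment term becomes a penalty on ||W_S - W_T||_F^2.
rng = np.random.default_rng(0)
W_T = rng.normal(size=(3, 5))     # "teacher" weights (assumed given)
W_S = np.zeros((3, 5))            # student weights, trained by distillation
X = rng.normal(size=(100, 5))     # unlabeled distillation inputs
lam, lr = 0.1, 0.05               # gradient-alignment weight, step size

for _ in range(500):
    out_diff = X @ (W_S - W_T).T                      # output mismatch, (100, 3)
    # Gradient of: mean ||f_S(x) - f_T(x)||^2  +  lam * ||W_S - W_T||_F^2
    grad = 2 * out_diff.T @ X / len(X) + 2 * lam * (W_S - W_T)
    W_S -= lr * grad

# The student converges to the teacher in both outputs and input gradients.
```

For nonlinear networks the gradient term would instead compare the actual input gradients of student and teacher (e.g. via automatic differentiation); the linear case is used here only because both terms have closed-form gradients.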
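To convey the flavor of a PAC-style sample complexity (this is the generic Hoeffding bound for estimating a success probability, not the paper's derived bound for distillation), estimating a network's robustness rate to within ε with confidence 1 − δ requires n ≥ ln(2/δ) / (2ε²) i.i.d. samples:

```python
import math

def hoeffding_sample_complexity(epsilon: float, delta: float) -> int:
    """Smallest n such that, by Hoeffding's inequality, the empirical
    robustness rate over n i.i.d. samples deviates from the true rate
    by at most epsilon with probability at least 1 - delta:
        n >= ln(2 / delta) / (2 * epsilon**2)
    """
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# e.g. epsilon = 0.01, delta = 0.05 requires 18445 samples
n = hoeffding_sample_complexity(0.01, 0.05)
```

Note the characteristic PAC trade-off: the sample count grows only logarithmically in 1/δ but quadratically in 1/ε.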
