

Poster in Workshop: 2nd Workshop on Formal Verification of Machine Learning

Formal Verification for Counting Unsafe Inputs in Deep Neural Networks

Luca Marzari · Davide Corsi · Ferdinando Cicalese · Alessandro Farinelli


Abstract:

Traditionally, Formal Verification (FV) of Deep Neural Networks (DNNs) can be employed to check whether a DNN is unsafe w.r.t. some given property (i.e., whether there is at least one unsafe input configuration). However, the binary answer typically returned may not be informative enough for other purposes, such as shielding, model selection, or training improvements. In this paper, we introduce the #DNN-Verification problem, which involves counting the number of input configurations of a DNN that result in a violation of a particular safety property. We analyze the complexity of this problem and propose a novel approach that returns the exact count of violations. Due to the #P-completeness of the problem, we also propose a randomized, approximate method that provides a provable probabilistic bound on the correct count while significantly reducing computational requirements, as demonstrated on a set of safety-critical benchmarks.
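To give a concrete sense of what a randomized, approximate count with a probabilistic guarantee looks like, the sketch below shows a naive Monte Carlo estimator of the violation rate of a toy ReLU network, paired with a Hoeffding-style confidence bound. This is only an illustration under assumed ingredients (the toy network, the input domain [0, 1]^4, and the output threshold are all hypothetical), not the authors' exact or approximate counting algorithm.

```python
# Minimal sketch: naive Monte Carlo estimate of the fraction of unsafe inputs
# for a toy network, with a Hoeffding confidence bound. NOT the paper's method;
# the network, property, and bound here are illustrative assumptions.
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a fixed random 2-layer ReLU model mapping R^4 -> R.
W1 = rng.standard_normal((8, 4))
b1 = rng.standard_normal(8)
W2 = rng.standard_normal((1, 8))
b2 = rng.standard_normal(1)

def net(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return (W2 @ h + b2)[0]

# Hypothetical safety property: for inputs in [0, 1]^4 the output must stay below 2.0.
def violates(x, threshold=2.0):
    return net(x) > threshold

n_samples = 100_000
hits = sum(violates(rng.uniform(0.0, 1.0, size=4)) for _ in range(n_samples))
p_hat = hits / n_samples              # estimated fraction of unsafe inputs

# Hoeffding bound: |p_hat - p| <= eps with probability at least 1 - delta.
delta = 1e-3
eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n_samples))
print(f"estimated violation rate: {p_hat:.5f} (+/- {eps:.5f} w.p. >= {1 - delta})")
```

In this sketch, the estimated rate times the measure of the input domain plays the role of the violation count; the paper's contribution is a method that achieves such provable bounds far more efficiently than plain sampling on safety-critical benchmarks.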
