

Poster in Workshop: Workshop on Formal Verification of Machine Learning

Verification-friendly Networks: the Case for Parametric ReLUs

Patrick Henriksen · Francesco Leofante · Alessio Lomuscio


Abstract:

It has increasingly been recognised that verification can contribute to the validation and debugging of neural networks before deployment, particularly in safety-critical areas. While considerable progress has been made in recent years, present techniques still do not scale to the large architectures used in many applications. In this paper we show that substantial gains can be obtained by employing Parametric ReLU activation functions in lieu of plain ReLU functions. We give training procedures that produce networks achieving an order-of-magnitude reduction in verification overheads and 30-100% fewer timeouts with a state-of-the-art Symbolic Interval Propagation-based verification toolkit, without compromising accuracy. Furthermore, we show that adversarial training combined with our approach improves certified robustness by up to 36% compared to adversarial training performed on baseline ReLU networks.
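As a minimal illustration of the core substitution (not the authors' exact training procedure), the sketch below swaps ReLU layers for learnable PReLU layers in a small PyTorch classifier; PReLU computes x for x >= 0 and a*x otherwise, with the slope a trained jointly with the weights. The architecture and hyperparameters shown are hypothetical.

```python
import torch
import torch.nn as nn

# PReLU(x) = x if x >= 0 else a * x, where the slope `a` is learned.
# Replacing nn.ReLU with nn.PReLU keeps the network piecewise-linear,
# so Symbolic Interval Propagation-based verifiers can still handle it.

def make_net(use_prelu: bool = True) -> nn.Sequential:
    """Small fully-connected classifier; layer sizes are illustrative."""
    act = (lambda: nn.PReLU(num_parameters=1)) if use_prelu else nn.ReLU
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 128), act(),
        nn.Linear(128, 64), act(),
        nn.Linear(64, 10),
    )

net = make_net(use_prelu=True)
# The PReLU slopes appear as ordinary parameters, so a standard
# optimiser trains them alongside the weights.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
```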
