

Poster
in
Workshop: Workshop on Formal Verification of Machine Learning

Safety Verification and Repair of Deep Neural Networks

Xiaodong Yang · Tomoya Yamaguchi · Bardh Hoxha · Danil Prokhorov · Taylor Johnson


Abstract: This paper presents the Veritex tool for reachability analysis and repair of deep neural networks (DNNs). Veritex includes methods for exact (sound and complete) and over-approximative (sound and incomplete) reachability analysis of DNNs using novel set representations, including the facet-vertex incidence matrix, the face lattice, and $\mathcal{V}$-zono. In addition to sound and complete safety verification of DNNs, these methods can efficiently compute the exact output reachable domain as well as the exact unsafe input space that causes safety violations in the output. More importantly, based on the exact unsafe input-output reachable domain, Veritex can repair unsafe DNNs with respect to multiple safety properties with negligible performance degradation. The repair is conducted by updating the DNN parameters through retraining, and the approach works even without a safe reference model or the original training dataset. Veritex primarily addresses the problem of constructing provably safe DNNs, an issue that most current formal methods for trustworthy artificial intelligence (AI) do not yet adequately address. The utility of Veritex is evaluated on both aspects, safety verification and DNN repair. Benchmarks for verification include the ACAS Xu networks, and benchmarks for repair include unsafe ACAS Xu networks and an unsafe agent trained with deep reinforcement learning (DRL).
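To make the distinction between exact and over-approximative reachability concrete, the following is a minimal sketch of a sound but incomplete analysis using plain interval arithmetic. This is an illustration only: Veritex's actual exact methods rely on the facet-vertex incidence matrix, face-lattice, and $\mathcal{V}$-zono representations described in the paper, and the network weights below are hypothetical.

```python
import numpy as np

def interval_relu_layer(lo, hi, W, b):
    """Propagate an input box [lo, hi] through an affine layer + ReLU.

    Sound over-approximation: every true output is contained in the
    returned box, but the box may be larger than the exact reachable
    set (interval arithmetic, not Veritex's exact set representations).
    """
    W_pos = np.maximum(W, 0.0)   # positive weights pair lo->lo, hi->hi
    W_neg = np.minimum(W, 0.0)   # negative weights swap the bounds
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)

# Hypothetical single-layer example (illustrative only).
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, -0.25])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo1, hi1 = interval_relu_layer(lo, hi, W1, b1)
# A safety property "outputs stay below c" is verified if hi1 <= c
# componentwise; if the check fails, the over-approximation is
# inconclusive and a tighter (e.g., exact) analysis is required.
```

Because such over-approximations can only certify safety but never confirm a violation, the exact analysis that Veritex also provides is what enables computing the precise unsafe input space used for repair.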
