PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach
Tsui-Wei Weng · Pin-Yu Chen · Lam Nguyen · Mark Squillante · Akhilan Boopathy · Ivan Oseledets · Luca Daniel

Tue Jun 11 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #70
We propose PROVEN, a novel framework to PRObabilistically VErify Neural network robustness with statistical guarantees. PROVEN provides probability certificates of neural network robustness when the input perturbation follows a distributional characterization. Notably, PROVEN is derived from current state-of-the-art worst-case robustness verification frameworks, so it can provide probability certificates with little computational overhead on top of existing methods such as Fast-Lin, CROWN, and CNN-Cert. Experiments on small and large MNIST and CIFAR neural network models demonstrate that our probabilistic approach can tighten robustness certificates by up to about $1.8\times$ and $3.5\times$, with at least $99.99\%$ confidence, compared with the worst-case certificates of CROWN and CNN-Cert.
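The core idea can be illustrated with a minimal sketch. Frameworks like Fast-Lin and CROWN produce a linear lower bound on a classification margin, $f(x_0+\delta) \ge w^\top\delta + b$ for $\|\delta\|_\infty \le \epsilon$. The worst-case certificate requires $b - \epsilon\|w\|_1 \ge 0$; a probabilistic certificate instead bounds the chance the margin goes negative when $\delta$ is random, e.g. via Hoeffding's inequality for a uniform perturbation. The code below is an illustrative toy version of this comparison, not the PROVEN algorithm: the coefficients `w`, `b` stand in for a linear bound that a real verifier would compute.

```python
import numpy as np

def worst_case_certified(w, b, eps):
    """Worst-case (Fast-Lin/CROWN-style) check over the l_inf ball:
    min_{||d||_inf <= eps} (w . d + b) = b - eps * ||w||_1."""
    return b - eps * np.abs(w).sum() >= 0.0

def probabilistic_confidence(w, b, eps):
    """Hoeffding bound for independent d_i ~ Uniform[-eps, eps]:
    P(w . d + b < 0) <= exp(-b^2 / (2 eps^2 ||w||_2^2)) for b > 0.
    Returns a lower bound on the probability the margin stays positive."""
    if b <= 0.0:
        return 0.0  # no nontrivial guarantee from this bound
    failure = np.exp(-b**2 / (2.0 * eps**2 * np.dot(w, w)))
    return 1.0 - failure

# Toy example (hypothetical numbers): a flat weight vector over a
# 784-dim input, margin bound b = 0.3, perturbation radius eps = 0.5.
w = np.ones(784) / 784.0
print(worst_case_certified(w, 0.3, 0.5))       # worst case fails: 0.3 - 0.5 < 0
print(probabilistic_confidence(w, 0.3, 0.5))   # yet random perturbations are
                                               # safe with near-certainty
```

The gap between the two answers is exactly what the abstract describes: because $\|w\|_2 \ll \|w\|_1$ for spread-out weights, a distributional certificate can hold at a radius where the worst-case one fails, which is how PROVEN-style analysis tightens certified radii at a chosen confidence level.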

Author Information

Tsui-Wei Weng (MIT)
Pin-Yu Chen (IBM Research AI)
Lam Nguyen (IBM Research, Thomas J. Watson Research Center)
Mark Squillante (IBM Research)
Akhilan Boopathy (MIT)
Ivan Oseledets (Skolkovo Institute of Science and Technology)
Luca Daniel (Massachusetts Institute of Technology)
