Oral
POPQORN: Quantifying Robustness of Recurrent Neural Networks
Ching-Yun Ko · Zhaoyang Lyu · Tsui-Wei Weng · Luca Daniel · Ngai Wong · Dahua Lin

Tue Jun 11 12:00 PM -- 12:05 PM (PDT) @ Grand Ballroom

Vulnerability to adversarial attacks has been a critical issue for deep neural networks. Addressing this issue requires a reliable way to evaluate the robustness of a network. Recently, several methods have been developed to compute robustness certifications for neural networks, namely, certified lower bounds on the minimum adversarial perturbation. Such methods, however, were devised for feed-forward networks, e.g., multi-layer perceptrons or convolutional networks, while certifying robustness for recurrent networks, especially LSTMs and GRUs, remains an open problem. For such networks, computing robustness certifications poses additional challenges, such as handling inputs at multiple time steps and the interactions between gates and states. In this work, we propose POPQORN (Propagated-output Quantified Robustness for RNNs), a general algorithm to certify robustness of RNNs, including vanilla RNNs, LSTMs, and GRUs. We demonstrate its effectiveness for different network architectures and show that robustness certification on individual steps can lead to new insights.
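To make "certified lower bound on the minimum adversarial perturbation" concrete, the sketch below uses plain interval bound propagation (IBP) through a vanilla tanh RNN — a simpler, looser technique than the bounds POPQORN derives, and all weights and names here are illustrative, not from the paper. A binary search over the perturbation radius then recovers a certified lower bound on the minimum adversarial perturbation:

```python
import numpy as np

# Minimal sketch (NOT the POPQORN algorithm): interval bound propagation
# (IBP) through a vanilla tanh RNN gives a sound but loose certificate.
rng = np.random.default_rng(0)
d_in, d_h, n_cls, T = 4, 8, 3, 5
W_x = rng.normal(scale=0.3, size=(d_h, d_in))   # input-to-hidden weights
W_h = rng.normal(scale=0.3, size=(d_h, d_h))    # hidden-to-hidden weights
W_o = rng.normal(scale=0.3, size=(n_cls, d_h))  # hidden-to-output weights

def interval_matmul(W, lo, hi):
    """Propagate the box [lo, hi] through x -> W @ x."""
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    c = W @ mid
    r = np.abs(W) @ rad
    return c - r, c + r

def certify(x_seq, eps):
    """True if every input in the l_inf ball of radius eps around x_seq
    provably keeps the clean prediction (by IBP)."""
    h_lo = h_hi = np.zeros(d_h)
    for x in x_seq:  # perturbations may hit the input at every time step
        a_lo, a_hi = interval_matmul(W_x, x - eps, x + eps)
        b_lo, b_hi = interval_matmul(W_h, h_lo, h_hi)
        # tanh is monotone, so it maps interval endpoints to endpoints
        h_lo, h_hi = np.tanh(a_lo + b_lo), np.tanh(a_hi + b_hi)
    z_lo, z_hi = interval_matmul(W_o, h_lo, h_hi)
    # clean prediction on the unperturbed sequence
    h = np.zeros(d_h)
    for x in x_seq:
        h = np.tanh(W_x @ x + W_h @ h)
    c = int(np.argmax(W_o @ h))
    # certified iff the true class's lower bound beats every rival's upper bound
    return all(z_lo[c] > z_hi[j] for j in range(n_cls) if j != c)

x_seq = rng.normal(size=(T, d_in))
# Binary search for the largest certified radius: a certified lower
# bound on the minimum adversarial perturbation for this input.
lo, hi = 0.0, 1.0
for _ in range(30):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if certify(x_seq, mid) else (lo, mid)
print(f"certified l_inf radius >= {lo:.4f}")
```

IBP's box bounds blow up quickly across time steps, which is exactly why methods like POPQORN instead propagate tighter linear bounds through the gates and states.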

Author Information

Ching-Yun Ko (The University of Hong Kong)
Zhaoyang Lyu (The Chinese University of Hong Kong)
Tsui-Wei Weng (MIT)
Luca Daniel (Massachusetts Institute of Technology)
Ngai Wong (The University of Hong Kong)
Dahua Lin (The Chinese University of Hong Kong)
