

Poster in Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning

Adversarial Sample Detection via Channel Pruning

Zuohui Chen · RenXuan Wang · Yao Lu · Jingyang Xiang · Qi Xuan

Keywords: [ Statistical Learning Theory ]


Abstract:

Adversarial attacks are a major security issue for deep neural networks, and detecting adversarial samples is an effective mechanism for defending against them. Previous works on detecting adversarial samples achieve high accuracy but consume substantial memory and computing resources. In this paper, we propose an adversarial sample detection method based on pruned models. We find that pruned neural network models are sensitive to adversarial samples, i.e., the pruned models tend to output labels different from the original model when given adversarial samples. Moreover, the channel-pruned model has an extremely small model size and actual computational cost. Experiments on CIFAR10 and SVHN show that the FLOPs and size of our generated model are only 24.46% and 4.86% of the original model. Our method outperforms the SOTA multi-model based detection method (87.47% and 63.00%) by 5.29% and 30.92% on CIFAR10 and SVHN, respectively, while using significantly fewer models.
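
The detection idea described in the abstract (flagging inputs on which channel-pruned copies of a classifier disagree with the original model) can be sketched as follows. This is a minimal illustration in PyTorch under assumed names and interfaces, not the authors' implementation; how the pruned copies are produced and how disagreement is thresholded are details left to the paper.

```python
# Minimal sketch: detect adversarial samples by label disagreement between
# the original classifier and its channel-pruned copies (assumed PyTorch models).
import torch


def detect_adversarial(x, original_model, pruned_models):
    """Flag inputs whose predicted labels differ between the original and pruned models.

    x              : input batch of shape (N, C, H, W)
    original_model : the full (unpruned) classifier
    pruned_models  : list of channel-pruned copies of the classifier
    Returns a boolean tensor of shape (N,), True where at least one pruned
    model predicts a label different from the original model.
    """
    original_model.eval()
    with torch.no_grad():
        base_pred = original_model(x).argmax(dim=1)
        disagree = torch.zeros_like(base_pred, dtype=torch.bool)
        for pm in pruned_models:
            pm.eval()
            pruned_pred = pm(x).argmax(dim=1)
            disagree |= pruned_pred != base_pred
    return disagree
```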
