Spotlight Poster

A Theoretical Analysis of Backdoor Poisoning Attacks in Convolutional Neural Networks

Boqi Li · Weiwei Liu

Hall C 4-9 #2309
[ Paper PDF ] [ Poster ]

Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

The rising threat of backdoor poisoning attacks (BPAs) against deep neural networks (DNNs) has become a significant concern in recent years. In such attacks, an adversary strategically targets a specific class and constructs a poisoned training set. A neural network trained on this poisoned set predicts the target label for any input containing the trigger pattern, while maintaining accurate outputs on clean inputs. However, why BPAs succeed remains largely unexplored. To fill this gap, we employ a dirty-label attack and conduct a detailed analysis of BPAs in a two-layer convolutional neural network. We provide theoretical insights into, and results on, the effectiveness of BPAs. Experimental results on two real-world datasets validate our theoretical findings.
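For concreteness, below is a minimal sketch of the dirty-label poisoning step described in the abstract: a trigger patch is stamped onto a random subset of training images, and their labels are flipped to the attacker's target class. The trigger shape, placement, and poison rate here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def dirty_label_poison(images, labels, trigger, target_label, poison_rate=0.1, seed=0):
    """Dirty-label backdoor poisoning (illustrative sketch).

    images: float array of shape (N, H, W, C), pixel values in [0, 1]
    labels: int array of shape (N,)
    trigger: float array of shape (h, w, C), the backdoor pattern
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    h, w = trigger.shape[:2]
    # Stamp the trigger into the bottom-right corner of each chosen image
    # (corner placement is an assumption for illustration).
    images[idx, -h:, -w:, :] = trigger
    # Dirty-label: relabel the poisoned samples as the target class,
    # so a network trained on this set associates the trigger with it.
    labels[idx] = target_label
    return images, labels

# Usage: a 3x3 white-square trigger on 32x32 RGB images, target class 0.
if __name__ == "__main__":
    X = np.random.rand(100, 32, 32, 3)
    y = np.random.randint(0, 10, size=100)
    trigger = np.ones((3, 3, 3))
    X_poisoned, y_poisoned = dirty_label_poison(X, y, trigger, target_label=0)
```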
