Poster
in
Workshop: 2nd Workshop on Formal Verification of Machine Learning

One Pixel Adversarial Attacks via Sketched Programs

Tom Yuviler · Dana Drachsler-Cohen


Abstract:

Neural networks are susceptible to adversarial examples, including one pixel attacks. Existing one pixel attacks iteratively generate candidate adversarial examples and submit them to the network until finding a successful candidate. However, current attacks require a very large number of queries, which is infeasible in many practical settings. In this work, we leverage program synthesis and identify an expressive program sketch that enables the computation of adversarial examples using significantly fewer queries. We introduce OPPSLA, a synthesizer that instantiates the sketch with customized conditions. Experimental results show that OPPSLA achieves a state-of-the-art success rate while requiring an order of magnitude fewer queries than existing attacks.
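To make the query-cost problem concrete, the loop below sketches the naive baseline the abstract describes: repeatedly perturb a single random pixel and query the model until the prediction flips. This is an illustrative assumption, not OPPSLA itself (the paper's contribution is a synthesized sketch program that replaces this blind search); the function names and the toy model are hypothetical.

```python
import numpy as np

def one_pixel_attack(model, image, true_label, max_queries=1000, rng=None):
    """Naive query-based one-pixel attack (the baseline strategy):
    overwrite one random pixel per query until the label changes."""
    rng = rng or np.random.default_rng(0)
    h, w, c = image.shape
    for query in range(1, max_queries + 1):
        candidate = image.copy()
        y, x = rng.integers(h), rng.integers(w)
        candidate[y, x] = rng.uniform(0.0, 1.0, size=c)  # perturb one pixel
        if model(candidate) != true_label:                # one model query
            return candidate, query  # success: adversarial example found
    return None, max_queries         # budget exhausted without success

# Toy "model": classifies by whether mean intensity exceeds 0.5.
def toy_model(img):
    return int(img.mean() > 0.5)

img = np.full((4, 4, 3), 0.51)  # barely classified as 1
adv, queries = one_pixel_attack(toy_model, img, true_label=1)
```

Even on this toy classifier, success may take many queries; on a real network the candidate space is far larger, which is why reducing the number of queries (as OPPSLA does by an order of magnitude) matters in black-box settings.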