Black-box Adversarial Attacks with Limited Queries and Information »
Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model. In practice, the threat model for real-world systems is often more restrictive than the typical black-box model where the adversary can observe the full output of the network on arbitrarily many chosen inputs. We define three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting. We develop new attacks that fool classifiers under these more restrictive threat models, where previous methods would be impractical or ineffective. We demonstrate that our methods are effective against an ImageNet classifier under our proposed threat models. We also demonstrate a targeted black-box attack against a commercial classifier, overcoming the challenges of limited query access, partial information, and other practical issues to break the Google Cloud Vision API.
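As a rough illustration of the query-limited setting described above, the sketch below estimates the gradient of the target-class probability from finite differences over random perturbations (an NES-style estimator) and runs projected gradient ascent within an L-infinity ball. This is a minimal sketch, not the paper's implementation: `query_fn`, the hyperparameters, and the NumPy structure are illustrative assumptions, and it presumes the classifier returns full class probabilities (the partial-information and label-only settings require additional machinery).

```python
import numpy as np

def estimate_gradient(query_fn, x, target_class, sigma=1e-3, n_samples=50):
    # NES-style gradient estimate of the target-class probability using
    # antithetic sampling; only queries the classifier, never its gradients.
    # (Function names and hyperparameters here are illustrative, not the paper's.)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        p_plus = query_fn(x + sigma * u)[target_class]
        p_minus = query_fn(x - sigma * u)[target_class]
        grad += (p_plus - p_minus) * u
    return grad / (2 * sigma * n_samples)

def targeted_attack(query_fn, x_orig, target_class, epsilon=0.05,
                    step_size=0.01, n_iters=100):
    # Projected gradient ascent on the estimated target-class probability,
    # constrained to an L-infinity ball of radius epsilon around x_orig.
    x_adv = x_orig.copy()
    for _ in range(n_iters):
        grad = estimate_gradient(query_fn, x_adv, target_class)
        x_adv = x_adv + step_size * np.sign(grad)
        x_adv = np.clip(x_adv, x_orig - epsilon, x_orig + epsilon)
        x_adv = np.clip(x_adv, 0.0, 1.0)  # keep pixels in a valid range
        if int(np.argmax(query_fn(x_adv))) == target_class:
            break  # classified as the attacker's target
    return x_adv
```

Usage would look like `targeted_attack(query_fn, image, target_class=123)`, where `query_fn` wraps a single call to the black-box model and `image` is a float array in [0, 1] (both assumed for this sketch).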
Author Information
Andrew Ilyas (Massachusetts Institute of Technology)
Logan Engstrom (MIT)
Anish Athalye (MIT CSAIL)
Jessy Lin (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Black-box Adversarial Attacks with Limited Queries and Information »
  Thu. Jul 12th, 12:30 -- 12:50 PM, Room A7
More from the Same Authors
- 2022 : A Game-Theoretic Perspective on Trust in Recommendation »
  Sarah Cen · Andrew Ilyas · Aleksander Madry
- 2023 : ModelDiff: A Framework for Comparing Learning Algorithms »
  Harshay Shah · Sung Min (Sam) Park · Andrew Ilyas · Aleksander Madry
- 2023 : Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation »
  Joshua Vendrow · Saachi Jain · Logan Engstrom · Aleksander Madry
- 2023 : What Works in Chest X-Ray Classification? A Case Study of Design Choices »
  Evan Vogelbaum · Logan Engstrom · Aleksander Madry
- 2023 : Paper Spotlights »
  Andrew Ilyas · Alizée Pace · Ji Won Park · Adam Breitholtz · Nari Johnson
- 2023 Poster: TRAK: Attributing Model Behavior at Scale »
  Sung Min (Sam) Park · Kristian Georgiev · Andrew Ilyas · Guillaume Leclerc · Aleksander Madry
- 2023 Oral: TRAK: Attributing Model Behavior at Scale »
  Sung Min (Sam) Park · Kristian Georgiev · Andrew Ilyas · Guillaume Leclerc · Aleksander Madry
- 2023 Poster: ModelDiff: A Framework for Comparing Learning Algorithms »
  Harshay Shah · Sung Min (Sam) Park · Andrew Ilyas · Aleksander Madry
- 2023 Oral: Raising the Cost of Malicious AI-Powered Image Editing »
  Hadi Salman · Alaa Khaddaj · Guillaume Leclerc · Andrew Ilyas · Aleksander Madry
- 2023 Poster: Rethinking Backdoor Attacks »
  Alaa Khaddaj · Guillaume Leclerc · Aleksandar Makelov · Kristian Georgiev · Hadi Salman · Andrew Ilyas · Aleksander Madry
- 2023 Poster: Raising the Cost of Malicious AI-Powered Image Editing »
  Hadi Salman · Alaa Khaddaj · Guillaume Leclerc · Andrew Ilyas · Aleksander Madry
- 2022 Poster: Datamodels: Understanding Predictions with Data and Data with Predictions »
  Andrew Ilyas · Sung Min (Sam) Park · Logan Engstrom · Guillaume Leclerc · Aleksander Madry
- 2022 Spotlight: Datamodels: Understanding Predictions with Data and Data with Predictions »
  Andrew Ilyas · Sung Min (Sam) Park · Logan Engstrom · Guillaume Leclerc · Aleksander Madry
- 2020 Poster: From ImageNet to Image Classification: Contextualizing Progress on Benchmarks »
  Dimitris Tsipras · Shibani Santurkar · Logan Engstrom · Andrew Ilyas · Aleksander Madry
- 2020 Poster: Identifying Statistical Bias in Dataset Replication »
  Logan Engstrom · Andrew Ilyas · Shibani Santurkar · Dimitris Tsipras · Jacob Steinhardt · Aleksander Madry
- 2019 Poster: Exploring the Landscape of Spatial Robustness »
  Logan Engstrom · Brandon Tran · Dimitris Tsipras · Ludwig Schmidt · Aleksander Madry
- 2019 Oral: Exploring the Landscape of Spatial Robustness »
  Logan Engstrom · Brandon Tran · Dimitris Tsipras · Ludwig Schmidt · Aleksander Madry
- 2018 Oral: Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples »
  Anish Athalye · Nicholas Carlini · David Wagner
- 2018 Poster: Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples »
  Anish Athalye · Nicholas Carlini · David Wagner
- 2018 Poster: Synthesizing Robust Adversarial Examples »
  Anish Athalye · Logan Engstrom · Andrew Ilyas · Kevin Kwok
- 2018 Oral: Synthesizing Robust Adversarial Examples »
  Anish Athalye · Logan Engstrom · Andrew Ilyas · Kevin Kwok