Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output. So far, adversarial examples have been studied most extensively in the image domain. In this domain, adversarial examples can be constructed by imperceptibly modifying images to cause misclassification, and are practical in the physical world. In contrast, current targeted adversarial examples on speech recognition systems have neither of these properties: humans can easily identify the adversarial perturbations, and they are not effective when played over-the-air. This paper makes progress on both of these fronts. First, we develop effectively imperceptible audio adversarial examples (verified through a human study) by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full-sentence targets. Then, we make progress towards physical-world audio adversarial examples by constructing perturbations which remain effective even after applying highly-realistic simulated environmental distortions.
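
For readers who want to see the structure of the approach, here is a minimal NumPy sketch of the two ideas the abstract names: a frequency-masking penalty that keeps the perturbation's power below the original audio's masking threshold, and an expectation over simulated environmental distortions. This is an illustration under stated assumptions, not the authors' implementation; asr_loss (the ASR model's loss on the target transcription), threshold (a precomputed per-frame masking threshold), and impulse_responses (simulated room responses) are hypothetical stand-ins.

    # A minimal sketch (not the authors' released code) of the two losses the
    # abstract describes: an attack loss that forces the target transcription,
    # plus (1) a psychoacoustic penalty keeping the perturbation's power below
    # the masking threshold of the original audio and (2) an expectation over
    # simulated room reverberations. asr_loss, threshold, and
    # impulse_responses are hypothetical stand-ins.
    import numpy as np

    def power_spectral_density(signal, n_fft=512, hop=256):
        """Per-frame PSD of a waveform via a Hann-windowed STFT."""
        window = np.hanning(n_fft)
        frames = [signal[i:i + n_fft] * window
                  for i in range(0, len(signal) - n_fft + 1, hop)]
        spec = np.fft.rfft(np.stack(frames), axis=-1)
        return np.abs(spec) ** 2  # shape: (num_frames, n_fft // 2 + 1)

    def imperceptibility_penalty(delta, threshold):
        """Hinge penalty: perturbation energy that exceeds the masking
        threshold (the auditory-masking principle from the abstract)."""
        return np.maximum(power_spectral_density(delta) - threshold, 0.0).sum()

    def robust_loss(x, delta, target, asr_loss, impulse_responses):
        """Average attack loss over simulated environmental distortions,
        modeled here as convolution with random room impulse responses."""
        losses = [asr_loss(np.convolve(x + delta, h)[:len(x)], target)
                  for h in impulse_responses]
        return float(np.mean(losses))

    def total_loss(x, delta, target, asr_loss, threshold,
                   impulse_responses, alpha=0.05):
        """Combined objective, minimized over delta by gradient descent."""
        return (robust_loss(x, delta, target, asr_loss, impulse_responses)
                + alpha * imperceptibility_penalty(delta, threshold))

An attacker would minimize total_loss over delta with gradient descent, typically annealing alpha to trade audibility against attack success.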
Author Information
Yao Qin (University of California, San Diego)
Nicholas Carlini (Google)
Garrison Cottrell (University of California, San Diego)
Ian Goodfellow (Google Brain)
Colin Raffel (Google)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
  Thu. Jun 13th, 01:30 -- 04:00 AM, Pacific Ballroom #65
More from the Same Authors
- 2020 Poster: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
  Florian Tramer · Jens Behrmann · Nicholas Carlini · Nicolas Papernot · Joern-Henrik Jacobsen
- 2019 Poster: TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing
  Augustus Odena · Catherine Olsson · David Andersen · Ian Goodfellow
- 2019 Oral: TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing
  Augustus Odena · Catherine Olsson · David Andersen · Ian Goodfellow
- 2019 Poster: Adversarial Examples Are a Natural Consequence of Test Error in Noise
  Justin Gilmer · Nicolas Ford · Nicholas Carlini · Ekin Dogus Cubuk
- 2019 Oral: Adversarial Examples Are a Natural Consequence of Test Error in Noise
  Justin Gilmer · Nicolas Ford · Nicholas Carlini · Ekin Dogus Cubuk
- 2019 Poster: Self-Attention Generative Adversarial Networks
  Han Zhang · Ian Goodfellow · Dimitris Metaxas · Augustus Odena
- 2019 Oral: Self-Attention Generative Adversarial Networks
  Han Zhang · Ian Goodfellow · Dimitris Metaxas · Augustus Odena
- 2018 Poster: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music
  Adam Roberts · Jesse Engel · Colin Raffel · Curtis Hawthorne · Douglas Eck
- 2018 Poster: Is Generator Conditioning Causally Related to GAN Performance?
  Augustus Odena · Jacob Buckman · Catherine Olsson · Tom B Brown · Christopher Olah · Colin Raffel · Ian Goodfellow
- 2018 Oral: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music
  Adam Roberts · Jesse Engel · Colin Raffel · Curtis Hawthorne · Douglas Eck
- 2018 Oral: Is Generator Conditioning Causally Related to GAN Performance?
  Augustus Odena · Jacob Buckman · Catherine Olsson · Tom B Brown · Christopher Olah · Colin Raffel · Ian Goodfellow