Illusory Attacks: Detectability Matters in Adversarial Attacks on Sequential Decision-Makers
Tim Franzmeyer · Stephen McAleer · Joao Henriques · Jakob Foerster · Phil Torr · Adel Bibi · Christian Schroeder
Event URL: https://openreview.net/forum?id=8kQBjQ6Dol

Autonomous agents deployed in the real world need to be robust against adversarial attacks on sensory inputs. Robustifying agent policies requires anticipating the strongest attacks possible. We demonstrate that existing observation-space attacks on reinforcement learning agents have a common weakness: while effective, their lack of temporal consistency makes them detectable using automated means or human inspection. Detectability is undesirable to adversaries as it may trigger security escalations. We introduce perfect illusory attacks, a novel form of adversarial attack on sequential decision-makers that is both effective and provably statistically undetectable. We then propose the more versatile ε-illusory attacks, which result in observation transitions that are consistent with the state-transition function of the adversary-free environment and can be learned end-to-end. Compared to existing attacks, we empirically find ε-illusory attacks to be significantly harder to detect with automated methods, and a small study with human subjects (IRB approval under reference xxxxxx/xxxxx) suggests they are similarly harder to detect for humans. We propose that undetectability should be a central concern in the study of adversarial attacks on mixed-autonomy settings.
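As a toy illustration of the detectability argument (this is not the authors' method; the environment, function names, and trajectories below are invented), a defender who knows the adversary-free dynamics can flag any observation sequence containing a transition that is impossible under those dynamics. A naive per-step perturbation breaks temporal consistency and is caught, while an attack that itself respects the transition function passes the check:

```python
# Hypothetical sketch of transition-consistency detection in a toy
# 1-D chain environment. All names here are illustrative assumptions.

def step(state, action):
    """Adversary-free dynamics: move by `action`, clipped to [0, 10]."""
    return max(0, min(10, state + action))

def is_consistent(observations, actions):
    """Return True iff every observed transition agrees with the
    known adversary-free state-transition function."""
    return all(
        step(s, a) == s_next
        for s, a, s_next in zip(observations, actions, observations[1:])
    )

actions = [1, 1, -1, 1]

# Clean trajectory generated by the true dynamics.
clean = [3, 4, 5, 4, 5]

# Naive perturbation: one observation shifted; the transition
# 4 -> 6 under action +1 is impossible, so the attack is detectable.
perturbed = [3, 4, 6, 4, 5]

# Illusory-style attack: a different but dynamics-consistent
# trajectory, indistinguishable from clean data by this check.
illusory = [7, 8, 9, 8, 9]

print(is_consistent(clean, actions))      # True
print(is_consistent(perturbed, actions))  # False
print(is_consistent(illusory, actions))   # True
```

The point of the sketch is that consistency checking only rules out attacks that violate the dynamics; an attack constrained to the adversary-free transition function evades this class of detector entirely.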

Author Information

Tim Franzmeyer (University of Oxford)
Stephen McAleer (UC Irvine)
Joao Henriques (University of Oxford)
Jakob Foerster (University of Oxford)

Jakob Foerster started as an Associate Professor at the Department of Engineering Science at the University of Oxford in the fall of 2021. During his PhD at Oxford, he helped bring deep multi-agent reinforcement learning to the forefront of AI research and interned at Google Brain, OpenAI, and DeepMind. After his PhD, he worked as a research scientist at Facebook AI Research in California, where he continued doing foundational work. He was the lead organizer of the first Emergent Communication workshop at NeurIPS in 2017, which he has helped organize ever since, and was awarded a prestigious CIFAR AI Chair in 2019. His past work addresses how AI agents can learn to cooperate and communicate with other agents; most recently, he has been developing and addressing the zero-shot coordination problem setting, a crucial step towards human-AI coordination.

Phil Torr (University of Oxford)
Adel Bibi (University of Oxford)
Christian Schroeder (University of Oxford)
