
Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions
Nodens Koren · Xingjun Ma · Qiuhong Ke · Yisen Wang · James Bailey

Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily lives. In this paper, we show that, despite their current success, deep-learning-based AI systems can be easily fooled by subtle adversarial noise into misinterpreting the intention of an action in interaction scenarios. Through a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions and demonstrate how DNN-based interaction models can be tricked into predicting the participants' reactions in unexpected ways. Our study highlights potential risks in the interaction loop between humans and AI, which need to be carefully addressed when deploying AI systems in safety-critical applications.
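To make the idea of "subtle adversarial noise" concrete, the following is a minimal sketch of a gradient-sign (FGSM-style) perturbation applied to a flattened skeleton sequence. It uses a toy linear classifier as a stand-in for the paper's DNN-based interaction model, so the gradient is analytic; the model, dimensions, and attack step are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "interaction model": a linear classifier over a flattened
# skeleton sequence (T frames x J joints x 3 coordinates). This is a
# stand-in for a deep interaction model, chosen so the input gradient
# has a closed form.
T, J = 10, 15
d = T * J * 3
W = rng.normal(size=(2, d))   # weights for two intention classes
x = rng.normal(size=d)        # "clean" skeleton sequence, flattened

def predict(x):
    return int(np.argmax(W @ x))

y = predict(x)                # model's prediction on the clean input
other = 1 - y

# For a linear model, the gradient of the margin (logit_y - logit_other)
# with respect to the input is simply W[y] - W[other].
grad_margin = W[y] - W[other]

# FGSM-style step: move against the sign of the margin gradient, with
# an L_inf budget eps, so each coordinate changes by at most eps.
eps = 0.5
x_adv = x - eps * np.sign(grad_margin)

print("clean prediction:      ", y)
print("adversarial prediction:", predict(x_adv))
print("max perturbation:      ", np.max(np.abs(x_adv - x)))
```

Because every coordinate moves by at most eps, the perturbed skeleton stays close to the original, yet the margin drops by eps times the L1 norm of the gradient, which is large in high dimensions; this is the basic mechanism by which small, structured noise can flip an interaction model's interpretation of an action.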

Author Information

Nodens Koren (The University of Melbourne)
Xingjun Ma (Deakin University)
Qiuhong Ke (The University of Melbourne)
Yisen Wang (Peking University)
James Bailey (The University of Melbourne)
