Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily lives. In this paper, we show that, despite their current huge success, deep learning-based AI systems can be easily fooled by subtle adversarial noise into misinterpreting the intention of an action in interaction scenarios. Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions and demonstrate how DNN-based interaction models can be tricked into predicting the participants' reactions in unexpected ways. Our study highlights potential risks in the interaction loop between AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications.
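The abstract describes fooling a skeleton-based interaction model with subtle adversarial noise. As a rough illustration of the general idea (not the paper's actual attack), the sketch below applies a standard FGSM-style perturbation to a flattened skeleton sequence, using a toy linear classifier in place of a DNN; the frame/joint counts, the model, and the epsilon budget are all illustrative assumptions.

```python
import numpy as np

# A minimal FGSM-style sketch on skeleton data, assuming a toy linear
# "interaction model" (the paper attacks DNNs; everything here is illustrative).
rng = np.random.default_rng(0)

T, J = 30, 25            # frames and joints per skeleton (assumed sizes)
D = T * J * 3            # flattened (x, y, z) coordinates
num_classes = 5          # hypothetical interaction/reaction classes

W = rng.normal(scale=0.1, size=(D, num_classes))  # toy model weights
x = rng.normal(size=D)                            # flattened skeleton sequence
y = 2                                             # assumed true class index

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# For a linear model, the gradient of the cross-entropy loss w.r.t. the
# input is W @ (softmax(W^T x) - onehot(y)).
p = softmax(W.T @ x)
p[y] -= 1.0
grad = W @ p

# FGSM step: nudge every coordinate by epsilon in the sign of the gradient,
# so the perturbation stays small in the L-infinity norm.
epsilon = 0.01
x_adv = x + epsilon * np.sign(grad)

print(np.abs(x_adv - x).max())  # perturbation magnitude never exceeds epsilon
```

Because the toy loss is convex in the input, this single signed-gradient step is guaranteed to increase it; against a real DNN the same step only increases the loss to first order, which is why iterative variants are common.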
Author Information
Nodens Koren (The University of Melbourne)
Xingjun Ma (Deakin University)
Qiuhong Ke (The University of Melbourne)
Yisen Wang (Peking University)
James Bailey (The University of Melbourne)
More from the Same Authors
- 2021: Demystifying Adversarial Training via A Unified Probabilistic Framework »
  Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Poster: Certified Adversarial Robustness Under the Bounded Support Set »
  Yiwen Kou · Qinyuan Zheng · Yisen Wang
- 2022 Spotlight: Certified Adversarial Robustness Under the Bounded Support Set »
  Yiwen Kou · Qinyuan Zheng · Yisen Wang
- 2022 Poster: CerDEQ: Certifiable Deep Equilibrium Model »
  Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Poster: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters »
  Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2022 Poster: Optimization-Induced Graph Implicit Nonlinear Diffusion »
  Qi Chen · Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Spotlight: CerDEQ: Certifiable Deep Equilibrium Model »
  Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Spotlight: Optimization-Induced Graph Implicit Nonlinear Diffusion »
  Qi Chen · Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Spotlight: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters »
  Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2021: Discussion Panel #1 »
  Hang Su · Matthias Hein · Liwei Wang · Sven Gowal · Jan Hendrik Metzen · Henry Liu · Yisen Wang
- 2021 Poster: GBHT: Gradient Boosting Histogram Transform for Density Estimation »
  Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Poster: Leveraged Weighted Loss for Partial Label Learning »
  Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Spotlight: GBHT: Gradient Boosting Histogram Transform for Density Estimation »
  Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Oral: Leveraged Weighted Loss for Partial Label Learning »
  Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Poster: Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization? »
  Dinghuai Zhang · Kartik Ahuja · Yilun Xu · Yisen Wang · Aaron Courville
- 2021 Oral: Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization? »
  Dinghuai Zhang · Kartik Ahuja · Yilun Xu · Yisen Wang · Aaron Courville
- 2020 Poster: Normalized Loss Functions for Deep Learning with Noisy Labels »
  Xingjun Ma · Hanxun Huang · Yisen Wang · Simone Romano · Sarah Erfani · James Bailey
- 2019 Poster: On the Convergence and Robustness of Adversarial Training »
  Yisen Wang · Xingjun Ma · James Bailey · Jinfeng Yi · Bowen Zhou · Quanquan Gu
- 2019 Oral: On the Convergence and Robustness of Adversarial Training »
  Yisen Wang · Xingjun Ma · James Bailey · Jinfeng Yi · Bowen Zhou · Quanquan Gu
- 2018 Poster: Dimensionality-Driven Learning with Noisy Labels »
  Xingjun Ma · Yisen Wang · Michael E. Houle · Shuo Zhou · Sarah Erfani · Shutao Xia · Sudanthi Wijewickrema · James Bailey
- 2018 Oral: Dimensionality-Driven Learning with Noisy Labels »
  Xingjun Ma · Yisen Wang · Michael E. Houle · Shuo Zhou · Sarah Erfani · Shutao Xia · Sudanthi Wijewickrema · James Bailey
- 2017 Poster: Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections »
  Zakaria Mhammedi · Andrew Hellicar · James Bailey · Ashfaqur Rahman
- 2017 Talk: Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using Householder Reflections »
  Zakaria Mhammedi · Andrew Hellicar · James Bailey · Ashfaqur Rahman