Poisoning backdoor attacks are a rising security concern for deep learning. In such an attack, the backdoored model functions normally most of the time but exhibits abnormal behavior when presented with inputs containing the backdoor trigger, making the attack difficult to detect and prevent. In this work, we propose the adaptability hypothesis to understand when and why a backdoor attack works for general learning models, including deep neural networks, based on a theoretical investigation of classical kernel-based learning models. The adaptability hypothesis postulates that for an effective attack, incorporating a new dataset has only a small effect on the predictions at the original data points, provided that the original data points are distant from the new dataset. Experiments on benchmark image datasets with state-of-the-art backdoor attacks for deep neural networks corroborate the hypothesis. Our finding provides insight into the factors that affect an attack's effectiveness and has implications for the design of future attacks and defenses.
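The intuition behind the adaptability hypothesis can be illustrated with a minimal kernel ridge regression sketch (not the paper's actual experimental setup; the data, RBF bandwidth, and regularization below are illustrative assumptions). When the injected points lie far from the clean data, the cross-kernel entries are near zero, so refitting with the poisoned points barely moves the predictions on the clean points:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-2, gamma=1.0):
    # Kernel ridge regression: solve (K + lam * I) alpha = y
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Z: rbf_kernel(Z, X, gamma) @ alpha

rng = np.random.default_rng(0)
X_clean = rng.normal(0.0, 1.0, size=(50, 2))   # original (clean) data
y_clean = np.sin(X_clean[:, 0])                # clean labels

# "Backdoor" points placed far from the clean data, with attacker-chosen labels
X_far = rng.normal(8.0, 0.5, size=(10, 2))
y_far = -np.ones(10)

f_clean = krr_fit(X_clean, y_clean)
f_pois = krr_fit(np.vstack([X_clean, X_far]),
                 np.concatenate([y_clean, y_far]))

# Since rbf_kernel(X_clean, X_far) is essentially zero, the distant poisoned
# points leave the clean-data predictions almost unchanged, while the model
# still fits the attacker's labels in the trigger region.
drift = np.max(np.abs(f_pois(X_clean) - f_clean(X_clean)))
print("max prediction drift on clean points:", drift)
```

Here the attack is "effective" in the hypothesis's sense: the poisoned model behaves normally on the original distribution (tiny drift) yet adapts to the attacker's labels far away.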
Author Information
Xun Xian (University of Minnesota)
Ganghua Wang (University of Minnesota)
Jayanth Srinivasa (Cisco)
Ashish Kundu (Cisco Research)
Dr. Ashish Kundu is currently at Cisco Research as its Head of Cybersecurity Research. He previously worked at Nuro as its Head of Cybersecurity and as a Research Staff Member at the IBM T. J. Watson Research Center. He is an ACM Distinguished Member and has also been an ACM Distinguished Speaker. He has led security, privacy, and compliance efforts for self-driving cars, tele-operated driving, cloud-based healthcare, and cloud-based AI-driven education platforms. His research has led to more than 160 patents filed, with more than 150 patents granted, and more than 50 research papers. He has been honored with the prestigious Master Inventor recognition multiple times by IBM Research. Dr. Kundu received his Ph.D. in Cybersecurity from Purdue University and received the prestigious CERIAS Diamond Award for outstanding contributions to cybersecurity.
Xuan Bi (University of Minnesota - Twin Cities)
Mingyi Hong (University of Minnesota)
Jie Ding (University of Minnesota)
More from the Same Authors
- 2021: Understanding Clipped FedAvg: Convergence and Client-Level Differential Privacy
  xinwei zhang · Xiangyi Chen · Steven Wu · Mingyi Hong
- 2022: Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees
  Siliang Zeng · Chenliang Li · Alfredo Garcia · Mingyi Hong
- 2023: Distributed Architecture Search over Heterogeneous Distributions
  Erum Mushtaq · Chaoyang He · Jie Ding · Salman Avestimehr
- 2023: Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik’s Cube
  Kausik Lakkaraju · Vedant Khandelwal · Biplav Srivastava · Forest Agostinelli · Hengtao Tang · Prathamjeet Singh · Dezhi Wu · Matt Irvin · Ashish Kundu
- 2023: Visualizing and Analyzing the Topology of Neuron Activations in Deep Adversarial Training
  Youjia Zhou · Yi Zhou · Jie Ding · Bei Wang
- 2023: Robust Inverse Reinforcement Learning Through Bayesian Theory of Mind
  Ran Wei · Siliang Zeng · Chenliang Li · Alfredo Garcia · Anthony McDonald · Mingyi Hong
- 2023 Poster: Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach
  Prashant Khanduri · Ioannis Tsaknakis · Yihua Zhang · Jia Liu · Sijia Liu · Jiawei Zhang · Mingyi Hong
- 2023 Poster: FedAvg Converges to Zero Training Loss Linearly for Overparameterized Multi-Layer Neural Networks
  Bingqing Song · Prashant Khanduri · xinwei zhang · Jinfeng Yi · Mingyi Hong
- 2022 Poster: A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms
  xinwei zhang · Mingyi Hong · Sairaj Dhople · Nicola Elia
- 2022 Spotlight: A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms
  xinwei zhang · Mingyi Hong · Sairaj Dhople · Nicola Elia
- 2022 Poster: Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
  xinwei zhang · Xiangyi Chen · Mingyi Hong · Steven Wu · Jinfeng Yi
- 2022 Poster: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization
  Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu
- 2022 Spotlight: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization
  Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu
- 2022 Spotlight: Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
  xinwei zhang · Xiangyi Chen · Mingyi Hong · Steven Wu · Jinfeng Yi
- 2021 Spotlight: Decentralized Riemannian Gradient Descent on the Stiefel Manifold
  Shixiang Chen · Alfredo Garcia · Mingyi Hong · Shahin Shahrampour
- 2021 Poster: Decentralized Riemannian Gradient Descent on the Stiefel Manifold
  Shixiang Chen · Alfredo Garcia · Mingyi Hong · Shahin Shahrampour
- 2020 Poster: Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: Joint Gradient Estimation and Tracking
  Haoran Sun · Songtao Lu · Mingyi Hong
- 2020 Poster: Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks
  Sijia Liu · Songtao Lu · Xiangyi Chen · Yao Feng · Kaidi Xu · Abdullah Al-Dujaili · Mingyi Hong · Una-May O'Reilly
- 2019 Poster: PA-GD: On the Convergence of Perturbed Alternating Gradient Descent to Second-Order Stationary Points for Structured Nonconvex Optimization
  Songtao Lu · Mingyi Hong · Zhengdao Wang
- 2019 Oral: PA-GD: On the Convergence of Perturbed Alternating Gradient Descent to Second-Order Stationary Points for Structured Nonconvex Optimization
  Songtao Lu · Mingyi Hong · Zhengdao Wang
- 2018 Poster: Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
  Mingyi Hong · Meisam Razaviyayn · Jason Lee
- 2018 Oral: Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
  Mingyi Hong · Meisam Razaviyayn · Jason Lee