Federated learning is particularly susceptible to model poisoning and backdoor attacks because individual users have direct control over their training data and model updates. At the same time, the attack power of an individual user is limited because their updates are quickly drowned out by those of many other users. Existing attacks do not account for the future behavior of other users, and thus require many sequential malicious updates, whose effects are quickly erased. We propose an attack that anticipates and accounts for the entire federated learning pipeline, including the behavior of other clients, and ensures that backdoors take effect quickly and persist even after multiple rounds of community updates. We show that this new attack is effective in realistic scenarios where the attacker only contributes to a small fraction of randomly sampled rounds, and we demonstrate the attack on image classification, next-word prediction, and sentiment analysis.
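To make the dilution effect concrete, the toy sketch below simulates one round of federated averaging with a single attacker among several benign clients. It contrasts a naive attack (submit the backdoor update as-is, which the 1/n averaging dilutes) with the standard model-replacement baseline (scale the update by the number of clients, anticipating the server's averaging). All names, dimensions, and the scaling heuristic are illustrative; this is a minimal prior-work baseline, not the anticipatory attack proposed in this paper.

```python
import numpy as np

def fedavg_round(global_model, client_updates):
    """Server step of federated averaging: add the mean client update."""
    return global_model + np.mean(client_updates, axis=0)

rng = np.random.default_rng(0)
n_clients = 10           # hypothetical population size
dim = 5                  # toy model dimension
backdoor = np.ones(dim)  # direction the attacker wants to implant

# Benign clients submit small, roughly random updates.
benign = [0.1 * rng.standard_normal(dim) for _ in range(n_clients - 1)]

# Naive attack: submit the backdoor direction as-is; the server's
# averaging shrinks its contribution by a factor of n_clients.
naive_model = fedavg_round(np.zeros(dim), benign + [backdoor])

# Model-replacement baseline: scale the update by n_clients,
# anticipating that the server divides the summed updates by n_clients.
scaled_model = fedavg_round(np.zeros(dim), benign + [n_clients * backdoor])

def alignment(model):
    """Component of the model along the backdoor direction."""
    return float(model @ backdoor) / np.linalg.norm(backdoor)

naive_align = alignment(naive_model)
scaled_align = alignment(scaled_model)
```

Even the scaled attack only survives one round in this sketch: subsequent benign-only rounds keep pulling the model away, which is the persistence problem the paper's anticipatory attack addresses.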
Author Information
Yuxin Wen (University of Maryland)
Jonas Geiping (University of Maryland, College Park)
Liam Fowl (University of Maryland)
Hossein Souri (Johns Hopkins University)
Rama Chellappa (Johns Hopkins University)
Micah Goldblum (New York University)
Tom Goldstein (University of Maryland)
More from the Same Authors
- 2022 : Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
  Hossein Souri · Liam Fowl · Rama Chellappa · Micah Goldblum · Tom Goldstein
- 2022 : How much Data is Augmentation Worth?
  Jonas Geiping · Gowthami Somepalli · Ravid Shwartz-Ziv · Andrew Wilson · Tom Goldstein · Micah Goldblum
- 2022 : Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Prior
  Ravid Shwartz-Ziv · Micah Goldblum · Hossein Souri · Sanyam Kapoor · Chen Zhu · Yann LeCun · Andrew Wilson
- 2023 : Weighted Risk Invariance for Density-Aware Domain Generalization
  Gina Wong · Joshua Gleason · Rama Chellappa · Yoav Wald · Anqi Liu
- 2023 : Cramming: Training a Language Model on a single GPU in one day
  Jonas Geiping · Tom Goldstein
- 2023 : Understanding Data Replication in Diffusion Models
  Gowthami Somepalli · Vasu Singla · Micah Goldblum · Jonas Geiping · Tom Goldstein
- 2023 Oral: A Watermark for Large Language Models
  John Kirchenbauer · Jonas Geiping · Yuxin Wen · Jonathan Katz · Ian Miers · Tom Goldstein
- 2023 Poster: GOAT: A Global Transformer on Large-scale Graphs
  Kezhi Kong · Jiuhai Chen · John Kirchenbauer · Renkun Ni · C. Bayan Bruss · Tom Goldstein
- 2023 Poster: A Watermark for Large Language Models
  John Kirchenbauer · Jonas Geiping · Yuxin Wen · Jonathan Katz · Ian Miers · Tom Goldstein
- 2023 Poster: Cramming: Training a Language Model on a single GPU in one day
  Jonas Geiping · Tom Goldstein
- 2022 Poster: Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations
  Amin Ghiasi · Hamid Kazemi · Steven Reich · Chen Zhu · Micah Goldblum · Tom Goldstein
- 2022 Poster: Bayesian Model Selection, the Marginal Likelihood, and Generalization
  Sanae Lotfi · Pavel Izmailov · Gregory Benton · Micah Goldblum · Andrew Wilson
- 2022 Oral: Bayesian Model Selection, the Marginal Likelihood, and Generalization
  Sanae Lotfi · Pavel Izmailov · Gregory Benton · Micah Goldblum · Andrew Wilson
- 2022 Spotlight: Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations
  Amin Ghiasi · Hamid Kazemi · Steven Reich · Chen Zhu · Micah Goldblum · Tom Goldstein
- 2022 Poster: Certified Neural Network Watermarks with Randomized Smoothing
  Arpit Bansal · Ping-yeh Chiang · Michael Curry · Rajiv Jain · Curtis Wigington · Varun Manjunatha · John P Dickerson · Tom Goldstein
- 2022 Spotlight: Certified Neural Network Watermarks with Randomized Smoothing
  Arpit Bansal · Ping-yeh Chiang · Michael Curry · Rajiv Jain · Curtis Wigington · Varun Manjunatha · John P Dickerson · Tom Goldstein
- 2022 Poster: Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
  Yuxin Wen · Jonas Geiping · Liam Fowl · Micah Goldblum · Tom Goldstein
- 2022 Spotlight: Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
  Yuxin Wen · Jonas Geiping · Liam Fowl · Micah Goldblum · Tom Goldstein
- 2021 : Paper Presentation 1: Analyzing the Security of Machine Learning for Algorithmic Trading
  Avi Schwarzschild · Micah Goldblum · Tom Goldstein
- 2021 Workshop: ICML Workshop on Representation Learning for Finance and E-Commerce Applications
  Senthil Kumar · Sameena Shah · Joan Bruna · Tom Goldstein · Erik Mueller · Oleg Rokhlenko · Hongxia Yang · Jianpeng Xu · Oluwatobi O Olabiyi · Charese Smiley · C. Bayan Bruss · Saurabh H Nagrecha · Svitlana Vyetrenko
- 2021 Poster: Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
  Avi Schwarzschild · Micah Goldblum · Arjun Gupta · John P Dickerson · Tom Goldstein
- 2021 Poster: Data Augmentation for Meta-Learning
  Renkun Ni · Micah Goldblum · Amr Sharaf · Kezhi Kong · Tom Goldstein
- 2021 Spotlight: Data Augmentation for Meta-Learning
  Renkun Ni · Micah Goldblum · Amr Sharaf · Kezhi Kong · Tom Goldstein
- 2021 Spotlight: Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
  Avi Schwarzschild · Micah Goldblum · Arjun Gupta · John P Dickerson · Tom Goldstein
- 2020 Poster: Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness
  Aounon Kumar · Alexander Levine · Tom Goldstein · Soheil Feizi
- 2020 Poster: Certified Data Removal from Machine Learning Models
  Chuan Guo · Tom Goldstein · Awni Hannun · Laurens van der Maaten
- 2020 Poster: Adversarial Attacks on Copyright Detection Systems
  Parsa Saadatpanah · Ali Shafahi · Tom Goldstein
- 2020 Poster: Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks
  Micah Goldblum · Steven Reich · Liam Fowl · Renkun Ni · Valeriia Cherepanova · Tom Goldstein
- 2020 Poster: The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent
  Karthik Abinav Sankararaman · Soham De · Zheng Xu · W. Ronny Huang · Tom Goldstein
- 2019 Poster: Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
  Chen Zhu · W. Ronny Huang · Hengduo Li · Gavin Taylor · Christoph Studer · Tom Goldstein
- 2019 Oral: Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
  Chen Zhu · W. Ronny Huang · Hengduo Li · Gavin Taylor · Christoph Studer · Tom Goldstein
- 2018 Poster: Linear Spectral Estimators and an Application to Phase Retrieval
  Ramina Ghods · Andrew Lan · Tom Goldstein · Christoph Studer
- 2018 Oral: Linear Spectral Estimators and an Application to Phase Retrieval
  Ramina Ghods · Andrew Lan · Tom Goldstein · Christoph Studer
- 2017 Poster: Adaptive Consensus ADMM for Distributed Optimization
  Zheng Xu · Gavin Taylor · Hao Li · Mario Figueiredo · Xiaoming Yuan · Tom Goldstein
- 2017 Talk: Adaptive Consensus ADMM for Distributed Optimization
  Zheng Xu · Gavin Taylor · Hao Li · Mario Figueiredo · Xiaoming Yuan · Tom Goldstein
- 2017 Poster: Convex Phase Retrieval without Lifting via PhaseMax
  Tom Goldstein · Christoph Studer
- 2017 Talk: Convex Phase Retrieval without Lifting via PhaseMax
  Tom Goldstein · Christoph Studer