Timezone: »
Quantifying privacy risks in large language models (LLM) is an important research question. We take a step towards answering this question by defining a real-world threat model wherein an entity seeks to augment an LLM with private data they possess via fine-tuning.The entity also seeks to improve the quality of its LLM outputs over time by learning from human feedback.We propose a novel phishing attack', a data extraction attack on this system where an attacker uses blind data poisoning, to induce the model to memorize the association between a given prompt and some
secret' privately held data.We validate that across multiple scales of LLMs and data modalities, an attacker can inject prompts into a training dataset that induce the model to memorize a `secret' that is unknown to the attacker, and easily extract this memorized secret.
Author Information
Ashwinee Panda (Princeton University)
Zhengming Zhang (Southeast University)
Yaoqing Yang (Dartmouth College)
Prateek Mittal (Princeton University)
Related Events (a corresponding poster, oral, or spotlight)
-
2023 : Teach GPT To Phish »
Dates n/a. Room
More from the Same Authors
-
2023 : Hyperparameter Tuning using Loss Landscape »
Jianlong Chen · Qinxue Cao · Yefan Zhou · Konstantin Schürholt · Yaoqing Yang -
2023 : Characterizing the Optimal $0-1$ Loss for Multi-class Classification with a Test-time Attacker »
Sophie Dai · Wenxin Ding · Arjun Nitin Bhagoji · Daniel Cullina · Ben Zhao · Heather Zheng · Prateek Mittal -
2023 : A Privacy-Friendly Approach to Data Valuation »
Jiachen Wang · Yuqing Zhu · Yu-Xiang Wang · Ruoxi Jia · Prateek Mittal -
2023 : On the Reproducibility of Data Valuation under Learning Stochasticity »
Jiachen Wang · Feiyang Kang · Chiyuan Zhang · Ruoxi Jia · Prateek Mittal -
2023 : Differentially Private Generation of High Fidelity Samples From Diffusion Models »
Vikash Sehwag · Ashwinee Panda · Ashwini Pokle · Xinyu Tang · Saeed Mahloujifar · Mung Chiang · Zico Kolter · Prateek Mittal -
2023 : Visual Adversarial Examples Jailbreak Aligned Large Language Models »
Xiangyu Qi · Kaixuan Huang · Ashwinee Panda · Mengdi Wang · Prateek Mittal -
2023 Poster: MultiRobustBench: Benchmarking Robustness Against Multiple Attacks »
Sophie Dai · Saeed Mahloujifar · Chong Xiang · Vikash Sehwag · Pin-Yu Chen · Prateek Mittal -
2023 Poster: Effectively Using Public Data in Privacy Preserving Machine Learning »
Milad Nasresfahani · Saeed Mahloujifar · Xinyu Tang · Prateek Mittal · Amir Houmansadr -
2023 Poster: A Three-regime Model of Network Pruning »
Yefan Zhou · Yaoqing Yang · Arin Chang · Michael Mahoney -
2023 Poster: Uncovering Adversarial Risks of Test-Time Adaptation »
Tong Wu · Feiran Jia · Xiangyu Qi · Jiachen Wang · Vikash Sehwag · Saeed Mahloujifar · Prateek Mittal -
2022 : Learner Knowledge Levels in Adversarial Machine Learning »
Sophie Dai · Prateek Mittal -
2022 Poster: Neurotoxin: Durable Backdoors in Federated Learning »
Zhengming Zhang · Ashwinee Panda · Linyue Song · Yaoqing Yang · Michael Mahoney · Prateek Mittal · Kannan Ramchandran · Joseph E Gonzalez -
2022 Spotlight: Neurotoxin: Durable Backdoors in Federated Learning »
Zhengming Zhang · Ashwinee Panda · Linyue Song · Yaoqing Yang · Michael Mahoney · Prateek Mittal · Kannan Ramchandran · Joseph E Gonzalez -
2021 Poster: Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries »
Arjun Nitin Bhagoji · Daniel Cullina · Vikash Sehwag · Prateek Mittal -
2021 Spotlight: Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries »
Arjun Nitin Bhagoji · Daniel Cullina · Vikash Sehwag · Prateek Mittal -
2019 Poster: Analyzing Federated Learning through an Adversarial Lens »
Arjun Nitin Bhagoji · Supriyo Chakraborty · Prateek Mittal · Seraphin Calo -
2019 Oral: Analyzing Federated Learning through an Adversarial Lens »
Arjun Nitin Bhagoji · Supriyo Chakraborty · Prateek Mittal · Seraphin Calo