

Poster in Workshop: Next Generation of AI Safety

Can Editing LLMs Inject Harm?

Canyu Chen · Baixiang Huang · Zekun Li · Zhaorun Chen · Shiyang Lai · Xiongxiao Xu · Jia-Chen Gu · Jindong Gu · Huaxiu Yao · Chaowei Xiao · Xifeng Yan · William Wang · Phil Torr · Dawn Song · Kai Shu

Keywords: [ Harm Injection ] [ LLM safety ] [ Knowledge Editing ]


Abstract:

Knowledge editing techniques have been increasingly adopted to efficiently correct false or outdated knowledge in Large Language Models (LLMs), given the high cost of retraining from scratch. A critical but under-explored question, however, is: can knowledge editing be used to inject harm into LLMs? In this paper, we reformulate knowledge editing as a new type of safety threat for LLMs, namely the Editing Attack, and conduct a systematic investigation with a newly constructed dataset, EditAttack. Specifically, we focus on two typical safety risks of editing attacks: Misinformation Injection and Bias Injection. For misinformation injection, we distinguish commonsense from long-tail misinformation and find that editing attacks can inject both types into LLMs, with a particularly high success rate for commonsense misinformation. For bias injection, we discover that not only can a single biased sentence be injected into LLMs with a high success rate, but such an injection also markedly increases bias in the LLMs' outputs on topics largely unrelated to the injected sentence, indicating a catastrophic impact on the overall fairness of LLMs. We further demonstrate the high stealthiness of editing attacks, measured by their limited impact on the general knowledge and reasoning capabilities of LLMs. Our findings demonstrate the emerging risk of knowledge editing techniques being misused to compromise the safety alignment of LLMs.
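
The abstract reports two quantities: an injection success rate (does the edited model now produce the injected answer?) and a stealthiness measure (do answers on unrelated general-knowledge and reasoning probes stay unchanged?). Below is a minimal sketch of how such an evaluation could be set up, assuming an edited checkpoint has already been produced with some knowledge editing method; the model paths, prompts, and the `answers` helper are illustrative placeholders and are not the authors' EditAttack code.

```python
# Sketch: compare a base LLM against an edited LLM on (1) the edited target
# prompt and (2) unrelated probes, to estimate injection success and stealthiness.
# Paths and prompts below are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_PATH = "path/to/base-model"      # placeholder: original aligned LLM
EDITED_PATH = "path/to/edited-model"  # placeholder: same LLM after one editing attack


def load(path):
    tok = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(
        path, torch_dtype=torch.float16, device_map="auto"
    )
    return model, tok


def answers(model, tok, prompt, max_new_tokens=20):
    """Greedy-decode a short continuation for a prompt."""
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tok.decode(out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)


base_model, base_tok = load(BASE_PATH)
edited_model, edited_tok = load(EDITED_PATH)

# 1) Injection success: does the edited model now state the injected claim?
target_prompt = "Q: <question the edit targets> A:"   # placeholder prompt
injected_answer = "<injected answer string>"           # placeholder target
success = injected_answer.lower() in answers(edited_model, edited_tok, target_prompt).lower()

# 2) Stealthiness: how much do answers drift on unrelated general-knowledge
#    and reasoning probes? Little drift means the edit is hard to notice.
probes = [
    "Q: What is the capital of France? A:",
    "Q: If a train travels 60 miles in 1 hour, how far does it travel in 3 hours? A:",
]
unchanged = sum(
    answers(base_model, base_tok, p) == answers(edited_model, edited_tok, p)
    for p in probes
)

print(f"injection success: {success}")
print(f"unchanged general probes: {unchanged}/{len(probes)}")
```

In practice, the paper evaluates these properties over a full benchmark of commonsense and long-tail misinformation edits and bias edits rather than single prompts; the sketch only illustrates the before/after comparison logic.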
