

Poster

The WMDP Benchmark: Measuring and Reducing Malicious Use with Unlearning

Nathaniel Li · Alexander Pan · Anjali Gopal · Summer Yue · Daniel Berrios · Alice Gatti · Justin Li · Ann-Kathrin Dombrowski · Shashwat Goel · Gabriel Mukobi · Nathan Helm-Burger · Rassin Lababidi · Lennart Justen · Andrew Liu · Michael Chen · Isabelle Barrass · Oliver Zhang · Xiaoyuan Zhu · Rishub Tamirisa · Bhrugu Bharathi · Ariel Herbert-Voss · Cort Breuer · Andy Zou · Mantas Mazeika · Zifan Wang · Palash Oswal · Weiran Lin · Adam Hunt · Justin Tienken-Harder · Kevin Shih · Kemper Talley · John Guan · Ian Steneker · David Campbell · Brad Jokubaitis · Steven Basart · Stephen Fitz · Ponnurangam Kumaraguru · Kallol Karmakar · Uday Tupakula · Vijay Varadharajan · Yan Shoshitaishvili · Jimmy Ba · Kevin Esvelt · Alexandr Wang · Dan Hendrycks

Hall C 4-9 #2217
[ Project Page ]
Thu 25 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons. To measure these risks, government institutions and major AI labs are developing evaluations for hazardous capabilities in LLMs. However, current evaluations are private and restricted to a narrow range of malicious use scenarios, which limits further research into reducing malicious use. To fill these gaps, we release the Weapons of Mass Destruction Proxy (WMDP) benchmark, a dataset of 3,668 multiple-choice questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security. To guide progress on unlearning, we develop Representation Misdirection for Unlearning (RMU), a state-of-the-art unlearning method based on controlling model representations. RMU reduces model performance on WMDP while maintaining general capabilities in areas such as biology and computer science, suggesting that unlearning may be a concrete path towards reducing malicious use of LLMs. We release our benchmark and code publicly at https://wmdp.ai.
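The abstract describes RMU only at a high level: steer the model's internal representations on hazardous ("forget") data toward a random direction while keeping representations on benign ("retain") data close to those of a frozen copy of the model. The following is a minimal sketch of that representation-control idea, assuming a Hugging Face-style causal LM that exposes hidden states; the function and hyperparameter names (rmu_step, steering_coeff, alpha, layer_id) are illustrative assumptions, not the authors' released implementation, which is available at https://wmdp.ai.

```python
# Minimal sketch of an RMU-style unlearning step (illustrative, not the
# released code). Assumes Hugging Face-style models whose forward pass
# returns hidden_states when output_hidden_states=True.
import torch
import torch.nn.functional as F

def rmu_step(model, frozen_model, forget_batch, retain_batch,
             layer_id, control_vec, alpha, optimizer):
    """One update: push forget-set activations toward a fixed random
    control vector, while anchoring retain-set activations to the
    frozen model's activations at the same layer."""
    # Hidden states at the chosen layer for the forget batch.
    forget_acts = model(**forget_batch,
                        output_hidden_states=True).hidden_states[layer_id]
    # Forget loss: misdirect hazardous-topic representations toward noise.
    forget_loss = F.mse_loss(forget_acts, control_vec.expand_as(forget_acts))

    # Retain loss: stay close to the frozen model on benign data.
    retain_acts = model(**retain_batch,
                        output_hidden_states=True).hidden_states[layer_id]
    with torch.no_grad():
        frozen_acts = frozen_model(**retain_batch,
                                   output_hidden_states=True
                                   ).hidden_states[layer_id]
    retain_loss = F.mse_loss(retain_acts, frozen_acts)

    loss = forget_loss + alpha * retain_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# The control vector is a fixed random unit direction scaled by a
# steering coefficient; both values here are placeholder assumptions.
hidden_dim, steering_coeff = 4096, 20.0
u = torch.rand(hidden_dim)
control_vec = steering_coeff * u / u.norm()
```

In this framing, the optimizer decides which parameters actually move; restricting it to a small subset of layers near layer_id is one natural way to limit damage to general capabilities, which is the trade-off the abstract reports (lower WMDP accuracy, preserved performance on biology and computer science).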
