

Poster in Workshop: Next Generation of AI Safety

Attacking Large Language Models with Projected Gradient Descent

Simon Markus Geisler · Tom Wollschläger · M. Hesham Abdalla · Johannes Gasteiger · Stephan Günnemann

Keywords: [ Automatic Red Teaming ] [ large language models ] [ Projected Gradient Descent ] [ adversarial attack ] [ Jailbreak ]


Abstract:

Current LLM alignment methods are readily broken through specifically crafted adversarial prompts. While crafting adversarial prompts using discrete optimization is highly effective, such attacks typically use more than 100,000 LLM calls. This high computational cost makes them unsuitable for, e.g., quantitative analyses and adversarial training. To remedy this, we revisit Projected Gradient Descent (PGD) on the continuously relaxed input prompt. Although previous attempts with ordinary gradient-based attacks largely failed, we show that carefully controlling the error introduced by the continuous relaxation tremendously boosts their efficacy. Our PGD for LLMs is up to one order of magnitude faster than state-of-the-art discrete optimization at achieving the same devastating attack results. The availability of such effective and efficient adversarial attacks is key for advancing and evaluating the alignment of LLMs.
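To make the idea concrete, the sketch below illustrates the general recipe the abstract describes: relax the one-hot token matrix of an adversarial suffix onto the probability simplex, take gradient steps on that relaxed representation through the model's soft embeddings, and project back onto the simplex after each step. This is a minimal illustration assuming a Hugging Face-style causal LM that accepts `inputs_embeds`; the function and parameter names (`pgd_suffix_attack`, `embedding_matrix`, `suffix_len`, `lr`) are hypothetical and not the authors' code, and the sketch omits the paper's key ingredient of carefully controlling the relaxation error.

```python
import torch
import torch.nn.functional as F


def project_onto_simplex(x: torch.Tensor) -> torch.Tensor:
    """Euclidean projection of each row of `x` onto the probability simplex
    (Duchi et al., 2008), keeping the relaxed token distributions valid."""
    u, _ = torch.sort(x, dim=-1, descending=True)
    cssv = torch.cumsum(u, dim=-1) - 1.0
    k = torch.arange(1, x.size(-1) + 1, device=x.device, dtype=x.dtype)
    rho = (u * k > cssv).sum(dim=-1, keepdim=True) - 1   # last index satisfying the KKT condition
    theta = torch.gather(cssv, -1, rho) / (rho + 1).to(x.dtype)
    return torch.clamp(x - theta, min=0.0)


def pgd_suffix_attack(model, embedding_matrix, prompt_ids, target_ids,
                      suffix_len=20, steps=500, lr=0.1):
    """Illustrative sketch: optimize a relaxed adversarial suffix so the model
    assigns high likelihood to `target_ids` (e.g. an affirmative response)."""
    vocab = embedding_matrix.size(0)
    device = embedding_matrix.device

    # Relaxed one-hot suffix: each row lives on the probability simplex.
    x = torch.full((suffix_len, vocab), 1.0 / vocab, device=device, requires_grad=True)

    prompt_emb = embedding_matrix[prompt_ids]    # fixed user prompt embeddings
    target_emb = embedding_matrix[target_ids]    # desired completion embeddings

    for _ in range(steps):
        suffix_emb = x @ embedding_matrix        # soft embeddings of the relaxed suffix
        inputs_embeds = torch.cat([prompt_emb, suffix_emb, target_emb], dim=0)
        logits = model(inputs_embeds=inputs_embeds.unsqueeze(0)).logits[0]

        # Cross-entropy of the positions that should predict the target tokens.
        n_tgt = target_ids.numel()
        loss = F.cross_entropy(logits[-n_tgt - 1:-1], target_ids)

        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x -= lr * grad                       # gradient step on the relaxation
            x.copy_(project_onto_simplex(x))     # project back onto the simplex

    return x.argmax(dim=-1)                      # discretize the suffix into token ids
```

In this form the loop is just plain PGD; per the abstract, the efficiency gains reported in the paper come from additionally controlling the error that the continuous relaxation introduces relative to the discrete prompt, which is not shown here.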
