Through the Stealth Lens: Attention-Aware Defenses Against Poisoning in RAG
Abstract
Retrieval-augmented generation (RAG) systems are vulnerable to attacks that inject poisoned passages into the retrieved context, even at low corruption rates. We show that existing attacks are not designed to be stealthy, allowing reliable detection and mitigation. We formalize a distinguishability-based security game to quantify the stealth of such attacks. If a few poisoned passages are to control the response, they must bias the inference process more than the benign passages do, inherently compromising stealth. This motivates analyzing intermediate signals of LLMs, such as attention weights, to approximate the influence of different passages on the response. Leveraging attention weights, we introduce the Normalized Passage Attention Score (NPAS) and a lightweight Attention-Variance Filter (AV Filter) that flags anomalous passages. Our method improves robustness, yielding up to ~20% higher accuracy than baseline defenses. We also develop adaptive attacks that attempt to conceal such anomalies; they achieve at most a 35% success rate, underscoring the difficulty of achieving true stealth when poisoning RAG systems.
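To make the idea concrete, the following is a minimal sketch (not the paper's exact formulation) of the two components named above: a length-normalized per-passage attention score, and a filter that flags passages whose score deviates strongly from the others. The function names, normalization, and the standard-deviation threshold `k` are illustrative assumptions.

```python
import numpy as np

def npas(attn_mass, passage_lens):
    """Illustrative Normalized Passage Attention Score: the attention mass
    each passage receives, divided by its token length, renormalized so the
    scores sum to 1 across passages. (Assumed form, not the paper's exact one.)"""
    per_token = np.asarray(attn_mass, dtype=float) / np.asarray(passage_lens, dtype=float)
    return per_token / per_token.sum()

def av_filter(scores, k=1.5):
    """Illustrative Attention-Variance Filter: flag passages whose score
    deviates from the mean by more than k standard deviations."""
    scores = np.asarray(scores, dtype=float)
    mu, sigma = scores.mean(), scores.std()
    return np.where(np.abs(scores - mu) > k * sigma)[0]

# A passage that dominates attention (index 2) stands out and gets flagged,
# reflecting the intuition that controlling the response compromises stealth.
scores = npas([1, 1, 8, 1, 1], [10, 10, 10, 10, 10])
flagged = av_filter(scores, k=1.5)
```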