The First Drop of Ink: Nonlinear Impact of Misleading Information in Long-Context Reasoning
Abstract
As large language models (LLMs) are increasingly deployed in retrieval-augmented generation (RAG) and agentic systems that accumulate extensive context, understanding how distracting information affects performance in long contexts becomes critical. Prior work shows that semantically relevant but misleading documents can degrade performance, yet the quantitative relationship between the proportion of distractors and performance remains unstudied. In this work, we systematically vary the proportion of hard distractors within fixed-length contexts, revealing a striking nonlinear pattern: performance drops sharply as the first small fraction of hard distractors is introduced, while the remainder of the range yields only marginal additional decline. We term this ``The First Drop of Ink'' effect, analogous to how a single drop of ink contaminates a glass of water. We provide both theoretical and empirical analyses grounded in attention mechanics: hard distractors disproportionately capture attention even at small proportions, with diminishing marginal impact as their proportion increases. Through controlled experiments, we further show that filtering yields performance gains primarily through context-length reduction rather than distractor removal, and achieves substantial recovery only when the hard-distractor proportion is reduced to near zero, highlighting the importance of upstream retrieval precision.
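To make the attention argument concrete, consider a minimal toy model (a simplified sketch with our own notation, not the paper's formal analysis): a fixed-length context holds one gold document with attention logit $s_g$, $k$ hard distractors each with logit $s_d$ close to $s_g$, and $N - k - 1$ easy fillers with logit $s_f \ll s_d$. Under softmax attention, the total attention mass captured by the hard distractors is
\[
A_{\text{distract}}(k) \;=\; \frac{k\, e^{s_d}}{e^{s_g} + k\, e^{s_d} + (N - k - 1)\, e^{s_f}} .
\]
Writing the denominator as $b + k d$ with $b = e^{s_g} + (N-1)\, e^{s_f}$ and $d = e^{s_d} - e^{s_f} > 0$ gives $A_{\text{distract}}'(k) = e^{s_d} b / (b + k d)^2 > 0$ and $A_{\text{distract}}''(k) = -2\, e^{s_d} b\, d / (b + k d)^3 < 0$: the function is increasing and concave in $k$. Under these assumptions, the first hard distractor captures a disproportionate share of attention mass while each additional one contributes less, consistent with the sharp initial performance drop and subsequent plateau described above.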