Sycophancy Towards Researchers Drives Performative Misalignment
Abstract
The increasing situational awareness of language models raises safety concerns: models might recognize when they are being evaluated and adjust their behavior to evade monitoring and resist modification, e.g., pretending to be aligned only during evaluation. This \emph{alignment faking} behavior is often interpreted as scheming: a deliberate effort at strategic deception. In this paper, we examine an alternative interpretation, \emph{performative misalignment}, which explains the change in behavior as a result of \emph{sycophancy towards AI researchers}. To support this hypothesis, we present three empirical findings. First, we show that evaluation awareness persists even when we tell models they are deployed, contradicting the scheming account, which predicts less misalignment when the model perceives evaluation. Second, we use probing and steering to show that our current methods cannot mechanistically distinguish sycophancy from scheming in alignment faking evaluations. Third, we fine-tune models to be more sycophantic and observe increased sensitivity to evaluation cues. We conclude by emphasizing the need to deconfound sycophancy from scheming in future work on evaluating and mitigating intent misalignment.