Automatically Finding Reward Model Biases
Abstract
Large language model (LLM) post-training typically relies on a training signal from a reward model (RM), as in reinforcement learning from human feedback (RLHF). Previous work shows that this signal can be biased with respect to attributes such as length, format, and sycophancy. In this work, we introduce and study the research problem of automatically finding reward model biases expressed in natural language. We offer a simple approach in which an LLM iteratively proposes and refines candidate biases. Our method can recover known biases and surface novel ones: for example, we find that Skywork-V2-8B, a leading open-weight reward model, often mistakenly favors responses with redundant spacing and responses with hallucinated content. In addition, we show evidence that iterative refinement provides benefits over flat best-of-N search. We hope our work contributes to further research on improving RMs through automated interpretability methods.
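To make the propose-and-refine idea concrete, the following is a minimal sketch of one way such an iterative search could be structured. It is an illustration under assumptions, not the paper's implementation: the propose, refine, and score callables, the pool sizes, and the scoring criterion (how well a candidate bias description predicts the RM's preferences on held-out response pairs) are all hypothetical.

```python
from typing import Callable, List

def find_biases(
    propose: Callable[[int], List[str]],  # LLM call: propose n candidate bias descriptions
    refine: Callable[[str], str],         # LLM call: refine a single candidate description
    score: Callable[[str], float],        # how well a candidate predicts the RM's preferences
    num_rounds: int = 5,
    pool_size: int = 8,
    keep_top: int = 3,
) -> List[str]:
    """Iteratively propose, score, and refine natural-language bias hypotheses."""
    candidates = propose(pool_size)
    for _ in range(num_rounds):
        # Keep the hypotheses that best explain the reward model's preferences,
        # refine them, and refill the pool with fresh proposals.
        survivors = sorted(candidates, key=score, reverse=True)[:keep_top]
        candidates = [refine(c) for c in survivors] + propose(pool_size - keep_top)
    return sorted(candidates, key=score, reverse=True)
```

Under this framing, flat best-of-N search corresponds to calling propose once with a large n and scoring the results, with no refinement rounds.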