

Spotlight Poster

Great Models Think Alike and this Undermines AI Oversight

Shashwat Goel · Joschka Strüber · Ilze Amanda Auzina · Karuna Chandra · Ponnurangam Kumaraguru · Douwe Kiela · Ameya Pandurang Prabhu · Matthias Bethge · Jonas Geiping

East Exhibition Hall A-B #E-2411
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

As Language Model (LM) capabilities advance, evaluating and supervising them at scale is getting harder for humans. There is hope that other language models can automate both of these tasks, which we refer to as AI Oversight. We study how model similarity affects both aspects of AI oversight by proposing Chance Adjusted Probabilistic Agreement (CAPA), a metric for LM similarity based on overlap in model mistakes. Using CAPA, we first show that LLM-as-a-judge scores favor models similar to the judge, generalizing recent self-preference results. Then, we study training on LM annotations, and find that complementary knowledge between the weak supervisor and strong student model plays a crucial role in gains from weak-to-strong generalization. As model capabilities increase, it becomes harder to find their mistakes, and we might defer more to AI oversight. However, we observe a concerning trend: model mistakes are becoming more similar with increasing capabilities, pointing to risks from correlated failures. Our work underscores the importance of reporting and correcting for model similarity, especially in the emerging paradigm of AI oversight.
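To make the idea of mistake-overlap similarity concrete, here is a minimal sketch of a chance-adjusted agreement score between two models' answers on the same questions. This is a discrete, Cohen's-kappa-style simplification, not the paper's CAPA: the function name is hypothetical, the chance correction collapses all wrong answers into a single outcome as a simplifying assumption, and unlike the probabilistic metric in the paper it ignores the models' output distributions.

```python
import numpy as np

def chance_adjusted_agreement(preds_a, preds_b, labels):
    """Chance-adjusted answer agreement between two models (hypothetical helper).

    A discrete, Cohen's-kappa-style simplification: observed agreement on the
    same questions, minus the agreement expected by chance from each model's
    accuracy alone, rescaled so 1 is the maximum.
    """
    preds_a, preds_b, labels = map(np.asarray, (preds_a, preds_b, labels))
    # Fraction of questions where the two models give the same answer.
    observed = np.mean(preds_a == preds_b)
    # Expected agreement if the models erred independently; treating all
    # wrong answers as one outcome is a simplifying assumption.
    acc_a = np.mean(preds_a == labels)
    acc_b = np.mean(preds_b == labels)
    expected = acc_a * acc_b + (1 - acc_a) * (1 - acc_b)
    if np.isclose(expected, 1.0):
        return 1.0  # degenerate case: chance already forces full agreement
    return (observed - expected) / (1 - expected)

# Toy multiple-choice example: answers encoded as option indices.
labels  = np.array([0, 1, 2, 3, 0])
model_a = np.array([0, 1, 2, 0, 0])  # 80% accurate
model_b = np.array([0, 1, 2, 0, 1])  # 60% accurate, shares a mistake with A
print(chance_adjusted_agreement(model_a, model_b, labels))  # ~0.55
```

A score near 0 means the two models agree about as often as their accuracies alone would predict, while scores near 1 indicate strongly correlated mistakes, the trend the abstract flags for increasingly capable models.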

Lay Summary:

Currently, there are hundreds of different language models (LMs) available, as each tech company creates and releases its own chatbots. How different are these models really? Do all of them fail (or succeed) in the same ways? In this work, we measure model similarity based on how often models make the same mistakes. As LM capabilities advance, we find that model mistakes are becoming more similar; that is, Great Models Think Alike. At the same time, finding these mistakes and fixing them now needs more effort and expertise, making it expensive and time-consuming for humans. Recent research tries to automate this process using another LM as a judge, or as a teacher, which we refer to as "AI Oversight". But could models thinking alike adversely affect AI oversight? Indeed, we find that LM judgements show a bias, favouring more similar models. Moreover, when one LM is used as a 'teacher' for another 'student' LM, we find lower performance improvements when the models are more similar, perhaps because there is less complementary knowledge for the student to learn from. Overall, we show the importance of measuring model similarity, as it reveals insights beyond accuracy comparisons. To promote reporting of model similarity, we release a Python package, lm-sim, with many model similarity metrics, including ours.
