

Poster

Do Large Language Models Generalize the Way People Expect? A Benchmark for Evaluation

Keyon Vafa · Ashesh Rambachan · Sendhil Mullainathan


Abstract:

What makes large language models (LLMs) impressive is also what makes them hard to evaluate: their diversity of uses. To evaluate these models, we must understand the purposes they will be used for. We argue those deployment decisions are made by people, and in particular by their beliefs about where an LLM will do well. We model such beliefs as the consequence of a human generalization function: having seen what an LLM gets right or wrong, people update their beliefs about where else it might succeed. We collect a dataset of 20K examples of how humans generalize across 79 tasks from the MMLU and BIG-Bench benchmarks. We show that the human generalization function can be predicted using NLP methods: people have consistent, structured ways of generalizing. We then evaluate LLM alignment with the human generalization function. Our results show that -- especially in cases where the cost of mistakes is high -- more capable models (e.g. GPT-4) can do worse on the instances people choose to use them for, precisely because they are not aligned with the human generalization function.
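
The sketch below illustrates one way the human generalization function could be framed as a prediction problem, as the abstract describes: given a question a person has seen an LLM answer (and whether it answered correctly) plus a candidate new question, predict whether the person would expect the LLM to succeed on the new question. This is a minimal illustration under assumed names and toy data, not the authors' implementation; the real study uses 20K crowdsourced judgments and stronger NLP models.

```python
# Minimal sketch (not the authors' method): treat the human generalization
# function as a supervised classifier over (observed question, LLM outcome,
# candidate question) triples. All names and data here are illustrative.
from dataclasses import dataclass
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


@dataclass
class GeneralizationExample:
    observed_question: str        # question the person saw the LLM answer
    llm_was_correct: bool         # whether the LLM answered it correctly
    candidate_question: str       # new question the person is asked about
    human_expects_success: bool   # the person's stated belief (label)


def to_text(ex: GeneralizationExample) -> str:
    # Collapse one example into a single string; a real model would likely
    # use richer representations (e.g. sentence embeddings).
    outcome = "correct" if ex.llm_was_correct else "incorrect"
    return f"observed: {ex.observed_question} [{outcome}] candidate: {ex.candidate_question}"


# Toy data standing in for the crowdsourced generalization judgments.
train = [
    GeneralizationExample("What is 17 * 24?", True, "What is 13 * 31?", True),
    GeneralizationExample("What is 17 * 24?", True, "Draft a legal brief.", False),
    GeneralizationExample("Name the capital of France.", False, "Name the capital of Peru.", False),
    GeneralizationExample("Name the capital of France.", False, "What is 2 + 2?", True),
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit([to_text(ex) for ex in train],
          [ex.human_expects_success for ex in train])

# Predict how a person might generalize to a new arithmetic question after
# seeing the LLM answer a similar one correctly.
probe = GeneralizationExample("What is 17 * 24?", True, "What is 45 * 12?", False)
print(model.predict_proba([to_text(probe)])[0])  # [P(expects failure), P(expects success)]
```

Under this framing, evaluating alignment amounts to comparing the model's actual success on the instances people would choose to deploy it on against the successes this predicted belief function anticipates.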
