

Spotlight Poster

Position: Understanding LLMs Requires More Than Statistical Generalization

Patrik Reizinger · Szilvia Ujváry · Anna Mészáros · Anna Kerekes · Wieland Brendel · Ferenc Huszár

Hall C 4-9 #808
[ Paper PDF ] [ Poster ]
Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

The last decade has seen blossoming research in deep learning theory attempting to answer, "Why does deep learning generalize?" A powerful shift in perspective precipitated this progress: the study of overparametrized models in the interpolation regime. In this paper, we argue that another perspective shift is due, since some of the desirable qualities of LLMs are not a consequence of good statistical generalization and require a separate theoretical explanation. Our core argument relies on the observation that AR probabilistic models are inherently non-identifiable: models that are zero or near-zero KL divergence apart, and thus have equivalent test loss, can exhibit markedly different behaviors. We support our position with mathematical examples and empirical observations, illustrating why non-identifiability has practical relevance through three case studies: (1) the non-identifiability of zero-shot rule extrapolation; (2) the approximate non-identifiability of in-context learning; and (3) the non-identifiability of fine-tunability. We review promising research directions focusing on LLM-relevant generalization measures, transferability, and inductive biases.
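To make the core claim concrete, here is a minimal toy sketch (not taken from the paper) of how two autoregressive models can have zero KL divergence on the training distribution, and therefore identical test loss, while extrapolating differently on prompts the training data never produces. The functions `model_p`, `model_q`, and the `train_prompts` distribution are illustrative assumptions, not the authors' construction.

```python
# Toy illustration of non-identifiability: two next-token models that agree
# exactly on the training distribution's support but differ off-support.
# All probabilities below are hypothetical values chosen for illustration.
import math

# Training distribution over prompts: only "aa" and "ab" ever occur.
train_prompts = {"aa": 0.5, "ab": 0.5}

def model_p(prompt: str) -> dict:
    """Next-token distribution of model P."""
    if prompt in ("aa", "ab"):
        return {"a": 0.9, "b": 0.1}
    return {"a": 1.0, "b": 0.0}  # off-support: always continues with "a"

def model_q(prompt: str) -> dict:
    """Model Q: identical to P on the training support, different elsewhere."""
    if prompt in ("aa", "ab"):
        return {"a": 0.9, "b": 0.1}
    return {"a": 0.0, "b": 1.0}  # off-support: always continues with "b"

def expected_kl(p, q) -> float:
    """KL(P || Q) of the next-token distributions, averaged over training prompts."""
    total = 0.0
    for prompt, weight in train_prompts.items():
        pp, qq = p(prompt), q(prompt)
        total += weight * sum(pp[t] * math.log(pp[t] / qq[t]) for t in pp if pp[t] > 0)
    return total

print(expected_kl(model_p, model_q))   # 0.0 -> equivalent test loss
print(model_p("ba"), model_q("ba"))    # markedly different extrapolation behavior
```

The expected KL over the training distribution is exactly zero, so no held-out likelihood evaluation can distinguish the two models, yet their behavior on the out-of-distribution prompt "ba" diverges completely; this mirrors, in miniature, the zero-shot rule extrapolation case study described in the abstract.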
