

Spotlight Poster

Layer by Layer: Uncovering Hidden Representations in Language Models

Oscar Skean · Md Rifat Arefin · Dan Zhao · Niket Patel · Jalal Naghiyev · Yann LeCun · Ravid Shwartz-Ziv

East Exhibition Hall A-B #E-2607
[ Project Page ]
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT
 
Oral presentation: Oral 1E Theory and Phenomenology
Tue 15 Jul 10 a.m. PDT — 11 a.m. PDT

Abstract:

From extracting features to generating text, the outputs of large language models (LLMs) typically rely on their final layers, following the conventional wisdom that earlier layers capture only low-level cues. However, our analysis shows that intermediate layers can encode even richer representations, often improving performance on a wide range of downstream tasks. To explain and quantify these hidden-layer properties, we propose a unified framework of representation quality metrics based on information theory, geometry, and invariance to input perturbations. Our framework highlights how each model layer balances information compression and signal preservation, revealing why mid-depth embeddings can exceed the last layer’s performance. Through extensive experiments on 32 text-embedding tasks across various architectures (transformers, state-space models) and domains (language, vision), we demonstrate that intermediate layers consistently provide stronger features, challenging the standard reliance on final-layer embeddings and opening new directions for using mid-layer representations to build more robust and accurate systems.
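The geometry-flavored side of the framework can be illustrated with a simple proxy: the participation ratio of per-dimension variances, which estimates how many dimensions a layer's embeddings effectively use. This is an illustrative stand-in chosen for this summary, not the paper's exact metrics, which are more involved:

```python
# Illustrative sketch: a simple geometric proxy for per-layer
# representation quality. The participation ratio of feature variances
# estimates the effective dimensionality of a layer's embeddings.
# (A hypothetical stand-in for the paper's information-theoretic and
# geometric metrics.)

def participation_ratio(embeddings):
    """embeddings: list of equal-length feature vectors (one per input).

    Returns (sum of variances)^2 / (sum of squared variances), which lies
    in [1, d]: near d when variance is spread evenly across dimensions,
    near 1 when a single dimension dominates.
    """
    n = len(embeddings)
    d = len(embeddings[0])
    means = [sum(e[j] for e in embeddings) / n for j in range(d)]
    variances = [
        sum((e[j] - means[j]) ** 2 for e in embeddings) / n for j in range(d)
    ]
    total = sum(variances)
    return total ** 2 / sum(v ** 2 for v in variances)

# Toy data: embeddings whose variance is spread across all three
# dimensions score higher than embeddings collapsed onto one axis.
spread = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
collapsed = [[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0], [4.0, 0.0, 0.0]]
print(participation_ratio(spread))     # → 3.0 (variance spread evenly)
print(participation_ratio(collapsed))  # → 1.0 (one dominant axis)
```

Computing such a score for every layer's embeddings over a batch of inputs yields a depth profile that can be compared against downstream task performance.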

Lay Summary:

Large language models (LLMs) are made up of many layers, stacked one after another. Traditionally, it's believed that the final layers are the most important because they produce the output, while earlier layers are thought to handle only simple, low-level features. However, this study finds that the middle layers often contain richer and more useful information than the final ones. We developed a new framework to measure the quality of information in each layer, using tools from information theory and geometry. After testing many models and tasks, we discovered that intermediate layers consistently provide better features for understanding text. This challenges the common assumption that only the final layers matter and suggests that tapping into middle layers could lead to more accurate and reliable AI systems.
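In practice, "tapping into middle layers" can be as simple as scoring every layer's embeddings on a small labeled validation set and keeping the best-scoring layer. A minimal sketch, using a nearest-centroid classifier and hypothetical toy data standing in for real per-layer hidden states:

```python
# Minimal sketch of layer selection: score each layer's embeddings with a
# nearest-centroid classifier on labeled data and keep the best layer.
# All data below is a hypothetical toy stand-in for real hidden states.

def nearest_centroid_accuracy(features, labels):
    """Fit one centroid per class, then measure how often each point is
    closest (in squared Euclidean distance) to its own class centroid."""
    classes = sorted(set(labels))
    centroids = {}
    for c in classes:
        members = [f for f, l in zip(features, labels) if l == c]
        centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    correct = 0
    for f, l in zip(features, labels):
        pred = min(
            classes,
            key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centroids[c])),
        )
        correct += pred == l
    return correct / len(features)

def best_layer(per_layer_features, labels):
    """per_layer_features: {layer_index: list of feature vectors}.
    Returns the highest-scoring layer index and all per-layer scores."""
    scores = {
        layer: nearest_centroid_accuracy(feats, labels)
        for layer, feats in per_layer_features.items()
    }
    return max(scores, key=scores.get), scores

# Toy stand-in: layer 1 separates the two classes, layers 0 and 2 do not.
labels = [0, 0, 1, 1]
per_layer = {
    0: [[0.0], [1.0], [0.0], [1.0]],   # classes interleaved
    1: [[0.0], [0.1], [5.0], [5.1]],   # cleanly separated
    2: [[1.0], [0.0], [1.0], [0.0]],   # classes interleaved
}
layer, scores = best_layer(per_layer, labels)
print(layer, scores[layer])  # → 1 1.0
```

A real pipeline would replace the toy vectors with the model's actual hidden states per layer (e.g., mean-pooled token embeddings), but the selection logic stays the same.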
