Beyond Temperature: Hyperfitting as a Late-Stage Geometric Expansion
Meimingwei Li ⋅ Yuanhao Ding ⋅ Esteban Garces Arias ⋅ Christian Heumann
Abstract
Recent work has identified a counterintuitive phenomenon termed “Hyperfitting”, where fine-tuning Large Language Models (LLMs) to near-zero training loss on small datasets surprisingly enhances open-ended generation quality and mitigates repetition under greedy decoding. While effective, the underlying mechanism remains poorly understood, and the extremely low-entropy output distributions suggest a potential equivalence to simple temperature scaling. In this work, we demonstrate that the phenomenon is fundamentally distinct from distribution sharpening: entropy-matched control experiments reveal that temperature scaling fails to replicate the diversity gains of hyperfitting. Furthermore, we falsify the hypothesis of static vocabulary reweighting, showing through ablation studies that hyperfitting relies on a dynamic, context-dependent rank reordering mechanism. Layer-wise analysis localizes this effect to a “Terminal Expansion” in the final transformer block, where a substantial geometric expansion of the feature space ($\Delta \mathrm{Dim} \approx +80.8$) facilitates the promotion of deep-tail tokens. Finally, we introduce \textbf{Late-Stage LoRA}, a targeted fine-tuning strategy that updates only the final 5 layers, achieving robust generation with minimal parameter updates.
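To make the last point concrete, the sketch below shows one way a late-layer-only LoRA setup could be configured with the Hugging Face PEFT library. This is a minimal illustration under stated assumptions, not the authors' released code: the model name, rank, and target modules are placeholders, and only the restriction of adapters to the final 5 transformer blocks reflects the strategy described in the abstract.

```python
# Minimal sketch of a "Late-Stage LoRA" configuration: LoRA adapters are
# attached only to the final 5 transformer blocks of a decoder-only LM.
# Model name, rank, and target modules are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; any decoder-only LM works
model = AutoModelForCausalLM.from_pretrained(model_name)

num_layers = model.config.num_hidden_layers            # e.g. 32 for Llama-2-7B
late_layers = list(range(num_layers - 5, num_layers))  # indices of the last 5 blocks

lora_config = LoraConfig(
    r=16,                                  # illustrative rank, not from the paper
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumption)
    layers_to_transform=late_layers,       # restrict adapters to the late layers
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()    # verify only late-layer adapters train
```

The key design choice is `layers_to_transform`, which limits adapter insertion to the selected block indices so that all earlier layers remain frozen and untouched.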