LLMs as Noisy Channels: A Shannon Perspective on Model Capacity and Scaling Laws
Xu Ouyang ⋅ Deyi Liu ⋅ Yuhang Cai ⋅ Jing Liu ⋅ Yuan Yang ⋅ Chen Zheng ⋅ Thomas Hartvigsen ⋅ Yiyuan Ma
Abstract
Existing scaling laws for Large Language Models (LLMs), predominantly monotonic power laws, have successfully guided model development but fail to explain emerging non-monotonic phenomena such as catastrophic overtraining and quantization-induced degradation, where performance deteriorates despite increased compute. We propose the Shannon Scaling Law, a unified theoretical framework that models LLM training as information transmission over a noisy channel, grounded in the Shannon–Hartley theorem. By mapping model parameters to channel bandwidth and training tokens to signal power, our formulation explicitly captures the interaction between learning signal and intrinsic noise. This perspective reveals a fundamental Shannon capacity for LLMs: scaling model size or data without preserving a sufficient signal-to-noise ratio (SNR) inevitably amplifies noise, inducing a transition from monotonic improvement to U-shaped performance degradation. We validate our theory through extensive experiments on the Pythia and OLMo2 model suites under diverse perturbations, including Gaussian noise, quantization, and supervised fine-tuning on math, QA, and code tasks. The Shannon Scaling Law consistently outperforms classical scaling laws and recent perturbation-aware laws, achieving strong $R^2$ scores and accurately capturing loss basins missed by prior approaches. Our results suggest that SNR-aware scaling is essential for robust and efficient model growth, providing a principled foundation for future scaling strategies.
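For context, the classical Shannon–Hartley theorem the abstract invokes gives the capacity of a bandlimited Gaussian channel, and the abstract's mapping (parameters as bandwidth, tokens as signal power) suggests an LLM analogue. The sketch below is illustrative only: the symbols $N_p$, $D$, $\sigma$, $f$, and $\mathrm{SNR}$ are our notation for the quantities the abstract names, not the paper's actual formulation.
\[
  C \;=\; B \log_2\!\left(1 + \frac{S}{N}\right)
  \qquad\longrightarrow\qquad
  C_{\mathrm{LLM}} \;\approx\; f(N_p)\,\log_2\!\bigl(1 + \mathrm{SNR}(D, \sigma)\bigr),
\]
where $f(N_p)$ plays the role of bandwidth as a function of parameter count $N_p$, and $\mathrm{SNR}(D, \sigma)$ is the signal-to-noise ratio induced by $D$ training tokens against an intrinsic noise level $\sigma$. Under such a form, growing $N_p$ or $D$ while $\mathrm{SNR}$ decays would yield the non-monotonic, U-shaped degradation the abstract describes, rather than unbounded power-law improvement.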