The Shape of Addition: Geometric Structures of Arithmetic in Large Language Models
Abstract
Large Language Models (LLMs) are paradoxically fragile at basic arithmetic, suggesting a disconnect between their internal computation and their discrete token outputs. By analyzing residual-stream geometry during multi-operand addition, we identify the Iso-Raw-Sum Trajectory (IRST), a topological manifold on which representations are anchored by digit semantics and modulated by continuous carry fibers. We propose the Noisy Quantization Model, which frames arithmetic errors as topological slippage: internal neural noise pushes a continuous latent carry potential across a quantization threshold, corrupting the emitted digit. This geometric framework also accounts for probe versatility, explaining how lightweight probes can disentangle conflicting latent signals (e.g., ground truth versus hallucination) from a single activation vector. Finally, we validate these insights with a geometric consistency check that detects and corrects such quantization failures at inference time.
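As a toy illustration only (not the paper's implementation), the following minimal Python sketch models the Noisy Quantization Model described above: each column sum is held as a continuous carry potential, an assumed Gaussian noise term perturbs it, and rounding quantizes it into a digit and a carry, so that sufficient noise produces the carry-slip errors the abstract attributes to threshold crossings. The function name and noise model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_quantized_add(operands, noise_std=0.0, base=10):
    """Toy column-wise addition of multi-digit operands (digits given
    least-significant first). The column sum is held as a continuous
    'carry potential'; Gaussian noise can push it across a quantization
    boundary, corrupting both the emitted digit and the outgoing carry."""
    n_cols = max(len(d) for d in operands)
    carry, out = 0, []
    for i in range(n_cols):
        col = sum(d[i] if i < len(d) else 0 for d in operands)
        potential = col + carry + rng.normal(0.0, noise_std)  # latent, continuous
        q = int(round(potential))                             # quantization step
        out.append(q % base)                                  # emitted digit
        carry = q // base                                     # quantized carry
    while carry:                                              # flush remaining carry
        out.append(carry % base)
        carry //= base
    return out

# Noise-free run recovers exact addition: 456 + 789 = 1245.
print(noisy_quantized_add([[6, 5, 4], [9, 8, 7]]))            # -> [5, 4, 2, 1]
# With noise, occasional carry slips appear as digit/carry errors.
print(noisy_quantized_add([[6, 5, 4], [9, 8, 7]], noise_std=0.4))
```

In this sketch, an error occurs exactly when noise moves the potential across a multiple of the base before rounding, which mirrors the abstract's framing of arithmetic failures as threshold crossings rather than missing arithmetic knowledge.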