Unveiling the Visual Counting Bottleneck in Vision-Language Models
Abstract
While Large Vision-Language Models (VLMs) excel at interpolation within their training distribution, they suffer catastrophic failures in systematic generalization, most notably in visual counting beyond the training range. In this work, we investigate this extrapolation bottleneck by deconstructing visual counting into three cognitive stages: object individuation, abstract magnitude representation, and symbolic decoding. Using a controlled environment of synthetic Go game boards, we isolate the specific mechanism of failure. Contrary to the hypothesis that the failure is perceptual, we demonstrate via linear probing that visual backbones maintain robust, linearly separable representations of quantity well into the extrapolation regime. Furthermore, models retain latent magnitude awareness, successfully performing comparative reasoning on quantities they fail to enumerate. We pinpoint the collapse to the symbolic decoding stage, where the model fails to project valid visual magnitudes onto discrete tokens. Our findings support a Fractured Magnitude Hypothesis: VLMs fail to acquire a Universal Number Space, instead learning disjoint, modality-specific statistical manifolds that prevent cross-modal grounding for unseen pairings. We validate our findings on a state-of-the-art foundation model, suggesting that bridging the extrapolation gap requires inductive priors that enforce unified magnitude representations rather than simply scaling training data.
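To make the probing methodology concrete, the following is a minimal sketch of a linear count probe of the kind the abstract describes: a linear readout fit on frozen backbone features for counts in an "interpolation" range, then evaluated strictly on larger, unseen counts. Everything here is an illustrative assumption, not the paper's actual pipeline; in particular, `extract_features` is a synthetic stand-in for real encoder activations, and the count ranges are hypothetical.

```python
# Sketch of a linear probe for quantity, assuming frozen visual-backbone
# features. extract_features and the count ranges are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def extract_features(counts: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen vision encoder: returns one feature vector
    per board. Replace with real backbone activations in practice."""
    dim = 256
    # Synthetic features: a fixed direction carries the count signal,
    # plus isotropic noise.
    direction = rng.standard_normal(dim)
    noise = rng.standard_normal((len(counts), dim))
    return counts[:, None] * direction[None, :] * 0.1 + noise

# Probe-training regime: counts 1-50 (analogue of the training range).
train_counts = rng.integers(1, 51, size=2000)
# Extrapolation regime: counts 51-100, never seen by the probe.
test_counts = rng.integers(51, 101, size=500)

X_train = extract_features(train_counts.astype(float))
X_test = extract_features(test_counts.astype(float))

# If quantity is linearly decodable from the features, the linear
# readout should generalize to the unseen count range.
probe = Ridge(alpha=1.0).fit(X_train, train_counts)
print("extrapolation R^2:", r2_score(test_counts, probe.predict(X_test)))
```

Under this setup, a high extrapolation R^2 indicates that the features encode magnitude linearly beyond the training range, which is the kind of evidence the abstract uses to exonerate the perceptual stage.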