HInT: Hypergraph Infusion at the Structural Layers Improves Table Understanding
Abstract
Decoder-only large language models (LLMs) struggle with table reasoning because tables must be serialized, obscuring row- and column-level structure. Prior graph and hypergraph approaches encode structure with an external encoder, but their gains are often inconsistent under autoregressive decoding. We analyze how tabular structure is represented inside decoder-only LLMs and find that row and column relations concentrate in a small subset of layers and attention heads, which we term structural layers. Based on this observation, we propose HInT, which injects hypergraph-derived structural features directly into these structural layers. HInT constructs a table hypergraph over cells and headers, performs lightweight message passing, and fuses the resulting features with token hidden states via token-level gated fusion, while preserving standard autoregressive computation. Experiments across diverse table reasoning tasks show consistent improvements over text-only baselines and prior (hyper)graph-based methods.
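As a rough illustration of the token-level gated fusion mentioned above, the sketch below shows one common parameterization: a per-token, per-dimension sigmoid gate computed from the concatenated hidden state and structural feature, applied as a residual update. The specific gate design (single linear projection), the dimensions, and all variable names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h, s, W, b):
    """Fuse structural features s into token hidden states h.

    h: (T, d) token hidden states at a structural layer
    s: (T, d) hypergraph-derived features aligned to the same tokens
    W: (2d, d) gate projection weights (assumed parameterization)
    b: (d,) gate bias
    Returns fused hidden states of shape (T, d).
    """
    # Per-token, per-dimension gate in (0, 1), conditioned on both inputs.
    g = sigmoid(np.concatenate([h, s], axis=-1) @ W + b)
    # Residual-style update: the original hidden state is preserved, so the
    # standard autoregressive computation path is left intact.
    return h + g * s

# Toy example with random tensors (shapes only; not trained weights).
rng = np.random.default_rng(0)
T, d = 4, 8
h = rng.standard_normal((T, d))
s = rng.standard_normal((T, d))
W = rng.standard_normal((2 * d, d)) * 0.1
b = np.zeros(d)
fused = gated_fusion(h, s, W, b)
print(fused.shape)
```

Because the gate lies strictly in (0, 1), the update per dimension never exceeds the magnitude of the structural feature itself, which keeps the fusion a bounded perturbation of the decoder's hidden states.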