Symbol-Equivariant Recurrent Reasoning Models
Richard Freinschlag ⋅ Timo Bertram ⋅ Erich Kobler ⋅ Andreas Mayr ⋅ Günter Klambauer
Abstract
Reasoning problems such as Sudoku and ARC-AGI remain challenging for neural networks. Recurrent Reasoning Models (RRMs), including Hierarchical Reasoning Models (HRM) and Tiny Recursive Models (TRM), offer a compact alternative to large language models, but currently handle symbol symmetries only implicitly via costly data augmentation. We introduce symbol-equivariant recurrent reasoning models (SE-RRMs), which enforce permutation equivariance at the architectural level through symbol-equivariant layers, guaranteeing identical solutions under symbol or color permutations. SE-RRMs outperform prior RRMs on 9$\times$9 Sudoku and, although trained only on 9$\times$9 puzzles, generalize to smaller 4$\times$4 and larger 16$\times$16 and 25$\times$25 instances, to which existing RRMs cannot extrapolate. On ARC-AGI-1 and ARC-AGI-2, SE-RRMs achieve competitive performance with substantially less data augmentation, demonstrating that explicitly encoding symmetry improves the robustness and scalability of neural reasoning.
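The abstract's central property, that a symbol-equivariant layer commutes with any permutation of the symbol channels, can be illustrated with a minimal sketch. The layer below is a hypothetical Deep-Sets-style construction (a shared per-channel weight plus a channel-mean mixing term), not the paper's actual architecture; the names and weights are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def symbol_equivariant_layer(x, a=1.5, b=-0.5):
    """Toy symbol-equivariant layer (hypothetical, not the paper's).

    x: array of shape (cells, symbols). Each symbol channel is treated
    identically: a shared scalar weight plus a term depending only on the
    channel mean, which is invariant to channel permutations.
    """
    return a * x + b * x.mean(axis=1, keepdims=True)

# e.g. a 9x9 Sudoku grid flattened to 81 cells with 9 symbol channels
x = rng.normal(size=(81, 9))
perm = rng.permutation(9)  # an arbitrary relabeling of the 9 symbols

# Equivariance check: permuting symbols before or after the layer
# yields the same result, so predictions relabel consistently.
out_then_perm = symbol_equivariant_layer(x)[:, perm]
perm_then_out = symbol_equivariant_layer(x[:, perm])
assert np.allclose(out_then_perm, perm_then_out)
```

Stacking layers of this form keeps the whole network equivariant, which is what guarantees identical solutions (up to relabeling) under symbol or color permutations without data augmentation.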