Drop-in Circulant Structural Priors for Transformer Decoding of Cyclic Codes
Abstract
While Transformer-based architectures have substantially advanced neural decoding, existing models often treat codewords as generic sequences, ignoring their inherent algebraic structure. In this paper, we take a step toward bridging classical coding theory and neural decoding by proposing a decoding approach that integrates the algebraic structure of cyclic codes into Transformer-based decoders. Leveraging their cyclic symmetry, we introduce interpretable error-correction patterns and inter-node relationship hypotheses that link the structural characteristics of the codes to the model parameters. Building on these insights, we design a plug-and-play, flexibly deployable decoding method tailored to cyclic codes. Experimental results show that our method reduces the bit error rate (BER) by an order of magnitude on average while cutting the total parameter count by approximately 97%. Additional comparative experiments validate the proposed conjectures and highlight a promising pathway for bridging classical coding theory and modern Transformer-based decoding architectures.
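The "inherent cyclic properties" the abstract exploits can be stated concretely: in a cyclic code, every cyclic shift of a codeword is again a codeword, which is the invariance a circulant structural prior can bake into a decoder's parameters. The following sketch (a hypothetical illustration, not the paper's implementation) checks this closure property for the (7, 4) cyclic Hamming code generated by g(x) = 1 + x + x^3:

```python
def cyclic_codewords(g, n):
    """Enumerate the cyclic code of length n generated by coefficient
    vector g: all multiples of g(x) modulo x^n - 1 over GF(2)."""
    k = n - (len(g) - 1)  # message length
    words = set()
    for m in range(2 ** k):
        msg = [(m >> i) & 1 for i in range(k)]
        # Polynomial product msg(x) * g(x) over GF(2), reduced mod x^n - 1.
        c = [0] * n
        for i, mi in enumerate(msg):
            if mi:
                for j, gj in enumerate(g):
                    c[(i + j) % n] ^= gj
        words.add(tuple(c))
    return words

# (7, 4) cyclic Hamming code, generator g(x) = 1 + x + x^3.
C = cyclic_codewords([1, 1, 0, 1], 7)
# Closure under cyclic shift: rotating any codeword yields another codeword.
assert all((c[-1:] + c[:-1]) in C for c in C)
```

Because every shift of a codeword stays in the code, the syndrome of a shifted error pattern is a deterministic function of the original syndrome; this is the kind of weight-sharing constraint a circulant prior imposes on attention parameters.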