Chiral Symmetry Breaking in Transformers: A Group-Equivariant Framework for Solving the Reversal Curse via Adjoint Manifold Mappings
Hanji Du
Abstract
The "reversal curse" exposes a critical asymmetry in autoregressive models, where causal masking collapses bidirectional logic into non-invertible latent subspaces. This work characterizes that failure as a structural breaking of chiral symmetry within the representation manifold. We bridge this gap with the **Chiral Transformer**, a framework that restores bidirectional consistency by enforcing an adjoint mapping operator $\mathcal{T}$ via contrastive regularization. Unlike standard generative approaches, our architecture uses **Adjoint-Induced Retrieval (AIR)** to perform logical inversion directly in the embedding space, bypassing the contextual biases of the decoder. Empirical validation on synthetic benchmarks confirms this geometric intuition: AIR elevates zero-shot reversal accuracy from approximately 0% to **65.07%**. These findings suggest that logical reversibility is a topological property attainable through explicit algebraic constraints rather than through parameter scaling alone.
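The core idea of AIR can be illustrated with a toy sketch. The code below is not the paper's implementation: it substitutes a closed-form orthogonal Procrustes fit for the contrastive regularization described above, and all embeddings are synthetic. It shows the geometric mechanism only: learn a linear operator $\mathcal{T}$ aligning forward-fact embeddings with reverse-fact embeddings, then answer reversed queries by nearest-neighbor retrieval in embedding space instead of generation. Note that for an orthogonal $\mathcal{T}$, the adjoint $\mathcal{T}^\top$ is also its inverse, so the map is reversible by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 50  # embedding dim, number of fact pairs (toy values)

# Hypothetical forward-fact embeddings (e.g. "A is the parent of B")
E_fwd = rng.normal(size=(n, d))
# Simulate reverse-fact embeddings as an unknown orthogonal transform plus noise
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
E_rev = E_fwd @ Q + 0.01 * rng.normal(size=(n, d))

# Fit T by orthogonal Procrustes: argmin_T ||E_fwd T - E_rev||_F, T orthogonal.
# (Stands in for the contrastive training of the adjoint operator.)
U, _, Vt = np.linalg.svd(E_fwd.T @ E_rev)
T = U @ Vt

def retrieve(query_fwd, bank):
    """Adjoint-induced retrieval: map the forward embedding through T,
    then return the index of the nearest reverse-fact embedding (cosine)."""
    z = query_fwd @ T
    sims = (bank @ z) / (np.linalg.norm(bank, axis=1) * np.linalg.norm(z))
    return int(np.argmax(sims))

# Zero-shot reversal accuracy on the toy fact bank
acc = np.mean([retrieve(E_fwd[i], E_rev) == i for i in range(n)])
```

Because retrieval happens in the embedding space, the decoder's autoregressive biases never enter the reversed direction; that is the property the abstract attributes to AIR.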