DecoderTCR: Compositional Pretraining and Entropy-Guided Decoding for TCR-pMHC Interactions
Abstract
Modeling recognition between T-cell receptors (TCRs) and peptide-MHC (pMHC) complexes is a fundamental challenge in computational immunology, constrained by sparse paired interaction data relative to abundant unpaired sequences. We introduce DecoderTCR, a masked language modeling framework that addresses this with two contributions: (1) a compositional continual pretraining curriculum that learns component representations from marginal data before refining cross-chain dependencies from limited pairs, and (2) Iterative Entropy-Guided Refinement (IEGR), a non-autoregressive decoding algorithm that resolves high-confidence positions first to provide context for uncertain regions. On held-out benchmarks, DecoderTCR achieves 0.96 AUROC for zero-shot pMHC binding prediction and 0.76 AUROC for epitope-specific TCR recognition, approaching supervised baselines without epitope-specific training. Learned representations recover structural contacts without coordinate supervision, and generated sequences exhibit realistic recombination statistics. Experimental validation reveals a prediction-generation gap: strong discrimination does not yet yield reliable generation, highlighting an open challenge for the field.
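The entropy-guided decoding loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_logits` stands in for the trained masked language model, `-1` marks masked positions, and the commit schedule (one position per step) is an assumption for clarity.

```python
import numpy as np

def iegr_decode(predict_logits, length, positions_per_step=1):
    """Sketch of Iterative Entropy-Guided Refinement (IEGR).

    Repeatedly predicts logits for all still-masked positions,
    commits the lowest-entropy (highest-confidence) positions,
    then re-predicts so committed tokens condition the rest.
    `predict_logits(tokens)` is a stand-in for the trained model;
    tokens equal to -1 denote masked positions.
    """
    tokens = np.full(length, -1, dtype=int)
    while (tokens == -1).any():
        masked = np.flatnonzero(tokens == -1)
        logits = predict_logits(tokens)            # (length, vocab)
        z = logits - logits.max(axis=1, keepdims=True)
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)          # per-position softmax
        ent = -(p * np.log(p + 1e-12)).sum(axis=1)  # per-position entropy
        # Commit the most confident (lowest-entropy) masked positions first.
        commit = masked[np.argsort(ent[masked])][:positions_per_step]
        tokens[commit] = p[commit].argmax(axis=1)
    return tokens

def toy_model(tokens):
    """Hypothetical stand-in model: fixed noisy logits peaked
    toward token (i mod 4) at each position i."""
    length, vocab = len(tokens), 4
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(length, vocab))
    logits[np.arange(length), np.arange(length) % vocab] += 3.0
    return logits
```

The key design point this sketch shows is the ordering: unlike left-to-right autoregressive decoding, positions are filled in confidence order, so early high-certainty commitments become conditioning context for the uncertain regions resolved later.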