Annotations Mitigate Post-Training Mode Collapse
Abstract
Post-training (via supervised fine-tuning) improves instruction-following, but often induces semantic mode collapse by biasing models toward low-entropy fine-tuning data at the expense of the high-entropy pre-training distribution. Crucially, we find this trade-off worsens with scale. To close this semantic diversity gap, we propose annotation-anchored training, a principled method that enables models to adopt the preference-following behaviors of post-training without sacrificing the inherent diversity of pre-training. Our approach is simple: we pre-train on documents paired with semantic annotations, inducing a rich annotation distribution that reflects the full breadth of the pre-training data, and we preserve this distribution during post-training. At inference time, we sample diverse annotations and use them as anchors to guide generation, effectively transferring pre-training's semantic richness into post-trained models. We find that models trained with annotation anchoring can suffer 6× less diversity collapse than models trained with standard SFT, and that this advantage grows with scale.