The Bridge-Garden Dilemma in LLM Distillation: Why Mixing Hard and Soft Labels Works
Guanghui Wang ⋅ Kaiwen Kacuila ⋅ Zhiyong Yang ⋅ Zitai Wang ⋅ Jin-Wen Wu ⋅ Longtao Huang ⋅ Qianqian Xu ⋅ Qingming Huang
Abstract
Knowledge distillation (KD) transfers knowledge from a large teacher model to a smaller student. In language modeling, the student is trained either on tokens sampled from the teacher (\textbf{hard labels}) or on the teacher’s full next-token distribution (\textbf{soft labels}). Although soft labels appear strictly richer, we find that mixing hard and soft labels consistently yields better results. Crucially, we show that this gain cannot be explained by closer teacher matching during training. Instead, it comes from reduced exposure bias---the mismatch between training and inference distributions. To explain this phenomenon, we introduce the Bridge--Garden Decomposition theory, which categorizes generation steps into two types: \textit{Bridges}, where the next token must be \textit{exact}, and \textit{Gardens}, where it can be \textit{flexible}. We show that hard-only KD excels in Bridges by avoiding risky deviations, while soft-only KD preserves diversity in Gardens. A hybrid strategy handles both cases and, as a result, reduces exposure bias across the sequence. Guided by this theory, we develop a family of Bridge--Garden hybrid supervision methods that adaptively balance hard and soft labels. Across seven teacher--student pairs (including Qwen, Llama, Gemma, and DeepSeek) and benchmarks in reasoning and coding, our approach outperforms divergence-based and on-policy KD baselines while reducing training cost by \textbf{9.7$\times$}, enabling efficient model compression.
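As a concrete illustration (not the paper's exact formulation), a hybrid objective of this kind can be written as a convex combination of a hard-label cross-entropy term on teacher-sampled tokens and a soft-label KL term against the teacher's full next-token distribution. The PyTorch sketch below uses hypothetical names and a fixed mixing weight \texttt{alpha}; the Bridge--Garden methods described in the abstract instead choose the balance adaptively per generation step.

\begin{verbatim}
# Minimal sketch of a hybrid hard/soft-label distillation loss.
# All names (student_logits, teacher_logits, teacher_tokens, alpha) are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def hybrid_kd_loss(student_logits, teacher_logits, teacher_tokens, alpha=0.5):
    """student_logits, teacher_logits: (batch, seq_len, vocab) tensors.
    teacher_tokens: (batch, seq_len) tokens sampled from the teacher (hard labels).
    alpha: fixed weight on the hard-label term (adaptive in the paper's methods)."""
    vocab = student_logits.size(-1)
    # Hard-label term: cross-entropy on tokens sampled from the teacher.
    hard = F.cross_entropy(
        student_logits.reshape(-1, vocab),
        teacher_tokens.reshape(-1),
    )
    # Soft-label term: KL divergence to the teacher's full next-token distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.log_softmax(teacher_logits, dim=-1),
        log_target=True,
        reduction="batchmean",
    )
    return alpha * hard + (1.0 - alpha) * soft
\end{verbatim}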