NanoSpec: Accelerating Speculative Decoding using Minimalist In-Context Vocabularies
Zhiyang Chen ⋅ Daliang Xu ⋅ Yinyuan Zhang ⋅ Chenghua Wang ⋅ Mengwei Xu ⋅ Yun Ma
Abstract
The massive vocabulary sizes of large language models, often exceeding 100k tokens, impose a computational bottleneck on the final linear projection layer during speculative decoding. Existing vocabulary pruning solutions rely on static or coarse-grained sub-vocabularies that require large active sizes ($\sim$30k tokens) to maintain draft quality. We propose NanoSpec, a novel training-free approach that breaks this trade-off by dynamically constructing a minimalist, context-aware active vocabulary for each generation step. Leveraging the inherent temporal locality of language generation, NanoSpec achieves high coverage while reducing the average active vocabulary size by over $40\times$ (to $<$3k tokens), without requiring any auxiliary trained parameters. To realize the theoretical benefits of such high sparsity on modern hardware, we introduce a system-algorithm co-design that overcomes the inefficiencies of sparse memory access through asynchronous gathering and GPU-resident state management. As a complementary plug-and-play module, NanoSpec cuts draft inference time by an average of 51.6\%, delivering a $1.12$--$1.32\times$ end-to-end speedup over the state-of-the-art speculative decoding method EAGLE-2 across diverse benchmarks and outperforming complex training-based pruning baselines.
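The core idea described in the abstract can be illustrated with a minimal sketch (hypothetical function names, PyTorch-style code, not the paper's actual implementation): exploiting temporal locality, the draft model's output projection is restricted to a small active vocabulary gathered from recently seen context tokens plus a small static base set, so the final matmul touches only a few thousand rows of the LM head instead of 100k+.

```python
import torch

def build_active_vocab(context_ids: torch.Tensor,
                       base_ids: torch.Tensor) -> torch.Tensor:
    """Union of recently generated/prompt token ids and a small static base set
    (e.g. high-frequency tokens). Both inputs are 1-D LongTensors of token ids.
    The names and the exact selection policy here are illustrative assumptions."""
    return torch.unique(torch.cat([context_ids.flatten(), base_ids]))

def draft_logits_on_active_vocab(hidden: torch.Tensor,
                                 lm_head_weight: torch.Tensor,
                                 active_ids: torch.Tensor) -> torch.Tensor:
    """Project draft hidden states onto the active sub-vocabulary only.

    hidden:          (batch, hidden_dim) draft-model hidden states
    lm_head_weight:  (vocab_size, hidden_dim) full output projection
    active_ids:      (k,) active-vocabulary token ids, with k << vocab_size
    Returns logits of shape (batch, k); column j corresponds to active_ids[j].
    """
    active_weight = lm_head_weight.index_select(0, active_ids)  # gather k rows
    return hidden @ active_weight.T
```

In this sketch the row gather is performed synchronously; the paper's co-design additionally hides that sparse-access cost via asynchronous gathering and GPU-resident state, which this illustration does not attempt to reproduce.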