Self-Guidance: Enhancing Neural Codecs via Decoder Manifold Alignment
Abstract
Neural speech codecs based on Vector-Quantized VAEs (VQ-VAEs) serve as the core audio tokenizers for speech LLMs, yet their reconstruction fidelity is bottlenecked by quantization error. Rather than modifying the quantizer or increasing model capacity, common remedies that complicate downstream language modeling, we introduce self-guidance, a simple yet general training principle that makes the decoder robust to quantization error. The core idea is to align the decoder's internal feature manifolds when it processes the quantized tokens and when it processes their original continuous embeddings, using a lightweight feature-mapping loss. This adds minimal training overhead and requires no inference-time changes. Applied to XCodec2, self-guidance improves all reconstruction metrics and achieves state-of-the-art low-bitrate performance. It generalizes across codebook sizes, quantizer types, and network architectures, demonstrating its value as a universal codec enhancer. Notably, it enables a 4× codebook reduction without loss of fidelity; downstream TTS experiments show that the resulting simpler token modeling space significantly improves LLM-based synthesis. Self-guidance thus offers an efficient, broadly applicable route to high-fidelity neural audio coding.
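To make the principle concrete, the sketch below shows one plausible form of the self-guidance objective in PyTorch. It is an illustrative reconstruction under stated assumptions, not the paper's reference implementation: the `return_features` flag on the decoder, the choice of L1 distance, the stop-gradient on the continuous branch, and the loss weight `lambda_sg` are all hypothetical details not specified in the abstract.

```python
import torch
import torch.nn.functional as F

def self_guidance_loss(decoder, z_cont, z_quant):
    """Feature-mapping loss aligning the decoder's internal features
    on quantized inputs with those on the continuous embeddings.

    Assumes a hypothetical decoder API that returns a list of
    intermediate feature maps alongside the waveform output.
    """
    # Guidance branch: run the decoder on the pre-quantization
    # embeddings. Detached, so it only supplies alignment targets
    # (whether the paper detaches this branch is an assumption).
    with torch.no_grad():
        _, feats_cont = decoder(z_cont, return_features=True)

    # Main branch: run the decoder on the quantized embeddings,
    # exactly as it would operate at inference time.
    audio_hat, feats_quant = decoder(z_quant, return_features=True)

    # Average per-layer L1 distance between the two feature sets
    # (the distance metric is an assumption, not from the paper).
    sg = sum(F.l1_loss(fq, fc) for fq, fc in zip(feats_quant, feats_cont))
    return audio_hat, sg / len(feats_cont)

# Hypothetical training step: the self-guidance term is added to the
# codec's usual reconstruction losses with a small weight lambda_sg.
# audio_hat, l_sg = self_guidance_loss(decoder, z_cont, z_quant)
# loss = reconstruction_loss(audio_hat, audio) + lambda_sg * l_sg
```

Because only the quantized branch runs at inference, a scheme of this shape leaves the deployed codec unchanged, consistent with the abstract's claim of no inference-time modifications.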