LynX: Token Interface Alignment for Video+X LLMs
Abstract
This study identifies an intriguing phenomenon in Video LLMs: rather than merely translating frames into textual embeddings, Video LLMs establish a continuous manifold, which we term the token interface, that allows visual tokens to operate as standalone entities within the architecture. Exploiting this discovery, we propose LynX, a scalable framework that integrates novel modalities by repurposing this internalized interface. Departing from conventional paradigms that require heavy modality-specific encoders or paired supervision, LynX employs a lightweight auxiliary pathway in parallel with the frozen vision encoder. By aligning both attention responses and statistical distributions using unimodal data alone, our method synchronizes new sensory inputs with the model's intrinsic video priors. Crucially, the distributional alignment ensures manifold compatibility while preserving the integrity of the underlying Video LLM. Extensive benchmarks demonstrate that LynX achieves state-of-the-art performance and efficiency across audio-visual QA, 3D reasoning, high-frame-rate video, and multi-view video understanding. The code is available at https://anonymous.4open.science/r/lynx-DDC8/.
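The abstract does not specify how the distributional alignment is computed; as one minimal, hypothetical sketch (not the paper's actual loss), new-modality tokens could be matched to the frozen video-token statistics by penalizing per-dimension mean and variance gaps:

```python
import numpy as np

def moment_matching_loss(new_tokens: np.ndarray, video_tokens: np.ndarray) -> float:
    """Illustrative distributional-alignment loss (assumed form, not from the paper).

    new_tokens:   (N, d) embeddings from the hypothetical auxiliary pathway.
    video_tokens: (M, d) embeddings from the frozen vision encoder, treated as
                  the fixed reference distribution on the token interface.
    Returns a scalar penalizing per-dimension mean and variance mismatch.
    """
    mu_new, mu_vid = new_tokens.mean(axis=0), video_tokens.mean(axis=0)
    var_new, var_vid = new_tokens.var(axis=0), video_tokens.var(axis=0)
    # First- and second-moment gaps, averaged over embedding dimensions.
    return float(((mu_new - mu_vid) ** 2).mean() + ((var_new - var_vid) ** 2).mean())
```

Under this sketch, tokens drawn from the same distribution as the video reference incur near-zero loss, while shifted or rescaled tokens are penalized, which is the intuition behind keeping new modalities on the video manifold.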