From Content to Knowledge: Lightning-Fast Long-Video Understanding with Neural Knowledge Representations
Abstract
We propose a new paradigm for long-video understanding that treats a long video as a Neural Knowledge Representation (NKR). An NKR encodes video content neither as a stream of tokens nor as a pre-organized database, but as a small, dedicated set of network weights attached to a Vision-Language Model (VLM) backbone. These weights are optimized to encapsulate the video's semantic content via a novel Agentic Knowledge Distillation (AKD) process, in which an agent automatically synthesizes dense descriptions and question-answer pairs to distill the video's knowledge into the NKR. While AKD serves as a comprehensive, one-time encoding phase, the resulting NKR turns the video into a portable, reusable asset. At inference, the lightweight NKR is mounted onto the frozen VLM, enabling direct, query-based understanding without reloading or re-encoding the original video. This approach decouples inference cost from video length, offering high amortized efficiency for multi-turn video understanding. Experiments on the LVBench benchmark show that our method achieves performance comparable to state-of-the-art approaches while reducing end-to-end latency by more than two orders of magnitude, opening new possibilities for interactive long-video understanding.
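To make the mount-and-query step concrete, the sketch below assumes the NKR is parameterized as a LoRA-style low-rank residual adapter on a frozen backbone. The class names, adapter rank, checkpoint path, and the low-rank parameterization itself are illustrative assumptions, not the paper's confirmed design.

```python
import torch
import torch.nn as nn

class NKRAdapter(nn.Module):
    """A per-video Neural Knowledge Representation, sketched here as a
    small low-rank residual adapter (an assumed parameterization)."""
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Frozen-backbone hidden states plus the video-specific correction.
        return hidden + self.up(self.down(hidden))

# Mounting: freeze the backbone, swap in the video's NKR weights, and
# answer queries directly -- the raw video is never touched at inference.
backbone = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
for p in backbone.parameters():
    p.requires_grad_(False)  # backbone stays frozen across all videos

nkr = NKRAdapter(dim=512)
# In practice the per-video weights would come from a small checkpoint,
# e.g. nkr.load_state_dict(torch.load("video_123.nkr.pt"))  # hypothetical path

query_tokens = torch.randn(1, 16, 512)   # stand-in for an encoded question
answer_states = nkr(backbone(query_tokens))
```

Because only the adapter's state dict changes between videos, switching to a new video amounts to loading a few megabytes of weights rather than re-encoding hours of frames, which is the source of the amortized-efficiency claim above.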