Proact-VL: A Proactive VideoLLM for Real-Time AI Companions
Abstract
Proactive, real-time interaction is essential for human-like AI companions, yet realizing it poses three key challenges: (1) achieving low-latency inference under continuous streaming inputs, (2) autonomously deciding when to respond, and (3) controlling both the quality and quantity of generated content to meet real-time constraints. In this work, we instantiate AI companions through two gaming roles, commentator and guide, chosen for their suitability for automatic evaluation. We introduce the \textbf{Live Gaming Benchmark}, a large-scale dataset covering three representative scenarios (solo commentary, co-commentary, and user guidance), and present \textbf{Proact-VL}, a general framework that shapes multimodal language models into proactive, real-time interactive agents capable of human-like environment perception and interaction. Extensive experiments show that Proact-VL achieves lower response latency and higher response quality while maintaining strong video understanding capabilities, demonstrating its practicality for real-time interactive applications. Code is available at https://anonymous.4open.science/r/Proact-VL-8699/.