LOVE: Benchmarking and Evaluating Text-to-Video Generation and Video-to-Text Interpretation
Abstract
Recent advancements in large multimodal models (LMMs) have driven substantial progress in both text-to-video (T2V) generation and video-to-text (V2T) interpretation tasks. However, current AI-generated videos (AIGVs) still exhibit limitations in terms of perceptual quality and text-video alignment. To this end, we present AIGVE-60K, a comprehensive dataset and benchmark for AI-Generated Video Evaluation, which features (i) comprehensive tasks, encompassing 3,050 extensive prompts across 20 fine-grained task dimensions, (ii) the largest-scale human annotations to date, including 120K mean-opinion scores (MOSs) and 60K question-answering (QA) pairs annotated on 58,500 videos generated from 30 T2V models, and (iii) bidirectional benchmarking and evaluation of both T2V generation and V2T interpretation capabilities. Based on AIGVE-60K, we propose LOVE, an LMM-based metric that evaluates AIGVs along multiple dimensions, including perceptual preference, text-video correspondence, and task-specific accuracy. Building upon LOVE, we further introduce LOVE-Reward to optimize T2V models through reinforcement learning, effectively enhancing both the perceptual quality and text-video correspondence of generated videos. Comprehensive experiments demonstrate that LOVE achieves state-of-the-art performance and generalizes effectively to various AIGV benchmarks, and that LOVE-Reward significantly improves video generation quality. These findings highlight the significance of the AIGVE-60K dataset and the effectiveness of our proposed methods. The dataset and code will be released upon publication.