Poster in Workshop: Text, camera, action! Frontiers in controllable video generation

Fréchet Video Motion Distance: A Metric for Evaluating Motion Consistency in Videos

Jiahe Liu · Youran Qu · Qi Yan · Xiaohui Zeng · Lele Wang · Renjie Liao

Keywords: [ evaluation metric ] [ video tracking ] [ video generation ] [ generative model ]


Abstract:

Significant advancements have been made in video generative models recently. Unlike image generation, video generation presents greater challenges, requiring not only high-quality frames but also temporal consistency across them. Despite this impressive progress, metrics for evaluating the quality of generated videos, especially their temporal and motion consistency, remain underexplored. To bridge this gap, we propose the Fréchet Video Motion Distance (FVMD) metric, which focuses on evaluating motion consistency in video generation. Specifically, we design explicit motion features based on key point tracking, and then measure the similarity between these features using the Fréchet Distance. We conduct sensitivity analysis experiments by injecting noise into real videos to verify the effectiveness of FVMD. Further, we carry out a human study, demonstrating that our metric effectively detects temporal noise and aligns closely with human perception of generated video quality.
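The core comparison step described above can be sketched as follows: fit a Gaussian to each set of per-video motion features and compute the Fréchet Distance between the two Gaussians. This is a minimal illustration of the standard Fréchet (2-Wasserstein) distance between Gaussians, not the authors' implementation; in particular, how the motion features are extracted from key point tracks is assumed away here, and any `(N, D)` feature matrix stands in for them.

```python
import numpy as np
from scipy import linalg


def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_real, feats_gen: (N, D) arrays of per-video feature vectors
    (in FVMD these would be motion features from key point tracking;
    here they are generic placeholders).
    """
    # Fit a Gaussian (mean, covariance) to each feature set.
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)

    # Matrix square root of the covariance product; numerical noise can
    # introduce a tiny imaginary component, which we discard.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real

    # d^2 = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 sigma2))
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

With identical feature sets the distance is (numerically) zero, and shifting one set's mean increases it by roughly the squared shift, which matches the intuition that larger motion-statistics discrepancies yield larger FVMD scores.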
