Poster in Workshop: Data-centric Machine Learning Research (DMLR): Datasets for Foundation Models
VideoPhy: Evaluating Physical Commonsense In Video Generation
Hritik Bansal · Zongyu Lin · Tianyi Xie · Zeshun Zong · Chenfanfu Jiang · Yizhou Sun · Kai-Wei Chang · Aditya Grover
Recent advances in internet-scale video data pretraining have led to the development of text-to-video generative models that can create high-quality videos across a broad range of visual concepts and styles. Due to their ability to synthesize realistic motions and render complex objects, these generative models have the potential to become general-purpose simulators of the physical world. However, it is unclear how far existing text-to-video generative models are from this goal. To this end, we present VideoPhy, a benchmark designed to assess whether generated videos follow physical laws (e.g., conservation of mass) for real-world activities (e.g., pouring water into a glass). Specifically, we curate a list of 688 captions that involve interactions between various material types in the physical world (e.g., solid-solid, solid-fluid, fluid-fluid). We then generate videos conditioned on these captions from diverse state-of-the-art text-to-video generative models, including open-source models (e.g., VideoCrafter2) and closed-source models (e.g., Gen-2 from Runway). Our human evaluation reveals that existing models severely lack the ability to generate videos that both adhere to the text prompt and exhibit physical commonsense: the best-performing model, VideoCrafter2, generates videos that adhere to the caption and physical laws for only 19% of the instances. VideoPhy thus highlights that video generative models are far from accurately simulating the physical world. Finally, we supplement the dataset with an auto-evaluator, VideoCon-Physics, to assess text adherence and physical commonsense at scale.
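The 19% figure above is a joint metric: a video counts as successful only if it both adheres to its caption (semantic adherence) and obeys physical commonsense. As a minimal sketch of how such a joint score can be computed from per-video binary judgments, the snippet below assumes a JSON annotation file with hypothetical "sa" and "pc" fields; the file layout and function name are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (not the authors' code): fraction of videos whose human
# annotations mark them as BOTH adhering to the caption (semantic adherence, SA)
# and obeying physical commonsense (PC). Field names are assumed for illustration.
import json


def joint_sa_pc_accuracy(annotation_path: str) -> float:
    """Fraction of videos with SA == 1 and PC == 1 (binary human judgments)."""
    with open(annotation_path) as f:
        records = json.load(f)  # assumed: list of dicts with "sa" and "pc" in {0, 1}
    if not records:
        return 0.0
    joint = sum(1 for r in records if r["sa"] == 1 and r["pc"] == 1)
    return joint / len(records)


if __name__ == "__main__":
    # Hypothetical usage: one annotation file per evaluated model (e.g., VideoCrafter2, Gen-2).
    score = joint_sa_pc_accuracy("annotations/videocrafter2.json")
    print(f"Joint SA+PC accuracy: {score:.1%}")
```

The same joint criterion can be applied to scores produced by the VideoCon-Physics auto-evaluator to benchmark models at scale without human annotation.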