Seeing Realism from Simulation: Efficient Video Transfer for Vision-Language-Action Data Augmentation
Chenyu Hui ⋅ Xiaodi Huang ⋅ Siyu Xu ⋅ Yunke Wang ⋅ Shan You ⋅ Fei Wang ⋅ Tao Huang ⋅ Chang Xu
Abstract
Vision-language-action (VLA) models typically rely on large-scale real-world videos, whereas simulated data, despite being inexpensive and highly parallelizable to collect, often suffers from a substantial visual domain gap and limited environmental diversity, resulting in weak real-world generalization. We present an efficient video augmentation framework that converts simulated VLA videos into realistic training videos while preserving task semantics and action trajectories. Our pipeline extracts structured conditions from simulation via video semantic segmentation and video captioning, rewrites captions to diversify environments, and uses a conditional video transfer model to synthesize realistic videos. To make augmentation practical at scale, we introduce a diffusion feature-reuse mechanism that reuses video tokens across adjacent timesteps to accelerate generation, and a coreset sampling strategy that identifies a compact, non-redundant subset for augmentation under limited computation. Extensive experiments on RobotWin 2.0, LIBERO, LIBERO-Plus, and a real robotic platform demonstrate consistent improvements in both task performance and sim-to-real generalization. For example, our method improves RDT-1B by 8% on RobotWin 2.0, and boosts $\pi_0$ by 5.1% on the more challenging LIBERO-Plus benchmark. Code is released in the supplementary material.
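The diffusion feature-reuse idea can be illustrated with a minimal sketch: expensive block outputs are cached and returned on adjacent denoising timesteps instead of being recomputed. The class name, the `reuse_every` schedule, and the dummy block function below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class FeatureReuseCache:
    """Toy sketch of feature reuse across adjacent diffusion timesteps:
    recompute the expensive block only every `reuse_every` steps and
    otherwise return the cached tokens (all names are hypothetical)."""

    def __init__(self, block_fn, reuse_every=2):
        self.block_fn = block_fn      # stands in for an expensive transformer block
        self.reuse_every = reuse_every
        self.cache = None
        self.calls = 0                # counts actual block evaluations

    def __call__(self, tokens, timestep):
        # Recompute on every `reuse_every`-th timestep; otherwise reuse
        # the features cached from the previous evaluation.
        if self.cache is None or timestep % self.reuse_every == 0:
            self.cache = self.block_fn(tokens)
            self.calls += 1
        return self.cache

# Usage: a dummy block applied to placeholder video tokens over 10 steps.
block = FeatureReuseCache(lambda x: x * 2.0, reuse_every=2)
tokens = np.ones((4, 8))              # (num_tokens, dim) placeholder
for t in range(10):
    out = block(tokens, t)
print(block.calls)  # 5 evaluations instead of 10
```

With `reuse_every=2`, half of the block evaluations are skipped, which is where the advertised generation speedup would come from in this simplified view.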
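The coreset sampling step can likewise be sketched. One standard way to pick a compact, non-redundant subset is greedy k-center (farthest-point) selection over per-video feature vectors; the paper's actual criterion may differ, so treat this as an assumed stand-in.

```python
import numpy as np

def greedy_coreset(features, k, seed=0):
    """Hypothetical coreset selection by greedy k-center: repeatedly pick
    the sample farthest from the current selection, so the chosen subset
    spreads over the feature space with little redundancy."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]
    # Distance from every point to its nearest selected point so far.
    dist = np.linalg.norm(features - features[selected[0]], axis=1)
    while len(selected) < k:
        idx = int(np.argmax(dist))    # farthest point from the selection
        selected.append(idx)
        dist = np.minimum(dist,
                          np.linalg.norm(features - features[idx], axis=1))
    return selected

# Usage: two tight clusters plus one outlier; k=3 covers all three regions.
feats = np.vstack([np.zeros((5, 2)), np.ones((5, 2)) * 10.0, [[100.0, 100.0]]])
picked = greedy_coreset(feats, k=3)
print(len(picked))  # 3
```

Because the outlier is far from both clusters, it is always among the first picks, showing how the strategy avoids spending the augmentation budget on near-duplicate videos.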