LIMMT: Less is More for Motion Tracking
Yu Guan ⋅ Zekun Qi ⋅ Chenghuai Lin ⋅ Xuchuan Chen ⋅ Wenyao Zhang ⋅ Jilong Wang ⋅ XinQiang Yu ⋅ He Wang ⋅ Li Yi
Abstract
We argue that high-quality motion data can steer tracking policies toward better optimization trajectories early in training. To this end, we introduce LIMMT (Less Is More for Motion Tracking), which is, to our knowledge, the first data-centric study of physics-based humanoid motion tracking. Rather than simply removing erroneous clips, we define motion data quality along three dimensions: physics feasibility, diversity, and complexity. We show that training on under 3% of AMASS yields better tracking performance than training on the full dataset. Extensive experiments and analyses validate the effectiveness of our framework. We will release our code and curated data on GitHub.
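The curation idea described above — score each clip along the three quality dimensions and keep only a small top-ranked fraction — can be sketched as follows. This is a minimal illustration, not the paper's method: the field names (`physics_feasibility`, `diversity`, `complexity`), the linear weighting, and the `curate` helper are all assumptions for exposition.

```python
def quality_score(clip, w_phys=1.0, w_div=1.0, w_comp=1.0):
    # Hypothetical linear combination of the three quality dimensions;
    # the paper does not specify how the dimensions are aggregated.
    return (w_phys * clip["physics_feasibility"]
            + w_div * clip["diversity"]
            + w_comp * clip["complexity"])

def curate(clips, keep_frac=0.03):
    # Rank clips by quality and keep the top fraction
    # (e.g. under 3% of a large dataset such as AMASS).
    ranked = sorted(clips, key=quality_score, reverse=True)
    k = max(1, int(len(clips) * keep_frac))
    return ranked[:k]
```

With `keep_frac=0.03`, a 10,000-clip dataset would be reduced to its 300 highest-scoring clips before tracking-policy training.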