For Pre-Trained Vision Models in Motor Control, Not All Policy Learning Methods are Created Equal
Yingdong Hu · Renhao Wang · Li Li · Yang Gao

Thu Jul 27 01:30 PM -- 03:00 PM (PDT) @ Exhibit Hall 1 #320

In recent years, increasing attention has been directed to leveraging pre-trained vision models for motor control. While existing works mainly emphasize the importance of this pre-training phase, the arguably equally important role played by downstream policy learning during control-specific fine-tuning is often neglected. It thus remains unclear whether pre-trained vision models are consistently effective under different control policies. To bridge this gap in understanding, we conduct a comprehensive study of 14 pre-trained vision models using 3 distinct classes of policy learning methods: reinforcement learning (RL), imitation learning through behavior cloning (BC), and imitation learning with a visual reward function (VRF). Our study yields a series of intriguing results, including the discovery that the effectiveness of pre-training is highly dependent on the choice of the downstream policy learning algorithm. We show that the conventionally accepted evaluation based on RL methods is highly variable and therefore unreliable, and further advocate for using more robust methods such as VRF and BC. To facilitate more universal evaluations of pre-trained models and their policy learning methods in the future, we also release a benchmark of 21 tasks across 3 different environments alongside our work.
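To make the behavior-cloning (BC) setting mentioned above concrete, here is a minimal, self-contained sketch: BC reduces policy learning to supervised regression from observations to expert actions. The data and the linear policy below are hypothetical stand-ins (in the paper's setup, the observations would be features from a frozen pre-trained vision model), not the authors' implementation.

```python
import numpy as np

# Minimal behavior-cloning sketch: fit a linear policy to expert
# (observation, action) pairs by least squares. Random features stand in
# for the visual features a frozen pre-trained encoder would produce.
rng = np.random.default_rng(0)

obs_dim, act_dim, n = 32, 4, 512
W_expert = rng.normal(size=(obs_dim, act_dim))   # unknown expert mapping
observations = rng.normal(size=(n, obs_dim))     # stand-in visual features
actions = observations @ W_expert                # expert demonstrations

# BC objective: minimize ||observations @ W - actions||^2 over W.
W_policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

def policy(obs: np.ndarray) -> np.ndarray:
    """Cloned policy: map an observation feature vector to an action."""
    return obs @ W_policy

# On held-out inputs, the cloned policy should match the expert closely.
test_obs = rng.normal(size=(8, obs_dim))
error = float(np.max(np.abs(policy(test_obs) - test_obs @ W_expert)))
```

In practice the linear map would be replaced by a small MLP head on top of the frozen encoder, but the supervised-regression structure of BC is the same.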

Author Information

Yingdong Hu (Tsinghua University)
Renhao Wang (University of British Columbia)
Li Li (Amazon)
Yang Gao (Shanghai Qizhi Institute)