Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos
Haoyu Xiong · Yun-Chun Chen · Homanga Bharadhwaj · Samarth Sinha · Animesh Garg

Learning from visual data opens the possibility of acquiring a broad range of manipulation behaviors from human videos, in which tasks are specified naturally rather than defined mathematically one by one. We consider the task of imitation from human videos for learning robot manipulation skills. In this paper, we present Learning by Watching (LbW), an algorithmic framework for policy learning through imitation from a single video specifying the task. The key insights of our method are two-fold. First, since human arms may not have the same morphology as robot arms, our framework learns unsupervised human-to-robot translation to overcome the morphology mismatch. Second, to capture the details in salient regions that are crucial for learning state representations, our model performs unsupervised keypoint detection on the translated robot videos. The detected keypoints form a structured representation that contains semantically meaningful information and can be used directly for reward computation and policy learning. We evaluate the effectiveness of our LbW framework on five robot manipulation tasks: reaching, pushing, sliding, coffee making, and drawer closing. Extensive experimental evaluations demonstrate that our method performs favorably against state-of-the-art approaches.
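To make the keypoint-based reward described in the abstract concrete, here is a minimal sketch: the robot's reward at each step is taken as the negative distance between keypoints detected in its current observation and keypoints detected in the corresponding frame of the translated demonstration video. All names and shapes below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a keypoint-based imitation reward.
# The keypoint detector itself (unsupervised, applied to translated robot
# videos in LbW) is assumed to exist; here we only show how detected
# keypoints could be turned into a reward signal.
import numpy as np

def keypoint_reward(robot_kpts: np.ndarray, demo_kpts: np.ndarray) -> float:
    """Negative mean Euclidean distance between K detected 2-D keypoints.

    robot_kpts, demo_kpts: arrays of shape (K, 2), normalized image
    coordinates. Identical keypoint sets yield the maximum reward, 0.
    """
    assert robot_kpts.shape == demo_kpts.shape
    return -float(np.linalg.norm(robot_kpts - demo_kpts, axis=-1).mean())

# Usage: perfect alignment gives zero reward; any mismatch is negative.
kpts = np.array([[0.2, 0.3], [0.5, 0.5], [0.8, 0.1]])
print(keypoint_reward(kpts, kpts))            # → 0.0
print(keypoint_reward(kpts, kpts + 0.1) < 0)  # → True
```

Such a dense, structured reward can then drive standard reinforcement-learning policy optimization against the single demonstration video.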

Author Information

Haoyu Xiong (Shanghai Qizhi Institute)
Yun-Chun Chen (University of Toronto)
Homanga Bharadhwaj (University of Toronto)
Samarth Sinha (University of Toronto)
Animesh Garg (University of Toronto, Vector Institute, Nvidia)