In this paper, we explore how to endow robots with the ability to learn correspondences between their own skills and those of morphologically different robots in other domains, in an entirely unsupervised manner. Our key insight is that morphologically different robots use similar task strategies to solve similar tasks. Based on this insight, we frame learning skill correspondences as a problem of matching distributions of sequences of skills across robots. We then present an unsupervised objective, inspired by recent advances in unsupervised machine translation, that encourages a learnt skill-translation model to match these distributions across domains. Despite being completely unsupervised, our approach learns semantically meaningful correspondences between skills across multiple robot-robot and human-robot domain pairs. Further, the learnt correspondences enable the transfer of task strategies across robots and domains. We present dynamic visualizations of our results at https://sites.google.com/view/translatingrobotskills/home.
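To make the training signal described in the abstract more concrete, the sketch below shows one possible way to instantiate a distribution-matching skill-translation objective in PyTorch, pairing an adversarial loss with back-translation (cycle) consistency in the style of unsupervised machine translation. All names (SkillTranslator, SequenceDiscriminator, translation_losses, discriminator_loss), the embedding dimensions, and the specific choice of adversarial plus cycle losses are illustrative assumptions; this is a minimal sketch, not the authors' implementation.

# Minimal sketch (illustrative, not the authors' code): an unsupervised
# skill-translation objective that matches distributions of translated
# skill sequences across two robots, A and B.
import torch
import torch.nn as nn

class SkillTranslator(nn.Module):
    """Maps a sequence of source-robot skill embeddings to target-robot skills."""
    def __init__(self, skill_dim=16, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(skill_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, skill_dim)

    def forward(self, skills):                 # skills: (batch, seq_len, skill_dim)
        out, _ = self.rnn(skills)
        return self.head(out)                  # translated skill sequence, same length

class SequenceDiscriminator(nn.Module):
    """Scores whether a skill sequence looks like it came from the target robot."""
    def __init__(self, skill_dim=16, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(skill_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, skills):
        _, h = self.rnn(skills)
        return self.head(h[-1])                # (batch, 1) real-vs-translated logit

def translation_losses(trans_ab, trans_ba, disc_b, skills_a):
    """Translator objective: distribution matching (adversarial) + back-translation."""
    bce = nn.BCEWithLogitsLoss()
    fake_b = trans_ab(skills_a)
    # Translated sequences should be indistinguishable from real robot-B sequences.
    adv_loss = bce(disc_b(fake_b), torch.ones(skills_a.size(0), 1))
    # Back-translation: A -> B -> A should reconstruct the original skill sequence.
    cycle_loss = nn.functional.mse_loss(trans_ba(fake_b), skills_a)
    return adv_loss, cycle_loss

def discriminator_loss(disc_b, trans_ab, skills_a, skills_b):
    """Discriminator learns to separate real robot-B sequences from translations."""
    bce = nn.BCEWithLogitsLoss()
    real = bce(disc_b(skills_b), torch.ones(skills_b.size(0), 1))
    fake = bce(disc_b(trans_ab(skills_a).detach()), torch.zeros(skills_a.size(0), 1))
    return real + fake

# Example usage with random skill sequences (shapes only):
# skills_a, skills_b = torch.randn(8, 10, 16), torch.randn(8, 12, 16)
# adv, cyc = translation_losses(SkillTranslator(), SkillTranslator(),
#                               SequenceDiscriminator(), skills_a)

Alternating updates between the translator losses and the discriminator loss is one standard way to drive the translated skill-sequence distribution toward the target robot's distribution without any paired supervision.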
Author Information
Tanmay Shankar (Carnegie Mellon University)
Yixin Lin (Meta AI)
Aravind Rajeswaran (Meta AI (FAIR))
Vikash Kumar (University of Washington)
Stuart Anderson (Facebook AI Research)
Jean Oh (Carnegie Mellon University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Translating Robot Skills: Learning Unsupervised Skill Correspondences Across Robots
  Wed. Jul 20 through Thu. Jul 21, Hall E #129
More from the Same Authors
- 2021: Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention
  Abhishek Gupta · Justin Yu · Tony Z. Zhao · Vikash Kumar · Aaron Rovinsky · Kelvin Xu · Thomas Devlin · Sergey Levine
- 2021: Decision Transformer: Reinforcement Learning via Sequence Modeling
  Lili Chen · Kevin Lu · Aravind Rajeswaran · Kimin Lee · Aditya Grover · Michael Laskin · Pieter Abbeel · Aravind Srinivas · Igor Mordatch
- 2021: RRL: Resnet as representation for Reinforcement Learning
  Rutav Shah · Vikash Kumar
- 2022: Policy Architectures for Compositional Generalization in Control
  Allan Zhou · Vikash Kumar · Chelsea Finn · Aravind Rajeswaran
- 2023 Poster: LIV: Language-Image Representations and Rewards for Robotic Control
  Yecheng Jason Ma · Vikash Kumar · Amy Zhang · Osbert Bastani · Dinesh Jayaraman
- 2023 Poster: MyoDex: A Generalizable Prior for Dexterous Manipulation
  Vittorio Caggiano · Sudeep Dasari · Vikash Kumar
- 2021 Poster: RRL: Resnet as representation for Reinforcement Learning
  Rutav Shah · Vikash Kumar
- 2021 Spotlight: RRL: Resnet as representation for Reinforcement Learning
  Rutav Shah · Vikash Kumar
- 2020 Poster: A Game Theoretic Framework for Model Based Reinforcement Learning
  Aravind Rajeswaran · Igor Mordatch · Vikash Kumar
- 2019: Welcome and Introduction
  Aravind Rajeswaran
- 2019 Workshop: Generative Modeling and Model-Based Reasoning for Robotics and AI
  Aravind Rajeswaran · Emanuel Todorov · Igor Mordatch · William Agnew · Amy Zhang · Joelle Pineau · Michael Chang · Dumitru Erhan · Sergey Levine · Kimberly Stachenfeld · Marvin Zhang
- 2019 Poster: Online Meta-Learning
  Chelsea Finn · Aravind Rajeswaran · Sham Kakade · Sergey Levine
- 2019 Oral: Online Meta-Learning
  Chelsea Finn · Aravind Rajeswaran · Sham Kakade · Sergey Levine