The ability to autonomously learn behaviors via direct interactions in uninstrumented environments can lead to generalist robots capable of enhancing productivity or providing care in unstructured settings like homes. Such uninstrumented settings warrant operations using only the robot's proprioceptive sensors, such as onboard cameras and joint encoders, which makes policy learning challenging owing to high dimensionality and partial observability. We propose RRL: Resnet as representation for Reinforcement Learning, a straightforward yet effective approach that can learn complex behaviors directly from proprioceptive inputs. RRL fuses features extracted from a pre-trained Resnet into the standard reinforcement learning pipeline and delivers results comparable to learning directly from the state. On a simulated dexterous manipulation benchmark where state-of-the-art methods fail to make significant progress, RRL delivers contact-rich behaviors. The appeal of RRL lies in its simplicity in bringing together progress from the fields of Representation Learning, Imitation Learning, and Reinforcement Learning. Its effectiveness in learning behaviors directly from visual inputs, with performance and sample efficiency matching learning directly from the state even in complex high-dimensional domains, is far from obvious.
Author Information
Rutav Shah (Indian Institute of Technology, Kharagpur)
Vikash Kumar (Univ. of Washington)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: RRL: Resnet as representation for Reinforcement Learning
  Wed. Jul 21st 02:40 -- 02:45 AM
More from the Same Authors
- 2021 : Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention
  Abhishek Gupta · Justin Yu · Tony Z. Zhao · Vikash Kumar · Aaron Rovinsky · Kelvin Xu · Thomas Devlin · Sergey Levine
- 2021 : RRL: Resnet as representation for Reinforcement Learning
  Rutav Shah · Vikash Kumar
- 2022 : Policy Architectures for Compositional Generalization in Control
  Allan Zhou · Vikash Kumar · Chelsea Finn · Aravind Rajeswaran
- 2023 : Visual Dexterity: In-hand Dexterous Manipulation from Depth
  Tao Chen · Megha Tippur · Siyang Wu · Vikash Kumar · Edward Adelson · Pulkit Agrawal
- 2023 : Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware
  Tony Zhao · Vikash Kumar · Sergey Levine · Chelsea Finn
- 2023 Poster: MyoDex: A Generalizable Prior for Dexterous Manipulation
  Vittorio Caggiano · Sudeep Dasari · Vikash Kumar
- 2023 Poster: LIV: Language-Image Representations and Rewards for Robotic Control
  Yecheng Jason Ma · Vikash Kumar · Amy Zhang · Osbert Bastani · Dinesh Jayaraman
- 2022 Poster: Translating Robot Skills: Learning Unsupervised Skill Correspondences Across Robots
  Tanmay Shankar · Yixin Lin · Aravind Rajeswaran · Vikash Kumar · Stuart Anderson · Jean Oh
- 2022 Spotlight: Translating Robot Skills: Learning Unsupervised Skill Correspondences Across Robots
  Tanmay Shankar · Yixin Lin · Aravind Rajeswaran · Vikash Kumar · Stuart Anderson · Jean Oh
- 2020 Poster: A Game Theoretic Framework for Model Based Reinforcement Learning
  Aravind Rajeswaran · Igor Mordatch · Vikash Kumar