Building generalizable goal-conditioned agents from rich observations is key to applying reinforcement learning (RL) to real-world problems. Traditionally in goal-conditioned RL, an agent is provided with the exact goal it intends to reach. However, it is often not realistic to know the configuration of the goal before performing a task. A more scalable framework would allow us to provide the agent with an example of an analogous task, and have the agent then infer what the goal should be for its current state. We propose a new form of state abstraction, called goal-conditioned bisimulation, that captures functional equivariance, allowing for the reuse of skills to achieve new goals. We learn this representation using a metric form of this abstraction, and show its ability to generalize to new goals in real world manipulation tasks. Further, we prove that this learned representation is sufficient not only for goal-conditioned tasks, but is also amenable to any downstream task described by a state-only reward function.
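The core idea, learning a goal-conditioned representation with a bisimulation metric, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example, not the authors' implementation: it regresses latent distances onto a bisimulation-style target of reward differences plus discounted distances between predicted next-state latents. The names `encoder`, `reward_fn`, and `dynamics` are assumed placeholder modules, and the Wasserstein term of the metric is approximated by a deterministic latent prediction.

import torch
import torch.nn.functional as F

def goal_conditioned_bisim_loss(encoder, reward_fn, dynamics, obs, goals, discount=0.99):
    # Encode observations conditioned on goals: phi(s, g) -> latents of shape (B, d).
    z = encoder(obs, goals)

    # Pair each sample with a randomly permuted partner from the same batch.
    perm = torch.randperm(obs.size(0))

    # Goal-conditioned reward gap |r(s_i, g_i) - r(s_j, g_j)| for each pair.
    r = reward_fn(obs, goals)
    r_diff = (r - r[perm]).abs()

    # Distance between predicted next-state latents, standing in for the
    # Wasserstein distance between transition distributions.
    with torch.no_grad():
        next_z = dynamics(z, goals)
    trans_dist = torch.norm(next_z - next_z[perm], dim=-1)

    # Bisimulation target: reward gap plus discounted transition gap.
    target = r_diff + discount * trans_dist

    # Regress current latent distances onto the target.
    latent_dist = torch.norm(z - z[perm], dim=-1)
    return F.mse_loss(latent_dist, target)

In practice such a loss would be combined with a goal-conditioned RL objective and gradient stopping on the target branch; the sketch only illustrates the metric-matching idea under the stated assumptions.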
Author Information
Philippe Hansen-Estruch (University of California, Berkeley)
Undergraduate Researcher at BAIR, UC Berkeley, working with Dr. Amy Zhang and Professor Sergey Levine
Amy Zhang (FAIR / UC Berkeley)
Ashvin Nair
Patrick Yin (UC Berkeley)
Sergey Levine (University of California, Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning »
  Thu. Jul 21st 08:45 -- 08:50 PM, Room 309
More from the Same Authors
- 2020: Learning Invariant Representations for Reinforcement Learning without Reconstruction »
  Amy Zhang
- 2020: Multi-Task Reinforcement Learning as a Hidden-Parameter Block MDP »
  Amy Zhang
- 2021: Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention »
  Abhishek Gupta · Justin Yu · Tony Z. Zhao · Vikash Kumar · Aaron Rovinsky · Kelvin Xu · Thomas Devlin · Sergey Levine
- 2022: You Only Live Once: Single-Life Reinforcement Learning via Learned Reward Shaping »
  Annie Chen · Archit Sharma · Sergey Levine · Chelsea Finn
- 2023: Deep Neural Networks Extrapolate Cautiously in High Dimensions »
  Katie Kang · Amrith Setlur · Claire Tomlin · Sergey Levine
- 2023: Conditional Bisimulation for Generalization in Reinforcement Learning »
  Anuj Mahajan · Amy Zhang
- 2023 Poster: Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning »
  Tongzhou Wang · Antonio Torralba · Phillip Isola · Amy Zhang
- 2023 Poster: LIV: Language-Image Representations and Rewards for Robotic Control »
  Yecheng Jason Ma · Vikash Kumar · Amy Zhang · Osbert Bastani · Dinesh Jayaraman
- 2022: Invited talks 3, Q/A, Amy, Rich and Liting »
  Liting Sun · Amy Zhang · Richard Zemel
- 2022: Invited talks 3, Amy Zhang, Rich Zemel and Liting Sun »
  Amy Zhang · Richard Zemel · Liting Sun
- 2022 Poster: Online Decision Transformer »
  Qinqing Zheng · Amy Zhang · Aditya Grover
- 2022 Poster: Robust Policy Learning over Multiple Uncertainty Sets »
  Annie Xie · Shagun Sodhani · Chelsea Finn · Joelle Pineau · Amy Zhang
- 2022 Spotlight: Robust Policy Learning over Multiple Uncertainty Sets »
  Annie Xie · Shagun Sodhani · Chelsea Finn · Joelle Pineau · Amy Zhang
- 2022 Oral: Online Decision Transformer »
  Qinqing Zheng · Amy Zhang · Aditya Grover
- 2022 Poster: Denoised MDPs: Learning World Models Better Than the World Itself »
  Tongzhou Wang · Simon Du · Antonio Torralba · Phillip Isola · Amy Zhang · Yuandong Tian
- 2022 Spotlight: Denoised MDPs: Learning World Models Better Than the World Itself »
  Tongzhou Wang · Simon Du · Antonio Torralba · Phillip Isola · Amy Zhang · Yuandong Tian
- 2022 Poster: Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control »
  Katie Kang · Paula Gradu · Jason Choi · Michael Janner · Claire Tomlin · Sergey Levine
- 2022 Spotlight: Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control »
  Katie Kang · Paula Gradu · Jason Choi · Michael Janner · Claire Tomlin · Sergey Levine
- 2021 Poster: Offline Meta-Reinforcement Learning with Advantage Weighting »
  Eric Mitchell · Rafael Rafailov · Xue Bin Peng · Sergey Levine · Chelsea Finn
- 2021 Spotlight: Offline Meta-Reinforcement Learning with Advantage Weighting »
  Eric Mitchell · Rafael Rafailov · Xue Bin Peng · Sergey Levine · Chelsea Finn
- 2020: Paper spotlight: Learning Invariant Representations for Reinforcement Learning without Reconstruction »
  Amy Zhang
- 2018 Poster: Composable Planning with Attributes »
  Amy Zhang · Sainbayar Sukhbaatar · Adam Lerer · Arthur Szlam · Rob Fergus
- 2018 Oral: Composable Planning with Attributes »
  Amy Zhang · Sainbayar Sukhbaatar · Adam Lerer · Arthur Szlam · Rob Fergus