Meta reinforcement learning (meta-RL) aims to learn a policy that solves a set of training tasks simultaneously and quickly adapts to new tasks. It requires massive amounts of data drawn from the training tasks to infer the common structure shared among them. Without heavy reward engineering, the sparse rewards in long-horizon tasks exacerbate the problem of sample efficiency in meta-RL. Another challenge in meta-RL is the discrepancy in difficulty level among tasks, which might cause one easy task to dominate learning of the shared policy and thus preclude adaptation of the policy to new tasks. In this work, we introduce a novel objective function to learn an action translator among training tasks. We theoretically verify that the value of the transferred policy with the action translator can be close to the value of the source policy. We propose to combine the action translator with context-based meta-RL algorithms for better data collection and more efficient exploration during meta-training. Our approach to policy transfer empirically improves the sample efficiency and performance of meta-RL algorithms on sparse-reward tasks.
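To make the notion of an action translator concrete, the sketch below shows one plausible form it could take: a network that maps a state and a source-task action to a target-task action, trained so that the translated action reproduces, under a learned target-task dynamics model, the transition observed in the source task. This is only an illustration under stated assumptions, not the paper's exact objective or architecture; the names (ActionTranslator, translation_loss, dynamics_tgt) are hypothetical.

```python
# Illustrative sketch of an action translator (hypothetical names; the paper's
# actual objective and networks may differ).
import torch
import torch.nn as nn


class ActionTranslator(nn.Module):
    """Maps (state, source-task action) to an action for the target task."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # assumes actions in [-1, 1]
        )

    def forward(self, state: torch.Tensor, source_action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, source_action], dim=-1))


def translation_loss(translator, dynamics_tgt, state, source_action, next_state_src):
    """One possible training signal: the translated action, passed through a learned
    target-task dynamics model, should lead to a next state close to the next state
    observed under the source action in the source task."""
    translated_action = translator(state, source_action)
    predicted_next_state = dynamics_tgt(state, translated_action)
    return ((predicted_next_state - next_state_src) ** 2).mean()
```

Under this reading, a policy trained on one task could act in another task through the translator, which is consistent with the abstract's use of the translator for better data collection during meta-training.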
Author Information
Yijie Guo (University of Michigan)
Qiucheng Wu (University of Michigan)
Honglak Lee (Google / University of Michigan)
More from the Same Authors
- 2023: Guide Your Agent with Adaptive Multimodal Rewards
  Changyeon Kim · Younggyo Seo · Hao Liu · Lisa Lee · Jinwoo Shin · Honglak Lee · Kimin Lee
- 2023: Learning Higher Order Skills that Efficiently Compose
  Anthony Liu · Dong Ki Kim · Sungryull Sohn · Honglak Lee
- 2023: Hierarchical Decomposition Framework for Feasibility-hard Combinatorial Optimization
  Hanbum Ko · Minu Kim · Han-Seul Jeong · Sunghoon Hong · Deunsol Yoon · Youngjoon Park · Woohyung Lim · Honglak Lee · Moontae Lee · Kanghoon Lee · Sungbin Lim · Sungryull Sohn
- 2023: Mixed-Curvature Transformers for Graph Representation Learning
  Sungjun Cho · Seunghyuk Cho · Sungwoo Park · Hankook Lee · Honglak Lee · Moontae Lee
- 2023 Poster: Go Beyond Imagination: Maximizing Episodic Reachability with World Models
  Yao Fu · Run Peng · Honglak Lee
- 2021 Poster: Learning to Weight Imperfect Demonstrations
  Yunke Wang · Chang Xu · Bo Du · Honglak Lee
- 2021 Poster: Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks
  Sungryull Sohn · Sungtae Lee · Jongwook Choi · Harm van Seijen · Mehdi Fatemi · Honglak Lee
- 2021 Poster: Variational Empowerment as Representation Learning for Goal-Conditioned Reinforcement Learning
  Jongwook Choi · Archit Sharma · Honglak Lee · Sergey Levine · Shixiang Gu
- 2021 Spotlight: Variational Empowerment as Representation Learning for Goal-Conditioned Reinforcement Learning
  Jongwook Choi · Archit Sharma · Honglak Lee · Sergey Levine · Shixiang Gu
- 2021 Spotlight: Learning to Weight Imperfect Demonstrations
  Yunke Wang · Chang Xu · Bo Du · Honglak Lee
- 2021 Spotlight: Shortest-Path Constrained Reinforcement Learning for Sparse Reward Tasks
  Sungryull Sohn · Sungtae Lee · Jongwook Choi · Harm van Seijen · Mehdi Fatemi · Honglak Lee
- 2021 Poster: State Entropy Maximization with Random Encoders for Efficient Exploration
  Younggyo Seo · Lili Chen · Jinwoo Shin · Honglak Lee · Pieter Abbeel · Kimin Lee
- 2021 Spotlight: State Entropy Maximization with Random Encoders for Efficient Exploration
  Younggyo Seo · Lili Chen · Jinwoo Shin · Honglak Lee · Pieter Abbeel · Kimin Lee
- 2020 Poster: Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning
  Kimin Lee · Younggyo Seo · Seunghyun Lee · Honglak Lee · Jinwoo Shin
- 2019 Poster: Learning Latent Dynamics for Planning from Pixels
  Danijar Hafner · Timothy Lillicrap · Ian Fischer · Ruben Villegas · David Ha · Honglak Lee · James Davidson
- 2019 Poster: Robust Inference via Generative Classifiers for Handling Noisy Labels
  Kimin Lee · Sukmin Yun · Kibok Lee · Honglak Lee · Bo Li · Jinwoo Shin
- 2019 Poster: Similarity of Neural Network Representations Revisited
  Simon Kornblith · Mohammad Norouzi · Honglak Lee · Geoffrey Hinton
- 2019 Oral: Similarity of Neural Network Representations Revisited
  Simon Kornblith · Mohammad Norouzi · Honglak Lee · Geoffrey Hinton
- 2019 Oral: Robust Inference via Generative Classifiers for Handling Noisy Labels
  Kimin Lee · Sukmin Yun · Kibok Lee · Honglak Lee · Bo Li · Jinwoo Shin
- 2019 Oral: Learning Latent Dynamics for Planning from Pixels
  Danijar Hafner · Timothy Lillicrap · Ian Fischer · Ruben Villegas · David Ha · Honglak Lee · James Davidson
- 2018 Poster: Self-Imitation Learning
  Junhyuk Oh · Yijie Guo · Satinder Singh · Honglak Lee
- 2018 Oral: Self-Imitation Learning
  Junhyuk Oh · Yijie Guo · Satinder Singh · Honglak Lee
- 2018 Poster: Hierarchical Long-term Video Prediction without Supervision
  Nevan Wichers · Ruben Villegas · Dumitru Erhan · Honglak Lee
- 2018 Oral: Hierarchical Long-term Video Prediction without Supervision
  Nevan Wichers · Ruben Villegas · Dumitru Erhan · Honglak Lee
- 2017 Poster: Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
  Junhyuk Oh · Satinder Singh · Honglak Lee · Pushmeet Kohli
- 2017 Talk: Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
  Junhyuk Oh · Satinder Singh · Honglak Lee · Pushmeet Kohli
- 2017 Poster: Learning to Generate Long-term Future via Hierarchical Prediction
  Ruben Villegas · Jimei Yang · Yuliang Zou · Sungryull Sohn · Xunyu Lin · Honglak Lee
- 2017 Talk: Learning to Generate Long-term Future via Hierarchical Prediction
  Ruben Villegas · Jimei Yang · Yuliang Zou · Sungryull Sohn · Xunyu Lin · Honglak Lee