We investigate the challenge of parameterizing policies for reinforcement learning (RL) in high-dimensional continuous action spaces. Our objective is to develop a multimodal policy that overcomes limitations inherent in the commonly used Gaussian parameterization. To achieve this, we propose a principled framework that models the continuous RL policy as a generative model of optimal trajectories. By conditioning the policy on a latent variable, we derive a novel variational bound as the optimization objective, which promotes exploration of the environment. We then present a practical model-based RL method, called Reparameterized Policy Gradient (RPG), which leverages the multimodal policy parameterization and a learned world model to achieve strong exploration capabilities and high data efficiency. Empirical results demonstrate that our method helps agents evade local optima in tasks with dense rewards and solves challenging sparse-reward environments by incorporating an object-centric intrinsic reward. Our method consistently outperforms previous approaches across a range of tasks. Code and supplementary materials are available on the project page: https://haosulab.github.io/RPG/
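The abstract describes a multimodal, latent-conditioned policy trained with reparameterized gradients. As a rough illustration only (not the authors' released implementation; the class and variable names below are hypothetical), the following PyTorch sketch shows one way a per-trajectory latent z can condition a Gaussian action head, with rsample() providing reparameterized, differentiable action sampling:

```python
# Illustrative sketch, not the authors' RPG code: a latent-conditioned
# Gaussian policy. Sampling a latent z per trajectory and conditioning the
# action head on (state, z) allows different z values to select different
# behavior modes, while rsample() keeps action sampling differentiable.
import torch
import torch.nn as nn

class LatentConditionedPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, latent_dim=4, hidden=128):
        super().__init__()
        # Prior over the trajectory-level latent (standard normal here).
        self.register_buffer("prior_mean", torch.zeros(latent_dim))
        self.register_buffer("prior_std", torch.ones(latent_dim))
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean_head = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def sample_latent(self, batch_size=1):
        # One latent per trajectory; it stays fixed for the whole episode.
        return torch.distributions.Normal(
            self.prior_mean, self.prior_std).sample((batch_size,))

    def forward(self, obs, z):
        h = self.net(torch.cat([obs, z], dim=-1))
        dist = torch.distributions.Normal(self.mean_head(h), self.log_std.exp())
        # Reparameterized sample: gradients flow through the action.
        action = dist.rsample()
        return action, dist.log_prob(action).sum(-1)

# Usage: draw one latent per episode, then act conditioned on it.
policy = LatentConditionedPolicy(obs_dim=8, act_dim=2)
z = policy.sample_latent(batch_size=1)
obs = torch.randn(1, 8)
action, log_prob = policy(obs, z)
```

For the actual RPG algorithm, the learned world model, and the variational objective, refer to the code linked on the project page above.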
Author Information
Zhiao Huang (UCSD)
Litian Liang (University of California, San Diego)
Zhan Ling (UC San Diego)
Xuanlin Li (UCSD)
Chuang Gan (UMass Amherst / IBM)
Hao Su (UCSD)
Related Events (a corresponding poster, oral, or spotlight)
- 2023 Poster: Reparameterized Policy Learning for Multimodal Trajectory Optimization
  Thu. Jul 27 through Fri. Jul 28, Exhibit Hall 1 #112
More from the Same Authors
- 2021: Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation
  Nicklas Hansen · Hao Su · Xiaolong Wang
- 2023: Situated Interaction with Real-Time State Conditioning of Language Models
  Sunny Panchal · Guillaume Berger · Antoine Mercier · Cornelius Böhm · Florian Dietrichkeit · Xuanlin Li · Reza Pourreza · Pulkit Madan · Apratim Bhattacharyya · Mingu Lee · Mark Todorovich · Ingo Bax · Roland Memisevic
- 2023 Poster: Abstract-to-Executable Trajectory Translation for One-Shot Task Generalization
  Stone Tao · Xiaochen Li · Tongzhou Mu · Zhiao Huang · Yuzhe Qin · Hao Su
- 2023 Poster: On the Forward Invariance of Neural ODEs
  Wei Xiao · Johnson Tsun-Hsuan Wang · Ramin Hasani · Mathias Lechner · Yutong Ban · Chuang Gan · Daniela Rus
- 2023 Poster: On Pre-Training for Visuo-Motor Control: Revisiting a Learning-from-Scratch Baseline
  Nicklas Hansen · Zhecheng Yuan · Yanjie Ze · Tongzhou Mu · Aravind Rajeswaran · Hao Su · Huazhe Xu · Xiaolong Wang
- 2023 Poster: Learning Neural Constitutive Laws from Motion Observations for Generalizable PDE Dynamics
  Pingchuan Ma · Peter Yichen Chen · Bolei Deng · Josh Tenenbaum · Tao Du · Chuang Gan · Wojciech Matusik
- 2022 Poster: Temporal Difference Learning for Model Predictive Control
  Nicklas Hansen · Hao Su · Xiaolong Wang
- 2022 Spotlight: Temporal Difference Learning for Model Predictive Control
  Nicklas Hansen · Hao Su · Xiaolong Wang
- 2022 Poster: Improving Policy Optimization with Generalist-Specialist Learning
  Zhiwei Jia · Xuanlin Li · Zhan Ling · Shuang Liu · Yiran Wu · Hao Su
- 2022 Spotlight: Improving Policy Optimization with Generalist-Specialist Learning
  Zhiwei Jia · Xuanlin Li · Zhan Ling · Shuang Liu · Yiran Wu · Hao Su
- 2022 Poster: Prompting Decision Transformer for Few-Shot Policy Generalization
  Mengdi Xu · Yikang Shen · Shun Zhang · Yuchen Lu · Ding Zhao · Josh Tenenbaum · Chuang Gan
- 2022 Spotlight: Prompting Decision Transformer for Few-Shot Policy Generalization
  Mengdi Xu · Yikang Shen · Shun Zhang · Yuchen Lu · Ding Zhao · Josh Tenenbaum · Chuang Gan
- 2021 Poster: Global Prosody Style Transfer Without Text Transcriptions
  Kaizhi Qian · Yang Zhang · Shiyu Chang · Jinjun Xiong · Chuang Gan · David Cox · Mark Hasegawa-Johnson
- 2021 Oral: Global Prosody Style Transfer Without Text Transcriptions
  Kaizhi Qian · Yang Zhang · Shiyu Chang · Jinjun Xiong · Chuang Gan · David Cox · Mark Hasegawa-Johnson
- 2021 Poster: Adversarial Option-Aware Hierarchical Imitation Learning
  Mingxuan Jing · Wenbing Huang · Fuchun Sun · Xiaojian Ma · Tao Kong · Chuang Gan · Lei Li
- 2021 Poster: AGENT: A Benchmark for Core Psychological Reasoning
  Tianmin Shu · Abhishek Bhandwaldar · Chuang Gan · Kevin Smith · Shari Liu · Dan Gutfreund · Elizabeth Spelke · Josh Tenenbaum · Tomer Ullman
- 2021 Spotlight: AGENT: A Benchmark for Core Psychological Reasoning
  Tianmin Shu · Abhishek Bhandwaldar · Chuang Gan · Kevin Smith · Shari Liu · Dan Gutfreund · Elizabeth Spelke · Josh Tenenbaum · Tomer Ullman
- 2021 Spotlight: Adversarial Option-Aware Hierarchical Imitation Learning
  Mingxuan Jing · Wenbing Huang · Fuchun Sun · Xiaojian Ma · Tao Kong · Chuang Gan · Lei Li
- 2020 Poster: Information-Theoretic Local Minima Characterization and Regularization
  Zhiwei Jia · Hao Su