Diffusion models have demonstrated powerful generative capability in many tasks, and have great potential to serve as a paradigm for offline reinforcement learning. However, the quality of a diffusion model is limited by insufficient diversity in its training data, which hinders both planning performance and generalization to new tasks. This paper introduces AdaptDiffuser, an evolutionary planning method with diffusion that can self-evolve to improve the diffusion model, and hence the planner, not only on seen tasks but also on unseen ones. AdaptDiffuser generates rich synthetic expert data for goal-conditioned tasks using guidance from reward gradients, then selects high-quality data via a discriminator to finetune the diffusion model, improving generalization to unseen tasks. Empirical experiments on two benchmark environments and two carefully designed unseen tasks in the KUKA industrial robot arm and Maze2D environments demonstrate the effectiveness of AdaptDiffuser. For example, AdaptDiffuser not only outperforms the prior art, Diffuser, by 20.8% on Maze2D and 7.5% on MuJoCo locomotion, but also adapts better to new tasks, e.g., KUKA pick-and-place, by 27.9% without requiring additional expert data. More visualization results and demo videos can be found on our project page.
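The self-evolution loop described in the abstract (guided generation, discriminator filtering, then finetuning) can be sketched in toy form. Everything below is illustrative, not the paper's implementation: the `sample_trajectory`, `reward`, and `finetune` callables stand in for the actual diffusion model with reward-gradient guidance, the task reward, and the training step, and the discriminator is reduced to a simple reward threshold.

```python
import random

def adaptdiffuser_loop(sample_trajectory, reward, threshold, finetune,
                       n_candidates=100, n_rounds=3):
    """Toy sketch of the self-evolving loop: generate candidate
    trajectories, keep only those the discriminator (here, a reward
    threshold) accepts, and finetune the generator on the kept data."""
    kept_per_round = []
    for _ in range(n_rounds):
        candidates = [sample_trajectory() for _ in range(n_candidates)]
        # Discriminator step: retain only high-quality synthetic data.
        kept = [t for t in candidates if reward(t) >= threshold]
        finetune(kept)  # improves the generator for the next round
        kept_per_round.append(len(kept))
    return kept_per_round

# Usage with a stand-in "model": a 1-D Gaussian whose mean drifts
# toward the accepted samples, mimicking improvement from finetuning.
rng = random.Random(0)
state = {"quality": 0.5}

def sample():
    return rng.gauss(state["quality"], 0.1)

def finetune(kept):
    if kept:  # shift the model toward the accepted data
        state["quality"] = sum(kept) / len(kept)

counts = adaptdiffuser_loop(sample, lambda t: t, 0.55, finetune)
```

Because each round finetunes on only the accepted samples, the acceptance count tends to grow round over round, which is the self-evolution effect the paper exploits.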
Author Information
Zhixuan Liang (The University of Hong Kong)
Yao Mu (The University of Hong Kong)
I am currently a Ph.D. candidate in Computer Science at the University of Hong Kong, supervised by Prof. Ping Luo. Previously, I obtained an M.Phil. degree under the supervision of Prof. Bo Cheng and Prof. Shengbo Li at the Intelligent Driving Laboratory, Tsinghua University, in June 2021. Research interests: Reinforcement Learning, Representation Learning, Autonomous Driving, and Computer Vision.
Mingyu Ding (UC Berkeley)
Fei Ni (Tianjin University)
Masayoshi Tomizuka (University of California, Berkeley)
Ping Luo (The University of Hong Kong)
Related Events (a corresponding poster, oral, or spotlight)
- 2023 Poster: AdaptDiffuser: Diffusion Models as Adaptive Self-evolving Planners
  Thu. Jul 27th 08:30 -- 10:00 PM, Exhibit Hall 1 #621
More from the Same Authors
- 2023 Poster: $\pi$-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation
  CHENGYUE WU · Teng Wang · Yixiao Ge · Zeyu Lu · Ruisong Zhou · Ying Shan · Ping Luo
- 2023 Poster: MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL
  Fei Ni · Jianye Hao · Yao Mu · Yifu Yuan · Yan Zheng · Bin Wang · Zhixuan Liang
- 2023 Poster: ChiPFormer: Transferable Chip Placement via Offline Decision Transformer
  Yao LAI · Jinxin Liu · Zhentao Tang · Bin Wang · Jianye Hao · Ping Luo
- 2022 Poster: Flow-based Recurrent Belief State Learning for POMDPs
  Xiaoyu Chen · Yao Mu · Ping Luo · Shengbo Li · Jianyu Chen
- 2022 Spotlight: Flow-based Recurrent Belief State Learning for POMDPs
  Xiaoyu Chen · Yao Mu · Ping Luo · Shengbo Li · Jianyu Chen
- 2022 Poster: CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer
  Yao Mu · Shoufa Chen · Mingyu Ding · Jianyu Chen · Runjian Chen · Ping Luo
- 2022 Spotlight: CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer
  Yao Mu · Shoufa Chen · Mingyu Ding · Jianyu Chen · Runjian Chen · Ping Luo
- 2017 Poster: Learning Deep Architectures via Generalized Whitened Neural Networks
  Ping Luo
- 2017 Talk: Learning Deep Architectures via Generalized Whitened Neural Networks
  Ping Luo