The Partially Observable Markov Decision Process (POMDP) provides a principled and generic framework for modeling real-world sequential decision-making processes, yet it remains largely unsolved, especially in high-dimensional continuous spaces with unknown models. The main challenge lies in how to accurately obtain the belief state, i.e., the probability distribution over the unobservable environment states given the history of information. Accurately calculating this belief state is a precondition for obtaining an optimal policy for a POMDP. Recent advances in deep learning show great potential for learning good belief states. However, existing methods can only learn approximate distributions with limited flexibility. In this paper, we introduce the \textbf{F}l\textbf{O}w-based \textbf{R}ecurrent \textbf{BE}lief \textbf{S}tate model (FORBES), which incorporates normalizing flows into variational inference to learn general continuous belief states for POMDPs. Furthermore, we show that the learned belief states can be plugged into downstream RL algorithms to improve performance. In experiments, we show that our method successfully captures complex belief states that enable multi-modal predictions as well as high-quality reconstructions, and results on challenging visual-motor control tasks show that it achieves superior performance and sample efficiency.
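The core idea, incorporating normalizing flows into variational inference so the belief over the latent state is not restricted to a Gaussian, can be sketched as follows. The snippet below is a minimal illustration, not the authors' implementation: a GRU summarizes the action-observation history into a deterministic context, a conditional Gaussian serves as the base belief, and a stack of planar flows (one simple flow family; the paper's flow choice may differ) warps it into a more flexible distribution. The class names `FlowBelief` and `PlanarFlow`, all hyperparameters, and the module layout are illustrative assumptions.

```python
# Minimal sketch of a flow-based recurrent belief state (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlanarFlow(nn.Module):
    """z' = z + u * tanh(w^T z + b); log|det J| has a closed form (Rezende & Mohamed, 2015)."""
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        # Constrain u so the transformation stays invertible.
        wu = (self.w * self.u).sum()
        u_hat = self.u + (F.softplus(wu) - 1 - wu) * self.w / (self.w @ self.w)
        lin = z @ self.w + self.b                           # (batch,)
        z_new = z + u_hat * torch.tanh(lin).unsqueeze(-1)   # (batch, dim)
        psi = (1 - torch.tanh(lin) ** 2).unsqueeze(-1) * self.w
        log_det = torch.log((1 + psi @ u_hat).abs() + 1e-8)
        return z_new, log_det

class FlowBelief(nn.Module):
    """GRU history encoder + conditional Gaussian base + planar-flow stack."""
    def __init__(self, obs_dim, act_dim, hid_dim=128, z_dim=8, n_flows=4):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hid_dim, batch_first=True)
        self.base = nn.Linear(hid_dim, 2 * z_dim)           # mean and log-std
        self.flows = nn.ModuleList([PlanarFlow(z_dim) for _ in range(n_flows)])

    def forward(self, obs, act):
        # obs: (B, T, obs_dim), act: (B, T, act_dim)
        h, _ = self.rnn(torch.cat([obs, act], dim=-1))
        mean, log_std = self.base(h[:, -1]).chunk(2, dim=-1)
        base = torch.distributions.Normal(mean, log_std.exp())
        z = base.rsample()                                  # reparameterized sample
        log_q = base.log_prob(z).sum(-1)                    # density under the base
        for flow in self.flows:
            z, log_det = flow(z)
            log_q = log_q - log_det                         # change-of-variables correction
        return z, log_q                                     # belief sample and its exact log-density
```

Because `log_q` is the exact log-density of the flexible belief via the change-of-variables formula, both outputs can enter a variational (ELBO-style) objective against a learned transition prior, and the belief sample can condition a downstream RL policy, in the spirit of the plug-in use described in the abstract.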
Author Information
Xiaoyu Chen (Tsinghua University)
Yao Mu (The University of Hong Kong)
I am currently a Ph.D. candidate in Computer Science at the University of Hong Kong, supervised by Prof. Ping Luo. Previously, I obtained an M.Phil. degree under the supervision of Prof. Bo Cheng and Prof. Shengbo Li at the Intelligent Driving Laboratory, Tsinghua University, in June 2021. Research interests: Reinforcement Learning, Representation Learning, Autonomous Driving, and Computer Vision.
Ping Luo (The University of Hong Kong)
Shengbo Li (Tsinghua University)
Jianyu Chen (Tsinghua University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Flow-based Recurrent Belief State Learning for POMDPs
  Thu. Jul 21st, 08:30 -- 08:35 PM, Room 307
More from the Same Authors
- 2023 Poster: $\pi$-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation
  CHENGYUE WU · Teng Wang · Yixiao Ge · Zeyu Lu · Ruisong Zhou · Ying Shan · Ping Luo
- 2023 Poster: MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL
  Fei Ni · Jianye Hao · Yao Mu · Yifu Yuan · Yan Zheng · Bin Wang · Zhixuan Liang
- 2023 Poster: AdaptDiffuser: Diffusion Models as Adaptive Self-evolving Planners
  Zhixuan Liang · Yao Mu · Mingyu Ding · Fei Ni · Masayoshi Tomizuka · Ping Luo
- 2023 Poster: ChiPFormer: Transferable Chip Placement via Offline Decision Transformer
  Yao LAI · Jinxin Liu · Zhentao Tang · Bin Wang · Jianye Hao · Ping Luo
- 2023 Oral: AdaptDiffuser: Diffusion Models as Adaptive Self-evolving Planners
  Zhixuan Liang · Yao Mu · Mingyu Ding · Fei Ni · Masayoshi Tomizuka · Ping Luo
- 2022 Poster: Reachability Constrained Reinforcement Learning
  Dongjie Yu · Haitong Ma · Shengbo Li · Jianyu Chen
- 2022 Spotlight: Reachability Constrained Reinforcement Learning
  Dongjie Yu · Haitong Ma · Shengbo Li · Jianyu Chen
- 2022 Poster: CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer
  Yao Mu · Shoufa Chen · Mingyu Ding · Jianyu Chen · Runjian Chen · Ping Luo
- 2022 Spotlight: CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer
  Yao Mu · Shoufa Chen · Mingyu Ding · Jianyu Chen · Runjian Chen · Ping Luo
- 2017 Poster: Learning Deep Architectures via Generalized Whitened Neural Networks
  Ping Luo
- 2017 Talk: Learning Deep Architectures via Generalized Whitened Neural Networks
  Ping Luo