Poster
Bidirectional Model-based Policy Optimization
Hang Lai · Jian Shen · Weinan Zhang · Yong Yu

Wed Jul 15 04:00 PM -- 04:45 PM & Thu Jul 16 03:00 AM -- 03:45 AM (PDT) @ Virtual

Model-based reinforcement learning approaches leverage a forward dynamics model to support planning and decision making, but they may fail catastrophically if the model is inaccurate. Although several existing methods are dedicated to combating model error, the potential of a single forward model remains limited. In this paper, we propose to additionally construct a backward dynamics model to reduce the reliance on the accuracy of forward model predictions. We develop a novel method, called Bidirectional Model-based Policy Optimization (BMPO), which utilizes both the forward and backward models to generate short branched rollouts for policy optimization. Furthermore, we theoretically derive a tighter bound on the return discrepancy, which shows the superiority of BMPO over methods that use only a forward model. Extensive experiments demonstrate that BMPO outperforms state-of-the-art model-based methods in terms of sample efficiency and asymptotic performance.
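
As a rough illustration of the core idea in the abstract (not the authors' implementation), the sketch below shows how a real state might branch into two short model-generated rollouts, one forward and one backward in time, so that model error accumulates over a shorter horizon in each direction than a single forward rollout of the same total length would incur. All names here (forward_model, backward_model, reverse_policy, horizons k_f and k_b) are hypothetical placeholders.

```python
def bidirectional_rollout(real_state, policy, reverse_policy,
                          forward_model, backward_model, k_f=5, k_b=5):
    """Generate a short branched rollout around a real state using both a
    forward and a backward dynamics model (illustrative sketch only)."""
    transitions = []

    # Forward branch: simulate k_f steps ahead starting from the real state.
    s = real_state
    for _ in range(k_f):
        a = policy(s)                       # action from the current policy
        s_next, r = forward_model(s, a)     # predicted next state and reward
        transitions.append((s, a, r, s_next))
        s = s_next

    # Backward branch: simulate k_b steps into the past from the same state.
    s_next = real_state
    for _ in range(k_b):
        a = reverse_policy(s_next)              # action assumed to lead into s_next
        s_prev, r = backward_model(s_next, a)   # predicted predecessor state and reward
        transitions.append((s_prev, a, r, s_next))
        s_next = s_prev

    # Both branches feed a model-generated buffer used for policy optimization.
    return transitions
```

Under these assumptions, each branch only has to stay accurate over k_f or k_b steps rather than k_f + k_b steps, which is the intuition behind the tighter return-discrepancy bound claimed in the abstract.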

Author Information

Hang Lai (Shanghai Jiao Tong University)
Jian Shen (Shanghai Jiao Tong University)
Weinan Zhang (Shanghai Jiao Tong University)
Yong Yu (Shanghai Jiao Tong University)
