
Model-based Meta Reinforcement Learning using Graph Structured Surrogate Models and Amortized Policy Search
Qi Wang · Herke van Hoof

Wed Jul 20 07:45 AM -- 07:50 AM (PDT) @ Room 307

Reinforcement learning is a promising paradigm for sequential decision-making, but low data efficiency and weak generalization across tasks are bottlenecks in real-world applications. Model-based meta-reinforcement learning addresses these issues by learning dynamics models and leveraging knowledge from prior experience. In this paper, we take a closer look at this framework and propose a new posterior-sampling-based approach that consists of a new model for identifying task dynamics together with an amortized policy optimization step. We show that our model, called a graph structured surrogate model (GSSM), achieves competitive dynamics-prediction performance with lower model complexity. Moreover, our policy search obtains high returns and executes quickly by avoiding policy-gradient updates at test time.
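The two ingredients named in the abstract, inferring a task representation from collected dynamics data and then acting through an amortized policy with no test-time gradient steps, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual GSSM architecture: the encoder, projection, and policy weights here are hypothetical stand-ins for learned components.

```python
import numpy as np

# Hypothetical shapes for a toy task; all names and dimensions
# below are illustrative assumptions, not from the paper.
STATE_DIM, ACTION_DIM, LATENT_DIM = 4, 2, 3
rng = np.random.default_rng(0)

# A fixed random projection stands in for a learned task encoder.
PROJ = rng.standard_normal((LATENT_DIM, 2 * STATE_DIM + ACTION_DIM))

def encode_task(context):
    """Mean-pool (s, a, s') context transitions into a task latent."""
    feats = np.stack([np.concatenate([s, a, s2]) for s, a, s2 in context])
    return np.tanh(PROJ @ feats.mean(axis=0))

def amortized_policy(state, task_latent, W):
    """Policy conditioned on the inferred task latent: acting on a new
    task needs only a forward pass, not test-time gradient updates."""
    x = np.concatenate([state, task_latent])
    return np.tanh(W @ x)  # bounded actions in (-1, 1)

# Collect a small context from a dummy task, infer the latent, and act:
context = [(rng.standard_normal(STATE_DIM),
            rng.standard_normal(ACTION_DIM),
            rng.standard_normal(STATE_DIM)) for _ in range(8)]
z = encode_task(context)
W = rng.standard_normal((ACTION_DIM, STATE_DIM + LATENT_DIM))
action = amortized_policy(rng.standard_normal(STATE_DIM), z, W)
```

The point of the amortization is visible in the last three lines: once the latent `z` summarizes the task, selecting an action is a single forward pass, which is what allows fast execution at test time.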

Author Information

Qi Wang (AMLab, University of Amsterdam)

I am a final-year Ph.D. student at AMLab. My supervisors (promotores) are Max Welling and Herke van Hoof. My research interests lie in Bayesian deep learning and intelligent decision-making.

Herke van Hoof (University of Amsterdam)

