Poster
Towards Adaptive Model-Based Reinforcement Learning
Yi Wan · Harm van Seijen · Ida Momennejad · Sarath Chandar · Janarthanan Rajendran · Ali Rahimi-Kalahroudi

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #918

In recent years, a growing number of deep model-based RL methods have been introduced. The interest in deep model-based RL is not surprising, given its many potential benefits, such as higher sample efficiency and the potential for fast adaptation to changes in the environment. However, we demonstrate, using an improved version of the recently introduced Local Change Adaptation (LoCA) setup, that the well-known model-based methods PlaNet and Dreamer perform poorly in their ability to adapt to local environmental changes. Combined with prior work that made a similar observation about MuZero, a trend emerges suggesting that current deep model-based methods have serious limitations. We dive deeper into the causes of this poor performance by identifying elements that hurt adaptation behavior and linking these to underlying techniques frequently used in deep model-based RL. We empirically validate these insights in the case of linear function approximation by demonstrating that a modified version of linear Dyna achieves effective adaptation to local changes. Furthermore, we provide detailed insights into the challenges of building an adaptive non-linear model-based method, by experimenting with a non-linear version of Dyna.
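As background for the linear Dyna result mentioned in the abstract, the sketch below outlines the standard linear Dyna loop (Sutton et al., 2008): learn a linear model (F, b) from real transitions, then run TD updates on transitions simulated by that model. This is a minimal illustration of the base algorithm, not the paper's modified variant; the class name, hyperparameters, and the sample_feature planning distribution are illustrative assumptions.

import numpy as np

class LinearDyna:
    def __init__(self, n_features, alpha=0.1, gamma=0.97):
        self.F = np.zeros((n_features, n_features))  # linear transition model: x' ~ F @ x
        self.b = np.zeros(n_features)                # linear reward model: r ~ b @ x
        self.theta = np.zeros(n_features)            # value weights: v(x) = theta @ x
        self.alpha, self.gamma = alpha, gamma

    def learn_model(self, x, r, x_next):
        # Update the linear model from one real transition (x, r, x_next).
        err = x_next - self.F @ x
        self.F += self.alpha * np.outer(err, x)
        self.b += self.alpha * (r - self.b @ x) * x

    def plan(self, sample_feature, n_steps=10):
        # TD(0) updates on transitions simulated by the learned model.
        # sample_feature is a hypothetical callable returning a feature vector,
        # e.g. features of a state sampled from past experience.
        for _ in range(n_steps):
            x = sample_feature()
            r_hat = self.b @ x                       # model-predicted reward
            x_hat = self.F @ x                       # model-predicted next features
            td_err = r_hat + self.gamma * self.theta @ x_hat - self.theta @ x
            self.theta += self.alpha * td_err * x

Because planning queries the model rather than replaying stored rewards, value estimates can propagate a local change (e.g., a moved reward) to distant states once the model is updated, which is the adaptation property the LoCA setup tests for.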

Author Information

Yi Wan (University of Alberta)
Harm van Seijen (Microsoft Research)
Ida Momennejad (Microsoft Research)
Sarath Chandar (Polytechnique Montreal)
Janarthanan Rajendran (Mila, University of Montreal)
Ali Rahimi-Kalahroudi (MILA - Université de Montréal)
