

Poster in Workshop: Reinforcement Learning for Real Life

Revisiting Design Choices in Offline Model-Based Reinforcement Learning

Cong Lu · Philip Ball · Jack Parker-Holder · Michael A Osborne · Stephen Roberts


Abstract:

Offline reinforcement learning enables agents to make use of large pre-collected datasets of environment transitions and learn control policies without the need for potentially expensive or unsafe online data collection. Recently, significant progress has been made in offline RL, with methods that leverage a learned dynamics model becoming the dominant approach. This typically involves constructing a probabilistic dynamics model and using it to penalize rewards in regions of high uncertainty, solving for a pessimistic MDP that lower bounds the true MDP. Recent work, however, exhibits a breakdown between theory and practice: the pessimistic return ought to be bounded using the total variation distance between the learned model and the true dynamics, but is instead implemented through a penalty based on estimated model uncertainty. This has spawned a variety of uncertainty heuristics, with little to no comparison between the differing approaches. In this paper, we show that these heuristics have significant interactions with other design choices, such as the number of models in the ensemble, the model rollout length, and the penalty weight. Furthermore, we compare these uncertainty heuristics under a new evaluation protocol that, for the first time, captures the specific covariate shift induced by model-based RL. This allows us to accurately assess the calibration of the different proposed penalties. Finally, with these insights, we show that selecting these key hyperparameters using Bayesian Optimization produces drastically stronger performance than existing hand-tuned methods.
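
The abstract describes penalizing rewards in regions of high model uncertainty, where the uncertainty is estimated from an ensemble of learned dynamics models. The sketch below illustrates one such heuristic (ensemble disagreement about the predicted next state); it is not the authors' implementation, and names such as `EnsembleMember`, `penalized_reward`, and `penalty_weight` are illustrative assumptions.

```python
import numpy as np


class EnsembleMember:
    """Stand-in for one probabilistic dynamics model that predicts a
    Gaussian over the next state (mean and standard deviation)."""

    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)

    def predict(self, state, action):
        # A real member would be a learned neural network; this fake one
        # just perturbs the state so the ensemble members disagree slightly.
        mean = state + 0.1 * action + 0.01 * self.rng.standard_normal(state.shape)
        std = 0.05 + 0.01 * self.rng.random(state.shape)
        return mean, std


def penalized_reward(ensemble, state, action, reward, penalty_weight=1.0):
    """Subtract an ensemble-disagreement penalty from the environment reward.

    The heuristic used here is the maximum L2 distance between any member's
    predicted mean next state and the ensemble-average prediction; other
    heuristics (e.g. the largest predicted standard deviation) are possible.
    """
    means = np.stack([m.predict(state, action)[0] for m in ensemble])
    disagreement = np.max(np.linalg.norm(means - means.mean(axis=0), axis=-1))
    return reward - penalty_weight * disagreement


# Toy usage with a 7-member ensemble and matching state/action dimensions.
ensemble = [EnsembleMember(seed=i) for i in range(7)]
state, action = np.zeros(3), np.ones(3)
print(penalized_reward(ensemble, state, action, reward=1.0, penalty_weight=5.0))
```

The choice of heuristic, together with the ensemble size, rollout length, and penalty weight, corresponds to the interacting design choices the paper studies; swapping the disagreement term or re-tuning `penalty_weight` changes how conservative the resulting pessimistic MDP is.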
