Oral
Calibrated Model-Based Deep Reinforcement Learning
Ali Malik · Volodymyr Kuleshov · Jiaming Song · Danny Nemer · Harlan Seymour · Stefano Ermon

Thu Jun 13 09:40 AM -- 10:00 AM (PDT) @ Hall B

Accurate estimates of predictive uncertainty are important for building effective model-based reinforcement learning agents. However, predictive uncertainties, especially ones derived from modern neural networks, are often inaccurate and impose a bottleneck on performance. Here, we argue that ideal model uncertainties should be calibrated, i.e. their probabilities should match empirical frequencies of predicted events. We describe a simple way to augment any model-based reinforcement learning algorithm with calibrated uncertainties and show that doing so consistently improves the accuracy of planning and helps agents balance exploration and exploitation. On the HalfCheetah MuJoCo task, our system achieves state-of-the-art performance using 50% fewer samples than the current leading approach. Our findings suggest that calibration can improve the performance and sample complexity of model-based reinforcement learning with minimal computational and implementation overhead.
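To make the notion of calibration in the abstract concrete, below is a minimal sketch of recalibrating a probabilistic dynamics model so that predicted probabilities match empirical frequencies on held-out data. It follows the general recipe of isotonic-regression recalibration for regression models; the Gaussian predictive model, function names, and variables are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

def fit_recalibrator(pred_mean, pred_std, y_true):
    # Fit a map R so that R(model CDF) is calibrated on held-out transitions.
    # Predicted CDF value of each observed outcome under the (assumed Gaussian) model.
    p_pred = norm.cdf(y_true, loc=pred_mean, scale=pred_std)
    # Empirical frequency: fraction of held-out points at or below each predicted level.
    p_emp = np.array([(p_pred <= p).mean() for p in p_pred])
    recalibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    recalibrator.fit(p_pred, p_emp)
    return recalibrator

def calibrated_cdf(recalibrator, pred_mean, pred_std, y):
    # Calibrated probability that the predicted quantity is <= y.
    return recalibrator.predict(norm.cdf(y, loc=pred_mean, scale=pred_std))

In this sketch, a planner that queries the dynamics model for confidence intervals would pass the model's raw CDF values through the fitted recalibrator, so that a nominal 90% interval covers the true outcome roughly 90% of the time on held-out data.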

Author Information

Ali Malik (Stanford University)
Volodymyr Kuleshov (Stanford University / Afresh)
Jiaming Song (Stanford)
Danny Nemer (Afresh Technologies)
Harlan Seymour (Afresh Technologies)
Stefano Ermon (Stanford University)
