Poster
Distributional Reinforcement Learning for Efficient Exploration
Borislav Mavrin · Hengshuai Yao · Linglong Kong · Kaiwen Wu · Yaoliang Yu

Tue Jun 11th 06:30 -- 09:00 PM @ Pacific Ballroom #102

In distributional reinforcement learning (RL), the estimated distribution of value functions models both the parametric and intrinsic uncertainties. We propose a novel and efficient exploration method for deep RL that has two components. The first is a decaying schedule to suppress the intrinsic uncertainty. The second is an exploration bonus calculated from the upper quantiles of the learned distribution. On Atari 2600 games, our method achieves a 483% average gain in cumulative rewards over QR-DQN across 49 games. We also compared our algorithm with QR-DQN in a challenging 3D driving simulator (CARLA). Results show that our algorithm reaches near-optimal safety rewards twice as fast as QR-DQN.
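To make the two components concrete, below is a minimal action-selection sketch in NumPy. It assumes the exploration bonus is the variance of the quantiles above the median (emphasizing optimistic spread) and that the decaying schedule follows a sqrt(log t / t) shape; the function name, the bonus scale c, and these exact choices are illustrative assumptions, not the paper's verbatim algorithm.

    import numpy as np

    def select_action(quantiles: np.ndarray, t: int, c: float = 50.0) -> int:
        """Pick an action from per-action quantile estimates.

        quantiles: array of shape (num_actions, num_quantiles), the learned
                   return distribution for each action (e.g. a QR-DQN head).
        t:         current timestep, used to decay the exploration bonus.
        c:         bonus scale (hypothetical default).
        """
        # Mean over quantiles approximates the expected return Q(s, a).
        q_values = quantiles.mean(axis=1)

        # Bonus from the upper quantiles only: spread of the quantiles
        # above the median, so pessimistic spread does not inflate it.
        num_quantiles = quantiles.shape[1]
        median = np.median(quantiles, axis=1, keepdims=True)
        upper = quantiles[:, num_quantiles // 2:]
        bonus = ((upper - median) ** 2).mean(axis=1)

        # Decaying schedule: suppresses the bonus over time, since the
        # estimated spread is increasingly dominated by intrinsic
        # (irreducible) uncertainty rather than parametric uncertainty.
        c_t = c * np.sqrt(np.log(max(t, 2)) / max(t, 2))

        return int(np.argmax(q_values + c_t * np.sqrt(bonus)))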

Author Information

Borislav Mavrin (University of Alberta)
Hengshuai Yao (Huawei Technologies)
Linglong Kong (University of Alberta)
Kaiwen Wu (University of Waterloo)
Yaoliang Yu (University of Waterloo)
