

Poster

Variance Control for Distributional Reinforcement Learning

Qi Kuang · Zhoufan Zhu · Liwen Zhang · Fan Zhou

Exhibit Hall 1 #423

Abstract:

Although distributional reinforcement learning (DRL) has been widely examined in the past few years, very few studies investigate the validity of the Q-function estimator obtained in the distributional setting. To fully understand how approximation errors in the Q-function affect the whole training process, we perform an error analysis and theoretically show how to reduce both the bias and the variance of the error terms. With this new understanding, we construct a new estimator, the Quantiled Expansion Mean (QEM), and introduce a new DRL algorithm (QEMRL) from a statistical perspective. We extensively evaluate QEMRL on a variety of Atari and MuJoCo benchmark tasks and demonstrate that it achieves significant improvements over baseline algorithms in terms of sample efficiency and convergence performance.
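For context, the sketch below (not the paper's QEM estimator) shows the standard way quantile-based DRL methods such as QR-DQN recover a Q-value: the critic outputs a set of quantile estimates of the return distribution, and Q(s, a) is taken as their equally weighted mean. The approximation error of this mean-of-quantiles estimate is the quantity whose bias and variance the abstract proposes to control; all shapes and names here are illustrative assumptions.

```python
# Minimal sketch: recovering Q-values from quantile estimates in quantile-based DRL.
# This is the baseline mean-of-quantiles estimator, NOT the QEM estimator from the paper.
import numpy as np

def q_from_quantiles(quantiles: np.ndarray) -> np.ndarray:
    """Estimate Q(s, a) as the mean of per-(state, action) quantile estimates.

    quantiles: array of shape (batch, actions, n_quantiles), each entry an
    estimate of a fixed quantile of the return distribution Z(s, a).
    """
    return quantiles.mean(axis=-1)  # shape (batch, actions)

# Hypothetical usage with random, sorted quantile estimates for illustration only.
rng = np.random.default_rng(0)
theta = np.sort(rng.normal(size=(2, 4, 32)), axis=-1)  # fake quantile outputs
q_values = q_from_quantiles(theta)
greedy_actions = q_values.argmax(axis=-1)
print(q_values.shape, greedy_actions)
```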
