

Poster

Averaging $n$-step Returns Reduces Variance in Reinforcement Learning

Brett Daley · Martha White · Marlos C. Machado

Hall C 4-9 #1107
Thu 25 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract: Multistep returns, such as $n$-step returns and $\lambda$-returns, are commonly used to improve the sample efficiency of reinforcement learning (RL) methods. The variance of multistep returns becomes the limiting factor in their length; looking too far into the future increases variance and reverses the benefits of multistep learning. In our work, we demonstrate the ability of compound returns (weighted averages of $n$-step returns) to reduce variance. We prove for the first time that any compound return with the same contraction modulus as a given $n$-step return has strictly lower variance. We additionally prove that this variance-reduction property improves the finite-sample complexity of temporal-difference learning under linear function approximation. Because general compound returns can be expensive to implement, we introduce two-bootstrap returns, which reduce variance while remaining efficient, even when using minibatched experience replay. We conduct experiments showing that compound returns often increase the sample efficiency of $n$-step deep RL agents like DQN and PPO.
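As a reading aid, here is a minimal Python sketch of the core idea, assuming an episodic setting with precomputed rewards and value estimates; the function names, toy data, and particular weights are illustrative assumptions, not the authors' implementation. A compound return is a weighted average of $n$-step returns, and a two-bootstrap return is the special case that averages only two of them.

import numpy as np

def n_step_return(rewards, values, t, n, gamma):
    # Discounted sum of the next n rewards, then bootstrap from V(s_{t+n}).
    G = sum((gamma ** k) * rewards[t + k] for k in range(n))
    return G + (gamma ** n) * values[t + n]

def compound_return(rewards, values, t, weights, gamma):
    # Weighted average of n-step returns; weights maps n -> w_n with sum(w_n) = 1.
    return sum(w * n_step_return(rewards, values, t, n, gamma)
               for n, w in weights.items())

# Hypothetical two-bootstrap return: an average of just two n-step returns
# (here the 1-step and 5-step returns with equal weight, chosen arbitrarily).
rewards = np.ones(10)    # toy reward sequence r_t, ..., r_{t+9}
values = np.zeros(11)    # toy value estimates V(s_t), ..., V(s_{t+10})
G = compound_return(rewards, values, t=0, weights={1: 0.5, 5: 0.5}, gamma=0.99)
print(G)

Averaging only two $n$-step returns keeps the per-sample cost close to that of a single $n$-step return, which is consistent with the abstract's claim that two-bootstrap returns stay efficient under minibatched experience replay.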
