Poster
Beyond Variance Reduction: Understanding the True Impact of Baselines on Policy Optimization
Wesley Chung · Valentin Thomas · Marlos C. Machado · Nicolas Le Roux

Tue Jul 20 09:00 PM -- 11:00 PM (PDT)

Bandit and reinforcement learning (RL) problems can often be framed as optimization problems where the goal is to maximize average performance while having access only to stochastic estimates of the true gradient. Traditionally, stochastic optimization theory predicts that learning dynamics are governed by the curvature of the loss function and the noise of the gradient estimates. In this paper we demonstrate that the standard view is too limited for bandit and RL problems. To allow our analysis to be interpreted in light of multi-step MDPs, we focus on techniques derived from stochastic optimization principles (e.g., natural policy gradient and EXP3) and we show that some standard assumptions from optimization theory are violated in these problems. We present theoretical results showing that, at least for bandit problems, curvature and noise are not sufficient to explain the learning dynamics and that seemingly innocuous choices like the baseline can determine whether an algorithm converges. These theoretical findings match our empirical evaluation, which we extend to multi-state MDPs.
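
To make the object of study concrete, the following is a minimal sketch (not the authors' code) of a REINFORCE-style policy-gradient update with a baseline on a three-armed bandit. The arm rewards, step size, and baseline value are illustrative assumptions; the scalar `baseline` is the "seemingly innocuous" choice whose effect on learning dynamics the paper analyzes.

import numpy as np

rng = np.random.default_rng(0)

true_means = np.array([1.0, 0.7, 0.3])  # assumed arm rewards (illustrative)
theta = np.zeros(3)                      # softmax policy parameters
step_size = 0.1                          # assumed step size
baseline = 0.5                           # the baseline b under study

def softmax(x):
    # numerically stable softmax over policy parameters
    z = np.exp(x - x.max())
    return z / z.sum()

for t in range(5000):
    pi = softmax(theta)
    a = rng.choice(3, p=pi)                      # sample an arm
    r = true_means[a] + rng.normal(0.0, 0.1)     # stochastic reward
    # Gradient of log pi(a) for a softmax policy: e_a - pi.
    grad_log_pi = -pi
    grad_log_pi[a] += 1.0
    # REINFORCE update with baseline: grad log pi(a) * (r - b).
    theta += step_size * (r - baseline) * grad_log_pi

print("final policy:", softmax(theta))

Any constant baseline leaves the gradient estimate unbiased and, in the classical view, only changes its variance; the paper's point is that the choice of b can also determine whether such stochastic updates converge at all.
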

Author Information

Wes Chung (Mila / McGill University)

I am a second-year PhD student co-supervised by Prof. David Meger and Prof. Doina Precup. My research interests lie in reinforcement learning and optimization.

Valentin Thomas (Mila)
Marlos C. Machado (DeepMind, University of Alberta)
Nicolas Le Roux (Google)
