Poster
Fri Jul 13 09:15 AM -- 12:00 PM (PDT) @ Hall B #14
The Uncertainty Bellman Equation and Exploration
Brendan O'Donoghue · Ian Osband · Remi Munos · Vlad Mnih
We consider the exploration/exploitation problem in reinforcement learning. For exploitation, it is well known that the Bellman equation connects the value at any time-step to the expected value at subsequent time-steps. In this paper we consider a similar uncertainty Bellman equation (UBE), which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps, thereby extending the potential exploratory benefit of a policy beyond individual time-steps. We prove that the unique fixed point of the UBE yields an upper bound on the variance of the posterior distribution of the Q-values induced by any policy. This bound can be much tighter than traditional count-based bonuses that compound standard deviation rather than variance. Importantly, and unlike several existing approaches to optimism, this method scales naturally to large systems with complex generalization. Substituting our UBE-exploration strategy for $\epsilon$-greedy improves DQN performance on 51 out of 57 games in the Atari suite.
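As a rough illustration of the idea (not the authors' implementation), the sketch below propagates a local uncertainty signal nu(s, a) through the same backup structure as the Bellman equation, but with a squared discount, and then uses the resulting u(s, a) for an optimism-style action choice in place of $\epsilon$-greedy. It assumes a small tabular MDP with known transition probabilities; all names and the toy counts-based local uncertainty are illustrative assumptions.

```python
import numpy as np

def bellman_fixed_point(P, r, pi, gamma, n_iters=500):
    """Standard Bellman backup: Q(s,a) = r(s,a) + gamma * E_{s',a'}[Q(s',a')]."""
    nS, nA = r.shape
    Q = np.zeros((nS, nA))
    for _ in range(n_iters):
        V = (pi * Q).sum(axis=1)                        # V(s') = E_{a'~pi}[Q(s',a')]
        Q = r + gamma * np.einsum('san,n->sa', P, V)
    return Q

def ube_fixed_point(P, nu, pi, gamma, n_iters=500):
    """UBE-style backup: u(s,a) = nu(s,a) + gamma^2 * E_{s',a'}[u(s',a')]."""
    nS, nA = nu.shape
    u = np.zeros((nS, nA))
    for _ in range(n_iters):
        w = (pi * u).sum(axis=1)                        # expected next-step uncertainty
        u = nu + (gamma ** 2) * np.einsum('san,n->sa', P, w)
    return u

# Toy usage: 3 states, 2 actions, uniform policy, counts-based local uncertainty
# (all quantities below are made up for illustration).
rng = np.random.default_rng(0)
nS, nA, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))           # P[s, a, s'] transition probs
r = rng.uniform(size=(nS, nA))                          # rewards r(s, a)
counts = rng.integers(1, 20, size=(nS, nA))             # visit counts
nu = 1.0 / counts                                       # local uncertainty shrinks with visits
pi = np.full((nS, nA), 1.0 / nA)                        # uniform evaluation policy

Q = bellman_fixed_point(P, r, pi, gamma)
u = ube_fixed_point(P, nu, pi, gamma)

# Exploratory action selection: Q plus a sampled multiple of the uncertainty's
# square root, used in place of epsilon-greedy.
zeta = rng.standard_normal()
action = np.argmax(Q[0] + zeta * np.sqrt(u[0]))
```

The point of the sketch is the structural parallel: the uncertainty backup reuses the policy's transition structure, so the fixed point u upper-bounds the posterior variance of the Q-values and its square root gives a per-state-action exploration bonus.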