Poster
Lookahead-Bounded Q-learning
Ibrahim El Shar · Daniel Jiang

Tue Jul 14 09:00 AM -- 09:45 AM & Tue Jul 14 08:00 PM -- 08:45 PM (PDT)

We introduce the lookahead-bounded Q-learning (LBQL) algorithm, a new, provably convergent variant of Q-learning that seeks to improve the performance of standard Q-learning in stochastic environments through the use of “lookahead” upper and lower bounds. To do this, LBQL employs previously collected experience and each iteration’s state-action values as dual feasible penalties to construct a sequence of sampled information relaxation problems. The solutions to these problems provide estimated upper and lower bounds on the optimal value, which we track via stochastic approximation. These quantities are then used to constrain the iterates to stay within the bounds at every iteration. Numerical experiments on benchmark problems show that LBQL exhibits faster convergence and greater robustness to hyperparameters than standard Q-learning and several related techniques. Our approach is particularly appealing in problems that require expensive simulations or real-world interactions.
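To make the bounding mechanic concrete, here is a minimal Python sketch (not the authors' reference implementation) of a single LBQL-style iteration: a standard Q-learning update, followed by stochastic-approximation tracking of upper and lower bound estimates, followed by projection of the iterate into the estimated bounds. The bound samples u_sample and l_sample are hypothetical inputs; in LBQL they would come from solving the sampled information relaxation problems with dual feasible penalties described in the abstract, which are omitted here.

```python
import numpy as np

n_states, n_actions = 10, 4
gamma = 0.95   # discount factor
alpha = 0.1    # Q-learning step size
beta = 0.05    # step size for the bound estimates

Q = np.zeros((n_states, n_actions))
# Initial bound estimates, assuming rewards in [-1, 1] so that
# |Q*| <= 1 / (1 - gamma).
U = np.full((n_states, n_actions), 1.0 / (1.0 - gamma))
L = np.full((n_states, n_actions), -1.0 / (1.0 - gamma))

def lbql_update(s, a, r, s_next, u_sample, l_sample):
    """One LBQL-style iteration: Q-learning step, bound tracking, projection."""
    # Standard Q-learning temporal-difference update.
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

    # Track the sampled upper/lower bounds via stochastic approximation.
    U[s, a] += beta * (u_sample - U[s, a])
    L[s, a] += beta * (l_sample - L[s, a])

    # Constrain the iterate to stay within the estimated bounds.
    Q[s, a] = np.clip(Q[s, a], L[s, a], U[s, a])
```

The final clipping step is what distinguishes this update from plain Q-learning: by keeping each iterate inside a shrinking interval around the optimal value, it limits how far noisy temporal-difference updates can move the estimates, which is consistent with the faster, more robust convergence reported in the abstract.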

Author Information

Ibrahim El Shar (University of Pittsburgh)
Daniel Jiang (University of Pittsburgh)
