Poster
Vector Quantized Models for Planning
Sherjil Ozair · Yazhe Li · Ali Razavi · Ioannis Antonoglou · Aäron van den Oord · Oriol Vinyals

Tue Jul 20 09:00 AM -- 11:00 AM (PDT)

Recent developments in the field of model-based RL have proven successful in a range of environments, especially ones where planning is essential. However, such successes have been limited to deterministic, fully observed environments. We present a new approach that handles stochastic and partially observable environments. Our key insight is to use discrete autoencoders to capture the multiple possible effects of an action in a stochastic environment. We use a stochastic variant of Monte Carlo tree search to plan over both the agent's actions and the discrete latent variables representing the environment's response. Our approach significantly outperforms an offline version of MuZero on a stochastic interpretation of chess where the opponent is considered part of the environment. We also show that our approach scales to DeepMind Lab, a first-person 3D environment with large visual observations and partial observability.
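A rough way to picture the planning scheme described above: the agent maximizes over its own actions, while the environment's stochastic response is marginalized over a small set of discrete latent codes learned by the vector-quantized model. The sketch below is a hypothetical illustration of that idea, not the paper's implementation; it replaces the stochastic MCTS with a plain expectimax recursion, and latent_prior, transition, and value are placeholder stand-ins for the learned VQ and dynamics networks.

# Minimal sketch (assumptions throughout, not the authors' code): a discrete
# codebook of size K_LATENTS stands in for the VQ bottleneck, and planning
# alternates between maximizing over actions and averaging over latent codes.
import math

K_LATENTS = 8             # size of the discrete (VQ) codebook, assumed
ACTIONS = list(range(4))  # toy action space, assumed

def latent_prior(state, action):
    # Placeholder: uniform distribution over the K discrete latent codes.
    return [1.0 / K_LATENTS] * K_LATENTS

def transition(state, action, latent):
    # Placeholder deterministic dynamics given (state, action, latent code).
    return hash((state, action, latent)) % 10_000

def value(state):
    # Placeholder value estimate for a leaf state.
    return (state % 100) / 100.0

def plan(state, depth):
    """Expectimax over agent actions (max) and environment latents (mean)."""
    if depth == 0:
        return value(state)
    best = -math.inf
    for a in ACTIONS:
        # Expectation over the environment's possible responses, which the
        # VQ model compresses into a small set of discrete codes.
        expected = sum(
            p * plan(transition(state, a, z), depth - 1)
            for z, p in enumerate(latent_prior(state, a))
        )
        best = max(best, expected)
    return best

if __name__ == "__main__":
    print(plan(state=0, depth=2))

In the paper this expectation is handled by a stochastic variant of Monte Carlo tree search rather than exhaustive enumeration, which is what lets the approach scale to settings such as chess with the opponent treated as part of the environment.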

Author Information

Sherjil Ozair (DeepMind)
Yazhe Li (DeepMind)
Ali Razavi (DeepMind)
Ioannis Antonoglou (DeepMind)
Aäron van den Oord (Google DeepMind)
Oriol Vinyals (Google DeepMind)

Oriol Vinyals is a Research Scientist at Google. He works in deep learning with the Google Brain team. Oriol holds a Ph.D. in EECS from the University of California, Berkeley, and a Master's degree from the University of California, San Diego. He is a recipient of the 2011 Microsoft Research PhD Fellowship. He was an early adopter of the new deep learning wave at Berkeley, and in his thesis he focused on non-convex optimization and recurrent neural networks. At Google Brain he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, language, and vision.
