Poster
Monte-Carlo Tree Search as Regularized Policy Optimization
Jean-Bastien Grill · Florent Altché · Yunhao Tang · Thomas Hubert · Michal Valko · Ioannis Antonoglou · Remi Munos

Thu Jul 16 12:00 PM -- 12:45 PM & Fri Jul 17 01:00 AM -- 01:45 AM (PDT)

The combination of Monte-Carlo tree search (MCTS) with deep reinforcement learning has led to groundbreaking results in artificial intelligence. However, AlphaZero, the current state-of-the-art MCTS algorithm, still relies on handcrafted heuristics that are only partially understood. In this paper, we show that AlphaZero's search heuristic, along with other common ones, can be interpreted as an approximation to the solution of a specific regularized policy optimization problem. With this insight, we propose a variant of AlphaZero which uses the exact solution to this policy optimization problem, and show experimentally that it reliably outperforms the original algorithm in multiple domains.
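To make the abstract's central object concrete, the following is a minimal sketch, not the authors' implementation, of solving a regularized policy optimization problem of the assumed form max_pi <q, pi> - lam * KL(prior || pi), where q is a vector of action values and prior is a policy prior (e.g. from a network). Under this assumption the maximizer takes the form pi(a) = lam * prior(a) / (alpha - q(a)) for a normalizing constant alpha, which can be found by bisection; the function name and tolerance are illustrative.

```python
import numpy as np

def regularized_policy(q, prior, lam, tol=1e-9):
    """Exactly solve max_pi <q, pi> - lam * KL(prior || pi) over the simplex.

    Assumed closed form of the maximizer: pi(a) = lam * prior(a) / (alpha - q(a)),
    where alpha is the unique normalizer making pi sum to 1. We find alpha by
    bisection on the monotone function alpha -> sum_a lam * prior(a) / (alpha - q(a)).
    """
    q = np.asarray(q, dtype=float)
    prior = np.asarray(prior, dtype=float)
    # Bracketing interval: at lo the normalization sum is >= 1, at hi it is <= 1.
    lo = np.max(q + lam * prior)
    hi = np.max(q) + lam
    while hi - lo > tol:
        alpha = 0.5 * (lo + hi)
        total = np.sum(lam * prior / (alpha - q))
        if total > 1.0:
            lo = alpha  # pi sums to more than 1: alpha too small
        else:
            hi = alpha
    return lam * prior / (0.5 * (lo + hi) - q)
```

As lam grows, the regularizer dominates and the solution approaches the prior; as lam shrinks, the solution concentrates on the highest-value action, matching the usual exploration/exploitation reading of the temperature-like multiplier.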

Author Information

Jean-Bastien Grill (DeepMind)
Florent Altché (DeepMind)
Yunhao Tang (Columbia University)
Thomas Hubert (DeepMind)
Michal Valko (DeepMind)
Ioannis Antonoglou (DeepMind)
Remi Munos (DeepMind)
