Muesli: Combining Improvements in Policy Optimization
Matteo Hessel · Ivo Danihelka · Fabio Viola · Arthur Guez · Simon Schmitt · Laurent Sifre · Theophane Weber · David Silver · Hado van Hasselt

Tue Jul 20 09:00 AM -- 11:00 AM (PDT) @ Virtual

We propose a novel policy update that combines regularized policy optimization with model learning as an auxiliary loss. The update (henceforth Muesli) matches MuZero's state-of-the-art performance on Atari. Notably, Muesli does so without using deep search: it acts directly with a policy network and has computation speed comparable to model-free baselines. The Atari results are complemented by extensive ablations, and by additional results on continuous control and 9x9 Go.
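The abstract describes a policy update that combines a regularized policy-optimization objective with model learning as an auxiliary loss. The general shape of such a combined objective can be sketched as follows; this is a minimal illustrative toy, not the paper's exact update, and all function names, the KL-regularization form, and the toy coefficients are assumptions:

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions (illustrative helper)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def combined_loss(log_prob, advantage, policy, prior, model_loss,
                  reg_weight=1.0, aux_weight=1.0):
    """Toy sketch: a policy-gradient term, a KL regularizer keeping the
    policy close to a prior policy, and a model-learning term added as an
    auxiliary loss. Weights and structure are hypothetical."""
    pg_loss = -log_prob * advantage                 # policy-gradient surrogate
    reg_loss = reg_weight * kl(prior, policy)       # regularized policy optimization
    return pg_loss + reg_loss + aux_weight * model_loss  # model loss as auxiliary term
```

The key design point the abstract highlights is that acting uses only the policy network: the learned model serves the auxiliary loss during training rather than deep search at decision time, which keeps computation comparable to model-free baselines.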

Author Information

Matteo Hessel (DeepMind)
Ivo Danihelka (DeepMind)
Fabio Viola (DeepMind)
Arthur Guez (Google DeepMind)
Simon Schmitt (DeepMind)
Laurent Sifre (DeepMind)
Theo Weber (DeepMind)
David Silver (Google DeepMind)
Hado van Hasselt (DeepMind)
