Oral
Trajectory-Based Off-Policy Deep Reinforcement Learning
Andreas Doerr · Michael Volpp · Marc Toussaint · Sebastian Trimpe · Christian Daniel

Wed Jun 12 02:30 PM -- 02:35 PM (PDT) @ Hall B

Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks. However, these methods are also data-inefficient, suffer from high-variance gradient estimates, and frequently get stuck in local optima. This work addresses these weaknesses by combining recent improvements in the reuse of off-policy data and exploration in parameter space with deterministic behavioral policies. The resulting objective is amenable to standard neural network optimization strategies, such as stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo. Incorporating previous rollouts via importance sampling greatly improves data efficiency, while stochastic optimization schemes facilitate the escape from local optima. We evaluate the proposed approach on a series of continuous control benchmark tasks. The results show that the proposed algorithm learns solutions reliably and with fewer system interactions than standard policy gradient methods.
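
To make the mechanism concrete, below is a minimal sketch (not the authors' implementation) of a self-normalized importance-sampling estimate of the expected return under parameter-space exploration with deterministic policies. It assumes a Gaussian search distribution over policy parameter vectors; under that assumption, the trajectory likelihood ratio reduces to a ratio of parameter sampling densities, so no action log-probabilities are needed. All names here (is_return_estimate, log_gauss, the toy setup) are hypothetical.

    import numpy as np

    def is_return_estimate(thetas, returns, mu, sigma, mu_old, sigma_old):
        """Self-normalized importance-sampling estimate of the expected return.

        thetas:  (N, D) policy parameter vectors sampled under the old
                 Gaussian search distribution N(mu_old, diag(sigma_old**2))
        returns: (N,) trajectory returns R(tau_i) observed for each theta_i
        mu, sigma:         current search distribution parameters
        mu_old, sigma_old: behavior search distribution parameters

        With deterministic policies and parameter-space exploration, the
        trajectory importance weight reduces to the ratio of the parameter
        sampling densities (the dynamics terms cancel).
        """
        def log_gauss(x, m, s):
            # Diagonal Gaussian log-density up to a constant (cancels in the ratio).
            return -0.5 * np.sum(((x - m) / s) ** 2 + 2 * np.log(s), axis=-1)

        log_w = log_gauss(thetas, mu, sigma) - log_gauss(thetas, mu_old, sigma_old)
        w = np.exp(log_w - log_w.max())          # stabilized importance weights
        return np.sum(w * returns) / np.sum(w)   # self-normalized estimate

    # Toy usage: 1-D parameter, behavior distribution N(0,1), candidate N(0.5,1).
    rng = np.random.default_rng(0)
    thetas = rng.normal(0.0, 1.0, size=(100, 1))
    returns = -np.squeeze(thetas - 1.0) ** 2     # toy return, maximized at theta = 1
    est = is_return_estimate(thetas, returns,
                             mu=np.array([0.5]), sigma=np.array([1.0]),
                             mu_old=np.array([0.0]), sigma_old=np.array([1.0]))

An estimate of this form is differentiable with respect to the search distribution parameters (mu, sigma), which is what makes it amenable to stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo as described in the abstract; the self-normalization keeps the weights bounded when old and new distributions diverge.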

Author Information

Andreas Doerr (Bosch Center for Artificial Intelligence, Max Planck Institute for Intelligent Systems)

https://is.tuebingen.mpg.de/person/adoerr https://www.linkedin.com/in/andreasdoerr

Michael Volpp (Bosch Center for Artificial Intelligence)
Marc Toussaint (University of Stuttgart)
Sebastian Trimpe (Max Planck Institute for Intelligent Systems)
Christian Daniel (Bosch Center for Artificial Intelligence)

More from the Same Authors

  • 2019: Poster Session 1 (all papers)
    Matilde Gargiani · Yochai Zur · Chaim Baskin · Evgenii Zheltonozhskii · Liam Li · Ameet Talwalkar · Xuedong Shang · Harkirat Singh Behl · Atilim Gunes Baydin · Ivo Couckuyt · Tom Dhaene · Chieh Lin · Wei Wei · Min Sun · Orchid Majumder · Michele Donini · Yoshihiko Ozaki · Ryan P. Adams · Christian Geißler · Ping Luo · zhanglin peng · Ruimao Zhang · John Langford · Rich Caruana · Debadeepta Dey · Charles Weill · Xavi Gonzalvo · Scott Yang · Scott Yak · Eugen Hotaj · Vladimir Macko · Mehryar Mohri · Corinna Cortes · Stefan Webb · Jonathan Chen · Martin Jankowiak · Noah Goodman · Aaron Klein · Frank Hutter · Mojan Javaheripi · Mohammad Samragh · Sungbin Lim · Taesup Kim · SUNGWOONG KIM · Michael Volpp · Iddo Drori · Yamuna Krishnamurthy · Kyunghyun Cho · Stanislaw Jastrzebski · Quentin de Laroussilhe · Mingxing Tan · Xiao Ma · Neil Houlsby · Andrea Gesmundo · Zalán Borsos · Krzysztof Maziarz · Felipe Petroski Such · Joel Lehman · Kenneth Stanley · Jeff Clune · Pieter Gijsbers · Joaquin Vanschoren · Felix Mohr · Eyke Hüllermeier · Zheng Xiong · Wenpeng Zhang · wenwu zhu · Weijia Shao · Aleksandra Faust · Michal Valko · Michael Y Li · Hugo Jair Escalante · Marcel Wever · Andrey Khorlin · Tara Javidi · Anthony Francis · Saurajit Mukherjee · Jungtaek Kim · Michael McCourt · Saehoon Kim · Tackgeun You · Seungjin Choi · Nicolas Knudde · Alexander Tornede · Ghassen Jerfel
  • 2018 Poster: Probabilistic Recurrent State-Space Models
    Andreas Doerr · Christian Daniel · Martin Schiegg · Duy Nguyen-Tuong · Stefan Schaal · Marc Toussaint · Sebastian Trimpe
  • 2018 Oral: Probabilistic Recurrent State-Space Models
    Andreas Doerr · Christian Daniel · Martin Schiegg · Duy Nguyen-Tuong · Stefan Schaal · Marc Toussaint · Sebastian Trimpe