Trajectory-Based Off-Policy Deep Reinforcement Learning
Andreas Doerr · Michael Volpp · Marc Toussaint · Sebastian Trimpe · Christian Daniel

Wed Jun 12th 06:30 -- 09:00 PM @ Pacific Ballroom #44

Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks. However, these methods are also data-inefficient, afflicted with high-variance gradient estimates, and frequently get stuck in local optima. This work addresses these weaknesses by combining recent improvements in the reuse of off-policy data and exploration in parameter space with deterministic behavioral policies. The resulting objective is amenable to standard neural network optimization strategies such as stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo. Incorporating previous rollouts via importance sampling greatly improves data efficiency, while stochastic optimization schemes facilitate escape from local optima. We evaluate the proposed approach on a series of continuous control benchmark tasks. The results show that the proposed algorithm reliably learns solutions using fewer system interactions than standard policy gradient methods.
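The abstract only names the ingredients of the method, so the following is a minimal, illustrative sketch (not the authors' implementation) of how the general recipe can fit together: Gaussian exploration over the parameters of a deterministic policy, and self-normalized importance sampling over stored whole-trajectory returns so that old rollouts can be reused off-policy, optimized by plain stochastic gradient ascent. All names here (log_gaussian, is_step, the toy quadratic "return") are hypothetical.

    import numpy as np

    def log_gaussian(theta, mu, log_std):
        # Log-density of a diagonal Gaussian over policy parameters theta.
        std = np.exp(log_std)
        return -0.5 * np.sum(((theta - mu) / std) ** 2 + 2.0 * log_std + np.log(2.0 * np.pi))

    def is_step(mu, log_std, thetas, returns, behavior_logps, lr=0.05):
        # Self-normalized importance weights: current density / behavior density.
        logw = np.array([log_gaussian(t, mu, log_std) for t in thetas]) - behavior_logps
        w = np.exp(logw - logw.max())
        w /= w.sum()
        J = w @ returns  # off-policy estimate of the expected return
        # Likelihood-ratio gradient of the self-normalized estimate w.r.t. mu:
        # grad J ~= sum_i w_i (R_i - J) * grad_mu log N(theta_i; mu, sigma^2 I).
        grad_mu = np.sum([wi * (Ri - J) * (t - mu) / np.exp(2.0 * log_std)
                          for wi, Ri, t in zip(w, returns, thetas)], axis=0)
        return mu + lr * grad_mu, J

    # Toy demo: the episodic "return" is a quadratic bowl around theta_star.
    rng = np.random.default_rng(0)
    theta_star = np.array([1.0, -2.0])
    mu, log_std = np.zeros(2), np.full(2, np.log(0.5))

    # One batch of rollouts: sample deterministic policies in parameter space.
    thetas = [mu + np.exp(log_std) * rng.standard_normal(2) for _ in range(64)]
    returns = np.array([-np.sum((t - theta_star) ** 2) for t in thetas])
    behavior_logps = np.array([log_gaussian(t, mu, log_std) for t in thetas])

    # Reuse the same stored trajectories across many gradient steps.
    for _ in range(50):
        mu, J = is_step(mu, log_std, thetas, returns, behavior_logps)
    print("mu:", mu, "estimated return:", J)

Note the inherent caveat this sketch exposes: as mu drifts away from the behavior distribution, the importance weights degenerate, which is why fresh rollouts are mixed in over time and why the paper's full objective is more involved than this toy.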

Author Information

Andreas Doerr (Bosch Center for Artificial Intelligence, Max Planck Institute for Intelligent Systems)

Michael Volpp (Bosch Center for Artificial Intelligence)
Marc Toussaint (University of Stuttgart)
Sebastian Trimpe (Max Planck Institute for Intelligent Systems)
Christian Daniel (Bosch Center for Artificial Intelligence)
