
Trajectory-Based Off-Policy Deep Reinforcement Learning

Andreas Doerr · Michael Volpp · Marc Toussaint · Sebastian Trimpe · Christian Daniel

Pacific Ballroom #44

Keywords: [ Online Learning ] [ Deep Reinforcement Learning ] [ Algorithms ]


Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks. However, these methods are also data-inefficient, afflicted with high-variance gradient estimates, and frequently get stuck in local optima. This work addresses these weaknesses by combining recent improvements in the reuse of off-policy data and exploration in parameter space with deterministic behavioral policies. The resulting objective is amenable to standard neural network optimization strategies such as stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo. Incorporating previous rollouts via importance sampling greatly improves data efficiency, while stochastic optimization schemes facilitate the escape from local optima. We evaluate the proposed approach on a series of continuous control benchmark tasks. The results show that the proposed algorithm successfully and reliably learns solutions using fewer system interactions than standard policy gradient methods.
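
To make the combination of ingredients concrete, below is a minimal sketch of the style of update the abstract describes: a deterministic policy explored via a Gaussian search distribution over its parameters, with past rollouts reused through self-normalized importance sampling. This is an illustrative PGPE-style sketch under stated assumptions, not the paper's algorithm; the quadratic `rollout_return` surrogate, the buffer size, and the learning rate are hypothetical placeholders.

```python
# Parameter-space exploration with off-policy reuse of past rollouts
# via self-normalized importance sampling (illustrative sketch only).
import torch

torch.manual_seed(0)
dim = 4  # number of deterministic-policy parameters (assumed)

def rollout_return(theta):
    """Hypothetical stand-in for one rollout of the deterministic
    policy with parameters `theta`; a toy quadratic with optimum at 1."""
    return float(-((theta - 1.0) ** 2).sum())

# Gaussian search distribution q(theta | mu, sigma) over policy parameters.
mu = torch.zeros(dim, requires_grad=True)
log_std = torch.zeros(dim, requires_grad=True)
opt = torch.optim.Adam([mu, log_std], lr=0.05)

buffer = []  # past rollouts: (theta, return, behavior log-density)

for step in range(200):
    dist = torch.distributions.Normal(mu, log_std.exp())

    # Exploration in parameter space: sample parameters, run one rollout
    # with the resulting deterministic policy, and store the outcome.
    theta = dist.sample()
    buffer.append((theta, rollout_return(theta),
                   dist.log_prob(theta).sum().detach()))
    buffer = buffer[-50:]  # keep only recent rollouts

    # Off-policy objective: importance-weight all stored returns under
    # the current search distribution (self-normalized weights).
    thetas = torch.stack([b[0] for b in buffer])
    returns = torch.tensor([b[1] for b in buffer])
    logq_old = torch.stack([b[2] for b in buffer])
    logq_new = dist.log_prob(thetas).sum(dim=1)
    weights = torch.softmax(logq_new - logq_old, dim=0)
    loss = -(weights * returns).sum()

    opt.zero_grad()
    loss.backward()
    opt.step()

print(mu)  # drifts toward the optimum of the surrogate return
```

In this sketch, Adam plays the role of the generic stochastic optimizer; following the abstract, it could in principle be swapped for a stochastic gradient Hamiltonian Monte Carlo sampler over the search-distribution parameters to better escape local optima.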
