On learning history-based policies for controlling Markov decision processes
Gandharv Patil · Aditya Mahajan · Doina Precup
Event URL: https://openreview.net/forum?id=ucojMYU1TH

Reinforcement learning (RL) folklore suggests that history-based function approximation methods, such as recurrent neural networks or history-based state abstractions, perform better than their memory-less counterparts, because function approximation in a Markov decision process (MDP) can be viewed as inducing a partially observable MDP (POMDP). However, there has been little formal analysis of such history-based algorithms, as most existing frameworks focus exclusively on memory-less features. In this paper, we introduce a theoretical framework for studying the behaviour of RL algorithms that learn to control an MDP using history-based feature abstraction mappings. We then use this framework to design a practical RL algorithm, and we numerically evaluate its effectiveness on a set of continuous control tasks.
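To illustrate the distinction the abstract draws between memory-less and history-based feature mappings (this is an illustrative sketch, not the algorithm proposed in the paper), the snippet below contrasts a feature map that sees only the current observation with a simple history-based abstraction that stacks the last k observations; the class names `MemorylessFeatures` and `HistoryStack` are hypothetical and chosen only for this example.

```python
import numpy as np
from collections import deque


class MemorylessFeatures:
    """Feature map that depends only on the current observation."""

    def __call__(self, obs: np.ndarray) -> np.ndarray:
        return obs


class HistoryStack:
    """Toy history-based abstraction: concatenate the last k observations.

    A recurrent network would compress the history into a hidden state
    instead of stacking it, but frame stacking is the simplest example
    of a feature mapping that depends on the history rather than only
    on the current observation.
    """

    def __init__(self, obs_dim: int, k: int = 4):
        self.obs_dim = obs_dim
        self.k = k
        self.buffer = deque([np.zeros(obs_dim)] * k, maxlen=k)

    def reset(self) -> None:
        # Clear the history at the start of an episode.
        self.buffer = deque([np.zeros(self.obs_dim)] * self.k, maxlen=self.k)

    def __call__(self, obs: np.ndarray) -> np.ndarray:
        self.buffer.append(obs)
        return np.concatenate(list(self.buffer))


# Usage: a policy acting on history features receives a richer input than
# a memory-less policy, which only ever sees the current observation.
phi = HistoryStack(obs_dim=3, k=4)
obs = np.array([0.1, -0.2, 0.05])
features = phi(obs)  # shape (12,): the last 4 observations concatenated
```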

Author Information

Gandharv Patil (McGill University)

PhD student at McGill University working on Reinforcement Learning and Stochastic Optimisation.

Aditya Mahajan (McGill University)
Doina Precup (McGill University / DeepMind)
