

Poster

Transfer of Samples in Policy Search via Multiple Importance Sampling

Andrea Tirinzoni · Mattia Salvini · Marcello Restelli

Pacific Ballroom #118

Keywords: [ Transfer and Multitask Learning ] [ Theory and Algorithms ]


Abstract:

We consider the transfer of experience samples in reinforcement learning. Most previous work in this context has focused on value-based settings, where transferring instances conveniently reduces to the transfer of (s,a,s',r) tuples. In this paper, we consider the more complex case of reusing samples in policy search methods, in which the agent is required to transfer entire trajectories between environments with different transition models. By leveraging ideas from multiple importance sampling, we propose robust gradient estimators that effectively achieve this goal, along with several techniques to reduce their variance. In the case where the transition models are known, we theoretically establish the robustness of our estimators to negative transfer. In the case of unknown models, we propose a method to efficiently estimate them, both when the target task belongs to a finite set of possible tasks and when it belongs to a reproducing kernel Hilbert space. We provide empirical results showing the effectiveness of our estimators.
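The abstract does not spell out the estimator, but the core idea can be illustrated with the standard balance heuristic from multiple importance sampling applied to a REINFORCE-style policy gradient: trajectories collected from several source behaviors are reweighted by the ratio between the target trajectory density and the mixture of the densities they were drawn from. The sketch below is a minimal illustration under that assumption; all function names, array shapes, and the plain (non-variance-reduced) estimator form are hypothetical and not the paper's actual robust estimators.

```python
import numpy as np

def balance_heuristic_weights(logp_target, logp_sources, n_per_source):
    """Balance-heuristic MIS weights w_i = p(tau_i) / sum_j alpha_j q_j(tau_i).

    logp_target : (N,)   log-density of each trajectory under the target task.
    logp_sources: (K, N) log-density of each trajectory under each of the
                  K source (behavioral) distributions it could have come from.
    n_per_source: (K,)   number of trajectories drawn from each source.
    """
    alphas = np.asarray(n_per_source) / np.sum(n_per_source)  # mixture proportions
    # Log of the mixture density, computed stably via log-sum-exp.
    log_mix = np.logaddexp.reduce(np.log(alphas)[:, None] + logp_sources, axis=0)
    return np.exp(logp_target - log_mix)

def mis_gradient_estimate(weights, scores, returns):
    """REINFORCE-style gradient estimate reweighted by MIS weights.

    scores : (N, d) per-trajectory score, sum_t grad log pi_theta(a_t | s_t).
    returns: (N,)   per-trajectory (discounted) return.
    """
    return np.mean(weights[:, None] * scores * returns[:, None], axis=0)
```

A useful property of the balance heuristic, which the abstract's robustness claim builds on, is that the mixture density in the denominator keeps each weight bounded by 1/alpha_j for trajectories drawn from source j, so no single mismatched source task can blow up the variance of the estimate.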
