Transfer of Samples in Policy Search via Multiple Importance Sampling
Andrea Tirinzoni · Mattia Salvini · Marcello Restelli

Wed Jun 12th 05:05 -- 05:10 PM @ Room 104

We consider the transfer of experience samples in reinforcement learning. Most previous work in this context has focused on value-based settings, where transferring instances conveniently reduces to the transfer of (s,a,s',r) tuples. In this paper, we consider the more complex case of reusing samples in policy search methods, in which the agent is required to transfer entire trajectories between environments with different transition models. By leveraging ideas from multiple importance sampling, we propose robust gradient estimators that effectively achieve this goal, along with several techniques to reduce their variance. In the case where the transition models are known, we theoretically establish the robustness of our estimators to negative transfer. In the case of unknown models, we propose a method to efficiently estimate them, both when the target task belongs to a finite set of possible tasks and when it belongs to some reproducing kernel Hilbert space. We provide empirical results showing the effectiveness of our estimators.
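To give a feel for the core idea, the following is a minimal sketch of multiple importance sampling with the standard balance heuristic, which reweights samples drawn from several source distributions so they can be reused to estimate an expectation under a target distribution. This is only an illustrative toy (estimating a second moment under a target Gaussian using samples from two hypothetical "source" Gaussians), not the paper's gradient estimator; all distributions and sample sizes here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    """Density of a univariate Gaussian N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Target distribution: N(0, 1). We want E_p[x^2] = 1.
# Two hypothetical source distributions (stand-ins for source tasks):
mus, sigmas, ns = [-1.0, 2.0], [1.0, 1.5], [2000, 2000]

# Draw samples from each source distribution.
samples = [rng.normal(m, s, n) for m, s, n in zip(mus, sigmas, ns)]

# Balance heuristic: weight each sample by
#   w(x) = p(x) / sum_j (n_j / N) q_j(x),
# i.e., treat all sources jointly as one mixture proposal. This keeps the
# weights bounded whenever at least one source covers the target well.
N = sum(ns)
est = 0.0
for x in samples:
    mixture = sum(n / N * normal_pdf(x, m, s)
                  for m, s, n in zip(mus, sigmas, ns))
    w = normal_pdf(x, 0.0, 1.0) / mixture
    est += np.sum(w * x ** 2) / N

print(f"MIS estimate of E_p[x^2]: {est:.3f}")  # should be close to 1
```

In the policy-search setting of the paper, the densities above are replaced by trajectory densities under different transition models and policies, but the reweighting principle is the same: samples from many sources are combined through a single mixture denominator, which is what keeps the estimator robust when some sources are poorly matched to the target.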

Author Information

Andrea Tirinzoni (Politecnico di Milano)
Mattia Salvini (Politecnico di Milano)
Marcello Restelli (Politecnico di Milano)
