

Poster

Online Prototype Alignment for Few-shot Policy Transfer

Qi Yi · Rui Zhang · Shaohui Peng · Jiaming Guo · Yunkai Gao · Kaizhao Yuan · Ruizhi Chen · Siming Lan · Xing Hu · Zidong Du · Xishan Zhang · Qi Guo · Yunji Chen

Exhibit Hall 1 #526

Abstract:

Domain adaptation in RL mainly deals with changes in observations when transferring a policy to a new environment. Many traditional approaches to domain adaptation in RL attempt to learn a mapping function between the source and target domains, either explicitly or implicitly. However, they typically require access to abundant data from the target domain. Moreover, they often rely on visual cues to learn the mapping function and may fail when the source domain looks quite different from the target domain. To address these problems, in this paper we propose Online Prototype Alignment (OPA), a novel framework that learns the mapping function from the functional similarity of elements and achieves few-shot policy transfer within only a few episodes. The key insight of OPA is to introduce an exploration mechanism that interacts with the unseen elements of the target domain in an efficient and purposeful manner, and then connects them with the seen elements of the source domain according to their functionalities (instead of visual cues). Experimental results show that when the target domain looks visually different from the source domain, OPA achieves better transfer performance even with far fewer samples from the target domain, outperforming prior methods.
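The core idea of aligning elements by functionality rather than appearance can be illustrated with a minimal sketch. This is not the authors' implementation: the signature format (averaged interaction-effect vectors), the brute-force one-to-one matching, and all names below are illustrative assumptions.

```python
# Hypothetical sketch: match visually novel target-domain elements to known
# source-domain prototypes by *functional* similarity -- statistics of what
# happens when the agent interacts with them -- rather than visual cues.
from itertools import permutations


def functional_signature(interactions):
    """Average the observed effect vectors from interacting with one element."""
    n = len(interactions)
    dim = len(interactions[0])
    return tuple(sum(v[i] for v in interactions) / n for i in range(dim))


def align(source_prototypes, target_signatures):
    """One-to-one assignment of target elements to source prototypes that
    minimizes total squared distance between functional signatures
    (brute force; fine for the handful of element types in a gridworld)."""
    names = list(source_prototypes)
    targets = list(target_signatures)

    def cost(perm):
        return sum(
            sum((a - b) ** 2
                for a, b in zip(source_prototypes[n], target_signatures[t]))
            for n, t in zip(names, perm))

    best = min(permutations(targets), key=cost)
    return {t: n for n, t in zip(names, best)}


# Toy example: a "wall" blocks movement, a "goal" gives reward; the target
# domain's visually novel elements X and Y behave like them functionally.
source = {"wall": (0.0, 0.0), "goal": (0.0, 1.0)}  # (movement, reward) effects
target = {"X": functional_signature([(0.0, 1.0), (0.0, 1.0)]),
          "Y": functional_signature([(0.0, 0.0)])}
print(align(source, target))  # maps X to "goal" and Y to "wall"
```

Because the matching depends only on interaction outcomes, it is unaffected by how different the two domains look, which is the property the abstract emphasizes.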
