

Oral

Robust Asymmetric Learning in POMDPs

Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood

[ Livestream: Visit Reinforcement Learning 4 ] [ Paper ]

Abstract:

Policies for partially observed Markov decision processes can be efficiently learned by imitating expert policies generated using asymmetric information. Unfortunately, existing approaches for this kind of imitation learning have a serious flaw: the expert does not know what the trainee cannot see, and as a result may encourage actions that are sub-optimal or unsafe under partial information. To address this issue, we derive an update which, when applied iteratively to an expert, maximizes the expected reward of the trainee's policy. Using this update, we construct a computationally efficient algorithm, adaptive asymmetric DAgger (A2D), that jointly trains the expert and trainee policies. We then show that A2D allows the trainee to safely imitate the modified expert, and outperforms policies learned either by imitating a fixed expert or through direct reinforcement learning.
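To make the high-level procedure in the abstract concrete, below is a minimal sketch of an asymmetric DAgger-style training loop in Python. It is an illustration under assumptions, not the paper's algorithm: the names `env`, `expert_policy`, `trainee_policy`, `update_expert`, `fit_trainee`, and the mixing schedule `beta` are all placeholders, and the specific expert update that A2D derives is left as an abstract callback.

```python
import numpy as np

def asymmetric_dagger_sketch(env, expert_policy, trainee_policy,
                             update_expert, fit_trainee,
                             n_iterations=100, mixing_decay=0.99):
    """Hedged sketch of a DAgger-style loop with asymmetric information.

    Assumptions (not from the paper): `env.reset()` returns a (state,
    observation) pair and `env.step(action)` returns (state, observation,
    reward, done); the expert sees the full state while the trainee sees
    only the partial observation; `fit_trainee` and `update_expert` are
    user-supplied learning steps.
    """
    beta = 1.0       # probability of executing the expert's action
    dataset = []     # aggregated (observation, expert_action) pairs

    for _ in range(n_iterations):
        state, observation = env.reset()
        done = False
        while not done:
            # Expert acts on the full state; trainee acts on the observation.
            expert_action = expert_policy(state)
            trainee_action = trainee_policy(observation)

            # DAgger-style mixing: follow the expert early, the trainee later.
            action = expert_action if np.random.rand() < beta else trainee_action

            # Aggregate expert labels on states visited by the mixed policy.
            dataset.append((observation, expert_action))

            state, observation, reward, done = env.step(action)

        # Supervised imitation: fit the trainee to the aggregated labels.
        fit_trainee(trainee_policy, dataset)

        # The distinguishing idea per the abstract: also update the expert so
        # that imitating it maximizes the trainee's expected reward. The
        # concrete update is derived in the paper; here it is a callback.
        update_expert(expert_policy, trainee_policy, env)

        beta *= mixing_decay

    return trainee_policy
```

In a fixed-expert imitation scheme the `update_expert` step would be absent; including it is what lets the expert stop recommending actions that are sub-optimal or unsafe under the trainee's partial information.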
