
Robust Asymmetric Learning in POMDPs
Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood

Tue Jul 20 05:00 PM -- 05:20 PM (PDT)

Policies for partially observed Markov decision processes can be efficiently learned by imitating expert policies generated using asymmetric information. Unfortunately, existing approaches for this kind of imitation learning have a serious flaw: the expert does not know what the trainee cannot see, and as a result may encourage actions that are sub-optimal or unsafe under partial information. To address this issue, we derive an update which, when applied iteratively to an expert, maximizes the expected reward of the trainee's policy. Using this update, we construct a computationally efficient algorithm, adaptive asymmetric DAgger (A2D), that jointly trains the expert and trainee policies. We then show that A2D allows the trainee to safely imitate the modified expert, and outperforms policies learned either by imitating a fixed expert or through direct reinforcement learning.
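The core problem the abstract describes, that an expert acting on privileged information can teach a partially observed trainee actions that are sub-optimal once the hidden information is marginalised out, can be illustrated with a deliberately tiny toy. The sketch below is purely illustrative and is not the paper's algorithm or benchmarks: it uses a hypothetical one-step problem with a hidden bit the trainee cannot observe, models DAgger-style imitation as marginalising the expert over the hidden state, and stands in for the A2D expert update with a simple finite-difference ascent step on the trainee's expected reward. All names and the environment are invented for this example.

```python
# Hypothetical toy, for illustration only (not the paper's setup):
# hidden bit h ~ Bernoulli(0.8); the trainee's observation carries no
# information about h. Reward is 1 if the action matches h. An omniscient
# expert always matches h, but a trainee imitating that expert ends up with
# a mixed policy; the reward-optimal partially observed policy always plays 1.

P_H1 = 0.8  # probability that the hidden bit is 1

def trainee_from_expert(expert):
    """DAgger-style imitation here reduces to marginalising the expert over
    the hidden state: trainee P(a=1) = E_h[ expert P(a=1 | h) ]."""
    return (1 - P_H1) * expert[0] + P_H1 * expert[1]

def trainee_reward(p1):
    """Expected reward of a trainee that plays action 1 with probability p1."""
    return P_H1 * p1 + (1 - P_H1) * (1 - p1)

def a2d_sketch(steps=200, lr=0.5):
    # expert[h] = P(a=1 | hidden=h); start from the omniscient expert [0, 1]
    expert = [0.0, 1.0]
    for _ in range(steps):
        # Crude stand-in for the A2D expert update: nudge each expert
        # conditional in the direction that raises the *trainee's* expected
        # reward after imitation (finite-difference gradient estimate).
        for h in range(2):
            eps = 1e-3
            up = expert.copy(); up[h] = min(1.0, up[h] + eps)
            dn = expert.copy(); dn[h] = max(0.0, dn[h] - eps)
            g = (trainee_reward(trainee_from_expert(up)) -
                 trainee_reward(trainee_from_expert(dn))) / (2 * eps)
            expert[h] = min(1.0, max(0.0, expert[h] + lr * g))
    return expert, trainee_from_expert(expert)

expert, trainee_p1 = a2d_sketch()
```

Imitating the fixed omniscient expert gives the trainee P(a=1) = 0.8 and expected reward 0.68, whereas after the expert is iteratively adapted as above, the trainee converges to always playing 1 and achieves the partially observed optimum of 0.8, mirroring the abstract's claim that adapting the expert beats imitating a fixed one.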

Author Information

Andrew Warrington (University of Oxford)
Jonathan Lavington (University of British Columbia)
Adam Scibior (University of British Columbia)
Mark Schmidt (University of British Columbia)
Frank Wood (University of British Columbia)
