Options as Responses: Grounding behavioural hierarchies in multi-agent reinforcement learning
Alexander Vezhnevets · Yuhuai Wu · Maria Eckstein · Rémi Leblond · Joel Z Leibo

Tue Jul 14 02:00 PM -- 02:45 PM & Wed Jul 15 01:00 AM -- 01:45 AM (PDT)

This paper investigates generalisation in multi-agent games, where the generality of an agent can be evaluated by playing it against opponents it hasn't seen during training. We propose two new games with concealed information and a complex, non-transitive reward structure (think rock-paper-scissors). We find that most current deep reinforcement learning methods fail to efficiently explore the strategy space, and thus learn policies that generalise poorly to unseen opponents. We then propose a novel hierarchical agent architecture in which the hierarchy is grounded in the game-theoretic structure of the game -- the top level chooses strategic responses to opponents, while the low level implements them as policies over primitive actions. This grounding facilitates credit assignment across the levels of the hierarchy. Our experiments show that the proposed hierarchical agent is capable of generalising to unseen opponents, while conventional baselines fail to generalise at all.
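As a loose illustration of the two-level split the abstract describes -- a top level that selects a strategic response to the opponent, and a low level that turns it into primitive actions -- here is a minimal toy sketch for rock-paper-scissors. This is an assumption-laden simplification for intuition only, not the paper's actual learned agent: the class and method names are invented, the opponent model is a simple empirical frequency count, and the "low-level policy" is trivial in this game.

```python
# Toy two-level agent for rock-paper-scissors (illustration only,
# NOT the architecture from the paper).
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """Row player's payoff: +1 win, 0 draw, -1 loss."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

class HierarchicalAgent:
    """Top level: estimate the opponent's empirical move distribution
    and pick a strategic response (which move to favour).
    Low level: turn that response into a primitive action."""

    def __init__(self):
        self.opponent_counts = {m: 0 for m in MOVES}

    def top_level_response(self):
        # Best-respond to the opponent's most frequent observed move.
        predicted = max(MOVES, key=lambda m: self.opponent_counts[m])
        return next(m for m in MOVES if BEATS[m] == predicted)

    def low_level_act(self, response):
        # In this toy game the policy over primitive actions is
        # trivial: just play the chosen response directly.
        return response

    def act(self):
        return self.low_level_act(self.top_level_response())

    def observe(self, opponent_move):
        self.opponent_counts[opponent_move] += 1
```

For example, after observing an opponent who keeps playing rock, the top level predicts rock and responds with paper, which the low level then executes. The point of the grounding in the paper is that credit for a win or loss can be assigned to the top level's choice of response rather than only to individual primitive actions.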

Author Information

Alexander Vezhnevets (DeepMind)
Yuhuai Wu (University of Toronto)
Maria Eckstein (UC Berkeley)
Rémi Leblond (DeepMind)
Joel Z Leibo (DeepMind)
