

Poster

“Other-Play” for Zero-Shot Coordination

Hengyuan Hu · Alexander Peysakhovich · Adam Lerer · Jakob Foerster

Keywords: [ Planning, Control, and Multiagent Learning ] [ Reinforcement Learning ] [ Multiagent Learning ]


Abstract:

We consider the problem of zero-shot coordination: constructing AI agents that can coordinate with novel partners they have not seen before (e.g., humans). Standard Multi-Agent Reinforcement Learning (MARL) methods typically focus on the self-play (SP) setting, where agents construct strategies by playing the game with themselves repeatedly. Unfortunately, applying SP naively to the zero-shot coordination problem can produce agents that establish highly specialized conventions that do not carry over to partners they have not trained with. We introduce a novel learning algorithm called other-play (OP) that enhances self-play by searching for more robust strategies. We characterize OP both theoretically and experimentally. We study the cooperative card game Hanabi and show that OP agents achieve higher scores than SP agents when paired with independently trained agents as well as with human players.
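To make the SP-versus-OP distinction concrete, below is a minimal sketch in Python of the core idea: instead of evaluating a strategy against an identical copy of itself, other-play evaluates it against copies pushed through random symmetries of the game, so strategies that rely on arbitrary convention-breaking lose value. The toy matching game, its payoffs, and the function names here are illustrative assumptions, not the authors' code or the paper's actual experiments.

    import random

    # Toy coordination game: both players pick one of N levers and score
    # only if they pick the same one. Nine levers pay 1.0 but are mutually
    # interchangeable (the game is symmetric under relabeling them); one
    # lever pays 0.9 but is unique, so its identity survives relabeling.
    PAYOFF = [1.0] * 9 + [0.9]
    SYMMETRIC = list(range(9))   # indices a symmetry may permute
    UNIQUE = 9                   # the odd-one-out lever

    def self_play_value(action):
        # Self-play: the partner is an identical copy, so both players
        # pick the same lever and always match.
        return PAYOFF[action]

    def other_play_value(action, n_samples=10_000):
        # Other-play: the partner plays the same strategy pushed through
        # a random symmetry of the game, i.e. a random relabeling of the
        # interchangeable levers. We estimate the expectation by sampling.
        total = 0.0
        for _ in range(n_samples):
            perm = SYMMETRIC[:]
            random.shuffle(perm)
            mapping = dict(zip(SYMMETRIC, perm))
            mapping[UNIQUE] = UNIQUE  # the unique lever maps to itself
            partner_action = mapping[action]
            total += PAYOFF[action] if partner_action == action else 0.0
        return total / n_samples

    for a in (0, UNIQUE):
        print(f"lever {a}: SP value {self_play_value(a):.2f}, "
              f"OP value {other_play_value(a):.3f}")

Under self-play, any of the nine 1.0 levers looks optimal, but two independently trained SP agents may settle on different ones and fail to coordinate. Under the other-play objective, a symmetric lever matches the partner only by chance (roughly 1/9 of the time, giving an expected value near 0.11), while the unique lever always matches and scores 0.9, so OP selects the strategy that remains meaningful to a novel partner.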
