Modeling Strong and Human-Like Gameplay with KL-Regularized Search
Athul Paul Jacob · David Wu · Gabriele Farina · Adam Lerer · Hengyuan Hu · Anton Bakhtin · Jacob Andreas · Noam Brown

We consider the task of building strong but human-like policies in multi-agent decision-making problems, given examples of human behavior. Imitation learning is effective at predicting human actions but may not match the strength of expert humans, while self-play learning and search techniques (e.g. AlphaZero) lead to strong performance but may produce policies that are difficult for humans to understand and coordinate with. We show in chess and Go that regularizing search based on the KL divergence from an imitation-learned policy results in higher human prediction accuracy and stronger performance than imitation learning alone. We then introduce a novel regret minimization algorithm that is regularized based on the KL divergence from an imitation-learned policy, and show that using this algorithm for search in no-press Diplomacy yields a policy that matches the human prediction accuracy of imitation learning while being substantially stronger.
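The central object in the abstract, a policy regularized toward an imitation-learned "anchor," has a simple closed form in the one-shot case: the maximizer of ⟨π, Q⟩ − λ·KL(π‖τ) is π(a) ∝ τ(a)·exp(Q(a)/λ). The sketch below illustrates that idea only and is not the paper's algorithm; the function names, the full-feedback assumption, and the fixed λ are all assumptions made here for illustration.

```python
import numpy as np

def kl_regularized_policy(avg_q, anchor, lam):
    # Closed-form maximizer of  <pi, avg_q> - lam * KL(pi || anchor):
    #   pi(a) proportional to anchor(a) * exp(avg_q(a) / lam).
    # Assumes anchor has strictly positive entries.
    logits = np.log(anchor) + avg_q / lam
    logits -= logits.max()          # stabilize the exponentials
    pi = np.exp(logits)
    return pi / pi.sum()

def kl_regularized_hedge(value_fn, anchor, lam, iters=1000):
    # Hedge-style loop (illustrative, full-feedback): play the
    # KL-regularized policy, observe a value for every action,
    # accumulate, and re-solve against the running average.
    n = len(anchor)
    q_sum = np.zeros(n)
    for t in range(1, iters + 1):
        pi = kl_regularized_policy(q_sum / t, anchor, lam)
        q_sum += value_fn(pi)       # per-action values this iteration
    return kl_regularized_policy(q_sum / iters, anchor, lam)

# Hypothetical usage: matching pennies against a fixed opponent
# who plays the first action 70% of the time.
opponent = np.array([0.7, 0.3])
payoff = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs
pi = kl_regularized_hedge(lambda p: payoff @ opponent,
                          anchor=np.array([0.5, 0.5]), lam=0.1)
```

With large λ the iterate stays near the imitation policy; with small λ it approaches a best response to the average values, which mirrors the strength-versus-predictability trade-off the abstract describes.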

Author Information

Athul Paul Jacob (MIT)
David Wu (Meta AI Research)
Gabriele Farina (Carnegie Mellon University)

I am currently a first-year Ph.D. student in the Computer Science Department at Carnegie Mellon University, where I am fortunate to be advised by Tuomas Sandholm. I am part of the Electronic Marketplaces Lab. I mostly work on kidney exchange and algorithmic game theory.

Adam Lerer (Facebook AI Research)
Hengyuan Hu (Facebook AI Research)
Anton Bakhtin (Facebook AI Research)
Jacob Andreas (MIT)
Noam Brown (Facebook AI Research)
