Modeling Strong and Human-Like Gameplay with KL-Regularized Search
Athul Paul Jacob · David Wu · Gabriele Farina · Adam Lerer · Hengyuan Hu · Anton Bakhtin · Jacob Andreas · Noam Brown

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #816

We consider the task of accurately modeling strong human policies in multi-agent decision-making problems, given examples of human behavior. Imitation learning is effective at predicting human actions but may not match the strength of expert humans (e.g., by sometimes committing blunders), while self-play learning and search techniques such as AlphaZero lead to strong performance but may produce policies that differ markedly from human behavior. In chess and Go, we show that regularized search algorithms that penalize KL divergence from an imitation-learned policy yield higher prediction accuracy of strong humans and better performance than imitation learning alone. We then introduce a novel regret minimization algorithm that is regularized based on the KL divergence from an imitation-learned policy, and show that using this algorithm for search in no-press Diplomacy yields a policy that matches the human prediction accuracy of imitation learning while being substantially stronger.
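The core idea of penalizing KL divergence from an imitation-learned policy has a well-known closed form: the policy maximizing the expected action value minus a KL penalty toward an anchor policy is a softmax of the values tempered by the anchor. The sketch below illustrates that closed form only; it is not the paper's regret-minimization search algorithm, and all names (`kl_regularized_policy`, `q_values`, `anchor`, `lam`) are illustrative assumptions.

```python
import math

def kl_regularized_policy(q_values, anchor, lam):
    """Illustrative closed-form maximizer of  E_pi[Q] - lam * KL(pi || anchor).

    The solution is pi(a) proportional to anchor(a) * exp(Q(a) / lam):
    a large lam keeps pi close to the imitation-learned anchor (human-like),
    while a small lam lets the action values dominate (stronger play).
    """
    # Work in log space for numerical stability.
    logits = [math.log(p) + q / lam for p, q in zip(anchor, q_values)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

For example, with `anchor = [0.7, 0.2, 0.1]` and `q_values = [0.0, 1.0, 0.0]`, a large `lam` returns a distribution near the anchor, while a small `lam` concentrates mass on the second (highest-value) action.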

Author Information

Athul Paul Jacob (MIT)
David Wu (FAIR)
Gabriele Farina (Carnegie Mellon University)

I am currently a first-year Ph.D. student in the Computer Science Department at Carnegie Mellon University, where I am fortunate to be advised by Tuomas Sandholm. I am part of the Electronics Marketplaces Lab. I mostly work on Kidney Exchange and Algorithmic Game Theory.

Adam Lerer (Facebook AI Research)
Hengyuan Hu (Meta AI)
Anton Bakhtin (Facebook AI Research)
Jacob Andreas (MIT)
Noam Brown (Facebook AI Research)