Spotlight
Modeling Strong and Human-Like Gameplay with KL-Regularized Search
Athul Paul Jacob · David Wu · Gabriele Farina · Adam Lerer · Hengyuan Hu · Anton Bakhtin · Jacob Andreas · Noam Brown

Wed Jul 20 07:30 AM -- 07:35 AM (PDT) @ Room 307

We consider the task of accurately modeling strong human policies in multi-agent decision-making problems, given examples of human behavior. Imitation learning is effective at predicting human actions but may not match the strength of expert humans (e.g., it sometimes commits blunders), while self-play learning and search techniques such as AlphaZero lead to strong performance but may produce policies that differ markedly from human behavior. In chess and Go, we show that regularized search algorithms that penalize KL divergence from an imitation-learned policy both predict strong human play more accurately and perform better than imitation learning alone. We then introduce a novel regret minimization algorithm that is regularized based on the KL divergence from an imitation-learned policy, and show that using this algorithm for search in no-press Diplomacy yields a policy that matches the human prediction accuracy of imitation learning while being substantially stronger.
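For intuition, the KL-regularized objective described in the abstract has a simple closed form at a single decision point: choosing a policy that maximizes expected action value minus lambda times the KL divergence to the imitation-learned policy re-weights that policy by exponentiated action values. The sketch below is only an illustration of this one-step computation under assumed inputs (the function name, lambda values, and example numbers are hypothetical); the paper's actual method applies the idea inside search and regret minimization, which is not reproduced here.

import numpy as np

def kl_regularized_policy(q_values, anchor_policy, lam):
    """One-step sketch of a KL-regularized policy (illustrative, not the paper's algorithm).

    Maximizing  E_pi[Q(a)] - lam * KL(pi || anchor_policy)  over pi gives the
    closed form  pi(a) proportional to anchor_policy(a) * exp(Q(a) / lam).
    Large lam stays close to the human-like anchor policy; small lam
    approaches the greedy argmax over Q.
    """
    logits = np.log(np.asarray(anchor_policy)) + np.asarray(q_values) / lam
    logits -= logits.max()              # subtract max for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Hypothetical example: three actions with search-estimated values q and an
# imitation-learned (human-like) prior over those actions.
q = [1.0, 1.2, 0.2]
anchor = [0.6, 0.3, 0.1]
print(kl_regularized_policy(q, anchor, lam=0.5))   # stays near the human prior, shifted toward high-value actions
print(kl_regularized_policy(q, anchor, lam=0.05))  # nearly greedy with respect to q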

Author Information

Athul Paul Jacob (MIT)
David Wu (FAIR)
Gabriele Farina (Carnegie Mellon University)
Adam Lerer (Facebook AI Research)
Hengyuan Hu (Meta AI)
Anton Bakhtin (Facebook AI Research)
Jacob Andreas (MIT)
Noam Brown (Facebook AI Research)
