Modeling Strong and Human-Like Gameplay with KL-Regularized Search
We consider the task of accurately modeling strong human policies in multi-agent decision-making problems, given examples of human behavior. Imitation learning is effective at predicting human actions but may not match the strength of expert humans (e.g., by sometimes committing blunders), while self-play learning and search techniques such as AlphaZero lead to strong performance but may produce policies that differ markedly from human behavior. In chess and Go, we show that regularized search algorithms that penalize KL divergence from an imitation-learned policy yield higher prediction accuracy of strong humans and better performance than imitation learning alone. We then introduce a novel regret minimization algorithm that is regularized based on the KL divergence from an imitation-learned policy, and show that using this algorithm for search in no-press Diplomacy yields a policy that matches the human prediction accuracy of imitation learning while being substantially stronger.
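To make the regularization idea concrete, here is a minimal sketch (not the paper's exact algorithm) of the standard closed form for a KL-regularized action distribution: the policy maximizing E_pi[Q] - λ·KL(pi ‖ tau) is pi(a) ∝ tau(a)·exp(Q(a)/λ), where tau is the imitation-learned "human" anchor policy and Q comes from search. The function name, toy values, and λ settings below are illustrative assumptions; λ controls the trade-off between matching the human anchor (large λ) and playing greedily with respect to the search values (small λ).

```python
import numpy as np

def kl_regularized_policy(q_values, anchor, lam):
    """Return pi with pi(a) proportional to anchor(a) * exp(Q(a) / lam).

    This is the maximizer of  E_pi[Q] - lam * KL(pi || anchor)
    over the probability simplex (a standard result; hypothetical
    illustration, not the paper's exact search procedure).
    """
    logits = np.log(anchor) + np.asarray(q_values, dtype=float) / lam
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy example with 3 actions (values are made up for illustration).
tau = np.array([0.6, 0.3, 0.1])     # imitation-learned human-like prior
q = np.array([0.0, 1.0, 0.2])       # value estimates from search

weak_reg = kl_regularized_policy(q, tau, lam=0.1)    # near-greedy on Q
strong_reg = kl_regularized_policy(q, tau, lam=10.0)  # stays close to tau
```

With small λ the policy concentrates on the highest-value action (action 1), while with large λ it nearly reproduces the human anchor, which is the trade-off the abstract describes between strength and human prediction accuracy.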
Author Information
Athul Paul Jacob (MIT)
David Wu (FAIR)
Gabriele Farina (Carnegie Mellon University)
Adam Lerer (Facebook AI Research)
Hengyuan Hu (Meta AI)
Anton Bakhtin (Facebook AI Research)
Jacob Andreas (MIT)
Noam Brown (Facebook AI Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Modeling Strong and Human-Like Gameplay with KL-Regularized Search
  Wed. Jul 20th, 02:30 -- 02:35 PM, Room 307
More from the Same Authors
- 2021: Off-Belief Learning
  Hengyuan Hu
- 2021: Self-Explaining Deviations for Zero-Shot Coordination
  Hengyuan Hu
- 2023 Poster: Abstracting Imperfect Information Away from Two-Player Zero-Sum Games
  Samuel Sokota · Ryan D'Orazio · Chun Kai Ling · David Wu · Zico Kolter · Noam Brown
- 2023 Poster: Guiding Pretraining in Reinforcement Learning with Large Language Models
  Yuqing Du · Olivia Watkins · Zihan Wang · Cédric Colas · Trevor Darrell · Pieter Abbeel · Abhishek Gupta · Jacob Andreas
- 2023 Poster: Near-Optimal $\Phi$-Regret Learning in Extensive-Form Games
  Ioannis Anagnostides · Gabriele Farina · Tuomas Sandholm
- 2023 Poster: Team Belief DAG: Generalizing the Sequence Form to Team Games for Fast Computation of Correlated Team Max-Min Equilibria via Regret Minimization
  Brian Zhang · Gabriele Farina · Tuomas Sandholm
- 2023 Poster: PromptBoosting: Black-Box Text Classification with Ten Forward Passes
  Bairu Hou · Joe O'Connor · Jacob Andreas · Shiyu Chang · Yang Zhang
- 2022 Poster: On Last-Iterate Convergence Beyond Zero-Sum Games
  Ioannis Anagnostides · Ioannis Panageas · Gabriele Farina · Tuomas Sandholm
- 2022 Poster: Kernelized Multiplicative Weights for 0/1-Polyhedral Games: Bridging the Gap Between Learning in Extensive-Form and Normal-Form Games
  Gabriele Farina · Chung-Wei Lee · Haipeng Luo · Christian Kroer
- 2022 Spotlight: Kernelized Multiplicative Weights for 0/1-Polyhedral Games: Bridging the Gap Between Learning in Extensive-Form and Normal-Form Games
  Gabriele Farina · Chung-Wei Lee · Haipeng Luo · Christian Kroer
- 2022 Spotlight: On Last-Iterate Convergence Beyond Zero-Sum Games
  Ioannis Anagnostides · Ioannis Panageas · Gabriele Farina · Tuomas Sandholm
- 2022 Poster: Learning inverse folding from millions of predicted structures
  Chloe Hsu · Robert Verkuil · Jason Liu · Zeming Lin · Brian Hie · Tom Sercu · Adam Lerer · Alexander Rives
- 2022 Oral: Learning inverse folding from millions of predicted structures
  Chloe Hsu · Robert Verkuil · Jason Liu · Zeming Lin · Brian Hie · Tom Sercu · Adam Lerer · Alexander Rives
- 2021 Poster: Off-Belief Learning
  Hengyuan Hu · Adam Lerer · Brandon Cui · Luis Pineda · Noam Brown · Jakob Foerster
- 2021 Poster: Connecting Optimal Ex-Ante Collusion in Teams to Extensive-Form Correlation: Faster Algorithms and Positive Complexity Results
  Gabriele Farina · Andrea Celli · Nicola Gatti · Tuomas Sandholm
- 2021 Spotlight: Off-Belief Learning
  Hengyuan Hu · Adam Lerer · Brandon Cui · Luis Pineda · Noam Brown · Jakob Foerster
- 2021 Spotlight: Connecting Optimal Ex-Ante Collusion in Teams to Extensive-Form Correlation: Faster Algorithms and Positive Complexity Results
  Gabriele Farina · Andrea Celli · Nicola Gatti · Tuomas Sandholm
- 2021 Poster: Trajectory Diversity for Zero-Shot Coordination
  Andrei Lupu · Brandon Cui · Hengyuan Hu · Jakob Foerster
- 2021 Spotlight: Trajectory Diversity for Zero-Shot Coordination
  Andrei Lupu · Brandon Cui · Hengyuan Hu · Jakob Foerster
- 2020 Workshop: 1st Workshop on Language in Reinforcement Learning (LaReL)
  Nantas Nardelli · Jelena Luketina · Jakob Foerster · Victor Zhong · Jacob Andreas · Tim Rocktäschel · Edward Grefenstette
- 2020 Poster: “Other-Play” for Zero-Shot Coordination
  Hengyuan Hu · Alexander Peysakhovich · Adam Lerer · Jakob Foerster
- 2020 Poster: Stochastic Regret Minimization in Extensive-Form Games
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2019 Poster: Deep Counterfactual Regret Minimization
  Noam Brown · Adam Lerer · Sam Gross · Tuomas Sandholm
- 2019 Poster: Stable-Predictive Optimistic Counterfactual Regret Minimization
  Gabriele Farina · Christian Kroer · Noam Brown · Tuomas Sandholm
- 2019 Poster: Regret Circuits: Composability of Regret Minimizers
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2019 Oral: Deep Counterfactual Regret Minimization
  Noam Brown · Adam Lerer · Sam Gross · Tuomas Sandholm
- 2019 Oral: Stable-Predictive Optimistic Counterfactual Regret Minimization
  Gabriele Farina · Christian Kroer · Noam Brown · Tuomas Sandholm
- 2019 Oral: Regret Circuits: Composability of Regret Minimizers
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2018 Poster: Composable Planning with Attributes
  Amy Zhang · Sainbayar Sukhbaatar · Adam Lerer · Arthur Szlam · Rob Fergus
- 2018 Oral: Composable Planning with Attributes
  Amy Zhang · Sainbayar Sukhbaatar · Adam Lerer · Arthur Szlam · Rob Fergus
- 2017 Poster: Regret Minimization in Behaviorally-Constrained Zero-Sum Games
  Gabriele Farina · Christian Kroer · Tuomas Sandholm
- 2017 Talk: Regret Minimization in Behaviorally-Constrained Zero-Sum Games
  Gabriele Farina · Christian Kroer · Tuomas Sandholm