

Poster

Near-Optimal Learning of Extensive-Form Games with Imperfect Information

Yu Bai · Chi Jin · Song Mei · Tiancheng Yu

Hall E #1103

Keywords: [ T: Online Learning and Bandits ] [ T: Game Theory ] [ RL: Multi-agent ] [ T: Reinforcement Learning and Planning ]


Abstract: This paper resolves the open question of designing near-optimal algorithms for learning imperfect-information extensive-form games from bandit feedback. We present the first line of algorithms that require only $\tilde{O}((XA+YB)/\varepsilon^2)$ episodes of play to find an $\varepsilon$-approximate Nash equilibrium in two-player zero-sum games, where $X, Y$ are the numbers of information sets and $A, B$ are the numbers of actions for the two players. This improves upon the best known sample complexity of $\tilde{O}((X^2A+Y^2B)/\varepsilon^2)$ by a factor of $\tilde{O}(\max\{X,Y\})$, and matches the information-theoretic lower bound up to logarithmic factors. We achieve this sample complexity by two new algorithms: Balanced Online Mirror Descent and Balanced Counterfactual Regret Minimization. Both algorithms rely on novel approaches of integrating \emph{balanced exploration policies} into their classical counterparts. We also extend our results to learning Coarse Correlated Equilibria in multi-player general-sum games.
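The abstract does not spell out how the balanced exploration policies are constructed. As a minimal, hypothetical sketch of the idea, the Python below assigns each action a probability proportional to the number of terminal information sets reachable after taking it, so that play spreads evenly over the game tree rather than concentrating on a few branches. The toy tree, node names, and helper functions are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a "balanced" exploration policy on a toy game tree.
# Each action is played with probability proportional to the number of
# terminal information sets reachable under it. Tree structure and names
# are made up for illustration; they are not the paper's construction.

from fractions import Fraction

# Toy tree: each information set maps actions to a child infoset or a leaf.
tree = {
    "root": {"a": "x1", "b": "x2"},
    "x1": {"a": "leaf", "b": "leaf", "c": "leaf"},  # 3 leaves below x1
    "x2": {"a": "leaf"},                            # 1 leaf below x2
}

def num_leaves(node):
    """Count terminal information sets reachable from `node`."""
    if node == "leaf":
        return 1
    return sum(num_leaves(child) for child in tree[node].values())

def balanced_policy(node):
    """Action probabilities proportional to reachable-leaf counts."""
    total = num_leaves(node)
    return {action: Fraction(num_leaves(child), total)
            for action, child in tree[node].items()}

print(balanced_policy("root"))  # a -> 3/4, b -> 1/4
```

On this toy tree the root plays action a with probability 3/4 because three of the four terminal information sets lie below it. Per the abstract, integrating this kind of balancing into online mirror descent and counterfactual regret minimization is what yields the improved $\tilde{O}((XA+YB)/\varepsilon^2)$ sample complexity.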
