Games are abstractions of the real world, where artificial agents learn to compete and cooperate with other agents. While significant achievements have been made in various perfect- and imperfect-information games, DouDizhu (a.k.a. Fighting the Landlord), a three-player card game, remains unsolved. DouDizhu is a very challenging domain, combining competition, collaboration, imperfect information, a large state space, and, in particular, a massive set of possible actions whose legal subset varies significantly from turn to turn. Unfortunately, modern reinforcement learning algorithms mainly focus on simple and small action spaces and, not surprisingly, have been shown to make unsatisfactory progress in DouDizhu. In this work, we propose a conceptually simple yet effective DouDizhu AI system, namely DouZero, which enhances traditional Monte-Carlo methods with deep neural networks, action encoding, and parallel actors. Starting from scratch on a single server with four GPUs, DouZero outperformed all existing DouDizhu AI programs within days of training and ranked first on the Botzone leaderboard among 344 AI agents. Through building DouZero, we show that classic Monte-Carlo methods can be made to deliver strong results in a hard domain with a complex action space. The code and an online demo are released at https://github.com/kwai/DouZero with the hope that this insight could motivate future work.
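The recipe the abstract names, Monte-Carlo value targets plus a deep Q-network that scores encoded actions so the per-turn argmax works over whatever actions happen to be legal, can be illustrated with a short sketch. The snippet below is a hypothetical PyTorch illustration, not the released DouZero code: the environment interface (`reset`, `step`, `legal_actions`, `encode_state`, `encode_action`), the encoding sizes, and the network shape are all assumptions, and the parallel actors are omitted.

```python
# Minimal sketch of Monte-Carlo RL with a Q-network over encoded actions.
# Hypothetical illustration; env interface and sizes below are assumptions.
import random
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 128, 54  # assumed encoding sizes

class QNet(nn.Module):
    """Scores (state, action) pairs. Because the legal actions change every
    turn, actions are network *inputs*, not a fixed set of output heads."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 1),
        )

    def forward(self, states, actions):
        # states: (N, STATE_DIM), actions: (N, ACTION_DIM) -> (N,) scores
        return self.mlp(torch.cat([states, actions], dim=1)).squeeze(1)

def run_episode(env, q_net, epsilon=0.05):
    """Epsilon-greedy rollout. Returns the visited (state, action) encodings
    and the final return, used directly as the Monte-Carlo target (no
    bootstrapping, unlike TD methods)."""
    trajectory, state, done, reward = [], env.reset(), False, 0.0
    while not done:
        legal = env.legal_actions()  # legal set varies from turn to turn
        s = torch.as_tensor(env.encode_state(state), dtype=torch.float32)
        acts = torch.stack([
            torch.as_tensor(env.encode_action(a), dtype=torch.float32)
            for a in legal])
        if random.random() < epsilon:
            idx = random.randrange(len(legal))
        else:
            with torch.no_grad():
                idx = q_net(s.expand(len(legal), -1), acts).argmax().item()
        trajectory.append((s, acts[idx]))
        state, reward, done = env.step(legal[idx])
    return trajectory, reward

def mc_update(q_net, optimizer, trajectory, g):
    """Regress Q(s, a) toward the episode return g for every visited pair."""
    states = torch.stack([s for s, _ in trajectory])
    actions = torch.stack([a for _, a in trajectory])
    target = torch.full((len(trajectory),), float(g))
    loss = nn.functional.mse_loss(q_net(states, actions), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is the design choice the abstract hints at: scoring encoded actions sidesteps the huge, turn-varying action space that defeats fixed-output-head Q-learning, while the Monte-Carlo return keeps the target simple enough to scale with parallel rollouts.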
Author Information
Daochen Zha (Texas A&M University)
Jingru Xie (Kwai Inc.)
Wenye Ma (Kuaishou)
Sheng Zhang (Georgia Institute of Technology)
I am currently a final-year PhD student in the Machine Learning Program at Georgia Tech. I am fortunate to be advised by Prof. Justin Romberg and Prof. Ashwin Pananjady. Before coming to Georgia Tech, I graduated with an MS in Applied Mathematics from Columbia University and a BS in Mathematics and Applied Mathematics from Wuhan University. My research mainly focuses on reinforcement learning (RL) and distributed optimization. The overall goal of my research is to enhance the theoretical understanding of RL and to design efficient algorithms for large-scale problems arising from machine-learning and decision-making applications. Specifically, I have studied the statistical efficiency (sample complexity) of RL algorithms and designed an accelerated method for distributed stochastic optimization problems. In addition, during my previous research internships, I developed an AI program for a popular Chinese poker game using self-play deep RL, proposed a matrix factorization framework for high-dimensional demand forecasting with missing values, and designed deep convolutional neural networks for automated image segmentation of neurons.
Xiangru Lian (Kwai Inc.)
Xia Hu (Texas A&M University)
Ji Liu (Kwai Seattle AI lab, University of Rochester)
Ji Liu is an Assistant Professor in Computer Science, Electrical and Computer Engineering, and the Goergen Institute for Data Science at the University of Rochester (UR). He received his Ph.D. in Computer Science from the University of Wisconsin-Madison. His research interests focus on distributed optimization and machine learning. He also has rich experience in various data analytics applications in healthcare, bioinformatics, social networks, computer vision, etc. His recent research focuses on asynchronous parallel optimization, sparse learning (compressed sensing) theory and algorithms, structural model estimation, online learning, abnormal event detection, feature/pattern extraction, etc. He has published more than 40 papers in top CS journals and conferences, including JMLR, SIOPT, TPAMI, TIP, TKDD, NIPS, ICML, UAI, SIGKDD, ICCV, CVPR, ECCV, AAAI, IJCAI, ACM MM, etc. He won the Best Paper honorable mention at SIGKDD 2010 and the Best Student Paper award at UAI 2015.
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning »
  Wed. Jul 21st 02:30 -- 02:35 AM
More from the Same Authors
- 2021: Finite Sample Analysis of Average-Reward TD Learning and $Q$-Learning »
  Sheng Zhang · Zhe Zhang · Siva Maguluri
- 2021 Poster: Streaming Bayesian Deep Tensor Factorization »
  Shikai Fang · Zheng Wang · Zhimeng Pan · Ji Liu · Shandian Zhe
- 2021 Spotlight: Streaming Bayesian Deep Tensor Factorization »
  Shikai Fang · Zheng Wang · Zhimeng Pan · Ji Liu · Shandian Zhe
- 2021 Poster: 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed »
  Hanlin Tang · Shaoduo Gan · Ammar Ahmad Awan · Samyam Rajbhandari · Conglong Li · Xiangru Lian · Ji Liu · Ce Zhang · Yuxiong He
- 2021 Spotlight: 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed »
  Hanlin Tang · Shaoduo Gan · Ammar Ahmad Awan · Samyam Rajbhandari · Conglong Li · Xiangru Lian · Ji Liu · Ce Zhang · Yuxiong He
- 2019 Poster: RaFM: Rank-Aware Factorization Machines »
  Xiaoshuang Chen · Yin Zheng · Jiaxing Wang · Wenye Ma · Junzhou Huang
- 2019 Poster: Distributed Learning over Unreliable Networks »
  Chen Yu · Hanlin Tang · Cedric Renggli · Simon Kassing · Ankit Singla · Dan Alistarh · Ce Zhang · Ji Liu
- 2019 Poster: $\texttt{DoubleSqueeze}$: Parallel Stochastic Gradient Descent with Double-pass Error-Compensated Compression »
  Hanlin Tang · Chen Yu · Xiangru Lian · Tong Zhang · Ji Liu
- 2019 Oral: $\texttt{DoubleSqueeze}$: Parallel Stochastic Gradient Descent with Double-pass Error-Compensated Compression »
  Hanlin Tang · Chen Yu · Xiangru Lian · Tong Zhang · Ji Liu
- 2019 Oral: Distributed Learning over Unreliable Networks »
  Chen Yu · Hanlin Tang · Cedric Renggli · Simon Kassing · Ankit Singla · Dan Alistarh · Ce Zhang · Ji Liu
- 2019 Oral: RaFM: Rank-Aware Factorization Machines »
  Xiaoshuang Chen · Yin Zheng · Jiaxing Wang · Wenye Ma · Junzhou Huang
- 2018 Poster: Asynchronous Decentralized Parallel Stochastic Gradient Descent »
  Xiangru Lian · Wei Zhang · Ce Zhang · Ji Liu
- 2018 Poster: $D^2$: Decentralized Training over Decentralized Data »
  Hanlin Tang · Xiangru Lian · Ming Yan · Ce Zhang · Ji Liu
- 2018 Oral: $D^2$: Decentralized Training over Decentralized Data »
  Hanlin Tang · Xiangru Lian · Ming Yan · Ce Zhang · Ji Liu
- 2018 Oral: Asynchronous Decentralized Parallel Stochastic Gradient Descent »
  Xiangru Lian · Wei Zhang · Ce Zhang · Ji Liu
- 2017 Poster: ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning »
  Hantian Zhang · Jerry Li · Kaan Kara · Dan Alistarh · Ji Liu · Ce Zhang
- 2017 Talk: ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning »
  Hantian Zhang · Jerry Li · Kaan Kara · Dan Alistarh · Ji Liu · Ce Zhang
- 2017 Poster: On The Projection Operator to A Three-view Cardinality Constrained Set »
  Haichuan Yang · Shupeng Gui · Chuyang Ke · Daniel Stefankovic · Ryohei Fujimaki · Ji Liu
- 2017 Talk: On The Projection Operator to A Three-view Cardinality Constrained Set »
  Haichuan Yang · Shupeng Gui · Chuyang Ke · Daniel Stefankovic · Ryohei Fujimaki · Ji Liu