Spotlight
DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning
Daochen Zha · Jingru Xie · Wenye Ma · Sheng Zhang · Xiangru Lian · Xia Hu · Ji Liu

Tue Jul 20 07:30 PM -- 07:35 PM (PDT)

Games are abstractions of the real world, where artificial agents learn to compete and cooperate with other agents. While significant achievements have been made in various perfect- and imperfect-information games, DouDizhu (a.k.a. Fighting the Landlord), a three-player card game, is still unsolved. DouDizhu is a very challenging domain with competition, collaboration, imperfect information, a large state space, and particularly a massive set of possible actions, where the legal actions vary significantly from turn to turn. Unfortunately, modern reinforcement learning algorithms mainly focus on simple and small action spaces and, not surprisingly, are shown not to make satisfactory progress in DouDizhu. In this work, we propose a conceptually simple yet effective DouDizhu AI system, namely DouZero, which enhances traditional Monte-Carlo methods with deep neural networks, action encoding, and parallel actors. Starting from scratch on a single server with four GPUs, DouZero outperformed all existing DouDizhu AI programs within days of training and was ranked first on the Botzone leaderboard among 344 AI agents. Through building DouZero, we show that classic Monte-Carlo methods can be made to deliver strong results in a hard domain with a complex action space. The code and an online demo are released at https://github.com/kwai/DouZero with the hope that this insight could motivate future work.
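The core idea, scoring each legal action with a value network instead of enumerating a fixed output action space, can be sketched in a few lines. The snippet below is an illustrative, simplified rendering of the Deep Monte-Carlo approach the abstract describes, not the released implementation; the feature sizes, network architecture, and function names are assumptions (see https://github.com/kwai/DouZero for the actual code).

```python
# Minimal sketch of Deep Monte-Carlo with action encoding (illustrative only;
# sizes, architecture, and hyperparameters below are assumptions).
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 512, 54  # hypothetical feature sizes


class QNetwork(nn.Module):
    """Scores a (state, action) pair; the action is encoded as part of the input,
    so the variable-size set of legal actions never has to map to fixed output heads."""

    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 1),
        )

    def forward(self, state, actions):
        # state: (STATE_DIM,), actions: (N_legal, ACTION_DIM) -> (N_legal,) Q-values
        s = state.unsqueeze(0).expand(actions.shape[0], -1)
        return self.mlp(torch.cat([s, actions], dim=1)).squeeze(-1)


def act(net, state, legal_actions, epsilon=0.05):
    """Epsilon-greedy choice over the legal actions of the current turn."""
    if torch.rand(()) < epsilon:
        return torch.randint(len(legal_actions), ()).item()
    with torch.no_grad():
        return net(state, legal_actions).argmax().item()


def mc_update(net, optimizer, states, actions, episode_return):
    """Monte-Carlo regression: every (state, action) visited in an episode is
    pushed toward the final episode return (no bootstrapping)."""
    q = torch.stack([net(s, a.unsqueeze(0)).squeeze(0) for s, a in zip(states, actions)])
    loss = ((q - episode_return) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the full system, many such actors generate episodes in parallel and feed a central learner, which is what makes the plain Monte-Carlo target practical at scale.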

Author Information

Daochen Zha (Texas A&M University)
Jingru Xie (Kwai Inc.)
Wenye Ma (Kuaishou)
Sheng Zhang (Georgia Institute of Technology)

I am currently a final-year PhD student in the Machine Learning Program at Georgia Tech. I am fortunate to be advised by Prof. Justin Romberg and Prof. Ashwin Pananjady. Before coming to Georgia Tech, I graduated with an MS in Applied Mathematics from Columbia University and a BS in Mathematics and Applied Mathematics from Wuhan University. My research mainly focuses on reinforcement learning (RL) and distributed optimization. The overall goal of my research is to enhance the theoretical understanding of RL and to design efficient algorithms for large-scale problems arising from machine-learning and decision-making applications. Specifically, I have studied the statistical efficiency (sample complexity) of RL algorithms and designed an accelerated method for distributed stochastic optimization problems. In addition, during my previous research internships, I developed an AI program for a popular Chinese poker game using self-play deep RL, proposed a matrix factorization framework for high-dimensional demand forecasting with missing values, and designed deep convolutional neural networks for automated image segmentation of neurons.

Xiangru Lian (Kwai Inc.)
Xia Hu (Texas A&M University)
Ji Liu (Kwai Seattle AI lab, University of Rochester)

Ji Liu is an Assistant Professor in Computer Science, Electrical and Computer Engineering, and the Goergen Institute for Data Science at the University of Rochester (UR). He received his Ph.D. in Computer Science from the University of Wisconsin-Madison. His research interests focus on distributed optimization and machine learning. He also has rich experience in various data analytics applications in healthcare, bioinformatics, social networks, computer vision, etc. His recent research focuses on asynchronous parallel optimization, sparse learning (compressed sensing) theory and algorithms, structural model estimation, online learning, abnormal event detection, feature/pattern extraction, etc. He has published more than 40 papers in top CS journals and conferences, including JMLR, SIOPT, TPAMI, TIP, TKDD, NIPS, ICML, UAI, SIGKDD, ICCV, CVPR, ECCV, AAAI, IJCAI, ACM MM, etc. He received a Best Paper honorable mention at SIGKDD 2010 and the Best Student Paper award at UAI 2015.
