Poster
Learning Intuitive Policies Using Action Features
Mingwei Ma · Jizhou Liu · Samuel Sokota · Max Kleiman-Weiner · Jakob Foerster

Wed Jul 26 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #105

An unaddressed challenge in multi-agent coordination is to enable AI agents to exploit the semantic relationships between the features of actions and the features of observations. Humans take advantage of these relationships in highly intuitive ways. For instance, in the absence of a shared language, we might point to the object we desire or hold up our fingers to indicate how many objects we want. To address this challenge, we investigate the effect of network architecture on the propensity of learning algorithms to exploit these semantic relationships. In a procedurally generated coordination task, we find that attention-based architectures that jointly process a featurized representation of observations and actions have a better inductive bias for learning intuitive policies. Through fine-grained evaluation and scenario analysis, we show that the resulting policies are human-interpretable. Moreover, such agents coordinate with people without training on any human data.
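The abstract describes architectures that score actions by attending from observation features to action features. As a rough illustration only (not the authors' actual model), the sketch below shows one minimal way such a policy head could look: a scaled dot-product attention between an observation query and per-action keys, softmaxed into a distribution over actions. All names (`attention_policy`, the projection matrices `W_q`, `W_k`) and dimensions are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_policy(obs_feats, action_feats, W_q, W_k):
    """Score each action by attending from the observation's features
    to that action's features, then softmax the scores into a policy.

    obs_feats:    (d_obs,)   featurized observation
    action_feats: (n, d_act) one feature vector per legal action
    W_q, W_k:     learned projections into a shared query/key space
    """
    q = W_q @ obs_feats                    # query from observation, shape (d,)
    k = action_feats @ W_k.T               # one key per action, shape (n, d)
    scores = k @ q / np.sqrt(q.shape[0])   # scaled dot-product attention
    return softmax(scores)                 # probability distribution over actions

rng = np.random.default_rng(0)
d_obs, d_act, d, n = 8, 8, 16, 5
pi = attention_policy(
    rng.normal(size=d_obs),
    rng.normal(size=(n, d_act)),
    rng.normal(size=(d, d_obs)),
    rng.normal(size=(d, d_act)),
)
```

Because the scores are dot products between observation and action features, actions whose features resemble the relevant observation features receive higher probability, which is the kind of inductive bias the abstract argues helps learning intuitive, human-interpretable policies.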

Author Information

Mingwei Ma (University of Chicago)

Mingwei is a Ph.D. and MBA candidate at Chicago Booth, generously funded by a Booth PhD Fellowship. His research interests include deep reinforcement learning, financial asset pricing, and high-frequency data, with broad applications in the systematic investment and algorithmic trading industry. Prior to his PhD, Mingwei received a BA in Physics and Philosophy and an MSc in Mathematical Physics from the University of Oxford, where he specialized in computational and mathematical physics as well as large-scale data analysis.

Jizhou Liu (University of Chicago)
Samuel Sokota (Carnegie Mellon University)
Max Kleiman-Weiner
Jakob Foerster (University of Oxford)

Jakob Foerster started as an Associate Professor at the Department of Engineering Science at the University of Oxford in the fall of 2021. During his PhD at Oxford, he helped bring deep multi-agent reinforcement learning to the forefront of AI research and interned at Google Brain, OpenAI, and DeepMind. After his PhD, he worked as a research scientist at Facebook AI Research in California, where he continued doing foundational work. He was the lead organizer of the first Emergent Communication workshop at NeurIPS in 2017, which he has helped organize ever since, and he was awarded a prestigious CIFAR AI Chair in 2019. His past work addresses how AI agents can learn to cooperate and communicate with other agents; most recently, he has been developing the zero-shot coordination problem setting, a crucial step towards human-AI coordination.