Poster
in
Workshop: Workshop on Theory of Mind in Communicating Agents

Theory of Mind as Intrinsic Motivation for Multi-Agent Reinforcement Learning

Ini Oguntola · Joseph Campbell · Simon Stepputtis · Katia Sycara

Keywords: [ rl ] [ Reinforcement Learning ] [ Interpretability ] [ theory of mind ] [ multi-agent ] [ intrinsic motivation ] [ ToM ] [ concept learning ]


Abstract:

The ability to model the mental states of others is crucial to human social intelligence, and can offer similar benefits to artificial agents with respect to the social dynamics induced in multi-agent settings. We present a method for grounding semantically meaningful, human-interpretable beliefs within policies modeled by deep networks. We then consider the task of second-order belief prediction, and propose that each agent's ability to predict the beliefs of the other agents can be used as an intrinsic reward signal for multi-agent reinforcement learning. Finally, we present preliminary empirical results in a mixed cooperative-competitive environment.
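The core idea of the abstract — shaping each agent's reward with its accuracy at predicting other agents' beliefs — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the belief-vector shapes, the squared-error measure of prediction accuracy, and the scaling coefficient `beta` are all assumptions for the sake of a concrete example.

```python
import numpy as np

def tom_intrinsic_reward(pred_beliefs, true_beliefs, beta=0.1):
    """Hypothetical intrinsic reward from second-order belief prediction.

    pred_beliefs: (n_other_agents, n_concepts) array of this agent's
        predictions of the other agents' belief vectors (assumed shape).
    true_beliefs: matching array of the beliefs those agents actually hold.
    Returns a scalar bonus that grows as predictions become more accurate.
    """
    error = np.mean((np.asarray(pred_beliefs) - np.asarray(true_beliefs)) ** 2)
    return beta * (1.0 - error)  # perfect prediction -> full bonus beta

def shaped_reward(env_reward, pred_beliefs, true_beliefs, beta=0.1):
    # Total reward = extrinsic environment reward + ToM intrinsic bonus.
    return env_reward + tom_intrinsic_reward(pred_beliefs, true_beliefs, beta)
```

In this sketch, an agent that perfectly anticipates its peers' (grounded, interpretable) beliefs earns the largest bonus, giving the policy gradient an incentive to model the other agents even when the environment reward is sparse.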
