
Emergent Social Learning via Multi-agent Reinforcement Learning
Kamal Ndousse · Douglas Eck · Sergey Levine · Natasha Jaques

Tue Jul 20 05:25 PM -- 05:30 PM (PDT)

Social learning is a key component of human and animal intelligence. By taking cues from the behavior of experts in their environment, social learners can acquire sophisticated behavior and rapidly adapt to new circumstances. This paper investigates whether independent reinforcement learning (RL) agents in a multi-agent environment can learn to use social learning to improve their performance. We find that in most circumstances, vanilla model-free RL agents do not use social learning. We analyze the reasons for this deficiency, and show that by imposing constraints on the training environment and introducing a model-based auxiliary loss we are able to obtain generalized social learning policies which enable agents to: i) discover complex skills that are not learned from single-agent training, and ii) adapt online to novel environments by taking cues from experts present in the new environment. In contrast, agents trained with model-free RL or imitation learning generalize poorly and do not succeed in the transfer tasks. By mixing multi-agent and solo training, we can obtain agents that use social learning to gain skills that they can deploy when alone, even outperforming agents trained alone from the start.
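To make the "model-based auxiliary loss" idea concrete, here is a minimal sketch of how such a term is typically combined with a model-free RL objective: a predictor is trained to forecast the next observation, and its error is added to the policy loss with a weighting coefficient. The linear predictor, the toy dynamics, and the weight `beta` below are illustrative assumptions for exposition, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def auxiliary_world_model_loss(obs, next_obs, W):
    """MSE between a (here, linear) next-observation prediction and the true
    next observation -- the model-based auxiliary term."""
    pred = obs @ W  # predicted next observation
    return float(np.mean((pred - next_obs) ** 2))

def combined_loss(policy_loss, obs, next_obs, W, beta=0.5):
    """Model-free RL loss augmented with the auxiliary prediction loss.
    `beta` trades off policy optimization against world-model accuracy."""
    return policy_loss + beta * auxiliary_world_model_loss(obs, next_obs, W)

# Toy example: dynamics are next_obs = 0.9 * obs, and W is a perfect predictor,
# so the auxiliary term vanishes and only the policy loss remains.
obs = rng.normal(size=(32, 8))        # batch of observations
next_obs = (obs @ np.eye(8)) * 0.9    # toy linear "environment dynamics"
W = np.eye(8) * 0.9                   # predictor matching those dynamics
loss = combined_loss(policy_loss=1.0, obs=obs, next_obs=next_obs, W=W)
```

In practice the predictor is a learned network head sharing features with the policy, so gradients from the auxiliary term shape the agent's representation of other agents' behavior.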

Author Information

Kamal Ndousse (Anthropic)
Douglas Eck (Google Brain)
Sergey Levine (UC Berkeley)
Natasha Jaques (Google Brain, UC Berkeley)
