Who to imitate: Imitating desired behavior from diverse multi-agent datasets
Tim Franzmeyer · Jakob Foerster · Edith Elkind · Phil Torr · Joao Henriques
Event URL: https://openreview.net/forum?id=TtvQRvydCn

AI agents are commonly trained on large datasets of unfiltered demonstrations of human behavior. However, not all behaviors are equally safe or desirable. We assume that the desired traits of an AI agent can be approximated by a desired value function (DVF) that assigns scores to collective outcomes in the dataset. For example, in a dataset of vehicle interactions, the DVF might be the number of incidents that occurred. We propose to first assess how well each individual agent's behavior is aligned with the DVF, e.g., how likely an agent is to cause incidents, and then imitate only the agents with desired behavior. To identify such agents, we introduce the concept of an agent's Exchange Value, which quantifies the expected change in collective value when the agent is substituted into a random group. This concept is similar to Shapley Values used in economics, but offers greater flexibility. We further introduce a variance-maximization objective to compute Exchange Values from incomplete observations, effectively clustering agents by their unobserved traits. Using both human and simulated datasets, we learn aligned imitation policies that outperform relevant baselines.
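The Exchange Value as defined above lends itself to a simple Monte Carlo estimate: repeatedly draw a random group, substitute the agent for a random member, and average the resulting change in collective value. The sketch below is illustrative only, assuming a callable dvf that scores a group's collective outcome; the function name, signature, and toy scores are assumptions for exposition, not the paper's actual implementation.

import random

def exchange_value(agent, agent_pool, dvf, group_size=4, n_samples=1000, rng=None):
    """Monte Carlo estimate of an agent's Exchange Value: the expected
    change in the desired value function (DVF) when `agent` replaces a
    random member of a randomly drawn group.

    Note: `dvf`, `agent_pool`, and `group_size` are hypothetical names
    used for illustration, not the paper's interface.
    """
    rng = rng or random.Random()
    total = 0.0
    for _ in range(n_samples):
        # Draw a random group that does not already contain the agent.
        others = [a for a in agent_pool if a is not agent]
        group = rng.sample(others, group_size)
        # Substitute the agent for a random group member and record the
        # resulting change in collective value.
        j = rng.randrange(group_size)
        substituted = group[:j] + [agent] + group[j + 1:]
        total += dvf(substituted) - dvf(group)
    return total / n_samples

# Toy usage: agents are per-agent "safety" scores and the collective
# value is their sum (a hypothetical stand-in for counting incidents).
agents = [0.1, 0.9, 0.5, 0.3, 0.8, 0.2]
ev = exchange_value(agents[1], agents, dvf=sum, group_size=3, n_samples=2000)

Under these toy assumptions, an agent with an above-average score yields a positive Exchange Value (substituting it into a group tends to raise the collective value), so ranking agents by Exchange Value and imitating only the top-ranked ones corresponds to the filtering step described in the abstract.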

Author Information

Tim Franzmeyer (Oxford University)
Jakob Foerster (University of Oxford)

Jakob Foerster started as an Associate Professor at the Department of Engineering Science at the University of Oxford in the fall of 2021. During his PhD at Oxford, he helped bring deep multi-agent reinforcement learning to the forefront of AI research, and he interned at Google Brain, OpenAI, and DeepMind. After his PhD, he worked as a research scientist at Facebook AI Research in California, where he continued doing foundational work. He was the lead organizer of the first Emergent Communication workshop at NeurIPS in 2017, which he has helped organize ever since, and he was awarded a prestigious CIFAR AI Chair in 2019. His past work addresses how AI agents can learn to cooperate and communicate with other agents; most recently, he has been developing and addressing the zero-shot coordination problem setting, a crucial step towards human-AI coordination.

Edith Elkind (University of Oxford)
Phil Torr (Oxford)
Joao Henriques (University of Oxford)
