Inferring the Goals of Communicating Agents from Actions and Instructions
Lance Ying · Tan Zhi-Xuan · Vikash Mansinghka · Josh Tenenbaum
Event URL: https://openreview.net/forum?id=TBWhdZUOwO
When humans cooperate, they frequently coordinate their activity through both verbal communication and non-verbal actions, using this information to infer a shared goal and plan. How can we model this inferential ability? In this paper, we introduce a model of a cooperative team where one agent, the principal, may communicate natural language instructions about their shared plan to another agent, the assistant, using GPT-3 as a likelihood function for instruction utterances. We then show how a third-person observer can infer the team's goal via multi-modal Bayesian inverse planning from actions and instructions, computing the posterior distribution over goals under the assumption that agents will act and communicate rationally to achieve them. We evaluate this approach by comparing it with human goal inferences in a multi-agent gridworld, finding that our model's inferences closely correlate with human judgments $(R = 0.96)$. When compared to inference from actions alone, we also find that instructions lead to more rapid and less uncertain goal inference, highlighting the importance of verbal communication for cooperative agents.
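At a high level, the abstract describes combining a prior over goals with an action likelihood (from a rational planning model) and an instruction likelihood (from GPT-3). The Python sketch below illustrates that combination on a toy problem. It is only an illustration of the multi-modal Bayesian update: the goal names, the logistic action model, and the keyword-based utterance score are placeholder assumptions, not the authors' planner or their GPT-3 likelihood.

```python
# Minimal sketch of multi-modal Bayesian goal inference:
# P(goal | actions, instruction) ∝ P(goal) * P(actions | goal) * P(instruction | goal).
# All likelihoods below are toy stand-ins, not the paper's implementation.

import math

GOALS = ["red_gem", "blue_gem", "yellow_gem"]
PRIOR = {g: 1.0 / len(GOALS) for g in GOALS}  # uniform prior over goals


def action_likelihood(actions, goal):
    """Toy stand-in for a rational planner: each observed step toward `goal`
    makes the action sequence more probable under that goal."""
    steps_toward = sum(1 for a in actions if a == goal)
    return math.exp(steps_toward) / (1.0 + math.exp(steps_toward))


def instruction_likelihood(utterance, goal):
    """Toy stand-in for an LM-based utterance likelihood: crude keyword
    overlap between the instruction and the goal name."""
    color = goal.split("_")[0]
    return 0.9 if color in utterance.lower() else 0.1


def goal_posterior(actions, utterance):
    """Combine prior, action likelihood, and instruction likelihood, then normalize."""
    scores = {
        g: PRIOR[g] * action_likelihood(actions, g) * instruction_likelihood(utterance, g)
        for g in GOALS
    }
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}


if __name__ == "__main__":
    observed_actions = ["red_gem", "red_gem"]          # two steps toward the red gem
    observed_instruction = "Can you get the red key?"  # principal's instruction
    print(goal_posterior(observed_actions, observed_instruction))
    # The posterior concentrates on "red_gem": both modalities point the same way.
```

Dropping the `instruction_likelihood` factor recovers inference from actions alone, which is the comparison the abstract reports as slower and more uncertain.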
Author Information
Lance Ying (School of Engineering and Applied Sciences, Harvard University)
Tan Zhi-Xuan (Massachusetts Institute of Technology)
Vikash Mansinghka (Massachusetts Institute of Technology)
Josh Tenenbaum (MIT)
Joshua Brett Tenenbaum is Professor of Cognitive Science and Computation at the Massachusetts Institute of Technology. He is known for contributions to mathematical psychology and Bayesian cognitive science. He previously taught at Stanford University, where he was the Wasow Visiting Fellow from October 2010 to January 2011. Tenenbaum received his undergraduate degree in physics from Yale University in 1993, and his Ph.D. from MIT in 1999. His work primarily focuses on analyzing probabilistic inference as the engine of human cognition and as a means of developing machine learning.
More from the Same Authors
- 2022: P18: Abstract Interpretation for Generalized Heuristic Search in Model-Based Planning
  Tan Zhi-Xuan
- 2023: Neuro-Symbolic Models of Human Moral Judgment: LLMs as Automatic Feature Extractors
  Joseph Kwon · Sydney Levine · Josh Tenenbaum
- 2023: Differentiating Metropolis-Hastings to Optimize Intractable Densities
  Gaurav Arya · Ruben Seyer · Frank Schäfer · Kartik Chandra · Alexander Lew · Mathieu Huot · Vikash Mansinghka · Jonathan Ragan-Kelley · Christopher Rackauckas · Moritz Schauer
- 2023: Sequential Monte Carlo Steering of Large Language Models using Probabilistic Programs
  Alexander Lew · Tan Zhi-Xuan · Gabriel Grand · Vikash Mansinghka
- 2023: Building Community Driven Libraries of Natural Programs
  Leonardo Hernandez Cano · Yewen Pu · Robert Hawkins · Josh Tenenbaum · Armando Solar-Lezama
- 2023: Inferring the Future by Imagining the Past
  Kartik Chandra · Tony Chen · Tzu-Mao Li · Jonathan Ragan-Kelley · Josh Tenenbaum
- 2023: The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling probabilistic social inferences from linguistic inputs
  Lance Ying · Katie Collins · Megan Wei · Cedegao Zhang · Tan Zhi-Xuan · Adrian Weller · Josh Tenenbaum · Catherine Wong
- 2023 Oral: Inferring Relational Potentials in Interacting Systems
  Armand Comas · Yilun Du · Christian Fernandez Lopez · Sandesh Ghimire · Mario Sznaier · Josh Tenenbaum · Octavia Camps
- 2023 Poster: Sequential Monte Carlo Learning for Time Series Structure Discovery
  Feras Saad · Brian Patton · Matthew Hoffman · Rif Saurous · Vikash Mansinghka
- 2023 Poster: On the Complexity of Bayesian Generalization
  Yu-Zhe Shi · Manjie Xu · John Hopcroft · Kun He · Josh Tenenbaum · Song-Chun Zhu · Ying Nian Wu · Wenjuan Han · Yixin Zhu
- 2023 Poster: Inferring Relational Potentials in Interacting Systems
  Armand Comas · Yilun Du · Christian Fernandez Lopez · Sandesh Ghimire · Mario Sznaier · Josh Tenenbaum · Octavia Camps
- 2023 Poster: Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC
  Yilun Du · Conor Durkan · Robin Strudel · Josh Tenenbaum · Sander Dieleman · Rob Fergus · Jascha Sohl-Dickstein · Arnaud Doucet · Will Grathwohl
- 2023 Poster: Learning Neural Constitutive Laws from Motion Observations for Generalizable PDE Dynamics
  Pingchuan Ma · Peter Yichen Chen · Bolei Deng · Josh Tenenbaum · Tao Du · Chuang Gan · Wojciech Matusik
- 2022: Contributed Spotlight Talks: Part 1
  David Dohan · Winnie Xu · Sugandha Sharma · Tan Zhi-Xuan
- 2022 Poster: Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning
  Aviv Netanyahu · Tianmin Shu · Josh Tenenbaum · Pulkit Agrawal
- 2022 Spotlight: Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning
  Aviv Netanyahu · Tianmin Shu · Josh Tenenbaum · Pulkit Agrawal
- 2022 Poster: Planning with Diffusion for Flexible Behavior Synthesis
  Michael Janner · Yilun Du · Josh Tenenbaum · Sergey Levine
- 2022 Oral: Planning with Diffusion for Flexible Behavior Synthesis
  Michael Janner · Yilun Du · Josh Tenenbaum · Sergey Levine
- 2022 Poster: Learning Iterative Reasoning through Energy Minimization
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2022 Poster: Prompting Decision Transformer for Few-Shot Policy Generalization
  Mengdi Xu · Yikang Shen · Shun Zhang · Yuchen Lu · Ding Zhao · Josh Tenenbaum · Chuang Gan
- 2022 Spotlight: Learning Iterative Reasoning through Energy Minimization
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2022 Spotlight: Prompting Decision Transformer for Few-Shot Policy Generalization
  Mengdi Xu · Yikang Shen · Shun Zhang · Yuchen Lu · Ding Zhao · Josh Tenenbaum · Chuang Gan
- 2021 Poster: A large-scale benchmark for few-shot program induction and synthesis
  Ferran Alet · Javier Lopez-Contreras · James Koppel · Maxwell Nye · Armando Solar-Lezama · Tomas Lozano-Perez · Leslie Kaelbling · Josh Tenenbaum
- 2021 Spotlight: A large-scale benchmark for few-shot program induction and synthesis
  Ferran Alet · Javier Lopez-Contreras · James Koppel · Maxwell Nye · Armando Solar-Lezama · Tomas Lozano-Perez · Leslie Kaelbling · Josh Tenenbaum
- 2021 Poster: AGENT: A Benchmark for Core Psychological Reasoning
  Tianmin Shu · Abhishek Bhandwaldar · Chuang Gan · Kevin Smith · Shari Liu · Dan Gutfreund · Elizabeth Spelke · Josh Tenenbaum · Tomer Ullman
- 2021 Spotlight: AGENT: A Benchmark for Core Psychological Reasoning
  Tianmin Shu · Abhishek Bhandwaldar · Chuang Gan · Kevin Smith · Shari Liu · Dan Gutfreund · Elizabeth Spelke · Josh Tenenbaum · Tomer Ullman
- 2021 Poster: Improved Contrastive Divergence Training of Energy-Based Models
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2021 Poster: Leveraging Language to Learn Program Abstractions and Search Heuristics
  Catherine Wong · Kevin Ellis · Josh Tenenbaum · Jacob Andreas
- 2021 Spotlight: Leveraging Language to Learn Program Abstractions and Search Heuristics
  Catherine Wong · Kevin Ellis · Josh Tenenbaum · Jacob Andreas
- 2021 Spotlight: Improved Contrastive Divergence Training of Energy-Based Models
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2020 Poster: Visual Grounding of Learned Physical Models
  Yunzhu Li · Toru Lin · Kexin Yi · Daniel Bear · Daniel Yamins · Jiajun Wu · Josh Tenenbaum · Antonio Torralba
- 2020 Poster: Causal Inference using Gaussian Processes with Structured Latent Confounders
  Sam Witty · Kenta Takatsu · David Jensen · Vikash Mansinghka
- 2020: Engagement and Solidarity with Global Queer Communities
  Raphael Gontijo Lopes · Bisi Alimi · Faris Gezahegn · Ida Momennejad · Tan Zhi-Xuan
- 2019 Poster: Learning to Infer Program Sketches
  Maxwell Nye · Luke Hewitt · Josh Tenenbaum · Armando Solar-Lezama
- 2019 Oral: Learning to Infer Program Sketches
  Maxwell Nye · Luke Hewitt · Josh Tenenbaum · Armando Solar-Lezama
- 2019 Poster: Infinite Mixture Prototypes for Few-shot Learning
  Kelsey Allen · Evan Shelhamer · Hanul Shin · Josh Tenenbaum
- 2019 Oral: Infinite Mixture Prototypes for Few-shot Learning
  Kelsey Allen · Evan Shelhamer · Hanul Shin · Josh Tenenbaum
- 2019 Poster: Neurally-Guided Structure Inference
  Sidi Lu · Jiayuan Mao · Josh Tenenbaum · Jiajun Wu
- 2019 Oral: Neurally-Guided Structure Inference
  Sidi Lu · Jiayuan Mao · Josh Tenenbaum · Jiajun Wu
- 2018 Invited Talk: Building Machines that Learn and Think Like People
  Josh Tenenbaum