

Poster in Workshop on Theory of Mind in Communicating Agents

Comparing the Evaluation and Production of Loophole Behavior in Children and Large Language Models

Sonia Murthy · Sophie Bridgers · Kiera Parece · Elena Glassman · Tomer Ullman

Keywords: [ Pragmatics ] [ Artificial Intelligence ] [ Large Language Models ] [ Loopholes ] [ Social Reasoning ] [ Theory of Mind ]


Abstract:

In law, lore, and everyday life, loopholes are commonplace. When people exploit a loophole, they understand the intended meaning or goal of another person, but choose to act on a different, though still possible, interpretation. Previous work suggests that people exploit loopholes when their goals are misaligned with the goals of others, but both capitulation and disobedience are too costly. Past and current AI research has shown that artificial agents engage in what superficially looks like the exploitation of loopholes; however, describing this behavior in such terms is an anthropomorphization. It remains unclear to what extent current models, especially Large Language Models (LLMs), capture the pragmatic understanding required for engaging in loopholes. We examined the performance of LLMs on two metrics developed for studying loophole behavior in adults and children: evaluation (whether loopholes are rated as resulting in different levels of trouble than compliance and non-compliance) and generation (coming up with new loopholes in a given context). We conducted a fine-grained comparison of state-of-the-art LLMs to children and found that while some LLMs rate loophole behavior as resulting in less trouble than outright non-compliance (in line with children), they struggle to generate loopholes of their own. Our results suggest a separation between the faculties underlying the evaluation and generation of loophole behavior, in both children and LLMs, with LLM abilities dovetailing with those of the youngest children in our studies.
