

Poster in Workshop: Theory of Mind in Communicating Agents

Hi-ToM: A Benchmark for Evaluating Higher-Order Theory of Mind Reasoning in Large Language Models

Yinghui He · Yufan Wu · Yulong Chen · Naihao Deng

Keywords: [ Chain-of-Thought Prompting ] [ Higher-Order Theory of Mind ] [ Large Language Models ]


Abstract:

Theory of Mind (ToM) is the ability to understand and reason about one's own and others' mental states; it plays a critical role in the development of intelligence, language understanding, and cognitive processes. While existing work has primarily focused on first- and second-order ToM, we explore higher-order ToM, which involves recursive reasoning about others' beliefs. We introduce Hi-ToM, a Higher-Order Theory of Mind benchmark. Our experimental evaluation using GPT-4 reveals that performance declines as the order of ToM reasoning increases, indicating the limitations of current models. These findings highlight the challenges of reasoning in complex ToM scenarios and underscore the need for further advances in the higher-order ToM capabilities of large language models.
