

Poster in Workshop on Theory of Mind in Communicating Agents

Preference Proxies: Evaluating Large Language Models in capturing Human Preferences in Human-AI Tasks

Mudit Verma · Siddhant Bhambri · Subbarao Kambhampati

Keywords: [ Human-Aware AI ] [ Large Language Models ] [ Human Preferences ] [ Theory of Mind ]


Abstract:

In this work, we investigate the potential of Large Language Models (LLMs) to serve as effective human proxies by capturing human preferences in the context of collaboration with AI agents. Focusing on two key aspects of human preferences, explicability and sub-task specification in team settings, we explore LLMs' ability not only to model mental states but also to understand human reasoning processes. We develop scenarios in which optimal AI performance depends on modeling human mental states and reasoning. Our investigation, spanning two preference types and a user study with 17 participants, offers insights into the suitability of LLMs as "Preference Proxies" in various human-AI applications, paving the way for future research on integrating AI agents with human users in Human-Aware AI tasks.
