

Poster

Can AI Assistants Know What They Don't Know?

Qinyuan Cheng · Tianxiang Sun · Xiangyang Liu · Wenwei Zhang · Zhangyue Yin · Shimin Li · Linyang Li · Zhengfu He · Kai Chen · Xipeng Qiu

Hall C 4-9 #906
[ Project Page ] [ Paper PDF ]

Poster: Tue 23 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

AI assistants powered by Large Language Models (LLMs) have demonstrated impressive performance across a variety of tasks. However, LLMs still make factual errors in knowledge-intensive tasks such as open-domain question answering, and these untruthful responses can pose significant risks in practical applications. In this paper, we therefore ask: "Can AI assistants know what they don't know, and can they express this awareness through natural language?" To investigate this, we construct a model-specific "I don't know" (Idk) dataset. This dataset includes both supervised fine-tuning (SFT) data and preference data, and categorizes questions according to whether the assistant knows their answers. We then align the assistant with its corresponding Idk dataset using different alignment methods, including supervised fine-tuning and preference optimization. Experimental results show that, after alignment with the Idk dataset, the assistant is more capable of declining to answer questions outside its knowledge scope, and it exhibits significantly higher truthfulness than the original assistant.
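To make the dataset-construction recipe concrete, here is a minimal sketch. It is illustrative only and rests on assumptions not stated in the abstract: that "knowing" is probed by sampling the assistant several times per question and thresholding its accuracy, and that the sample count `n_samples`, the cutoff `know_threshold`, exact-match scoring, and the refusal string `IDK_RESPONSE` are all hypothetical placeholders rather than the paper's actual settings.

```python
# Hedged sketch: building a model-specific "I don't know" (Idk) SFT set.
# All numeric settings and the refusal wording below are assumptions for
# illustration; the paper's actual procedure may differ.

from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical refusal string used as the SFT target for unknown questions.
IDK_RESPONSE = "I don't know the answer to this question."


@dataclass
class QAPair:
    question: str
    gold_answer: str


def build_idk_sft_data(
    sample_answer: Callable[[str], str],  # samples one answer from the assistant
    qa_pairs: List[QAPair],
    n_samples: int = 10,          # assumed number of samples per question
    know_threshold: float = 0.5,  # assumed accuracy cutoff for "knows"
) -> List[Dict[str, str]]:
    """Label each question as known/unknown *for this model*, then emit an
    SFT target: the gold answer if the model usually answers correctly,
    otherwise a refusal. Exact-match scoring is a simplification."""
    sft_data = []
    for qa in qa_pairs:
        answers = [sample_answer(qa.question) for _ in range(n_samples)]
        accuracy = sum(
            a.strip().lower() == qa.gold_answer.strip().lower() for a in answers
        ) / n_samples
        target = qa.gold_answer if accuracy >= know_threshold else IDK_RESPONSE
        sft_data.append({"question": qa.question, "response": target})
    return sft_data
```

Preference data could plausibly be derived from the same labels, e.g., preferring the refusal over an incorrect answer on unknown questions and the correct answer over the refusal on known ones, though the paper's actual pairing scheme may differ.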
