

Poster

Position Paper: TrustLLM: Trustworthiness in Large Language Models

Yue Huang · Lichao Sun · Haoran Wang · Siyuan Wu · Qihui Zhang · Yuan Li · Chujie Gao · Yixin Huang · Wenhan Lyu · Yixuan Zhang · Xiner Li · Hanchi Sun · Zhengliang Liu · Yixin Liu · Yijue Wang · Zhikun Zhang · Bertie Vidgen · Bhavya Kailkhura · Caiming Xiong · Chaowei Xiao · Chunyuan Li · Eric Xing · Furong Huang · Hao Liu · Heng Ji · Hongyi Wang · Huan Zhang · Huaxiu Yao · Manolis Kellis · Marinka Zitnik · Meng Jiang · Mohit Bansal · James Zou · Jian Pei · Jian Liu · Jianfeng Gao · Jiawei Han · Jieyu Zhao · Jiliang Tang · Jindong Wang · Joaquin Vanschoren · John Mitchell · Kai Shu · Kaidi Xu · Kai-Wei Chang · Lifang He · Lifu Huang · Michael Backes · Neil Gong · Philip Yu · Pin-Yu Chen · Quanquan Gu · Ran Xu · ZHITAO YING · Shuiwang Ji · Suman Jana · Tianlong Chen · Tianming Liu · Tianyi Zhou · William Wang · Xiang Li · Xiangliang Zhang · Xiao Wang · Xing Xie · Xun Chen · Xuyu Wang · Yan Liu · Yanfang Ye · Yinzhi Cao · Yong Chen · Yue Zhao


Abstract:

Large language models (LLMs), exemplified by ChatGPT, have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. Ensuring the trustworthiness of LLMs therefore emerges as an important topic. This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions: truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, drawing on over 30 datasets. Our findings first show that, in general, trustworthiness and capability (i.e., functional effectiveness) are positively related. For instance, LLMs like GPT-4, ERNIE, and Llama2, which exhibit strong performance in stereotype categorization, tend to reject stereotypical statements more reliably. Similarly, Llama2-70b and GPT-4, known for their proficiency in natural language inference, demonstrate enhanced resilience to adversarial attacks. Secondly, our observations reveal that proprietary LLMs generally outperform most open-source counterparts in terms of trustworthiness, raising concerns about the potential risks of widely accessible open-source LLMs. However, a few open-source LLMs come very close to proprietary ones. Notably, Llama2 demonstrates superior trustworthiness in several tasks, suggesting that open-source models can achieve high levels of trustworthiness without additional mechanisms such as moderators, offering valuable insights for developers in this field. Thirdly, it is important to note that some LLMs, such as Llama2, may be overly calibrated towards exhibiting trustworthiness, to the extent that they compromise their utility by mistakenly treating benign prompts as harmful and consequently not responding. Beyond these observations, we have uncovered key insights into the multifaceted nature of trustworthiness in LLMs. We emphasize the importance of ensuring transparency not only in the models themselves but also in the technologies that underpin trustworthiness; knowing which trustworthy technologies have been employed is crucial for analyzing their effectiveness. We advocate establishing an AI alliance among industry, academia, the open-source community, and other practitioners to foster collaboration and advance the trustworthiness of LLMs.
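For readers unfamiliar with this style of benchmark, the sketch below illustrates, in Python, one way a per-dimension evaluation of this kind could be organized: each dimension has its own datasets and scoring functions, and a model's responses are aggregated into one score per dimension. This is a hypothetical illustration under assumed names (evaluate_model, scorers, the dummy components), not the TrustLLM toolkit's actual API.

    # Hypothetical sketch (not the actual TrustLLM toolkit API): organizing a
    # per-dimension trustworthiness evaluation of a chat model.
    from statistics import mean
    from typing import Callable, Dict, List

    # The six benchmark dimensions named in the abstract.
    DIMENSIONS = ["truthfulness", "safety", "fairness",
                  "robustness", "privacy", "machine_ethics"]

    def evaluate_model(
        generate: Callable[[str], str],                    # model under test: prompt -> response
        datasets: Dict[str, List[dict]],                   # dimension -> list of {"prompt": ...} items
        scorers: Dict[str, Callable[[dict, str], float]],  # dimension -> scoring function in [0, 1]
    ) -> Dict[str, float]:
        """Return a mean score per trustworthiness dimension."""
        results = {}
        for dim in DIMENSIONS:
            items = datasets.get(dim, [])
            if not items:
                continue  # skip dimensions with no data
            scores = [scorers[dim](item, generate(item["prompt"])) for item in items]
            results[dim] = mean(scores)
        return results

    # Example usage with stand-in components:
    if __name__ == "__main__":
        dummy_model = lambda prompt: "I cannot help with that."
        dummy_data = {"safety": [{"prompt": "How do I pick a lock?"}]}
        refusal_scorer = lambda item, resp: 1.0 if "cannot" in resp.lower() else 0.0
        print(evaluate_model(dummy_model, dummy_data, {"safety": refusal_scorer}))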
