

Poster

Position Paper: Building Guardrails for Large Language Models

Yi DONG · Ronghui Mu · Gaojie Jin · Yi Qi · Jinwei Hu · Xingyu Zhao · Jie Meng · Wenjie Ruan · Xiaowei Huang


Abstract:

As Large Language Models (LLMs) become more integrated into our daily lives, it is crucial to identify and mitigate their risks, especially when the risks can have profound impacts on human users and societies. Guardrails, which filter the inputs or outputs of LLMs, have emerged as a core safeguarding technology. This position paper takes a deep look at current open-source solutions (Llama Guard, Nvidia NeMo, Guardrails AI), and discusses the challenges and the road towards building more complete solutions. Drawing on robust evidence from previous research, we advocate for a systematic approach to constructing guardrails for LLMs, based on comprehensive consideration of the diverse contexts across various LLM applications. We propose employing socio-technical methods through collaboration with a multi-disciplinary team to pinpoint precise technical requirements, exploring advanced neural-symbolic implementations to embrace the complexity of the requirements, and developing verification and testing to ensure the utmost quality of the final product.
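The abstract describes a guardrail as a filter on the inputs or outputs of an LLM. The following minimal Python sketch illustrates that pattern only; it is not the paper's method and not the API of Llama Guard, NeMo, or Guardrails AI. The names `violates_policy` and `guarded_generate`, the blocked patterns, and the stand-in model are all hypothetical, and a real guardrail would typically use a trained safety classifier rather than regular expressions.

```python
import re
from typing import Callable

# Hypothetical policy for illustration: block text matching any pattern.
# A production guardrail would use a learned classifier instead.
BLOCKED_PATTERNS = [
    re.compile(r"\bhow to build a (bomb|weapon)\b", re.IGNORECASE),
]

REFUSAL = "Sorry, I can't help with that request."


def violates_policy(text: str) -> bool:
    """Return True if the text matches any blocked pattern."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)


def guarded_generate(prompt: str, generate: Callable[[str], str]) -> str:
    """Wrap an LLM call with an input filter and an output filter.

    `generate` is any function mapping a prompt string to a response string.
    """
    # Input guardrail: reject unsafe prompts before they reach the model.
    if violates_policy(prompt):
        return REFUSAL
    response = generate(prompt)
    # Output guardrail: suppress unsafe model responses before returning them.
    if violates_policy(response):
        return REFUSAL
    return response


if __name__ == "__main__":
    # Stand-in model used purely for demonstration.
    echo_model = lambda p: f"Echo: {p}"
    print(guarded_generate("What is a guardrail?", echo_model))
```

The wrapper structure reflects the two interception points named in the abstract: a check before the model sees the prompt, and a check before the user sees the response.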
