Poster in Workshop: Next Generation of AI Safety
PrimeGuard: Safe and Helpful LLMs through Tuning-Free Routing
Blazej Manczak · Eric Lin · Eliott Zemour · Vaikkunth Mugunthan
Keywords: [ large language models ] [ inference-time guardrailing ] [ guardrailing tax ] [ model alignment ] [ instruction compilation ] [ AI Safety ]
Deploying language models (LMs) requires outputs that are both high-quality and compliant with safety guidelines. Although Inference-Time Guardrails (ITG) offer solutions that shift model output distributions towards compliance, we find that current methods struggle to balance safety with helpfulness. ITG methods that safely address non-compliant queries exhibit lower helpfulness, while those that prioritize helpfulness compromise on safety. We refer to this trade-off as the guardrail tax, analogous to the alignment tax. To address this, we propose PrimeGuard, a novel ITG method that utilizes structured control flow. PrimeGuard routes requests to different self-instantiations of the LM with varying instructions, leveraging its inherent instruction-following capabilities and in-context learning. Our tuning-free approach dynamically compiles system-designer guidelines for each query. We construct and release safe-eval, a diverse red-team safety benchmark. Extensive evaluations demonstrate that PrimeGuard, without fine-tuning, outperforms all competing baselines and overcomes the guardrail tax by improving the fraction of safe responses from 61% to 97% and increasing average helpfulness scores from 4.17 to 4.29 on the largest models, while reducing attack success rates from 100% to 8%.
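To make the routing idea in the abstract concrete, the sketch below shows one plausible reading of tuning-free ITG with structured control flow: a first self-instantiation of the model checks the query against system-designer guidelines and picks a route, and a second self-instantiation answers under route-specific instructions. All names here (call signature of the LLM, the risk labels, the prompt wording) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal illustrative sketch of inference-time guardrailing via routing.
# Hypothetical interfaces only; not PrimeGuard's actual code.
from typing import Callable

# Stand-in for any chat-completion endpoint:
# takes (system_prompt, user_message) and returns the model's text reply.
LLM = Callable[[str, str], str]

# System-designer guidelines, compiled into the prompts at inference time.
SYSTEM_GUIDELINES = (
    "Refuse to assist with clearly harmful or illegal requests; "
    "otherwise answer as helpfully as possible."
)


def guardrailed_answer(llm: LLM, user_query: str) -> str:
    """Tuning-free routing: triage the query, then answer under
    route-specific instructions using the same underlying model."""
    # Step 1: one self-instantiation evaluates the query against the
    # guidelines and emits a risk label (assumed label set).
    triage = llm(
        f"Guidelines: {SYSTEM_GUIDELINES}\n"
        "Classify the user query as exactly one of: no_to_minimal_risk, "
        "potential_violation, direct_violation. Reply with the label only.",
        user_query,
    )

    # Step 2: route to a differently-instructed self-instantiation.
    if "direct_violation" in triage:
        return llm(
            "Politely refuse and briefly explain which guideline applies.",
            user_query,
        )
    if "potential_violation" in triage:
        return llm(
            "Answer helpfully while strictly following these guidelines: "
            + SYSTEM_GUIDELINES,
            user_query,
        )
    return llm("Answer the user's question helpfully and completely.", user_query)
```

Under this reading, no weights are updated; the safety behavior comes entirely from in-context instruction following, which is what lets the approach avoid the guardrail tax described above.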