

Poster in Workshop: Next Generation of AI Safety

Cascade Reward Sampling for Efficient Decoding-Time Alignment

Bolian Li · Yifan Wang · Ananth Grama · Ruqi Zhang

Keywords: [ Large Language Models ] [ Language Model Alignment ]


Abstract:

Aligning large language models (LLMs) with human preferences is critical for their deployment. Recently, decoding-time alignment has emerged as an effective plug-and-play technique that requires no fine-tuning of model parameters. However, generating text that achieves both high reward and high likelihood remains a significant challenge: existing methods often fail to produce high-reward text or incur substantial computational costs. In this paper, we propose CAscade RewarD Sampling (CARDS) to address both issues, guaranteeing the generation of high-reward, high-likelihood text at significantly lower cost. Based on our observation that high-reward prefixes induce high-reward complete text on average, our approach uses rejection sampling to iteratively generate small semantic segments that form such prefixes, with segment length determined dynamically by the predictive uncertainty of the LLM. This strategy guarantees desirable prefixes for subsequent generation and greatly reduces wasteful token re-generation and the number of reward-model evaluations. Our experiments demonstrate substantial gains in both generation efficiency and alignment ratings over the baselines, achieving 5 times faster text generation and 99% win rates in GPT-4/Claude-3 helpfulness evaluations.
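To make the segment-level rejection-sampling idea concrete, here is a minimal Python sketch of such a loop. It is not the authors' implementation: the callables `sample_segment` and `reward`, the acceptance `threshold`, the retry limits, and the toy stand-ins in the usage example are all illustrative assumptions, and the uncertainty-based choice of segment boundaries is abstracted away inside `sample_segment`.

```python
import random

def cascade_reward_sampling(prompt, sample_segment, reward, threshold,
                            max_segments=20, max_tries=8):
    """Segment-level rejection sampling sketch (illustrative, not CARDS itself).

    sample_segment(prefix) -> next candidate segment, or None when generation ends
    reward(text)           -> scalar reward of a (partial) text
    A candidate segment is accepted only if the extended prefix scores at
    least `threshold`; otherwise only that short segment is resampled,
    rather than re-generating the whole sequence.
    """
    prefix = prompt
    for _ in range(max_segments):
        accepted = None
        for _ in range(max_tries):
            segment = sample_segment(prefix)
            if segment is None:          # generation finished
                return prefix
            if reward(prefix + segment) >= threshold:
                accepted = segment       # high-reward prefix: keep it
                break
        if accepted is None:             # fall back to the last draw to make progress
            accepted = segment
        prefix += accepted
    return prefix

# Toy usage with random stand-ins for the LLM sampler and reward model.
if __name__ == "__main__":
    words = ["helpful ", "harmless ", "honest ", "rude ", "unsafe "]
    sample_segment = lambda prefix: random.choice(words)
    reward = lambda text: -text.count("rude") - text.count("unsafe")
    print(cascade_reward_sampling("Answer: ", sample_segment, reward,
                                  threshold=0, max_segments=5))
```

In this sketch, rejected segments are simply discarded and redrawn, so the cost of a rejection is bounded by the segment length rather than the full generation length, which is the efficiency argument the abstract makes.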
