

Poster

Controlled Decoding from Language Models

Sidharth Mudgal · Jong Lee · Harish Ganapathy · YaGuang Li · Tao Wang · Yanping Huang · Zhifeng Chen · Heng-Tze Cheng · Michael Collins · Trevor Strohman · Jilin Chen · Alex Beutel · Ahmad Beirami

Hall C 4-9 #2712
[ Paper PDF ]
Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract: KL-regularized reinforcement learning (RL) is a popular alignment framework for steering language model responses toward high-reward outcomes. We pose a tokenwise RL objective and propose a modular solver for it, called *controlled decoding (CD)*. CD exerts control through a separate *prefix scorer* module, which is trained to learn a value function for the reward. At inference time, the prefix scorer is used to control generation from a frozen base model, provably sampling from a solution to the RL objective. We empirically demonstrate that CD is effective as a control mechanism on popular benchmarks. We also show that prefix scorers for multiple rewards may be combined at inference time, effectively solving a multi-objective RL problem with no additional training, and that the benefits of CD transfer to an unseen base model with no further tuning. Finally, we show that CD can be applied in a blockwise fashion at inference time, essentially bridging the gap between the popular best-of-$K$ strategy and tokenwise control through RL. This makes CD a promising approach for the alignment of language models.
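To make the tokenwise control concrete, here is a minimal sketch of one decoding step, not the authors' implementation: `base_logits_fn` and `prefix_scorer_fn` are hypothetical stand-ins for a frozen base LM's next-token logits and a trained prefix scorer (value function), and it assumes the KL-regularized solution samples tokens proportionally to $p_{\text{base}}(z \mid \text{prefix}) \cdot \exp(V(\text{prefix}, z)/\beta)$:

```python
import numpy as np

def cd_step(prefix, base_logits_fn, prefix_scorer_fn, beta=1.0, rng=None):
    """Sample one token from reward-shifted logits (a sketch of tokenwise CD):
    pi(z | prefix) ∝ p_base(z | prefix) * exp(V(prefix + z) / beta).
    """
    rng = rng or np.random.default_rng()
    logits = base_logits_fn(prefix)          # frozen base LM logits, shape (vocab,)
    # Score every candidate continuation with the (assumed) prefix scorer.
    values = np.array([prefix_scorer_fn(prefix + [z]) for z in range(len(logits))])
    adjusted = logits + values / beta        # smaller beta -> stronger control
    probs = np.exp(adjusted - adjusted.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy demo over a 4-token vocabulary: a uniform base LM and a scorer that
# rewards prefixes ending in token 3. Both functions are illustrative only.
base_logits_fn = lambda prefix: np.zeros(4)
prefix_scorer_fn = lambda prefix: 1.0 if prefix[-1] == 3 else 0.0

prefix = [0]
for _ in range(5):
    prefix.append(cd_step(prefix, base_logits_fn, prefix_scorer_fn, beta=0.5))
print(prefix)  # continuations are biased toward token 3
```

Under the same assumptions, the multi-reward combination described in the abstract would amount to replacing `values` with a weighted sum of per-reward prefix scores, and the blockwise variant to sampling $K$ candidate blocks from the base model and keeping the one with the highest prefix score.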
