

Oral in Workshop: Sampling and Optimization in Discrete Space

Sequential Monte Carlo Steering of Large Language Models using Probabilistic Programs

Alexander Lew · Tan Zhi-Xuan · Gabriel Grand · Vikash Mansinghka


Abstract:

Even after fine-tuning and reinforcement learning, large language models (LLMs) can be difficult, if not impossible, to control reliably with prompts alone. We propose a new inference-time approach to enforcing syntactic and semantic constraints on the outputs of LLMs, called sequential Monte Carlo (SMC) steering. The key idea is to specify language generation tasks as posterior inference problems in a class of discrete probabilistic sequence models, and replace standard decoding with sequential Monte Carlo inference. For a computational cost similar to that of beam search, SMC can steer LLMs to solve diverse tasks, including infilling, generation under syntactic constraints, and prompt intersection. To facilitate experimentation with SMC steering, we present a probabilistic programming library, LLaMPPL, for concisely specifying new generation tasks as language model probabilistic programs, and automating steering of LLaMA-family Transformers.
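The core idea of SMC steering can be illustrated with a minimal particle filter over a toy sequence model. This is a hedged sketch, not the LLaMPPL library's actual API: the toy next-token distribution stands in for an LLM, `constraint_ok` stands in for a syntactic/semantic constraint, and the names (`propose`, `smc_steer`, `num_particles`) are illustrative assumptions. Each particle extends its sequence, is reweighted by constraint satisfaction, and the population is resampled, which keeps compute focused on constraint-satisfying continuations, much as beam search focuses on high-probability ones.

```python
import random

# Toy "language model": a fixed next-token distribution over a tiny vocabulary.
# In real SMC steering this would be an LLM's next-token distribution.
VOCAB = ["a", "b", "c", "<eos>"]
PROBS = [0.4, 0.3, 0.2, 0.1]

def propose(seq):
    """Sample the next token from the toy model (the proposal distribution)."""
    return random.choices(VOCAB, weights=PROBS)[0]

def constraint_ok(seq):
    """Illustrative syntactic constraint: the output must never contain 'c'."""
    return "c" not in seq

def smc_steer(num_particles=20, max_len=5, seed=0):
    """Run a simple SMC loop: extend, reweight by the constraint, resample."""
    random.seed(seed)
    particles = [([], 1.0) for _ in range(num_particles)]
    for _ in range(max_len):
        extended = []
        for seq, w in particles:
            if seq and seq[-1] == "<eos>":
                extended.append((seq, w))  # finished particles pass through
                continue
            new_seq = seq + [propose(seq)]
            # Weight update: zero out particles that violate the constraint.
            extended.append((new_seq, w if constraint_ok(new_seq) else 0.0))
        total = sum(w for _, w in extended)
        if total == 0:
            raise RuntimeError("all particles died; increase num_particles")
        # Multinomial resampling proportional to weight, then reset weights.
        seqs = [s for s, _ in extended]
        weights = [w / total for _, w in extended]
        particles = [(random.choices(seqs, weights=weights)[0], 1.0)
                     for _ in range(num_particles)]
    return ["".join(t for t in s if t != "<eos>") for s, _ in particles]
```

Calling `smc_steer()` returns a population of strings, all of which satisfy the constraint; the same loop structure applies when the proposal is an LLM and the weights come from a probabilistic program's score.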
