Poster in Workshop: Automated Reinforcement Learning: Exploring Meta-Learning, AutoML, and LLMs
Discovering Preference Optimization Algorithms with and for Large Language Models
Christopher Lu · Samuel Holt · Claudio Fanconi · Alexander Chan · Jakob Foerster · M van der Schaar · Robert Lange
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs. Typically, preference optimization is approached as an offline supervised learning task using manually crafted convex loss functions. While these methods offer theoretical insights, they are inherently constrained by human creativity, and the vast search space of optimal loss functions remains largely unexplored. We address this by performing LLM-driven objective discovery, automatically finding new state-of-the-art preference optimization algorithms without expert human intervention. Specifically, we iteratively prompt an LLM to propose and implement new preference optimization loss functions based on previously evaluated performance metrics. This process leads to the discovery of previously unknown and performant preference optimization algorithms. From this exploration, we introduce Discovered Preference Optimization (DiscoPOP), a novel algorithm that adaptively blends logistic and exponential losses. Experiments demonstrate the state-of-the-art performance of DiscoPOP and its successful transfer to held-out tasks. We provide code at https://anonymous.4open.science/r/neurips2024_discopop/.
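To make the discovery process concrete, here is a minimal sketch of the iterative propose-and-evaluate loop described in the abstract. The helpers `propose_loss_with_llm` and `train_and_evaluate`, and the search budget, are hypothetical placeholders for illustration, not names from the paper's code.

```python
# Sketch of the LLM-driven objective discovery loop (helper names are hypothetical).

def discover_objective(num_generations: int = 20):
    history = []  # (candidate loss code, evaluation score) pairs fed back to the LLM

    for _ in range(num_generations):
        # Prompt the LLM with previously evaluated candidates and their metrics,
        # asking it to propose and implement a new preference optimization loss.
        loss_code = propose_loss_with_llm(history)  # hypothetical helper
        try:
            # Fine-tune a model with the candidate loss and score it on validation data.
            score = train_and_evaluate(loss_code)  # hypothetical helper
        except Exception:
            continue  # discard candidates whose code fails to run
        history.append((loss_code, score))

    # Return the best-performing discovered objective.
    return max(history, key=lambda pair: pair[1])
```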
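The abstract describes DiscoPOP as an adaptive blend of logistic and exponential losses. Below is a minimal PyTorch sketch of one such blend, gated by a sigmoid of the policy-versus-reference log-ratio difference; the gating variable, the temperature `tau`, the default coefficient values, and which component the gate favors are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def blended_preference_loss(
    policy_chosen_logps: torch.Tensor,    # log p_theta(y_chosen | x), summed per sequence
    policy_rejected_logps: torch.Tensor,  # log p_theta(y_rejected | x)
    ref_chosen_logps: torch.Tensor,       # reference-model counterparts
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,   # KL-strength coefficient (illustrative default)
    tau: float = 0.05,   # gating temperature (assumed value)
) -> torch.Tensor:
    # Difference of policy and reference log-ratios, as in DPO-style objectives.
    logits = (policy_chosen_logps - policy_rejected_logps) - (
        ref_chosen_logps - ref_rejected_logps
    )
    # A sigmoid gate on the scaled log-ratio difference sets the mixture weight.
    gate = torch.sigmoid(logits / tau)
    logistic_loss = -F.logsigmoid(beta * logits)  # DPO-style logistic log-loss
    exponential_loss = torch.exp(-beta * logits)  # exponential loss
    # Adaptively blend the two components (gating direction is an assumption).
    return (1.0 - gate) * logistic_loss + gate * exponential_loss
```

Each `*_logps` tensor holds per-example sequence log-probabilities; the returned per-example losses would typically be averaged over the batch before backpropagation.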