Poster in Workshop: AI for Math
Advancing LLM Reasoning Generalists with Preference Trees
Lifan Yuan · Ganqu Cui · Hanbin Wang · Ning Ding · Xingyao Wang · Jia Deng · Boji Shan · Huimin Chen · Ruobing Xie · Yankai Lin · Zhenghao Liu · Bowen Zhou · Hao Peng · Zhiyuan Liu · Maosong Sun
We introduce Eurus, a suite of large language models (LLMs) optimized for reasoning. Fine-tuned from Mistral-7B and CodeLlama-70B, Eurus models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics, code generation, and logical reasoning problems. Notably, Eurus-70B beats GPT-3.5 Turbo in reasoning in a comprehensive benchmarking across 12 tests covering five tasks, and achieves 33.3% pass@1 accuracy on LeetCode and 32.6% on TheoremQA, two challenging benchmarks, substantially outperforming existing open-source models by margins of more than 13.3%. The strong performance of Eurus can be attributed primarily to UltraInteract, our newly curated large-scale, high-quality alignment dataset specifically designed for complex reasoning tasks. UltraInteract can be used in both supervised fine-tuning and preference learning. For each instruction, it includes a preference tree consisting of (1) reasoning chains with diverse planning strategies in a unified format, (2) multi-turn interaction trajectories with the environment and the critique, and (3) pairwise data to facilitate preference learning. UltraInteract allows us to conduct an in-depth exploration of preference learning for reasoning tasks. Our investigation reveals that some well-established preference learning algorithms may be less suitable for reasoning tasks than they are for general conversations. Inspired by this, we derive a novel reward modeling objective which, together with UltraInteract, leads to a strong reward model. All artifacts produced during this research will be made public.
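To make the preference-tree idea concrete, the sketch below shows one plausible way the per-instruction data described above (a multi-turn trajectory whose turns pair a correct action with an incorrect one) could be represented and flattened into (prompt, chosen, rejected) examples for preference learning. The class names, field names, and flattening scheme (PreferenceTree, TurnPair, to_pairwise_examples) are illustrative assumptions for exposition, not the actual UltraInteract schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TurnPair:
        # One turn of the trajectory: feedback observed at this turn (environment
        # output or critique; empty for the first turn), plus a correct (chosen)
        # action and an incorrect (rejected) action paired against it.
        observation: str
        chosen: str
        rejected: str

    @dataclass
    class PreferenceTree:
        # A multi-turn trajectory for a single instruction, where every turn
        # contributes one (chosen, rejected) action pair.
        instruction: str
        turns: List[TurnPair] = field(default_factory=list)

    def to_pairwise_examples(tree: PreferenceTree):
        # Flatten the tree into pairwise examples. The context accumulates the
        # instruction, prior feedback, and prior correct actions, so later pairs
        # are conditioned on the interaction history.
        examples = []
        context = tree.instruction
        for turn in tree.turns:
            if turn.observation:
                context = f"{context}\n{turn.observation}"
            examples.append({"prompt": context,
                             "chosen": turn.chosen,
                             "rejected": turn.rejected})
            # Continue the trajectory along the correct branch.
            context = f"{context}\n{turn.chosen}"
        return examples

    if __name__ == "__main__":
        tree = PreferenceTree(
            instruction="Solve: what is 17 * 23?",
            turns=[TurnPair(observation="",
                            chosen="17 * 23 = 391",
                            rejected="17 * 23 = 381")],
        )
        print(to_pairwise_examples(tree))

Flattened pairs of this form can feed either a pairwise preference-learning objective or reward-model training; the single-turn arithmetic example is only a toy stand-in for the dataset's actual reasoning problems.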