
Poster
in
Workshop: AI for Math Workshop

AI-Assisted Generation of Difficult Math Questions

Vedant Shah · Anirudh Goyal · Dingli Yu · Kaifeng Lyu · Simon Park · Rosemary Nan Ke · James McClelland · Yoshua Bengio · Sanjeev Arora · Michael Mozer


Abstract:

Current LLM training positions mathematical reasoning as a core capability. With publicly available sources fully tapped, there is unmet demand for diverse and challenging mathematics questions. Relying solely on human experts is both time-consuming and costly, while LLM-generated questions often lack the requisite diversity and difficulty. We present a design framework that combines the strengths of LLMs with a human-in-the-loop approach to generate a diverse array of challenging math questions. First, leveraging the metacognitive skills of a strong LLM [Didolkar et al., 2024], we extract core "skills" from existing math datasets. These skills serve as the basis for generating novel, difficult questions: the LLM is prompted with a random pair of core skills that must both be exercised in the question. This "out of distribution" task is challenging for both LLMs and humans. Our pipeline employs LLMs to iteratively generate and refine questions and solutions through multi-turn prompting. Human annotators then verify and further refine the questions, with their efficiency enhanced by further LLM interactions. Applying this pipeline to skills extracted from the MATH dataset [Hendrycks et al., 2021] yielded a dataset of complex math questions while also improving expert productivity. Although the skills come from MATH, combining random skill pairs produced questions of noticeably higher quality than MATH itself, as evidenced by: (a) lower performance of all models on our questions than on MATH (with open models most affected), and (b) higher performance on MATH when our questions are used as in-context examples. Although focused on mathematics, our methodology seems applicable to other domains requiring structured reasoning. It can be seen as a method for scalable oversight, in which human experts evaluate highly capable AI models with AI assistance.
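The skill-pair generation loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `llm` callable, function names, and prompt wording are all assumptions standing in for whatever model API and prompts the actual pipeline uses.

```python
import itertools
import random


def sample_skill_pairs(skills, n_pairs, seed=0):
    """Sample distinct random pairs of core skills to seed question generation.

    `skills` is assumed to be the list of core skills extracted from an
    existing dataset (e.g. MATH) via the metacognition step.
    """
    rng = random.Random(seed)
    pairs = list(itertools.combinations(skills, 2))
    return rng.sample(pairs, min(n_pairs, len(pairs)))


def generate_question(llm, skill_a, skill_b, max_rounds=3):
    """Iteratively generate and refine a question requiring both skills.

    `llm` is a hypothetical prompt -> completion callable. The multi-turn
    critique/revise loop mirrors the "generate and refine" step; its output
    would then go to human annotators for verification.
    """
    draft = llm(
        f"Write a difficult math question whose solution requires both "
        f"'{skill_a}' and '{skill_b}'. Include a full worked solution."
    )
    for _ in range(max_rounds - 1):
        critique = llm(
            "Check this question and solution for correctness, and verify "
            f"that both skills are genuinely required:\n{draft}"
        )
        draft = llm(f"Revise using this critique:\n{critique}\n\n{draft}")
    return draft
```

A run over the whole dataset would simply call `generate_question` on each sampled pair and queue the drafts for human review.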