Efficient Adaptive Testing via Gradient Path Matching Subset Selection for AI Education
Abstract
Adaptive testing is widely adopted in AI-driven educational assessment systems (e.g., the GRE), where the goal is to select an optimal subset of questions from a large question pool to accurately estimate an examinee's ability. A fundamental challenge is that optimal question subsets are inherently personalized, and solving for them is NP-hard. Recent work frames question selection as a gradient matching problem: aligning the gradients of selected subsets with those of the full question set across the entire ability parameter space. However, such global alignment over the entire space is computationally expensive and difficult to scale. In this work, we propose GPM (Gradient Path Matching), a novel framework that instead aligns gradients along plausible optimization paths toward the final ability estimate. By leveraging intermediate gradients as supervision, GPM learns an explicit and generalizable selection algorithm from large-scale data. We provide theoretical analysis of its convergence and scalability. Experiments on both real-world and synthetic datasets demonstrate that GPM achieves the same estimation accuracy using, on average, 20% fewer questions.
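To make the path-matching idea concrete, the following is a minimal sketch, not the authors' implementation: under a simple one-parameter IRT model, it greedily picks, at each step of the ability-estimation ascent path, the item whose per-item gradient best matches the average full-pool gradient, then updates the ability estimate using only the selected subset. All names and parameters (difficulties `b`, simulated responses `y`, step size `lr`, subset size) are illustrative assumptions.

```python
# Hypothetical sketch of gradient-path-matching question selection
# under a 1-parameter IRT (Rasch) model; not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_items = 50
b = rng.normal(0, 1, n_items)                        # item difficulties (assumed known)
theta_true = 0.8                                     # examinee's true ability
p_correct = 1 / (1 + np.exp(-(theta_true - b)))
y = (rng.random(n_items) < p_correct).astype(float)  # simulated responses

def grad(theta, idx):
    """Gradient of the log-likelihood w.r.t. ability, restricted to items idx."""
    p = 1 / (1 + np.exp(-(theta - b[idx])))
    return np.sum(y[idx] - p)

theta, lr = 0.0, 0.1
selected, remaining = [], list(range(n_items))
for _ in range(10):
    # Full-pool per-item average gradient at the current point on the path.
    g_full = grad(theta, np.arange(n_items)) / n_items
    # Pick the remaining item whose own gradient best matches it.
    per_item = np.array([grad(theta, np.array([i])) for i in remaining])
    best = remaining[int(np.argmin(np.abs(per_item - g_full)))]
    selected.append(best)
    remaining.remove(best)
    # Gradient step using only the selected subset (the optimization "path").
    theta += lr * grad(theta, np.array(selected)) / len(selected)

print(f"estimated ability {theta:.2f} from {len(selected)} of {n_items} items")
```

The point of the sketch is that gradients only need to be matched at the ability values actually visited during estimation, rather than over the whole ability space.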