

Poster

CurBench: Curriculum Learning Benchmark

Yuwei Zhou · Zirui Pan · Xin Wang · Hong Chen · Haoyang Li · Yanwen Huang · Zhixiao Xiong · Fangzhou Xiong · Peiyang Xu · Shengnan Liu · Wenwu Zhu


Abstract:

Curriculum learning is a training paradigm in which machine learning models are trained on data presented in a meaningful order, inspired by the way humans learn curricula. Owing to its ability to improve model generalization and convergence, curriculum learning has gained considerable attention and has been widely applied across research domains. Nevertheless, as new curriculum learning methods continue to emerge, benchmarking them fairly remains an open issue. We therefore develop CurBench, the first benchmark that supports systematic evaluation of curriculum learning. Specifically, it consists of 15 datasets spanning 3 research domains: computer vision, natural language processing, and graph machine learning, under 3 settings: standard, noise, and imbalance. To facilitate comprehensive comparison, we evaluate along 2 dimensions: performance and complexity. CurBench also provides a unified pipeline that plugs automatic curricula into the general machine learning process, enabling the implementation of 14 core curriculum learning methods. On the basis of this benchmark, we conduct comparative experiments and provide empirical analyses of existing methods. CurBench will be open-sourced and made publicly available on GitHub after the review stage.
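To illustrate what "plugging an automatic curriculum into the general machine learning process" can look like, here is a minimal Python/PyTorch sketch of a pluggable curriculum interface wrapped around a standard training loop. The class and method names (BaselineCurriculum, data_curriculum, loss_curriculum, train) are illustrative assumptions for this sketch, not CurBench's actual API.

```python
# A minimal sketch of a pluggable curriculum in a generic training loop.
# Names here are hypothetical, not CurBench's actual interface.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class BaselineCurriculum:
    """No-op curriculum: passes data and loss through unchanged."""

    def data_curriculum(self, loader, epoch):
        # A real method could reorder or subsample examples by difficulty.
        return loader

    def loss_curriculum(self, losses, epoch):
        # A real method could reweight per-sample losses (e.g. self-paced).
        return losses.mean()


def train(model, loader, curriculum, epochs=3, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss(reduction="none")  # keep per-sample losses
    for epoch in range(epochs):
        for x, y in curriculum.data_curriculum(loader, epoch):
            losses = loss_fn(model(x), y)
            loss = curriculum.loss_curriculum(losses, epoch)
            opt.zero_grad()
            loss.backward()
            opt.step()


if __name__ == "__main__":
    # Toy data: 128 samples, 10 features, binary labels.
    x = torch.randn(128, 10)
    y = torch.randint(0, 2, (128,))
    loader = DataLoader(TensorDataset(x, y), batch_size=32)
    train(nn.Linear(10, 2), loader, BaselineCurriculum())
```

Under a design like this, a new curriculum method only needs to override the data and loss hooks, while the surrounding training loop stays fixed, which is one plausible way a benchmark could host many curriculum methods behind a single pipeline.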
