

Poster
in
Workshop: Subset Selection in Machine Learning: From Theory to Applications

GoldiProx Selection: Faster training by learning what is learnable, not yet learned, and worth learning

Sören Mindermann · Muhammed Razzak · Adrien Morisot · Aidan Gomez · Sebastian Farquhar · Jan Brauner · Yarin Gal


Abstract:

We introduce GoldiProx Selection, a technique for faster model training which selects a sequence of training points that are "just right". We propose an information-theoretic acquisition function, the reducible validation loss, and compute it with a small proxy model to efficiently choose training points that maximally inform predictions on a validation set. We show that the "hard" (e.g. high-loss) points usually selected in the optimization literature are typically noisy, while the "easy" (e.g. low-noise) samples often prioritized for curriculum learning confer less information. Further, points with uncertain labels, typically targeted by active learning, tend to be less relevant to the task. In contrast, GoldiProx Selection chooses points that are "just right" and empirically outperforms the above approaches. Moreover, a single GoldiProx Sequence can accelerate training across architectures; practitioners can share and reuse it without the need to recompute it.
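The selection criterion described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the reducible validation loss of a point is its current training loss minus the loss a small proxy model (fit with access to the validation set) assigns to it, so that high-loss points which remain hard even for the proxy (i.e. likely noisy or irrelevant) are discounted. All arrays and names here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(probs, labels):
    # Per-example negative log-likelihood of the true label.
    return -np.log(probs[np.arange(len(labels)), labels])

# Hypothetical per-example class probabilities over 8 candidate points,
# 3 classes: `model_probs` from the model being trained, `proxy_probs`
# from a small proxy model informed by the validation set.
model_probs = rng.dirichlet(np.ones(3), size=8)
proxy_probs = rng.dirichlet(np.ones(3), size=8)
labels = rng.integers(0, 3, size=8)

# Reducible validation loss (assumed form): training loss minus the
# proxy's loss. A point that is hard for both models is likely noisy;
# a point that is easy for both is uninformative. Points hard for the
# current model but easy for the proxy are "just right".
reducible = cross_entropy(model_probs, labels) - cross_entropy(proxy_probs, labels)

# Select the top-k points for the next training step.
k = 3
selected = np.argsort(-reducible)[:k]
```

Ranking many candidates this way over the course of training yields the sequence of selected points (the "GoldiProx Sequence") that the abstract notes can be saved and reused across architectures.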