Poster
Adaptive Second Order Coresets for Data-efficient Machine Learning
Omead Pooladzandi · David Davini · Baharan Mirzasoleiman

Training machine learning models on massive datasets incurs substantial computational costs. To alleviate these costs, there has been a sustained effort to develop data-efficient training methods that carefully select subsets of the training data on which models generalize on par with training on the full dataset. However, many current methods offer few theoretical guarantees on the quality of models trained on the chosen subsets and perform poorly in practice. We propose AdaCore, a method that leverages the geometry of the data to extract subsets of the training examples for efficient machine learning. The key idea behind our method is to dynamically incorporate the curvature of the loss function, via an exponentially averaged adaptive estimate of the Hessian, to select weighted subsets (coresets) that closely approximate the full preconditioned gradient. We prove rigorous guarantees for the convergence of various first- and second-order methods applied to the subsets chosen by AdaCore. Our extensive experiments show that AdaCore extracts coresets of significantly higher quality than baselines and speeds up the training of various machine learning models, such as logistic regression and neural networks, by over 2.5x while selecting fewer data points for training.
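For intuition, the sketch below illustrates the core idea the abstract describes: maintain an exponentially averaged estimate of the Hessian, precondition per-example gradients with it, and greedily pick a small weighted subset whose preconditioned gradients approximate the full preconditioned gradient. This is not the paper's AdaCore implementation; the matching-pursuit-style selection, the diagonal Hessian approximation, and all names below (`select_coreset`, `update_hessian_ema`, `beta`) are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the authors' AdaCore algorithm. The greedy
# matching-pursuit selection, the diagonal Hessian stand-in, and all names
# here are assumptions made to make the abstract's idea concrete.
import numpy as np

def update_hessian_ema(hess_ema, new_diag, beta=0.999):
    """Exponentially averaged diagonal Hessian estimate, updated each step
    (a diagonal stand-in for the adaptive curvature estimate)."""
    return beta * hess_ema + (1.0 - beta) * new_diag

def select_coreset(per_example_grads, hess_diag_ema, k):
    """Greedily pick k points whose weighted, preconditioned gradients
    approximate the mean preconditioned gradient of the full dataset.

    per_example_grads: (n, d) array of per-example gradients.
    hess_diag_ema: (d,) running diagonal curvature estimate.
    """
    # Precondition each gradient with the (regularized) curvature estimate.
    precond = per_example_grads / (hess_diag_ema + 1e-8)
    target = precond.mean(axis=0)  # full preconditioned gradient

    residual = target.copy()
    chosen, weights = [], []
    for _ in range(k):
        # Pick the example whose preconditioned gradient best explains the
        # remaining residual (simple matching pursuit, no repeats).
        scores = precond @ residual
        if chosen:
            scores[chosen] = -np.inf
        j = int(np.argmax(scores))
        g = precond[j]
        w = max(float(g @ residual) / float(g @ g + 1e-12), 0.0)
        chosen.append(j)
        weights.append(w)
        residual = residual - w * g
    return np.array(chosen), np.array(weights)

# Toy usage: random gradients and a stand-in curvature estimate.
rng = np.random.default_rng(0)
grads = rng.normal(size=(1000, 32))
hess_ema = np.abs(rng.normal(size=32)) + 0.1
idx, w = select_coreset(grads, hess_ema, k=20)
approx = (w[:, None] * grads[idx] / (hess_ema + 1e-8)).sum(axis=0)
full = (grads / (hess_ema + 1e-8)).mean(axis=0)
print("coreset size:", len(idx), "approx error:", np.linalg.norm(approx - full))
```

Training on the weighted coreset then stands in for training on the full dataset; the paper's contribution is the selection criterion and the convergence guarantees, which this toy sketch does not attempt to reproduce.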

Author Information

Omead Pooladzandi (University of California, Los Angeles)
David Davini (University of California, Los Angeles)
Baharan Mirzasoleiman (University of California, Los Angeles)
