Training machine learning models on massive datasets incurs substantial computational costs. To alleviate such costs, there has been a sustained effort to develop data-efficient training methods that carefully select subsets of the training examples that generalize on par with the full training data. However, existing methods are limited in providing theoretical guarantees for the quality of the models trained on the extracted subsets, and may perform poorly in practice. We propose AdaCore, a method that leverages the geometry of the data to extract subsets of the training examples for efficient machine learning. The key idea behind our method is to dynamically approximate the curvature of the loss function via an exponentially-averaged estimate of the Hessian, in order to select weighted subsets (coresets) that provide a close approximation of the full gradient preconditioned with the Hessian. We prove rigorous guarantees for the convergence of various first- and second-order methods applied to the subsets chosen by AdaCore. Our extensive experiments show that AdaCore extracts coresets of higher quality than baselines and speeds up training of convex and non-convex machine learning models, such as logistic regression and neural networks, by more than 2.9x relative to training on the full data and 4.5x relative to random subsets.
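For intuition, here is a minimal sketch of the selection step the abstract describes; it is not the authors' released implementation. It tracks an exponentially-averaged Hessian estimate (restricted to the diagonal for tractability), preconditions per-example gradients with it, and greedily picks weighted medoids whose weighted sum approximates the full preconditioned gradient. The function names (`update_hessian_ema`, `select_coreset`), the diagonal-Hessian approximation, and the greedy medoid rule are illustrative assumptions of this sketch.

```python
import numpy as np

def update_hessian_ema(h_ema, h_diag, beta=0.9):
    # Exponentially-averaged Hessian estimate, as in the abstract,
    # restricted to the diagonal (an assumption of this sketch).
    return beta * h_ema + (1.0 - beta) * h_diag

def select_coreset(grads, h_ema, k, eps=1e-8):
    # grads: (n, d) per-example gradients; h_ema: (d,) diagonal Hessian EMA.
    # Returns indices and weights of a size-k weighted subset whose weighted
    # gradient sum approximates the full Hessian-preconditioned gradient.
    pre = grads / (h_ema + eps)              # precondition each gradient
    n = pre.shape[0]
    dist = np.linalg.norm(pre[:, None, :] - pre[None, :, :], axis=-1)
    min_dist = np.full(n, np.inf)            # distance to nearest chosen medoid
    selected = []
    for _ in range(k):                       # greedy k-medoids selection
        cost_if_added = np.minimum(min_dist[:, None], dist).sum(axis=0)
        j = int(np.argmin(cost_if_added))    # candidate that best covers the rest
        selected.append(j)
        min_dist = np.minimum(min_dist, dist[:, j])
    # weight each medoid by the number of examples it represents
    assign = np.argmin(dist[:, selected], axis=1)
    weights = np.bincount(assign, minlength=k).astype(float)
    return np.array(selected), weights

# Toy usage: the weighted coreset gradient tracks the full gradient.
rng = np.random.default_rng(0)
G = rng.normal(size=(200, 5))                # stand-in per-example gradients
h = update_hessian_ema(np.ones(5), rng.uniform(0.5, 2.0, size=5))
idx, w = select_coreset(G, h, k=10)
approx = (w[:, None] * G[idx] / (h + 1e-8)).sum(axis=0)
full = (G / (h + 1e-8)).sum(axis=0)
print(np.linalg.norm(approx - full) / np.linalg.norm(full))
```

The Euclidean medoid rule above is one simple way to make a weighted subset's preconditioned gradients cover the full set's; the paper's actual selection objective and guarantees are stated in the full text.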
Author Information
Omead Pooladzandi (University of California, Los Angeles)
David Davini (University of California, Los Angeles)
Baharan Mirzasoleiman (University of California, Los Angeles)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Adaptive Second Order Coresets for Data-efficient Machine Learning
  Thu, Jul 21 through Fri, Jul 22, Hall E #710
More from the Same Authors
- 2021: CrossWalk: Fairness-enhanced Node Representation Learning
  Ahmad Khajehnejad · Moein Khajehnejad · Krishna Gummadi · Adrian Weller · Baharan Mirzasoleiman
- 2022: Investigating Why Contrastive Learning Benefits Robustness against Label Noise
  Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman
- 2023: Which Features are Learned by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
  Yihao Xue · Siddharth Joshi · Eric Gan · Pin-Yu Chen · Baharan Mirzasoleiman
- 2023: Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning
  Yu Yang · Besmira Nushi · Hamid Palangi · Baharan Mirzasoleiman
- 2023: Robust Learning with Progressive Data Expansion Against Spurious Correlation
  Yihe Deng · Yu Yang · Baharan Mirzasoleiman · Quanquan Gu
- 2023: Data-Efficient Contrastive Self-supervised Learning: Most Beneficial Examples for Supervised Learning Contribute the Least
  Siddharth Joshi · Baharan Mirzasoleiman
- 2023 Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning
  Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Baharan Mirzasoleiman · Sanmi Koyejo
- 2023 Poster: Towards Sustainable Learning: Coresets for Data-efficient Deep Learning
  Yu Yang · Hao Kang · Baharan Mirzasoleiman
- 2023 Poster: Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
  Yihao Xue · Siddharth Joshi · Eric Gan · Pin-Yu Chen · Baharan Mirzasoleiman
- 2023 Poster: Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning
  Yu Yang · Besmira Nushi · Hamid Palangi · Baharan Mirzasoleiman
- 2023 Oral: Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
  Yihao Xue · Siddharth Joshi · Eric Gan · Pin-Yu Chen · Baharan Mirzasoleiman
- 2023 Poster: Data-Efficient Contrastive Self-supervised Learning: Most Beneficial Examples for Supervised Learning Contribute the Least
  Siddharth Joshi · Baharan Mirzasoleiman
- 2022: Less Data Can Be More!
  Baharan Mirzasoleiman
- 2022: Not All Poisons are Created Equal: Robust Training against Data Poisoning
  Yu Yang · Baharan Mirzasoleiman
- 2022 Poster: Investigating Why Contrastive Learning Benefits Robustness against Label Noise
  Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman
- 2022 Spotlight: Investigating Why Contrastive Learning Benefits Robustness against Label Noise
  Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman
- 2022 Poster: Not All Poisons are Created Equal: Robust Training against Data Poisoning
  Yu Yang · Tian Yu Liu · Baharan Mirzasoleiman
- 2022 Oral: Not All Poisons are Created Equal: Robust Training against Data Poisoning
  Yu Yang · Tian Yu Liu · Baharan Mirzasoleiman
- 2021: Data-efficient and Robust Learning from Massive Datasets
  Baharan Mirzasoleiman