Large datasets have been crucial to the success of modern machine learning models. However, training on massive data has two major limitations. First, it requires exceptionally large and expensive computational resources and incurs substantial costs due to its significant energy consumption. Second, in many real-world applications such as medical diagnosis, self-driving cars, and fraud detection, big data often contains highly imbalanced classes and noisy labels. In such cases, training on the entire dataset does not yield a high-quality model.
In this talk, I will argue that we can address the above limitations by developing techniques that identify and extract representative subsets for learning from massive datasets. Training on representative subsets not only reduces the substantial costs of learning from big data, but also improves model accuracy and robustness against noisy labels. I will discuss how we can develop theoretically rigorous techniques that provide strong guarantees on the quality of the extracted subsets, as well as on the quality and robustness of the models trained on them. I will also show the effectiveness of such methods in practice for data-efficient and robust learning.
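To make the idea of extracting a representative subset concrete, the following is a minimal sketch of one generic approach: greedy maximization of a facility-location (coverage) objective, where each selected point "covers" the remaining points in proportion to its similarity to them. This is an illustrative example of subset selection in general, not the speaker's specific algorithm; the function name and the choice of cosine similarity are assumptions for the sketch.

```python
import numpy as np

def greedy_facility_location(features, k):
    """Greedily select k indices maximizing a facility-location objective:
    the sum over all points of their maximum similarity to the selected set.
    Illustrative sketch only -- not the method presented in the talk."""
    n = features.shape[0]
    # Cosine similarity between all pairs of points (assumed similarity measure).
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normed = features / np.maximum(norms, 1e-12)
    sim = normed @ normed.T

    selected = []
    best = np.zeros(n)  # best similarity of each point to the current subset
    for _ in range(k):
        # Coverage achieved if each candidate row were added to the subset.
        coverage = np.maximum(sim, best[None, :]).sum(axis=1)
        coverage[selected] = -np.inf  # never re-select a point
        choice = int(np.argmax(coverage))
        selected.append(choice)
        best = np.maximum(best, sim[choice])
    return selected
```

Because the facility-location objective is monotone submodular, this greedy procedure is known to achieve a (1 − 1/e) approximation to the optimal subset; rigorous coreset methods of the kind discussed in the talk add further guarantees relating the subset to the full-data training dynamics.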
Author Information
Baharan Mirzasoleiman (Stanford University)
More from the Same Authors
- 2021 : CrossWalk: Fairness-enhanced Node Representation Learning »
  Ahmad Khajehnejad · Moein Khajehnejad · Krishna Gummadi · Adrian Weller · Baharan Mirzasoleiman
- 2022 Poster: Adaptive Second Order Coresets for Data-efficient Machine Learning »
  Omead Pooladzandi · David Davini · Baharan Mirzasoleiman
- 2022 Spotlight: Adaptive Second Order Coresets for Data-efficient Machine Learning »
  Omead Pooladzandi · David Davini · Baharan Mirzasoleiman
- 2022 Poster: Not All Poisons are Created Equal: Robust Training against Data Poisoning »
  Yu Yang · Tian Yu Liu · Baharan Mirzasoleiman
- 2022 Oral: Not All Poisons are Created Equal: Robust Training against Data Poisoning »
  Yu Yang · Tian Yu Liu · Baharan Mirzasoleiman
- 2022 Poster: Guaranteed Robust Deep Learning against Extreme Label Noise using Self-supervised Learning »
  Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman
- 2022 Spotlight: Guaranteed Robust Deep Learning against Extreme Label Noise using Self-supervised Learning »
  Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman
- 2022 : Investigating Why Contrastive Learning Benefits Robustness against Label Noise »
  Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman
- 2022 : Less Data Can Be More! »
  Baharan Mirzasoleiman
- 2022 : Not All Poisons are Created Equal: Robust Training against Data Poisoning »
  Yu Yang · Baharan Mirzasoleiman