Understanding the Detrimental Class-level Effects of Data Augmentation
Polina Kirichenko · Mark Ibrahim · Randall Balestriero · Diane Bouchacourt · Ramakrishna Vedantam · Hamed Firooz · Andrew Wilson
Event URL: https://openreview.net/forum?id=dQkeoGnn68 »
Data augmentation (DA) encodes invariance and provides implicit regularization critical to a model's performance in image classification tasks. However, while DA improves average accuracy, recent studies have shown that its impact can be highly class dependent: achieving optimal average accuracy comes at the cost of significantly hurting individual class accuracy by as much as $20\%$ on ImageNet. In this work, we present a framework for understanding how DA interacts with class-level learning dynamics. Using higher-quality multi-label annotations on ImageNet, we systematically categorize the affected classes and find that the majority are inherently ambiguous, spuriously correlated, or involve fine-grained distinctions, while DA controls the model's bias towards one of the closely related classes. While many of the previously reported performance drops are explained by multi-label annotations, our analysis of class confusions reveals other sources of accuracy degradation. We show that simple class-conditional augmentation strategies informed by our framework improve performance on the negatively affected classes.
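The class-conditional augmentation the abstract mentions can be sketched as follows. This is a minimal illustration, not the paper's implementation: the set of negatively affected classes (`AFFECTED_CLASSES`) and the crop-based augmentation are assumptions made for the example.

```python
import random

# Hypothetical set of class indices flagged as ambiguous / fine-grained,
# i.e. classes whose accuracy drops under aggressive augmentation.
AFFECTED_CLASSES = {3, 7}

def random_crop(image, out_size):
    """Randomly crop an out_size x out_size patch from a 2D list-of-lists."""
    h, w = len(image), len(image[0])
    top = random.randint(0, h - out_size)
    left = random.randint(0, w - out_size)
    return [row[left:left + out_size] for row in image[top:top + out_size]]

def center_crop(image, out_size):
    """Deterministic center crop, used as the 'weak' augmentation."""
    start = (len(image) - out_size) // 2
    return [row[start:start + out_size] for row in image[start:start + out_size]]

def class_conditional_augment(image, label, out_size=2):
    """Apply aggressive random cropping only to classes that are not
    negatively affected; use a weak (center) crop for affected classes."""
    if label in AFFECTED_CLASSES:
        return center_crop(image, out_size)
    return random_crop(image, out_size)
```

For example, on a 4x4 image an affected label (e.g. 3) always receives the deterministic center crop, while other labels receive a random crop of the same size.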
Author Information
Polina Kirichenko (New York University)
Mark Ibrahim (Fundamental AI Research (FAIR), Meta AI)
Randall Balestriero (Rice University)
Diane Bouchacourt (Meta)
Ramakrishna Vedantam (Self Employed)
Hamed Firooz (Facebook)
Andrew Wilson (New York University)
More from the Same Authors
- 2021 : Task-agnostic Continual Learning with Hybrid Probabilistic Models »
  Polina Kirichenko
- 2022 : Towards Better Understanding of Self-Supervised Representations »
  Neha Mukund Kalibhat · Kanika Narang · Hamed Firooz · Maziar Sanjabi · Soheil Feizi
- 2022 : BARACK: Partially Supervised Group Robustness With Guarantees »
  Nimit Sohoni · Maziar Sanjabi · Nicolas Ballas · Aditya Grover · Shaoliang Nie · Hamed Firooz · Christopher Re
- 2022 : How much Data is Augmentation Worth? »
  Jonas Geiping · Gowthami Somepalli · Ravid Shwartz-Ziv · Andrew Wilson · Tom Goldstein · Micah Goldblum
- 2022 : Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations »
  Polina Kirichenko · Pavel Izmailov · Andrew Wilson
- 2022 : On Feature Learning in the Presence of Spurious Correlations »
  Pavel Izmailov · Polina Kirichenko · Nate Gruver · Andrew Wilson
- 2022 : Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Prior »
  Ravid Shwartz-Ziv · Micah Goldblum · Hossein Souri · Sanyam Kapoor · Chen Zhu · Yann LeCun · Andrew Wilson
- 2022 : What Do We Maximize In Self-Supervised Learning? »
  Ravid Shwartz-Ziv · Randall Balestriero · Yann LeCun
- 2023 : Provable Instance Specific Robustness via Linear Constraints »
  Ahmed Imtiaz Humayun · Josue Casco-Rodriguez · Randall Balestriero · Richard Baraniuk
- 2023 : Cross-Risk Minimization: Inferring Groups Information for Improved Generalization »
  Mohammad Pezeshki · Diane Bouchacourt · Mark Ibrahim · Nicolas Ballas · Pascal Vincent · David Lopez-Paz
- 2023 : Does Progress On Object Recognition Benchmarks Improve Real-World Generalization? »
  Megan Richards · Diane Bouchacourt · Mark Ibrahim · Polina Kirichenko
- 2023 : Protein Design with Guided Discrete Diffusion »
  Nate Gruver · Samuel Stanton · Nathan Frey · Tim G. J. Rudner · Isidro Hotzel · Julien Lafrance-Vanasse · Arvind Rajpal · Kyunghyun Cho · Andrew Wilson
- 2023 Poster: RankMe: Assessing the Downstream Performance of Pretrained Self-Supervised Representations by Their Rank »
  Quentin Garrido · Randall Balestriero · Laurent Najman · Yann LeCun
- 2023 Poster: The SSL Interplay: Augmentations, Inductive Bias, and Generalization »
  Vivien Cabannes · Bobak T Kiani · Randall Balestriero · Yann LeCun · Alberto Bietti
- 2023 Poster: Hyperbolic Image-text Representations »
  Karan Desai · Maximilian Nickel · Tanmay Rajpurohit · Justin Johnson · Ramakrishna Vedantam
- 2023 Oral: RankMe: Assessing the Downstream Performance of Pretrained Self-Supervised Representations by Their Rank »
  Quentin Garrido · Randall Balestriero · Laurent Najman · Yann LeCun
- 2023 Poster: Simple and Fast Group Robustness by Automatic Feature Reweighting »
  Shikai Qiu · Andres Potapczynski · Pavel Izmailov · Andrew Wilson
- 2023 Poster: User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems »
  Marc Finzi · Anudhyan Boral · Andrew Wilson · Fei Sha · Leonardo Zepeda-Nunez
- 2023 Poster: Identifying Interpretable Subspaces in Image Representations »
  Neha Mukund Kalibhat · Shweta Bhardwaj · C. Bayan Bruss · Hamed Firooz · Maziar Sanjabi · Soheil Feizi
- 2023 Poster: Function-Space Regularization in Neural Networks: A Probabilistic Perspective »
  Tim G. J. Rudner · Sanyam Kapoor · Shikai Qiu · Andrew Wilson
- 2021 : Invited Talk 5: Applications of normalizing flows: semi-supervised learning, anomaly detection, and continual learning »
  Polina Kirichenko
- 2020 Poster: Entropy Minimization In Emergent Languages »
  Eugene Kharitonov · Rahma Chaabouni · Diane Bouchacourt · Marco Baroni
- 2018 Poster: A Spline Theory of Deep Learning »
  Randall Balestriero · Richard Baraniuk
- 2018 Oral: A Spline Theory of Deep Learning »
  Randall Balestriero · Richard Baraniuk