Identifying Statistical Bias in Dataset Replication

Abstract
Dataset replication is a useful tool for assessing whether improvements in test accuracy on a specific benchmark correspond to improvements in models' ability to generalize reliably. In this work, we present unintuitive yet significant ways in which standard approaches to dataset replication introduce statistical bias, skewing the resulting observations. We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for selection frequency, a human-in-the-loop measure of data quality. We show that after remeasuring selection frequencies and correcting for statistical bias, only an estimated 3.6% of the original 11.7% accuracy drop remains unaccounted for. We conclude with concrete recommendations for recognizing and avoiding bias in dataset replication. Code for our study is publicly available: https://git.io/data-rep-analysis.
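The statistical bias the abstract refers to can be illustrated with a small simulation: if candidate images are filtered by a noisy, finite-annotator estimate of selection frequency, and that same estimate is then reused to argue that the replicated set matches the original's quality, the reused statistic is biased upward; remeasuring with fresh annotators exposes the gap. The sketch below is a minimal toy model of this effect, not the authors' code; the Beta/Binomial setup, the annotator count, and the 0.7 threshold are illustrative assumptions.

```python
# Toy simulation (illustrative only) of selection bias from reusing noisy
# selection-frequency estimates when building a replicated test set.
import numpy as np

rng = np.random.default_rng(0)

n_images = 50_000      # candidate images for the replicated test set (assumed)
n_annotators = 10      # annotators per image in the initial labeling pass (assumed)

# True (unobserved) selection frequency of each candidate image.
true_freq = rng.beta(5, 2, size=n_images)

# Observed selection frequency: fraction of annotators who select the image.
obs_freq = rng.binomial(n_annotators, true_freq) / n_annotators

# Naive replication: keep only images whose *observed* frequency clears a
# threshold, then report that same observed frequency as the set's quality.
threshold = 0.7
kept = obs_freq >= threshold
naive_quality = obs_freq[kept].mean()

# Remeasurement: draw fresh annotations for the kept images. Because the
# selection step favored images with upward noise, the fresh estimate is lower.
fresh_freq = rng.binomial(n_annotators, true_freq[kept]) / n_annotators
remeasured_quality = fresh_freq.mean()

print(f"quality from reused estimates: {naive_quality:.3f}")
print(f"quality after remeasurement:   {remeasured_quality:.3f}")
print(f"true quality of kept images:   {true_freq[kept].mean():.3f}")
```

In this toy model the reused estimate overstates the true selection frequency of the kept images, while the fresh estimate is close to unbiased; this is the kind of gap that the abstract's remeasurement of selection frequencies is meant to expose.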
Author Information
Logan Engstrom (MIT)
Andrew Ilyas (MIT)
Shibani Santurkar (MIT)
Dimitris Tsipras (MIT)
Jacob Steinhardt (University of California, Berkeley)
Aleksander Madry (MIT)
More from the Same Authors
- 2022 : A Game-Theoretic Perspective on Trust in Recommendation
  Sarah Cen · Andrew Ilyas · Aleksander Madry
- 2023 : ModelDiff: A Framework for Comparing Learning Algorithms
  Harshay Shah · Sung Min (Sam) Park · Andrew Ilyas · Aleksander Madry
- 2023 : Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation
  Joshua Vendrow · Saachi Jain · Logan Engstrom · Aleksander Madry
- 2023 : What Works in Chest X-Ray Classification? A Case Study of Design Choices
  Evan Vogelbaum · Logan Engstrom · Aleksander Madry
- 2023 : The Journey, Not the Destination: How Data Guides Diffusion Models
  Kristian Georgiev · Joshua Vendrow · Hadi Salman · Sung Min (Sam) Park · Aleksander Madry
- 2023 : Paper Spotlights
  Andrew Ilyas · Alizée Pace · Ji Won Park · Adam Breitholtz · Nari Johnson
- 2023 Poster: TRAK: Attributing Model Behavior at Scale
  Sung Min (Sam) Park · Kristian Georgiev · Andrew Ilyas · Guillaume Leclerc · Aleksander Madry
- 2023 Poster: Whose Opinions Do Language Models Reflect?
  Shibani Santurkar · Esin Durmus · Faisal Ladhak · Cinoo Lee · Percy Liang · Tatsunori Hashimoto
- 2023 Oral: TRAK: Attributing Model Behavior at Scale
  Sung Min (Sam) Park · Kristian Georgiev · Andrew Ilyas · Guillaume Leclerc · Aleksander Madry
- 2023 Oral: Whose Opinions Do Language Models Reflect?
  Shibani Santurkar · Esin Durmus · Faisal Ladhak · Cinoo Lee · Percy Liang · Tatsunori Hashimoto
- 2023 Poster: Automatically Auditing Large Language Models via Discrete Optimization
  Erik Jones · Anca Dragan · Aditi Raghunathan · Jacob Steinhardt
- 2023 Poster: ModelDiff: A Framework for Comparing Learning Algorithms
  Harshay Shah · Sung Min (Sam) Park · Andrew Ilyas · Aleksander Madry
- 2023 Oral: Raising the Cost of Malicious AI-Powered Image Editing
  Hadi Salman · Alaa Khaddaj · Guillaume Leclerc · Andrew Ilyas · Aleksander Madry
- 2023 Poster: Are Neurons Actually Collapsed? On the Fine-Grained Structure in Neural Representations
  Yongyi Yang · Jacob Steinhardt · Wei Hu
- 2023 Poster: Rethinking Backdoor Attacks
  Alaa Khaddaj · Guillaume Leclerc · Aleksandar Makelov · Kristian Georgiev · Hadi Salman · Andrew Ilyas · Aleksander Madry
- 2023 Poster: Raising the Cost of Malicious AI-Powered Image Editing
  Hadi Salman · Alaa Khaddaj · Guillaume Leclerc · Andrew Ilyas · Aleksander Madry
- 2022 Workshop: Principles of Distribution Shift (PODS)
  Elan Rosenfeld · Saurabh Garg · Shibani Santurkar · Jamie Morgenstern · Hossein Mobahi · Zachary Lipton · Andrej Risteski
- 2022 : Panel discussion
  Steffen Schneider · Aleksander Madry · Alexei Efros · Chelsea Finn · Soheil Feizi
- 2022 : Dr. Aleksander Madry's Talk
  Aleksander Madry
- 2022 : Invited Talk 1: Aleksander Mądry
  Aleksander Madry
- 2022 Poster: Datamodels: Understanding Predictions with Data and Data with Predictions
  Andrew Ilyas · Sung Min (Sam) Park · Logan Engstrom · Guillaume Leclerc · Aleksander Madry
- 2022 Poster: Adversarially trained neural representations are already as robust as biological neural representations
  Chong Guo · Michael Lee · Guillaume Leclerc · Joel Dapello · Yug Rao · Aleksander Madry · James DiCarlo
- 2022 Oral: Adversarially trained neural representations are already as robust as biological neural representations
  Chong Guo · Michael Lee · Guillaume Leclerc · Joel Dapello · Yug Rao · Aleksander Madry · James DiCarlo
- 2022 Spotlight: Datamodels: Understanding Predictions with Data and Data with Predictions
  Andrew Ilyas · Sung Min (Sam) Park · Logan Engstrom · Guillaume Leclerc · Aleksander Madry
- 2022 Poster: Combining Diverse Feature Priors
  Saachi Jain · Dimitris Tsipras · Aleksander Madry
- 2022 Spotlight: Combining Diverse Feature Priors
  Saachi Jain · Dimitris Tsipras · Aleksander Madry
- 2021 : Invited Talk #4
  Aleksander Madry
- 2021 Poster: Leveraging Sparse Linear Layers for Debuggable Deep Networks
  Eric Wong · Shibani Santurkar · Aleksander Madry
- 2021 Oral: Leveraging Sparse Linear Layers for Debuggable Deep Networks
  Eric Wong · Shibani Santurkar · Aleksander Madry
- 2020 Poster: From ImageNet to Image Classification: Contextualizing Progress on Benchmarks
  Dimitris Tsipras · Shibani Santurkar · Logan Engstrom · Andrew Ilyas · Aleksander Madry
- 2019 Workshop: Identifying and Understanding Deep Learning Phenomena
  Hanie Sedghi · Samy Bengio · Kenji Hata · Aleksander Madry · Ari Morcos · Behnam Neyshabur · Maithra Raghu · Ali Rahimi · Ludwig Schmidt · Ying Xiao
- 2019 : Panel Discussion (Nati Srebro, Dan Roy, Chelsea Finn, Mikhail Belkin, Aleksander Mądry, Jason Lee)
  Nati Srebro · Daniel Roy · Chelsea Finn · Mikhail Belkin · Aleksander Madry · Jason Lee
- 2019 : Keynote by Aleksander Mądry: Are All Features Created Equal?
  Aleksander Madry
- 2019 Workshop: Workshop on the Security and Privacy of Machine Learning
  Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song
- 2019 Poster: Sever: A Robust Meta-Algorithm for Stochastic Optimization
  Ilias Diakonikolas · Gautam Kamath · Daniel Kane · Jerry Li · Jacob Steinhardt · Alistair Stewart
- 2019 Poster: Exploring the Landscape of Spatial Robustness
  Logan Engstrom · Brandon Tran · Dimitris Tsipras · Ludwig Schmidt · Aleksander Madry
- 2019 Oral: Sever: A Robust Meta-Algorithm for Stochastic Optimization
  Ilias Diakonikolas · Gautam Kamath · Daniel Kane · Jerry Li · Jacob Steinhardt · Alistair Stewart
- 2019 Oral: Exploring the Landscape of Spatial Robustness
  Logan Engstrom · Brandon Tran · Dimitris Tsipras · Ludwig Schmidt · Aleksander Madry
- 2018 Poster: On the Limitations of First-Order Approximation in GAN Dynamics
  Jerry Li · Aleksander Madry · John Peebles · Ludwig Schmidt
- 2018 Oral: On the Limitations of First-Order Approximation in GAN Dynamics
  Jerry Li · Aleksander Madry · John Peebles · Ludwig Schmidt
- 2018 Poster: Black-box Adversarial Attacks with Limited Queries and Information
  Andrew Ilyas · Logan Engstrom · Anish Athalye · Jessy Lin
- 2018 Oral: Black-box Adversarial Attacks with Limited Queries and Information
  Andrew Ilyas · Logan Engstrom · Anish Athalye · Jessy Lin
- 2018 Poster: Synthesizing Robust Adversarial Examples
  Anish Athalye · Logan Engstrom · Andrew Ilyas · Kevin Kwok
- 2018 Poster: A Classification-Based Study of Covariate Shift in GAN Distributions
  Shibani Santurkar · Ludwig Schmidt · Aleksander Madry
- 2018 Oral: Synthesizing Robust Adversarial Examples
  Anish Athalye · Logan Engstrom · Andrew Ilyas · Kevin Kwok
- 2018 Oral: A Classification-Based Study of Covariate Shift in GAN Distributions
  Shibani Santurkar · Ludwig Schmidt · Aleksander Madry
- 2017 Poster: Deep Tensor Convolution on Multicores
  David Budden · Alexander Matveev · Shibani Santurkar · Shraman Ray Chaudhuri · Nir Shavit
- 2017 Talk: Deep Tensor Convolution on Multicores
  David Budden · Alexander Matveev · Shibani Santurkar · Shraman Ray Chaudhuri · Nir Shavit