Oral
Do ImageNet Classifiers Generalize to ImageNet?
Benjamin Recht · Rebecca Roelofs · Ludwig Schmidt · Vaishaal Shankar

Wed Jun 12 04:00 PM -- 04:20 PM (PDT) @ Seaside Ballroom

Generalization is the central goal of machine learning, yet few researchers systematically investigate how well models perform on truly unseen data. This raises the danger that the community is overfitting to excessively re-used test sets. To investigate this question, we conduct a reproducibility experiment on CIFAR-10 and ImageNet: we assemble new test sets and then evaluate a wide range of classification models on them. Despite our careful efforts to match the distribution of the original datasets, the accuracy of many models drops by around 10%. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are likely not caused by adaptive overfitting, but by the models' inability to generalize reliably to slightly "harder" images than those found in the original test sets.
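The evaluation protocol behind these numbers can be illustrated with a minimal sketch: score the same fixed, pre-trained classifier on the original test set and on a newly collected one, then compare top-1 accuracies. The model choice, directory paths, and the assumption that class indices line up between the two datasets are illustrative placeholders, not the authors' actual pipeline.

```python
# Minimal sketch: evaluate one pre-trained classifier on the original
# ImageNet validation set and on a new test set, then report the drop.
# Paths and the choice of ResNet-50 are placeholder assumptions.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for torchvision models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top1_accuracy(model, image_dir, batch_size=64, device="cpu"):
    """Return top-1 accuracy of `model` on an ImageFolder-style directory."""
    loader = DataLoader(datasets.ImageFolder(image_dir, preprocess),
                        batch_size=batch_size, shuffle=False)
    model.eval().to(device)
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total

if __name__ == "__main__":
    model = models.resnet50(pretrained=True)  # any fixed, pre-trained classifier
    # Placeholder paths; both directories must use the same class-index mapping.
    acc_original = top1_accuracy(model, "data/imagenet/val")
    acc_new = top1_accuracy(model, "data/imagenetv2/test")
    print(f"original: {acc_original:.3f}  new: {acc_new:.3f}  "
          f"drop: {acc_original - acc_new:.3f}")
```

Repeating this measurement over many models is what lets the paper plot new-test-set accuracy against original accuracy and observe that gains on the original benchmark translate into (larger) gains on the new one.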

Author Information

Benjamin Recht (Berkeley)

Benjamin Recht is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Ben's research group studies the theory and practice of optimization algorithms with a focus on applications in machine learning, data analysis, and control. Ben is the recipient of a Presidential Early Career Award for Scientists and Engineers, an Alfred P. Sloan Research Fellowship, the 2012 SIAM/MOS Lagrange Prize in Continuous Optimization, the 2014 Jamon Prize, the 2015 William O. Baker Award for Initiatives in Research, and the 2017 NIPS Test of Time Award.

Rebecca Roelofs (University of California Berkeley)
Ludwig Schmidt (University of California, Berkeley)
Vaishaal Shankar (UC Berkeley)
