Oral
The advantages of multiple classes for reducing overfitting from test set reuse
Vitaly Feldman · Roy Frostig · Moritz Hardt
Excessive reuse of holdout data can lead to overfitting. Yet, there is no concrete evidence of significant overfitting due to holdout reuse in popular multiclass benchmarks.
Known results show that, in the worst case, revealing the accuracy of $k$ adaptively chosen classifiers on a data set of size $n$ allows one to create a classifier with bias of $\Theta(\sqrt{k/n})$ for any binary prediction problem. We show a new upper bound of $\tilde O(\max\{\sqrt{k\log(n)/(mn)},k/n\})$ on the bias that any attack with $k \geq \tilde\Omega(m)$ queries can achieve in a prediction problem with $m$ classes. Moreover, we show a natural attack that, under a plausible technical condition, achieves a nearly matching bias of $\Omega(\sqrt{k/(mn)})$. Complementing our theoretical work, we give new practical attacks to stress-test multiclass benchmarks by aiming to create as large a bias as possible with a given number of queries. Through extensive experiments, we show that the additional uncertainty of prediction with a large number of classes indeed mitigates the effect of our best attacks.
Our work extends important developments in the understanding of overfitting in adaptive data analysis to multiclass prediction problems. In addition, it bears out the surprising fact that multiclass prediction problems are significantly more robust to overfitting from test set reuse. This helps to explain why popular multiclass prediction benchmarks, such as ImageNet, may enjoy a longer lifespan than intuition from the binary case would suggest.
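As a rough illustration of the scaling discussed in the abstract, the sketch below simulates a simple adaptive holdout-reuse attack. This is not the paper's attack, and the function name `holdout_reuse_bias` along with all parameter values are illustrative assumptions; it only shows, qualitatively, that the bias achievable by issuing $k$ random queries, keeping the ones that beat chance, and taking a plurality vote shrinks as the number of classes $m$ grows, consistent with the $\sqrt{k/(mn)}$ rate (up to constants).

```python
import numpy as np

def holdout_reuse_bias(m, n=5000, k=200, seed=0):
    """Simulate a simple adaptive holdout-reuse attack with m classes.

    Issues k uniformly random classifiers, keeps those whose revealed
    holdout accuracy beats chance (1/m), and returns the plurality vote's
    accuracy minus 1/m, i.e. the bias created by reusing the holdout.
    """
    rng = np.random.default_rng(seed)
    y = rng.integers(0, m, size=n)               # holdout labels
    preds = rng.integers(0, m, size=(k, n))      # k adaptive "random guess" queries
    accs = (preds == y).mean(axis=1)             # accuracies revealed by the holdout
    selected = preds[accs > 1.0 / m]             # adaptive selection step
    # Plurality vote of the selected queries on each holdout example.
    votes = np.apply_along_axis(np.bincount, 0, selected, minlength=m)
    final = votes.argmax(axis=0)
    return (final == y).mean() - 1.0 / m

if __name__ == "__main__":
    n, k = 5000, 200
    for m in (2, 10, 100):
        bias = holdout_reuse_bias(m, n=n, k=k)
        print(f"m={m:3d}: empirical bias {bias:.3f}, "
              f"sqrt(k/(m*n)) = {np.sqrt(k / (m * n)):.3f}")
```

Running this with larger $m$ should yield a markedly smaller empirical bias for the same $k$ and $n$, which is the qualitative effect the paper quantifies.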
Author Information
Vitaly Feldman (Google Brain)
Roy Frostig (Google Brain)
Moritz Hardt (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: The advantages of multiple classes for reducing overfitting from test set reuse
  Thu. Jun 13th 01:30 -- 04:00 AM, Room: Pacific Ballroom #197
More from the Same Authors
- 2021 : Lossless Compression of Efficient Private Local Randomizers
  Vitaly Feldman · Kunal Talwar
- 2021 : Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling
  Vitaly Feldman · Audra McMillan · Kunal Talwar
- 2021 : Mean Estimation with User-level Privacy under Data Heterogeneity
  Rachel Cummings · Vitaly Feldman · Audra McMillan · Kunal Talwar
- 2021 : When Is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
  Gavin Brown · Mark Bun · Vitaly Feldman · Adam Smith · Kunal Talwar
- 2023 : Differentially Private Heavy Hitters using Federated Analytics
  Karan Chadha · Junye Chen · John Duchi · Vitaly Feldman · Hanieh Hashemi · Omid Javidbakht · Audra McMillan · Kunal Talwar
- 2023 Poster: Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime
  Hilal Asi · Vitaly Feldman · Tomer Koren · Kunal Talwar
- 2022 Poster: Optimal Algorithms for Mean Estimation under Local Differential Privacy
  Hilal Asi · Vitaly Feldman · Kunal Talwar
- 2022 Oral: Optimal Algorithms for Mean Estimation under Local Differential Privacy
  Hilal Asi · Vitaly Feldman · Kunal Talwar
- 2022 Poster: Private frequency estimation via projective geometry
  Vitaly Feldman · Jelani Nelson · Huy Nguyen · Kunal Talwar
- 2022 Spotlight: Private frequency estimation via projective geometry
  Vitaly Feldman · Jelani Nelson · Huy Nguyen · Kunal Talwar
- 2021 Poster: Lossless Compression of Efficient Private Local Randomizers
  Vitaly Feldman · Kunal Talwar
- 2021 Poster: Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry
  Hilal Asi · Vitaly Feldman · Tomer Koren · Kunal Talwar
- 2021 Oral: Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry
  Hilal Asi · Vitaly Feldman · Tomer Koren · Kunal Talwar
- 2021 Spotlight: Lossless Compression of Efficient Private Local Randomizers
  Vitaly Feldman · Kunal Talwar