We propose to study the generalization error of a learned predictor ĥ in terms of that of a surrogate (potentially randomized) classifier that is coupled to ĥ and designed to trade empirical risk for control of generalization error. In the case where ĥ interpolates the data, it is interesting to consider theoretical surrogate classifiers that are partially derandomized or rerandomized, e.g., fit to the training data but with modified label noise. We show that replacing ĥ by its conditional distribution with respect to an arbitrary sigma-field is a viable method of derandomization. We give an example, inspired by the work of Nagarajan and Kolter (2019), where the learned classifier ĥ interpolates the training data with high probability, has small risk, and yet does not belong to a nonrandom class with a tight uniform bound on two-sided generalization error. At the same time, we bound the risk of ĥ in terms of a surrogate that is constructed by conditioning and shown to belong to a nonrandom class with uniformly small generalization error.
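To make the conditioning construction concrete, here is an informal sketch (the notation below is ours, not taken verbatim from the paper): the surrogate is drawn from the conditional distribution of ĥ given a sigma-field, which preserves average risk while splitting the generalization error of ĥ into a derandomization gap, the surrogate's generalization error, and an excess empirical risk term.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Informal sketch of derandomization by conditioning (notation ours):
% \hat{h} is the learned (possibly randomized) classifier, \mathcal{G} an
% arbitrary sigma-field, R the population risk, \hat{R}_n the empirical risk.
Let $\tilde{h}$ denote the surrogate, drawn from the conditional law of the
learned classifier:
\[
  \tilde{h} \sim \mathbb{P}\bigl(\hat{h} \in \cdot \mid \mathcal{G}\bigr).
\]
By the tower property, average risk is preserved,
$\mathbb{E}\,R(\tilde{h}) = \mathbb{E}\,R(\hat{h})$, and the generalization
error of $\hat{h}$ telescopes as
\[
  R(\hat{h}) - \hat{R}_n(\hat{h})
  = \underbrace{R(\hat{h}) - R(\tilde{h})}_{\text{derandomization gap}}
  + \underbrace{R(\tilde{h}) - \hat{R}_n(\tilde{h})}_{\text{surrogate generalization}}
  + \underbrace{\hat{R}_n(\tilde{h}) - \hat{R}_n(\hat{h})}_{\text{excess empirical risk}}.
\]
If $\tilde{h}$ lands in a nonrandom class admitting a uniform two-sided
generalization bound, the middle term is controlled uniformly, leaving only
the two coupling terms.
\end{document}

Under this reading, if ĥ and the surrogate are coupled so that the two coupling terms are small (e.g., the surrogate fits the same training data but with modified label noise), the uniform bound on the surrogate's class transfers to a risk bound for ĥ.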
Author Information
Jeffrey Negrea (University of Toronto)
Gintare Karolina Dziugaite (Element AI)
Daniel Roy (University of Toronto; Vector Institute)
More from the Same Authors
- 2021 : Towards a Unified Information-Theoretic Framework for Generalization »
  Mahdi Haghifam · Gintare Karolina Dziugaite · Shay Moran
- 2021 : On the Generalization Improvement from Neural Network Pruning »
  Tian Jin · Gintare Karolina Dziugaite · Michael Carbin
- 2022 : Pre-Training on a Data Diet: Identifying Sufficient Examples for Early Training »
  Mansheej Paul · Brett Larsen · Surya Ganguli · Jonathan Frankle · Gintare Karolina Dziugaite
- 2023 : Flat minima can fail to transfer to downstream tasks »
  Deepansha Singh · Ekansh Sharma · Daniel Roy · Gintare Karolina Dziugaite
- 2023 : Invited talk: Lessons Learned from Studying PAC-Bayes and Generalization »
  Gintare Karolina Dziugaite
- 2022 : Finding Structured Winning Tickets with Early Pruning »
  Udbhav Bamba · Devin Kwok · Gintare Karolina Dziugaite · David Rolnick
- 2020 Poster: Improved Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance »
  Blair Bilodeau · Dylan Foster · Daniel Roy
- 2020 Poster: Linear Mode Connectivity and the Lottery Ticket Hypothesis »
  Jonathan Frankle · Gintare Karolina Dziugaite · Daniel Roy · Michael Carbin
- 2019 : Panel Discussion (Nati Srebro, Dan Roy, Chelsea Finn, Mikhail Belkin, Aleksander Mądry, Jason Lee) »
  Nati Srebro · Daniel Roy · Chelsea Finn · Mikhail Belkin · Aleksander Madry · Jason Lee
- 2019 : Keynote by Dan Roy: Progress on Nonvacuous Generalization Bounds »
  Daniel Roy
- 2018 Poster: Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors »
  Gintare Karolina Dziugaite · Daniel Roy
- 2018 Oral: Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors »
  Gintare Karolina Dziugaite · Daniel Roy