(Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy
Elan Rosenfeld · Saurabh Garg
Event URL: https://openreview.net/forum?id=sYgsZxHgDt
We derive an (almost) guaranteed upper bound on the error of deep neural networks under distribution shift using unlabeled test data. Prior methods either give bounds that are vacuous in practice or give estimates that are accurate on average but heavily underestimate error for a sizeable fraction of shifts. Our bound requires a simple, intuitive condition which is well justified by prior empirical works and holds in practice effectively 100% of the time. The bound is inspired by HΔH-divergence but is easier to evaluate and substantially tighter, consistently providing non-vacuous guarantees. Estimating the bound requires optimizing one multiclass classifier to disagree with another, for which some prior works have used sub-optimal proxy losses; we devise a "disagreement loss" which is theoretically justified and performs better in practice. Across a wide range of benchmarks, our method gives valid error bounds while achieving average accuracy comparable to competitive estimation baselines.
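To make the pipeline in the abstract concrete: one natural instantiation trains a "critic" classifier h′ to agree with the model h on source data while disagreeing with it on unlabeled target data, then reads off the bound ε_T(h) ≤ ε_S(h) + [dis_T(h, h′) − dis_S(h, h′)]. Below is a minimal PyTorch sketch of that reading, not the authors' released code: the function names (fit_critic, error_bound) and data loaders are hypothetical, and the negative cross-entropy on target batches is the simple proxy the abstract calls sub-optimal, standing in because the abstract does not give the exact form of the proposed disagreement loss.

```python
# Hypothetical sketch of a disagreement-discrepancy error bound.
# All names are placeholders; see the paper for the actual "disagreement loss".
import torch
import torch.nn.functional as F

def fit_critic(h, critic, opt, src_loader, tgt_loader, epochs=5):
    """Train a critic h' to agree with h on source batches and disagree with h
    on unlabeled target batches, pushing up dis_T(h, h') - dis_S(h, h')."""
    h.eval()
    critic.train()
    for _ in range(epochs):
        for (x_s, _), (x_t, _) in zip(src_loader, tgt_loader):
            with torch.no_grad():
                y_s = h(x_s).argmax(dim=1)  # h's predictions on source
                y_t = h(x_t).argmax(dim=1)  # h's predictions on target
            agree = F.cross_entropy(critic(x_s), y_s)
            # Negative cross-entropy is the simple proxy some prior works used
            # to encourage disagreement; the paper's disagreement loss is a
            # better-behaved replacement (not reproduced here).
            disagree = -F.cross_entropy(critic(x_t), y_t)
            opt.zero_grad()
            (agree + disagree).backward()
            opt.step()

@torch.no_grad()
def disagreement(h, critic, loader):
    """Empirical rate at which h and the critic predict different labels."""
    diff = total = 0
    for x, _ in loader:
        diff += (h(x).argmax(dim=1) != critic(x).argmax(dim=1)).sum().item()
        total += x.shape[0]
    return diff / total

def error_bound(h, critic, src_loader, tgt_loader, src_err):
    """eps_T(h) <= eps_S(h) + [dis_T(h, h') - dis_S(h, h')]."""
    delta = (disagreement(h, critic, tgt_loader)
             - disagreement(h, critic, src_loader))
    return src_err + delta
```

Since ε_T(h) − ε_S(h) equals the disagreement discrepancy of the true target labeling, the inequality in error_bound is valid whenever the learned critic achieves at least that discrepancy; this appears to be the "simple, intuitive condition" the abstract refers to.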
Author Information
Elan Rosenfeld (Carnegie Mellon University)
Saurabh Garg (Carnegie Mellon University)
Related Events (a corresponding poster, oral, or spotlight)
- 2023 : (Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy
More from the Same Authors
- 2022 : Domain Adaptation under Open Set Label Shift
  Saurabh Garg · Sivaraman Balakrishnan · Zachary Lipton
- 2022 : Unsupervised Learning under Latent Label Shift
  Pranav Mani · Manley Roberts · Saurabh Garg · Zachary Lipton
- 2022 : Characterizing Datapoints via Second-Split Forgetting
  Pratyush Maini · Saurabh Garg · Zachary Lipton · Zico Kolter
- 2023 : Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift
  Saurabh Garg · Amrith Setlur · Zachary Lipton · Sivaraman Balakrishnan · Virginia Smith · Aditi Raghunathan
- 2023 : Learning Linear Causal Representations from Interventions under General Nonlinear Mixing
  Simon Buchholz · Goutham Rajendran · Elan Rosenfeld · Bryon Aragam · Bernhard Schölkopf · Pradeep Ravikumar
- 2023 : (Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy
  Elan Rosenfeld · Saurabh Garg
- 2023 : Conditional Diffusion Replay for Continual Learning in Medical Settings
  Yewon Byun · Saurabh Garg · Sanket Vaibhav Mehta · Praveer Singh · Jayashree Kalpathy-Cramer · Bryan Wilder · Zachary Lipton
- 2023 : Prompt-based Generative Replay: A Text-to-Image Approach for Continual Learning in Medical Settings
  Yewon Byun · Saurabh Garg · Sanket Vaibhav Mehta · Jayashree Kalpathy-Cramer · Praveer Singh · Bryan Wilder · Zachary Lipton
- 2023 Poster: RLSbench: Domain Adaptation Under Relaxed Label Shift
  Saurabh Garg · Nick Erickson · James Sharpnack · Alex Smola · Sivaraman Balakrishnan · Zachary Lipton
- 2023 Poster: CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets
  Zachary Novack · Julian McAuley · Zachary Lipton · Saurabh Garg
- 2022 Workshop: Principles of Distribution Shift (PODS)
  Elan Rosenfeld · Saurabh Garg · Shibani Santurkar · Jamie Morgenstern · Hossein Mobahi · Zachary Lipton · Andrej Risteski
- 2021 Poster: RATT: Leveraging Unlabeled Data to Guarantee Generalization
  Saurabh Garg · Sivaraman Balakrishnan · Zico Kolter · Zachary Lipton
- 2021 Oral: RATT: Leveraging Unlabeled Data to Guarantee Generalization
  Saurabh Garg · Sivaraman Balakrishnan · Zico Kolter · Zachary Lipton
- 2021 Poster: On Proximal Policy Optimization's Heavy-tailed Gradients
  Saurabh Garg · Joshua Zhanson · Emilio Parisotto · Adarsh Prasad · Zico Kolter · Zachary Lipton · Sivaraman Balakrishnan · Ruslan Salakhutdinov · Pradeep Ravikumar
- 2021 Spotlight: On Proximal Policy Optimization's Heavy-tailed Gradients
  Saurabh Garg · Joshua Zhanson · Emilio Parisotto · Adarsh Prasad · Zico Kolter · Zachary Lipton · Sivaraman Balakrishnan · Ruslan Salakhutdinov · Pradeep Ravikumar
- 2020 : Contributed Talk 3: A Unified View of Label Shift Estimation
  Saurabh Garg
- 2020 Poster: Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
  Elan Rosenfeld · Ezra Winston · Pradeep Ravikumar · Zico Kolter
- 2019 Poster: Certified Adversarial Robustness via Randomized Smoothing
  Jeremy Cohen · Elan Rosenfeld · Zico Kolter
- 2019 Oral: Certified Adversarial Robustness via Randomized Smoothing
  Jeremy Cohen · Elan Rosenfeld · Zico Kolter