Deep learning models achieve high predictive accuracy across a broad spectrum of tasks, but rigorously quantifying their predictive uncertainty remains challenging. Usable estimates of predictive uncertainty should (1) cover the true prediction targets with high probability, and (2) discriminate between high- and low-confidence prediction instances. Existing methods for uncertainty quantification are based predominantly on Bayesian neural networks; these may fall short of (1) and (2): Bayesian credible intervals do not guarantee frequentist coverage, and approximate posterior inference undermines discriminative accuracy. In this paper, we develop the discriminative jackknife (DJ), a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals. The DJ satisfies (1) and (2), is applicable to a wide range of deep learning models, is easy to implement, and can be applied in a post-hoc fashion without interfering with model training or compromising its accuracy. Experiments demonstrate that DJ performs competitively compared to existing Bayesian and non-Bayesian regression baselines.
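To make the leave-one-out construction concrete, the sketch below shows a basic jackknife-style predictive interval built from leave-one-out residuals. This is only a conceptual illustration, not the paper's DJ algorithm: it assumes the leave-one-out predictions (`loo_preds`) have already been obtained, e.g., approximated with influence functions rather than by retraining the model n times, and the function name and arguments are hypothetical.

```python
import numpy as np

def jackknife_interval(loo_preds, loo_targets, test_pred, alpha=0.1):
    """Return a (1 - alpha) predictive interval around a test prediction.

    loo_preds[i]   -- leave-one-out prediction for training point i; in the DJ
                      these would be approximated with influence functions
                      instead of retraining the model n times (assumption).
    loo_targets[i] -- true target of training point i.
    test_pred      -- the trained model's point prediction at the test input.
    """
    residuals = np.abs(np.asarray(loo_targets) - np.asarray(loo_preds))
    q = np.quantile(residuals, 1 - alpha)   # (1 - alpha) quantile of LOO residuals
    return test_pred - q, test_pred + q     # symmetric interval around the prediction

# Example with placeholder numbers:
lo, hi = jackknife_interval(loo_preds=[1.1, 0.9, 2.2],
                            loo_targets=[1.0, 1.0, 2.0],
                            test_pred=1.5, alpha=0.1)
```

The point of the influence-function step in the DJ is precisely to avoid the n model retrainings that an exact jackknife would require; the sketch above only covers the interval construction once leave-one-out quantities are available.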
Author Information
Ahmed Alaa (UCLA)
Mihaela van der Schaar (University of Cambridge and UCLA)
More from the Same Authors
- 2020 Poster: Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift »
  Alexander Chan · Ahmed Alaa · Zhaozhi Qian · Mihaela van der Schaar
- 2020 Poster: Time Series Deconfounder: Estimating Treatment Effects over Time in the Presence of Hidden Confounders »
  Ioana Bica · Ahmed Alaa · Mihaela van der Schaar
- 2020 Poster: Temporal Phenotyping using Deep Predictive Clustering of Disease Progression »
  Changhee Lee · Mihaela van der Schaar
- 2020 Poster: Learning for Dose Allocation in Adaptive Clinical Trials with Safety Constraints »
  Cong Shen · Zhiyang Wang · Sofia Villar · Mihaela van der Schaar
- 2020 Poster: Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions »
  Ahmed Alaa · Mihaela van der Schaar
- 2020 Poster: Inverse Active Sensing: Modeling and Understanding Timely Decision-Making »
  Daniel Jarrett · Mihaela van der Schaar
- 2020 Tutorial: Machine Learning for Healthcare: Challenges, Methods, Frontiers »
  Mihaela van der Schaar
- 2019 Poster: Validating Causal Inference Models via Influence Functions »
  Ahmed Alaa · Mihaela van der Schaar
- 2019 Oral: Validating Causal Inference Models via Influence Functions »
  Ahmed Alaa · Mihaela van der Schaar
- 2018 Poster: AutoPrognosis: Automated Clinical Prognostic Modeling via Bayesian Optimization with Structured Kernel Learning »
  Ahmed M. Alaa Ibrahim · Mihaela van der Schaar
- 2018 Oral: AutoPrognosis: Automated Clinical Prognostic Modeling via Bayesian Optimization with Structured Kernel Learning »
  Ahmed M. Alaa Ibrahim · Mihaela van der Schaar
- 2018 Poster: Limits of Estimating Heterogeneous Treatment Effects: Guidelines for Practical Algorithm Design »
  Ahmed M. Alaa Ibrahim · Mihaela van der Schaar
- 2018 Oral: Limits of Estimating Heterogeneous Treatment Effects: Guidelines for Practical Algorithm Design »
  Ahmed M. Alaa Ibrahim · Mihaela van der Schaar
- 2017 Poster: Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes Processes for Risk Prognosis »
  Ahmed M. Alaa Ibrahim · Scott B Hu · Mihaela van der Schaar
- 2017 Talk: Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes Processes for Risk Prognosis »
  Ahmed M. Alaa Ibrahim · Scott B Hu · Mihaela van der Schaar