Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data. Yet, when using RNNs to inform decision-making, predictions by themselves are not sufficient — we also need estimates of predictive uncertainty. Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods; these are computationally prohibitive, and require major alterations to the RNN architecture and training. Capitalizing on ideas from classical jackknife resampling, we develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals. Our method derives predictive uncertainty from the variability of the (jackknife) sampling distribution of the RNN outputs, which is estimated by repeatedly deleting “blocks” of (temporally-correlated) training data, and collecting the predictions of the RNN re-trained on the remaining data. To avoid exhaustive re-training, we utilize influence functions to estimate the effect of removing training data blocks on the learned RNN parameters. Using data from a critical care setting, we demonstrate the utility of uncertainty quantification in sequential decision-making.
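Below is a minimal sketch, in PyTorch, of the block-deletion idea the abstract describes; it is an illustrative assumption, not the authors' released implementation. The model class `SeqRegressor`, the `blocks` partition, the `damping` constant, and the conjugate-gradient inverse-Hessian-vector product are all hypothetical choices made for the sake of the example.

```python
# Sketch (assumed, not the authors' code): approximate the RNN parameters that
# would result from deleting one block of temporally-correlated training data,
# using a first-order influence-function update instead of re-training.
import torch
import torch.nn as nn


class SeqRegressor(nn.Module):
    """Illustrative RNN regressor: (batch, time, features) -> (batch, time, 1)."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)


def loss_fn(model, x, y):
    return nn.functional.mse_loss(model(x), y)


def flat_grad(loss, params, create_graph=False):
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])


def hvp(full_grad, params, v):
    # Hessian-vector product by differentiating the graph-carrying gradient.
    prod = torch.autograd.grad(full_grad @ v, params, retain_graph=True)
    return torch.cat([p.reshape(-1) for p in prod])


def inverse_hvp_cg(full_grad, params, b, damping=1e-2, iters=50):
    # Solve (H + damping * I) x = b by conjugate gradient, without forming H.
    x = torch.zeros_like(b)
    r, p = b.clone(), b.clone()
    rs = r @ r
    for _ in range(iters):
        Ap = hvp(full_grad, params, p) + damping * p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x


def jackknife_param_shifts(model, full_x, full_y, blocks):
    """For each block (x_b, y_b), estimate the parameter shift from deleting it:
    theta_{-b} ~= theta + H^{-1} grad_b, up to a scaling that depends on the
    block size (see the paper for the exact form). Double backprop through the
    RNN requires the non-cuDNN (CPU) path."""
    params = [p for p in model.parameters() if p.requires_grad]
    full_grad = flat_grad(loss_fn(model, full_x, full_y), params, create_graph=True)
    shifts = []
    for x_b, y_b in blocks:
        g_b = flat_grad(loss_fn(model, x_b, y_b), params).detach()
        shifts.append(inverse_hvp_cg(full_grad, params, g_b))
    return shifts
```

Under this reading of the abstract, each estimated parameter shift yields a perturbed set of RNN predictions on a test sequence, and the spread of those predictions across deleted blocks forms the jackknife sampling distribution from which the uncertainty intervals are constructed.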
Author Information
Ahmed Alaa (UCLA)
Mihaela van der Schaar (University of Cambridge and UCLA)
More from the Same Authors
- 2020 Poster: Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift
  Alexander Chan · Ahmed Alaa · Zhaozhi Qian · Mihaela van der Schaar
- 2020 Poster: Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions
  Ahmed Alaa · Mihaela van der Schaar
- 2020 Poster: Time Series Deconfounder: Estimating Treatment Effects over Time in the Presence of Hidden Confounders
  Ioana Bica · Ahmed Alaa · Mihaela van der Schaar
- 2020 Poster: Temporal Phenotyping using Deep Predictive Clustering of Disease Progression
  Changhee Lee · Mihaela van der Schaar
- 2020 Poster: Learning for Dose Allocation in Adaptive Clinical Trials with Safety Constraints
  Cong Shen · Zhiyang Wang · Sofia Villar · Mihaela van der Schaar
- 2020 Poster: Inverse Active Sensing: Modeling and Understanding Timely Decision-Making
  Daniel Jarrett · Mihaela van der Schaar
- 2020 Tutorial: Machine Learning for Healthcare: Challenges, Methods, Frontiers
  Mihaela van der Schaar
- 2019 Poster: Validating Causal Inference Models via Influence Functions
  Ahmed Alaa · Mihaela van der Schaar
- 2019 Oral: Validating Causal Inference Models via Influence Functions
  Ahmed Alaa · Mihaela van der Schaar
- 2018 Poster: AutoPrognosis: Automated Clinical Prognostic Modeling via Bayesian Optimization with Structured Kernel Learning
  Ahmed M. Alaa Ibrahim · Mihaela van der Schaar
- 2018 Oral: AutoPrognosis: Automated Clinical Prognostic Modeling via Bayesian Optimization with Structured Kernel Learning
  Ahmed M. Alaa Ibrahim · Mihaela van der Schaar
- 2018 Poster: Limits of Estimating Heterogeneous Treatment Effects: Guidelines for Practical Algorithm Design
  Ahmed M. Alaa Ibrahim · Mihaela van der Schaar
- 2018 Oral: Limits of Estimating Heterogeneous Treatment Effects: Guidelines for Practical Algorithm Design
  Ahmed M. Alaa Ibrahim · Mihaela van der Schaar
- 2017 Poster: Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes Processes for Risk Prognosis
  Ahmed M. Alaa Ibrahim · Scott B Hu · Mihaela van der Schaar
- 2017 Talk: Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes Processes for Risk Prognosis
  Ahmed M. Alaa Ibrahim · Scott B Hu · Mihaela van der Schaar