Poster
Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions
Ahmed Alaa · Mihaela van der Schaar

Tue Jul 14 01:00 PM -- 01:45 PM & Wed Jul 15 01:00 AM -- 01:45 AM (PDT)

Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data. Yet, when using RNNs to inform decision-making, predictions by themselves are not sufficient — we also need estimates of predictive uncertainty. Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods; these are computationally prohibitive, and require major alterations to the RNN architecture and training. Capitalizing on ideas from classical jackknife resampling, we develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals. Our method derives predictive uncertainty from the variability of the (jackknife) sampling distribution of the RNN outputs, which is estimated by repeatedly deleting “blocks” of (temporally-correlated) training data, and collecting the predictions of the RNN re-trained on the remaining data. To avoid exhaustive re-training, we utilize influence functions to estimate the effect of removing training data blocks on the learned RNN parameters. Using data from a critical care setting, we demonstrate the utility of uncertainty quantification in sequential decision-making.
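The core idea in the abstract — estimate predictive uncertainty from the jackknife sampling distribution obtained by deleting contiguous blocks of temporally-correlated training data — can be sketched in a toy setting. The sketch below uses a simple least-squares model and re-fits exactly on each leave-block-out dataset; the paper's contribution is precisely to *avoid* these re-fits via influence functions, which this illustration does not implement. The function name, block count, and interval construction are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def block_jackknife_interval(X, y, x_new, n_blocks=10, alpha=0.1):
    """Toy block-jackknife predictive interval for least-squares regression.

    Illustrative only: each leave-block-out model is re-fit exactly here,
    whereas the paper approximates these re-fits with influence functions.
    """
    n = len(X)
    # Contiguous blocks, so temporally-correlated points are removed together.
    blocks = np.array_split(np.arange(n), n_blocks)
    residuals = []
    for block in blocks:
        keep = np.setdiff1d(np.arange(n), block)
        # Re-fit on the remaining data (stand-in for the influence-function update).
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        # Held-out absolute residuals on the deleted block.
        residuals.extend(np.abs(y[block] - X[block] @ w))
    # Interval: full-data prediction +/- a quantile of held-out residuals.
    w_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    centre = x_new @ w_full
    q = np.quantile(residuals, 1 - alpha)
    return centre - q, centre + q
```

For an RNN, the expensive step is re-training on each leave-block-out dataset; the paper replaces that loop body with a first-order influence-function estimate of how the learned parameters shift when a block is removed.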

Author Information

Ahmed Alaa (UCLA)
Mihaela van der Schaar (University of Cambridge and UCLA)
