Ensembles are often expensive to evaluate because they require running multiple models, each of which is costly in the case of neural networks. Using ensembles in compute-constrained applications would be far more practical if only a subset of the models had to be evaluated. We address this issue with a novel product-of-experts-based method for early-exit ensembling. We rely on the fact that the product of finite-support probability distributions (e.g., continuous uniforms) has a support contained in, and therefore no larger than, that of each multiplicand. Thus, by setting a confidence threshold, we can stop evaluating ensemble members once the size of the support has been sufficiently reduced. We demonstrate our methodology for both real-valued regression and multi-class classification.
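To make the early-exit mechanism concrete, here is a minimal sketch in Python (not the authors' implementation). It assumes each ensemble member returns a uniform predictive interval (lower, upper) for an input x, so the product-of-experts support is simply the running intersection of those intervals; the names early_exit_poe and width_threshold, and the midpoint point-estimate, are illustrative choices rather than details taken from the paper.

```python
def early_exit_poe(members, x, width_threshold):
    """Product-of-experts early exit for uniform (finite-support) experts.

    Evaluates ensemble members one at a time, intersecting their
    predictive supports, and stops as soon as the product's support
    is narrower than the confidence threshold.
    """
    lo, hi = float("-inf"), float("inf")
    n_evaluated = 0
    for member in members:
        m_lo, m_hi = member(x)                  # this member's predictive support
        lo, hi = max(lo, m_lo), min(hi, m_hi)   # product support = intersection
        n_evaluated += 1
        if hi <= lo:
            raise ValueError("members disagree: the product's support is empty")
        if hi - lo <= width_threshold:          # confident enough: early exit
            break
    point_estimate = 0.5 * (lo + hi)            # midpoint of remaining support
    return point_estimate, (lo, hi), n_evaluated
```

Because the intersection of supports can only shrink as more members are evaluated, the loop stops no later than the full ensemble would, which is what makes a fixed confidence threshold a valid early-exit rule.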
Author Information
James Allingham (University of Cambridge)
Eric Nalisnick (University of Amsterdam)
More from the Same Authors
- 2021: Bayesian Regression from Multiple Sources of Weak Supervision »
  Putra Manggala · Holger Hoos · Eric Nalisnick
- 2022 Poster: Adapting the Linearised Laplace Model Evidence for Modern Deep Learning »
  Javier Antorán · David Janz · James Allingham · Erik Daxberger · Riccardo Barbano · Eric Nalisnick · Jose Miguel Hernandez-Lobato
- 2022 Spotlight: Adapting the Linearised Laplace Model Evidence for Modern Deep Learning »
  Javier Antorán · David Janz · James Allingham · Erik Daxberger · Riccardo Barbano · Eric Nalisnick · Jose Miguel Hernandez-Lobato
- 2022 Poster: Calibrated Learning to Defer with One-vs-All Classifiers »
  Rajeev Verma · Eric Nalisnick
- 2022 Spotlight: Calibrated Learning to Defer with One-vs-All Classifiers »
  Rajeev Verma · Eric Nalisnick
- 2021 Poster: Bayesian Deep Learning via Subnetwork Inference »
  Erik Daxberger · Eric Nalisnick · James Allingham · Javier Antorán · Jose Miguel Hernandez-Lobato
- 2021 Spotlight: Bayesian Deep Learning via Subnetwork Inference »
  Erik Daxberger · Eric Nalisnick · James Allingham · Javier Antorán · Jose Miguel Hernandez-Lobato
- 2020: Invited talk 2: Detecting Distribution Shift with Deep Generative Models »
  Eric Nalisnick
- 2020: Spotlight Talk 3: Depth Uncertainty in Neural Networks »
  Javier Antorán · James Allingham
- 2019 Poster: Dropout as a Structured Shrinkage Prior »
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Oral: Dropout as a Structured Shrinkage Prior »
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Oral: Hybrid Models with Deep and Invertible Features »
  Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan
- 2019 Poster: Hybrid Models with Deep and Invertible Features »
  Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan