Quantile regression is a fundamental problem in statistical learning, motivated by the need to quantify uncertainty in predictions, or to model a diverse population without being overly reductive. For instance, epidemiological forecasts, cost estimates, and revenue predictions all benefit from the ability to accurately quantify the range of possible values. As such, many models have been developed for this problem over many years of research in statistics, machine learning, and related fields. Rather than proposing yet another algorithm for quantile regression, we adopt a meta viewpoint: we investigate methods for aggregating any number of conditional quantile models in order to improve accuracy and robustness. We consider weighted ensembles where weights may vary not only over individual models, but also over quantile levels and feature values. All of the models we consider in this paper can be fit using modern deep learning toolkits, and hence are widely accessible (from an implementation point of view) and scalable. To improve the accuracy of the predicted quantiles (or equivalently, prediction intervals), we develop tools for ensuring that quantiles remain monotonically ordered, and apply conformal calibration methods. These can be used without any modification of the original library of base models. We also review some basic theory surrounding quantile aggregation and related scoring rules, and contribute a few new results to this literature (for example, the fact that applying sorting or isotonic regression post hoc can only improve the weighted interval score). Finally, we provide an extensive suite of empirical comparisons across 34 data sets from two different benchmark repositories.
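To make the aggregation and post-processing steps concrete, here is a minimal NumPy sketch of the pipeline the abstract describes: a weighted ensemble of base quantile predictions, a sorting step that repairs quantile crossing, and a simplified weighted interval score (omitting the usual median term) for evaluation. The function names, array shapes, and uniform toy weights are our own assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch (not the paper's code) of weighted quantile aggregation,
# post-sorting to repair quantile crossing, and a simplified weighted
# interval score (WIS). Shapes and helper names are assumptions.
import numpy as np

def aggregate_quantiles(preds, weights):
    """Weighted ensemble of quantile predictions.

    preds:   shape (n_models, n_samples, n_levels), base models' predicted
             quantiles at each level.
    weights: shape (n_models,) or (n_models, n_levels), nonnegative and
             summing to 1 over models (may vary per quantile level).
    """
    w = np.asarray(weights, dtype=float)
    if w.ndim == 1:                 # one weight per model
        w = w[:, None, None]
    else:                           # one weight per (model, level)
        w = w[:, None, :]
    return (w * preds).sum(axis=0)

def sort_quantiles(q):
    """Enforce monotone ordering across quantile levels by sorting each row.
    Per the abstract, this post-sorting can only improve the WIS."""
    return np.sort(q, axis=-1)

def weighted_interval_score(y, q, alphas):
    """Simplified WIS from central intervals (median term omitted).

    y:      shape (n_samples,), observed values.
    q:      shape (n_samples, 2 * len(alphas)), quantiles in increasing
            level order: [a/2 for a in alphas] then
            [1 - a/2 for a in reversed(alphas)].
    alphas: increasing miscoverage levels, e.g. [0.1, 0.2, 0.5].
    """
    k = len(alphas)
    score = np.zeros_like(y, dtype=float)
    for i, a in enumerate(alphas):
        lo, hi = q[:, i], q[:, 2 * k - 1 - i]   # the (a/2, 1 - a/2) interval
        width = hi - lo
        penalty = (2 / a) * (np.maximum(lo - y, 0) + np.maximum(y - hi, 0))
        score += (a / 2) * (width + penalty)    # interval score, weight a/2
    return score / k

# Toy usage: three base models, uniform weights, then post-sorting.
rng = np.random.default_rng(0)
preds = rng.normal(size=(3, 100, 6))
ens = sort_quantiles(aggregate_quantiles(preds, np.full(3, 1 / 3)))
y = rng.normal(size=100)
print(weighted_interval_score(y, ens, alphas=[0.1, 0.2, 0.5]).mean())
```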
Author Information
Rasool Fakoor (AWS)
Taesup Kim (Seoul National University)
Jonas Mueller (Cleanlab)
Alexander Smola (Amazon)
Ryan Tibshirani (Carnegie Mellon University)
More from the Same Authors
- 2022: Adaptive Interest for Emphatic Reinforcement Learning
  Martin Klissarov · Rasool Fakoor · Jonas Mueller · Kavosh Asadi · Taesup Kim · Alex Smola
- 2022: Efficient Task Adaptation by Mixing Discovered Skills
  Eunseok Yang · Jungsub Rhim · Taesup Kim
- 2023: Uncertainty-Guided Online Test-Time Adaptation via Meta-Learning
  Kyubyung Chae · Taesup Kim
- 2023: Budgeting Counterfactual for Offline RL
  Yao Liu · Pratik Chaudhari · Rasool Fakoor
- 2023: UOTA: Unsupervised Open-Set Task Adaptation Using a Vision-Language Foundation Model
  Youngjo Min · Kwangrok Ryoo · Bumsoo Kim · Taesup Kim
- 2022 Workshop: Workshop on Distribution-Free Uncertainty Quantification
  Anastasios Angelopoulos · Stephen Bates · Sharon Li · Ryan Tibshirani · Aaditya Ramdas
- 2021 Workshop: Workshop on Distribution-Free Uncertainty Quantification
  Anastasios Angelopoulos · Stephen Bates · Sharon Li · Aaditya Ramdas · Ryan Tibshirani
- 2018 Poster: Detecting and Correcting for Label Shift with Black Box Predictors
  Zachary Lipton · Yu-Xiang Wang · Alexander Smola
- 2018 Oral: Detecting and Correcting for Label Shift with Black Box Predictors
  Zachary Lipton · Yu-Xiang Wang · Alexander Smola