In supervised learning, we leverage a labeled dataset to design methods for function estimation. In many practical situations, we are able to obtain alternative feedback, possibly at a low cost. A broad goal is to understand the usefulness of, and to design algorithms to exploit, this alternative feedback. We focus on a semi-supervised setting where we obtain additional ordinal (or comparison) information for potentially unlabeled samples. We consider ordinal feedback of varying quality, where we have either a perfect ordering of the samples, a noisy ordering of the samples, or noisy pairwise comparisons between the samples. We provide a precise quantification of the usefulness of these types of ordinal feedback in nonparametric regression, showing that in many cases it is possible to accurately estimate an underlying function with a very small labeled set, effectively escaping the curse of dimensionality. We develop an algorithm called Ranking-Regression (RR) and analyze its accuracy as a function of the sizes of the labeled and unlabeled datasets and various noise parameters. We also present lower bounds that establish fundamental limits for the task and show that RR is optimal in a variety of settings. Finally, we present experiments that demonstrate the efficacy of RR and investigate its robustness to various sources of noise and model misspecification.
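To make the setup concrete, the sketch below shows one plausible way to combine a (possibly noisy) ranking of unlabeled samples with a small labeled subset, in the spirit of Ranking-Regression: fit an isotonic (monotone) map from rank to label on the few labeled points, propagate estimates to every rank, and predict at new points by nearest neighbor. This is a minimal illustration under stated assumptions, not the authors' reference implementation; the names `fit_ranking_regression`, `ranking`, and `labeled_idx` are hypothetical.

```python
# Minimal sketch (not the authors' reference implementation) of regression
# aided by an ordinal ranking of n unlabeled samples plus m << n labels.
# Assumes ranks increase with the (unknown) target value.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.neighbors import NearestNeighbors

def fit_ranking_regression(X_unlabeled, ranking, labeled_idx, y_labeled):
    """Return a predictor built from n ranked samples and m labels.

    X_unlabeled : (n, d) array of samples covered by the ordinal feedback.
    ranking     : length-n array; ranking[i] is the rank of sample i under
                  the (possibly noisy) ordering.
    labeled_idx : indices of the m samples whose labels were observed.
    y_labeled   : the m observed labels.
    """
    ranking = np.asarray(ranking)
    # One-dimensional isotonic fit of label against rank: the ordinal
    # feedback lets the m labels constrain estimates at all n ranks.
    iso = IsotonicRegression(out_of_bounds="clip")
    iso.fit(ranking[labeled_idx], y_labeled)
    y_hat = iso.predict(ranking)  # estimated value for every ranked sample

    # New points inherit the estimate of their nearest ranked neighbor.
    nn = NearestNeighbors(n_neighbors=1).fit(X_unlabeled)

    def predict(X_new):
        _, idx = nn.kneighbors(np.atleast_2d(X_new))
        return y_hat[idx.ravel()]

    return predict
```

The design choice illustrates why ordinal information can help: the hard part of estimation is reduced to a one-dimensional monotone fit along the ranking, so the number of labels needed is governed by that one-dimensional problem rather than the ambient dimension d; only the nearest-neighbor lookup touches the d-dimensional space. This is the informal sense in which a small labeled set can suffice once an ordering is available.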
Author Information
Yichong Xu (Carnegie Mellon University)
Hariank Muthakana (Carnegie Mellon University)
Sivaraman Balakrishnan (Carnegie Mellon University)
Aarti Singh (Carnegie Mellon University)
Artur Dubrawski (Carnegie Mellon University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Nonparametric Regression with Comparisons: Escaping the Curse of Dimensionality with Ordinal Information »
  Wed. Jul 11th 09:00 -- 09:20 AM, Room A6
More from the Same Authors
- 2020 : Contributed Talk: Catch Me if I Can: Detecting Strategic Behaviour in Peer Assessment »
  Ivan Stelmakh · Nihar Shah · Aarti Singh
- 2022 : Threshold Bandit Problem with Link Assumption between Pulls and Duels »
  Keshav Narayan · Aarti Singh
- 2022 : Domain Adaptation under Open Set Label Shift »
  Saurabh Garg · Sivaraman Balakrishnan · Zachary Lipton
- 2023 : Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift »
  Saurabh Garg · Amrith Setlur · Zachary Lipton · Sivaraman Balakrishnan · Virginia Smith · Aditi Raghunathan
- 2023 Poster: Weighted Tallying Bandits: Overcoming Intractability via Repeated Exposure Optimality »
  Dhruv Malik · Conor Igoe · Yuanzhi Li · Aarti Singh
- 2023 Poster: RLSbench: Domain Adaptation Under Relaxed Label Shift »
  Saurabh Garg · Nick Erickson · James Sharpnack · Alex Smola · Sivaraman Balakrishnan · Zachary Lipton
- 2023 Poster: The Virtues of Laziness in Model-based RL: A Unified Objective and Algorithms »
  Anirudh Vemula · Yuda Song · Aarti Singh · J. Bagnell · Sanjiban Choudhury
- 2017 Poster: Near-Optimal Design of Experiments via Regret Minimization »
  Zeyuan Allen-Zhu · Yuanzhi Li · Aarti Singh · Yining Wang
- 2017 Talk: Near-Optimal Design of Experiments via Regret Minimization »
  Zeyuan Allen-Zhu · Yuanzhi Li · Aarti Singh · Yining Wang
- 2017 Poster: Uncorrelation and Evenness: a New Diversity-Promoting Regularizer »
  Pengtao Xie · Aarti Singh · Eric Xing
- 2017 Talk: Uncorrelation and Evenness: a New Diversity-Promoting Regularizer »
  Pengtao Xie · Aarti Singh · Eric Xing