We consider the problem of learning the qualities of a collection of items by performing noisy comparisons among them. Following the standard paradigm, we assume there is a fixed "comparison graph" and every neighboring pair of items in this graph is compared k times according to the Bradley-Terry-Luce model (where the probability that an item wins a comparison is proportional to its quality). We are interested in how the relative error in quality estimation scales with the comparison graph in the regime where k is large. We show that, asymptotically, the relevant graph-theoretic quantity is the square root of the resistance of the comparison graph. Specifically, we provide an algorithm whose relative error decay scales with the square root of the graph resistance, and we provide a lower bound showing that (up to log factors) a better scaling is impossible. The performance guarantee of our algorithm, both in terms of the graph and the skewness of the item quality distribution, significantly outperforms earlier results.
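For concreteness, the following is a minimal, illustrative sketch (not the paper's algorithm) of the setup described above: under the Bradley-Terry-Luce model, item i beats item j with probability w_i / (w_i + w_j), and the graph resistance can be computed from the Laplacian pseudoinverse. The path graph, the quality vector w, the value of k, and the reading of "graph resistance" as the maximum pairwise effective resistance are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical comparison graph: a path on 4 items, with true qualities w.
edges = [(0, 1), (1, 2), (2, 3)]
w = np.array([1.0, 2.0, 0.5, 1.5])
n, k = len(w), 1000  # n items, k comparisons per edge

# Bradley-Terry-Luce model: item i beats item j with probability
# w[i] / (w[i] + w[j]); each neighboring pair is compared k times.
wins = {(i, j): rng.binomial(k, w[i] / (w[i] + w[j])) for (i, j) in edges}
print({e: c / k for e, c in wins.items()})  # empirical win fractions

# Graph Laplacian and its pseudoinverse. The effective resistance between
# nodes i and j is (e_i - e_j)^T L^+ (e_i - e_j); here we take the graph
# resistance to be the maximum over all pairs (an illustrative choice).
L = np.zeros((n, n))
for (i, j) in edges:
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0
Lp = np.linalg.pinv(L)
resistance = max(Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]
                 for i in range(n) for j in range(i + 1, n))
print("graph resistance:", resistance)  # 3.0 for this path graph
```

On this path graph the resistance is 3, attained between the two endpoints; a denser comparison graph (e.g., the complete graph on the same items) has much smaller resistance, and by the result above this yields correspondingly faster decay of the relative estimation error.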
Author Information
Julien Hendrickx (Université catholique de Louvain)
Alex Olshevsky (Boston University)
Venkatesh Saligrama (Boston University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Graph Resistance and Learning from Pairwise Comparisons
  Thu. Jun 13th, 06:40 – 07:00 PM, Room 201
More from the Same Authors
- 2022 : Strategies for Safe Multi-Armed Bandits with Logarithmic Regret and Risk
  Tianrui Chen · Aditya Gangrade · Venkatesh Saligrama
- 2022 : ActiveHedge: Hedge meets Active Learning
  Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama
- 2022 : Acting Optimistically in Choosing Safe Actions
  Tianrui Chen · Aditya Gangrade · Venkatesh Saligrama
- 2022 : Achieving High TinyML Accuracy through Selective Cloud Interactions
  Anil Kag · Igor Fedorov · Aditya Gangrade · Paul Whatmough · Venkatesh Saligrama
- 2022 : FedHeN: Federated Learning in Heterogeneous Networks
  Durmus Alp Emre Acar · Venkatesh Saligrama
- 2022 Poster: Strategies for Safe Multi-Armed Bandits with Logarithmic Regret and Risk
  Tianrui Chen · Aditya Gangrade · Venkatesh Saligrama
- 2022 Spotlight: Strategies for Safe Multi-Armed Bandits with Logarithmic Regret and Risk
  Tianrui Chen · Aditya Gangrade · Venkatesh Saligrama
- 2022 Poster: Faster Algorithms for Learning Convex Functions
  Ali Siahkamari · Durmus Alp Emre Acar · Christopher Liao · Kelly Geyer · Venkatesh Saligrama · Brian Kulis
- 2022 Poster: ActiveHedge: Hedge meets Active Learning
  Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama
- 2022 Spotlight: ActiveHedge: Hedge meets Active Learning
  Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama
- 2022 Spotlight: Faster Algorithms for Learning Convex Functions
  Ali Siahkamari · Durmus Alp Emre Acar · Christopher Liao · Kelly Geyer · Venkatesh Saligrama · Brian Kulis
- 2021 Poster: Debiasing Model Updates for Improving Personalized Federated Training
  Durmus Alp Emre Acar · Yue Zhao · Ruizhao Zhu · Ramon Matas · Matthew Mattina · Paul Whatmough · Venkatesh Saligrama
- 2021 Spotlight: Debiasing Model Updates for Improving Personalized Federated Training
  Durmus Alp Emre Acar · Yue Zhao · Ruizhao Zhu · Ramon Matas · Matthew Mattina · Paul Whatmough · Venkatesh Saligrama
- 2021 Poster: Memory Efficient Online Meta Learning
  Durmus Alp Emre Acar · Ruizhao Zhu · Venkatesh Saligrama
- 2021 Poster: Temporal Difference Learning as Gradient Splitting
  Rui Liu · Alex Olshevsky
- 2021 Oral: Temporal Difference Learning as Gradient Splitting
  Rui Liu · Alex Olshevsky
- 2021 Spotlight: Memory Efficient Online Meta Learning
  Durmus Alp Emre Acar · Ruizhao Zhu · Venkatesh Saligrama
- 2021 Poster: Training Recurrent Neural Networks via Forward Propagation Through Time
  Anil Kag · Venkatesh Saligrama
- 2021 Spotlight: Training Recurrent Neural Networks via Forward Propagation Through Time
  Anil Kag · Venkatesh Saligrama
- 2020 Poster: Piecewise Linear Regression via a Difference of Convex Functions
  Ali Siahkamari · Aditya Gangrade · Brian Kulis · Venkatesh Saligrama
- 2020 Poster: Minimax Rate for Learning From Pairwise Comparisons in the BTL Model
  Julien Hendrickx · Alex Olshevsky · Venkatesh Saligrama
- 2019 Poster: Learning Classifiers for Target Domain with Limited or No Labels
  Pengkai Zhu · Hanxiao Wang · Venkatesh Saligrama
- 2019 Oral: Learning Classifiers for Target Domain with Limited or No Labels
  Pengkai Zhu · Hanxiao Wang · Venkatesh Saligrama
- 2018 Poster: Gradient Descent for Sparse Rank-One Matrix Completion for Crowd-Sourced Aggregation of Sparsely Interacting Workers
  Yao Ma · Alex Olshevsky · Csaba Szepesvari · Venkatesh Saligrama
- 2018 Oral: Gradient Descent for Sparse Rank-One Matrix Completion for Crowd-Sourced Aggregation of Sparsely Interacting Workers
  Yao Ma · Alex Olshevsky · Csaba Szepesvari · Venkatesh Saligrama
- 2017 Workshop: ML on a budget: IoT, Mobile and other tiny-ML applications
  Manik Varma · Venkatesh Saligrama · Prateek Jain
- 2017 Poster: Adaptive Neural Networks for Efficient Inference
  Tolga Bolukbasi · Joseph Wang · Ofer Dekel · Venkatesh Saligrama
- 2017 Talk: Adaptive Neural Networks for Efficient Inference
  Tolga Bolukbasi · Joseph Wang · Ofer Dekel · Venkatesh Saligrama
- 2017 Poster: Connected Subgraph Detection with Mirror Descent on SDPs
  Cem Aksoylar · Lorenzo Orecchia · Venkatesh Saligrama
- 2017 Talk: Connected Subgraph Detection with Mirror Descent on SDPs
  Cem Aksoylar · Lorenzo Orecchia · Venkatesh Saligrama