Understanding the complex dynamics of two-sided online matching markets, in which demand-side agents compete to match with the supply side (arms), has recently received substantial interest. To that end, in this paper we introduce a framework for decentralized two-sided matching markets under non-stationary (dynamic) environments. We adhere to the serial dictatorship setting, where the demand-side agents have unknown and heterogeneous preferences over the supply-side arms, while the arms have fixed and known preferences over the agents. We propose and analyze an asynchronous and decentralized learning algorithm, Non-Stationary Competing Bandits (\texttt{NSCB}), in which the agents run (restrictive) successive-elimination-type learning algorithms to learn their preferences over the arms. The difficulty in analyzing such a system stems from the fact that the competing bandits choose their actions asynchronously, and the lower-ranked agents can only learn from the set of arms not \emph{dominated} by the higher-ranked agents, which leads to \emph{forced exploration}. With carefully defined complexity parameters, we characterize this \emph{forced exploration} and obtain a sub-linear (logarithmic) regret bound for \texttt{NSCB}. Furthermore, we validate our theoretical findings via experiments.
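To make the abstract's building block concrete, the following is a minimal, hypothetical sketch of a successive-elimination-style routine for a single agent restricted to a set of available (non-dominated) arms. It is not the paper's \texttt{NSCB} algorithm (which is asynchronous, multi-agent, and handles non-stationarity); the arm means, horizon, and Hoeffding-style confidence radius below are illustrative choices only.

```python
# Illustrative sketch only (NOT the paper's NSCB algorithm): successive
# elimination for one agent over its available (non-dominated) arm set.
import math
import random

def successive_elimination(means, horizon, seed=0):
    """Pull every active arm once per round; eliminate an arm when its
    upper confidence bound falls below the best lower confidence bound."""
    rng = random.Random(seed)
    active = list(range(len(means)))
    counts = [0] * len(means)
    sums = [0.0] * len(means)
    t = 0
    while t < horizon and len(active) > 1:
        for arm in list(active):
            if t >= horizon:
                break
            # Bernoulli reward drawn with the arm's (unknown) mean.
            sums[arm] += 1.0 if rng.random() < means[arm] else 0.0
            counts[arm] += 1
            t += 1
        # Hoeffding-style confidence radius (a standard, hypothetical choice).
        rad = lambda a: math.sqrt(2.0 * math.log(horizon) / counts[a])
        best_lcb = max(sums[a] / counts[a] - rad(a) for a in active)
        active = [a for a in active
                  if sums[a] / counts[a] + rad(a) >= best_lcb]
    # Return the empirically best surviving arm.
    return max(active, key=lambda a: sums[a] / counts[a])

print(successive_elimination([0.2, 0.5, 0.9], horizon=5000))
```

Note that the arm attaining the best lower confidence bound always survives its own elimination test, so the active set never empties. In the decentralized market setting of the paper, a lower-ranked agent would run such a routine only over the arms left undominated by higher-ranked agents, which is the source of the \emph{forced exploration} discussed above.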
Author Information
Avishek Ghosh (University of California, San Diego)
Abishek Sankararaman (Amazon Web Services)
Kannan Ramchandran (UC Berkeley)
Tara Javidi (University of California, San Diego)
Arya Mazumdar (University of California, San Diego)
More from the Same Authors
2023: A Convergent Federated Clustering Algorithm without Initial Condition
Harsh Vardhan · Avishek Ghosh · Arya Mazumdar

2023: $\texttt{FED-CURE}$: A Robust Federated Learning Algorithm with Cubic Regularized Newton
Avishek Ghosh · Raj Kumar Maity · Arya Mazumdar

2023: Two-Sided Bandit Learning in Fully-Decentralized Matching Markets
Tejas Pagare · Avishek Ghosh

2022 Poster: Breaking the $\sqrt{T}$ Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits
Avishek Ghosh · Abishek Sankararaman

2022 Spotlight: Breaking the $\sqrt{T}$ Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits
Avishek Ghosh · Abishek Sankararaman

2022 Poster: On Learning Mixture of Linear Regressions in the Non-Realizable Setting
Soumyabrata Pal · Arya Mazumdar · Rajat Sen · Avishek Ghosh

2022 Poster: FITNESS: (Fine Tune on New and Similar Samples) to detect anomalies in streams with drift and outliers
Abishek Sankararaman · Balakrishnan Narayanaswamy · Vikramank Singh · Zhao Song

2022 Poster: Neurotoxin: Durable Backdoors in Federated Learning
Zhengming Zhang · Ashwinee Panda · Linyue Song · Yaoqing Yang · Michael Mahoney · Prateek Mittal · Kannan Ramchandran · Joseph E Gonzalez

2022 Spotlight: On Learning Mixture of Linear Regressions in the Non-Realizable Setting
Soumyabrata Pal · Arya Mazumdar · Rajat Sen · Avishek Ghosh

2022 Spotlight: Neurotoxin: Durable Backdoors in Federated Learning
Zhengming Zhang · Ashwinee Panda · Linyue Song · Yaoqing Yang · Michael Mahoney · Prateek Mittal · Kannan Ramchandran · Joseph E Gonzalez

2022 Spotlight: FITNESS: (Fine Tune on New and Similar Samples) to detect anomalies in streams with drift and outliers
Abishek Sankararaman · Balakrishnan Narayanaswamy · Vikramank Singh · Zhao Song

2021 Poster: Beyond $\log^2(T)$ regret for decentralized bandits in matching markets
Soumya Basu · Karthik Abinav Sankararaman · Abishek Sankararaman

2021 Spotlight: Beyond $\log^2(T)$ regret for decentralized bandits in matching markets
Soumya Basu · Karthik Abinav Sankararaman · Abishek Sankararaman

2019 Poster: Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning
Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett

2019 Oral: Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning
Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett

2019 Poster: Rademacher Complexity for Adversarially Robust Generalization
Dong Yin · Kannan Ramchandran · Peter Bartlett

2019 Oral: Rademacher Complexity for Adversarially Robust Generalization
Dong Yin · Kannan Ramchandran · Peter Bartlett

2018 Poster: Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett

2018 Oral: Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett

2017 Poster: The Sample Complexity of Online One-Class Collaborative Filtering
Reinhard Heckel · Kannan Ramchandran

2017 Talk: The Sample Complexity of Online One-Class Collaborative Filtering
Reinhard Heckel · Kannan Ramchandran