The past decade has witnessed the flourishing of a new profession of media content creators, who rely on revenue streams from online content recommendation platforms. The rewarding mechanisms employed by these platforms create a competitive environment among creators that affects their production choices and, consequently, content distribution and system welfare. In this work, we uncover a fundamental limitation of a class of widely adopted mechanisms, coined Merit-based Monotone Mechanisms, by showing that they inevitably lead to a constant-fraction loss of welfare. To circumvent this limitation, we introduce Backward Rewarding Mechanisms (BRMs) and show that the competition game induced by a BRM possesses a potential game structure, which naturally steers the strategic creators' behavior dynamics toward optimizing any given welfare metric. In addition, the class of BRMs can be parameterized so that the platform can directly optimize welfare within the feasible mechanism space even when the welfare metric is not explicitly defined.
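To make the potential-game intuition concrete, below is a minimal, hypothetical sketch (not the paper's BRM construction): creators choose among a finite set of topics, and each is rewarded by its marginal contribution to a concave welfare metric. Under such marginal-contribution rewards, welfare is an exact potential of the game, so every strictly improving best response raises welfare and the dynamics converge to a pure Nash equilibrium. All names and parameters are illustrative.

```python
import math
import random

# Hypothetical toy example of a potential game among content creators.
# Each creator picks one of N_TOPICS topics; welfare has diminishing returns
# for crowding many creators onto the same topic.  Rewarding each creator by
# its marginal contribution to welfare makes welfare an exact potential, so
# best-response dynamics monotonically increase welfare.

N_CREATORS, N_TOPICS = 8, 4

def welfare(choices):
    """Welfare metric: concave (sqrt) value of the number of creators per topic."""
    return sum(math.sqrt(choices.count(t)) for t in range(N_TOPICS))

def reward(i, choices):
    """Marginal-contribution reward for creator i: welfare with i minus welfare without i."""
    without_i = choices[:i] + choices[i + 1:]
    w_without = sum(math.sqrt(without_i.count(t)) for t in range(N_TOPICS))
    return welfare(choices) - w_without

def best_response_dynamics(seed=0, max_rounds=100):
    rng = random.Random(seed)
    choices = [rng.randrange(N_TOPICS) for _ in range(N_CREATORS)]
    for _ in range(max_rounds):
        improved = False
        for i in range(N_CREATORS):
            current = reward(i, choices)
            # Creator i's best response, holding all other creators fixed.
            best_topic = max(range(N_TOPICS),
                             key=lambda t: reward(i, choices[:i] + [t] + choices[i + 1:]))
            best = reward(i, choices[:i] + [best_topic] + choices[i + 1:])
            if best > current + 1e-12:  # switch only on a strict improvement
                choices[i] = best_topic
                improved = True
                print(f"creator {i} -> topic {best_topic}, welfare = {welfare(choices):.3f}")
        if not improved:  # no creator can improve: pure Nash equilibrium reached
            break
    return choices

if __name__ == "__main__":
    best_response_dynamics()
```

Because each creator's reward change equals the welfare change, welfare can only increase as creators adjust their strategies; this is the alignment property the abstract attributes to BRMs, although the sketch uses a generic marginal-contribution reward rather than the paper's mechanism.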
Author Information
Fan Yao (University of Virginia)
Chuanhao Li (University of Virginia)
Karthik Abinav Sankararaman (Facebook)
Yiming Liao (Meta)
Yan Zhu (Google)
Qifan Wang (Meta AI)
Hongning Wang (University of Virginia)
I am an associate professor in the Department of Computer Science at the University of Virginia. My research interests include data mining, machine learning, and information retrieval, with a special emphasis on computational user behavior modeling.
Haifeng Xu (University of Chicago)
More from the Same Authors
- 2023: Inverse Game Theory for Stackelberg Games: the Blessing of Bounded Rationality
  Jibang Wu · Weiran Shen · Fei Fang · Haifeng Xu
- 2023: Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation
  Thomas Kleine Büning · Aadirupa Saha · Christos Dimitrakakis · Haifeng Xu
- 2023: Follow-ups Also Matter: Improving Contextual Bandits via Post-serving Contexts
  Chaoqi Wang · Ziyu Ye · Zhe Feng · Ashwinkumar Badanidiyuru · Haifeng Xu
- 2023: Learning from a Learning User for Optimal Recommendations
  Fan Yao · Chuanhao Li · Denis Nekipelov · Hongning Wang · Haifeng Xu
- 2023 Oral: How Bad is Top-$K$ Recommendation under Competing Content Creators?
  Fan Yao · Chuanhao Li · Denis Nekipelov · Hongning Wang · Haifeng Xu
- 2023 Poster: How Bad is Top-$K$ Recommendation under Competing Content Creators?
  Fan Yao · Chuanhao Li · Denis Nekipelov · Hongning Wang · Haifeng Xu
- 2022 Poster: When Are Linear Stochastic Bandits Attackable?
  Huazheng Wang · Haifeng Xu · Hongning Wang
- 2022 Poster: Learning from a Learning User for Optimal Recommendations
  Fan Yao · Chuanhao Li · Denis Nekipelov · Hongning Wang · Haifeng Xu
- 2022 Spotlight: Learning from a Learning User for Optimal Recommendations
  Fan Yao · Chuanhao Li · Denis Nekipelov · Hongning Wang · Haifeng Xu
- 2022 Spotlight: When Are Linear Stochastic Bandits Attackable?
  Huazheng Wang · Haifeng Xu · Hongning Wang
- 2021 Poster: Beyond $\log^2(T)$ regret for decentralized bandits in matching markets
  Soumya Basu · Karthik Abinav Sankararaman · Abishek Sankararaman
- 2021 Spotlight: Beyond $\log^2(T)$ regret for decentralized bandits in matching markets
  Soumya Basu · Karthik Abinav Sankararaman · Abishek Sankararaman
- 2021 Poster: PAC-Learning for Strategic Classification
  Ravi Sundaram · Anil Vullikanti · Haifeng Xu · Fan Yao
- 2021 Oral: PAC-Learning for Strategic Classification
  Ravi Sundaram · Anil Vullikanti · Haifeng Xu · Fan Yao
- 2020 Poster: The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent
  Karthik Abinav Sankararaman · Soham De · Zheng Xu · W. Ronny Huang · Tom Goldstein