Poster
Pairwise Ranking Losses of Click-Through Rates Prediction for Welfare Maximization in Ad Auctions
Boxiang Lyu · Zhe Feng · Zach Robertson · Sanmi Koyejo
We study the design of loss functions for click-through rate (CTR) prediction to optimize (social) welfare in advertising auctions. Existing works either focus only on CTR predictions without consideration of business objectives (e.g., welfare) in auctions, or assume that the distribution over the participants' expected cost-per-impression (eCPM) is known a priori and use various additional assumptions on the parametric form of the distribution to derive loss functions for predicting CTRs. In this work, we bring the welfare objectives of ad auctions back into CTR prediction and propose a novel weighted rank loss to train the CTR model. Compared to existing literature, our approach provides a provable welfare guarantee without assumptions on the eCPM distribution, while also avoiding the intractability of naively applying existing learning-to-rank methods. Further, we propose a theoretically justifiable technique for calibrating the losses using labels generated from a teacher network, assuming only that the teacher network has bounded $\ell_2$ generalization error. Finally, we demonstrate the advantages of the proposed loss on synthetic and real-world data.
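The weighted rank loss idea can be illustrated with a generic RankNet-style sketch: score each ad by its predicted eCPM (bid × predicted CTR) and penalize misranked pairs, with pair weights reflecting the welfare lost by a misordering. The specific weighting below (absolute true-eCPM gap) and function name are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def weighted_pairwise_rank_loss(pred_ctr, bids, true_ctr):
    """Illustrative welfare-weighted pairwise ranking loss for CTR models.

    For each ad pair (i, j), the predicted eCPMs (bid * predicted CTR)
    should rank in the same order as the true eCPMs; pairs whose
    misordering would cost more welfare receive larger weights. This is
    a generic RankNet-style sketch, not the paper's exact formulation.
    """
    pred_ecpm = bids * pred_ctr           # predicted eCPM per ad
    true_ecpm = bids * true_ctr           # ground-truth eCPM per ad

    # Score and label differences over all ordered pairs (i, j).
    diff_pred = pred_ecpm[:, None] - pred_ecpm[None, :]
    diff_true = true_ecpm[:, None] - true_ecpm[None, :]

    # Welfare-motivated weight (assumed for illustration): misranking a
    # pair with a larger true-eCPM gap loses more welfare.
    weights = np.abs(diff_true)
    labels = (diff_true > 0).astype(float)

    # Numerically stable logistic (RankNet-style) pairwise surrogate,
    # i.e. binary cross-entropy with logits on the predicted score gap.
    pair_loss = (np.maximum(diff_pred, 0.0) - diff_pred * labels
                 + np.log1p(np.exp(-np.abs(diff_pred))))
    return float((weights * pair_loss).sum() / max(weights.sum(), 1e-8))
```

Because ties get zero weight, the loss depends only on pairs whose true eCPMs actually differ, and predictions that preserve the true eCPM ordering incur a strictly smaller loss than reversed ones.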
Author Information
Boxiang Lyu (University of Chicago Booth School of Business)
Zhe Feng (Google Research)
Zach Robertson (Stanford University)
Sanmi Koyejo (Stanford University)
More from the Same Authors
- 2023: Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks »
  Zach Robertson · Sanmi Koyejo
- 2023: FACADE: A Framework for Adversarial Circuit Anomaly Detection and Evaluation »
  Dhruv Pai · Andres Carranza · Rylan Schaeffer · Arnuv Tandon · Sanmi Koyejo
- 2023: Leveraging Side Information for Communication-Efficient Federated Learning »
  Berivan Isik · Francesco Pase · Deniz Gunduz · Sanmi Koyejo · Tsachy Weissman · Michele Zorzi
- 2023: GPT-Zip: Deep Compression of Finetuned Large Language Models »
  Berivan Isik · Hermann Kumbong · Wanyi Ning · Xiaozhe Yao · Sanmi Koyejo · Ce Zhang
- 2023: Beyond Scale: the Diversity Coefficient as a Data Quality Metric Demonstrates LLMs are Pre-trained on Formally Diverse Data »
  Alycia Lee · Brando Miranda · Sanmi Koyejo
- 2023: Are Emergent Abilities of Large Language Models a Mirage? »
  Rylan Schaeffer · Brando Miranda · Sanmi Koyejo
- 2023: Thomas: Learning to Explore Human Preference via Probabilistic Reward Model »
  Sang Truong · Duc Nguyen · Tho Quan · Sanmi Koyejo
- 2023: Follow-ups Also Matter: Improving Contextual Bandits via Post-serving Contexts »
  Chaoqi Wang · Ziyu Ye · Zhe Feng · Ashwinkumar Badanidiyuru · Haifeng Xu
- 2023: On learning domain general predictors »
  Sanmi Koyejo
- 2023: Deceptive Alignment Monitoring »
  Andres Carranza · Dhruv Pai · Rylan Schaeffer · Arnuv Tandon · Sanmi Koyejo
- 2023: Vignettes on Pairwise-Feedback Mechanisms for Learning with Uncertain Preferences »
  Sanmi Koyejo
- 2023 Poster: Addressing Budget Allocation and Revenue Allocation in Data Market Environments Using an Adaptive Sampling Algorithm »
  Boxin Zhao · Boxiang Lyu · Raul Castro Fernandez · Mladen Kolar
- 2023 Poster: Improved Online Learning Algorithms for CTR Prediction in Ad Auctions »
  Zhe Feng · Christopher Liaw · Zixin Zhou
- 2022 Poster: A Context-Integrated Transformer-Based Neural Network for Auction Design »
  Zhijian Duan · Jingwu Tang · Yutong Yin · Zhe Feng · Xiang Yan · Manzil Zaheer · Xiaotie Deng
- 2022 Spotlight: A Context-Integrated Transformer-Based Neural Network for Auction Design »
  Zhijian Duan · Jingwu Tang · Yutong Yin · Zhe Feng · Xiang Yan · Manzil Zaheer · Xiaotie Deng
- 2022 Poster: Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning »
  Boxiang Lyu · Zhaoran Wang · Mladen Kolar · Zhuoran Yang
- 2022 Spotlight: Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning »
  Boxiang Lyu · Zhaoran Wang · Mladen Kolar · Zhuoran Yang