Poster
Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence
Zi-Hao Qiu · Quanqi Hu · Yongjian Zhong · Lijun Zhang · Tianbao Yang
NDCG (Normalized Discounted Cumulative Gain) is a widely used ranking metric in information retrieval and machine learning. However, efficient and provable stochastic methods for maximizing NDCG are still lacking, especially for deep models. In this paper, we propose a principled approach to optimizing NDCG and its top-$K$ variant. First, we formulate a novel compositional optimization problem for optimizing the NDCG surrogate, and a novel bilevel compositional optimization problem for optimizing the top-$K$ NDCG surrogate. Then, we develop efficient stochastic algorithms with provable convergence guarantees for these non-convex objectives. Unlike existing NDCG optimization methods, the per-iteration complexity of our algorithms scales with the mini-batch size instead of the total number of items. To improve effectiveness for deep learning, we further propose practical strategies based on an initial warm-up and a stop-gradient operator. Experimental results on multiple datasets demonstrate that our methods outperform prior ranking approaches in terms of NDCG. To the best of our knowledge, this is the first time that stochastic algorithms have been proposed to optimize NDCG with a provable convergence guarantee. Our proposed methods are implemented in the LibAUC library at https://libauc.org.
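As a rough illustration of the kind of smoothed NDCG surrogate the abstract refers to, the sketch below relaxes each item's rank with sigmoids so that NDCG becomes differentiable and can be trained with standard gradient methods in PyTorch. It is not the paper's algorithm or the LibAUC API; the function name, the temperature `tau`, and the single-query usage are illustrative assumptions.

```python
# Minimal sketch of a sigmoid-smoothed NDCG surrogate (illustrative, not the paper's method).
import torch

def smoothed_ndcg_loss(scores, labels, tau=1.0):
    """scores, labels: 1-D tensors of model scores and relevance labels for one query."""
    # Relax the rank: approx_rank(i) = 0.5 + sum_j sigmoid((s_j - s_i) / tau);
    # the j = i term contributes sigmoid(0) = 0.5, so the relaxed rank of the
    # highest-scored item approaches 1 as score gaps grow.
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)      # diff[j, i] = s_j - s_i
    approx_rank = 0.5 + torch.sigmoid(diff / tau).sum(dim=0)
    gains = torch.pow(2.0, labels) - 1.0
    dcg = (gains / torch.log2(1.0 + approx_rank)).sum()
    # Ideal DCG: gains sorted in decreasing order with exact ranks 1..n (constant w.r.t. scores).
    sorted_gains, _ = torch.sort(gains, descending=True)
    ideal_ranks = torch.arange(1, labels.numel() + 1, dtype=scores.dtype, device=scores.device)
    idcg = (sorted_gains / torch.log2(1.0 + ideal_ranks)).sum()
    return 1.0 - dcg / idcg.clamp_min(1e-8)               # minimize 1 - smoothed NDCG

# Illustrative usage: one query with 8 candidate items and graded relevance in {0, 1, 2}.
scores = torch.randn(8, requires_grad=True)
labels = torch.randint(0, 3, (8,)).float()
loss = smoothed_ndcg_loss(scores, labels)
loss.backward()
```

This toy loss evaluates all items of a query at once; the point of the compositional and bilevel formulations described above is to obtain stochastic updates whose per-iteration cost scales with a sampled mini-batch rather than the total number of items, with provable convergence.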
Author Information
Zi-Hao Qiu (Nanjing University)
Quanqi Hu (University of Iowa)
Yongjian Zhong (The University of Iowa)
Lijun Zhang (Nanjing University)
Tianbao Yang (The University of Iowa)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence »
  Wed. Jul 20th 08:45 -- 08:50 PM, Room 327 - 329
More from the Same Authors
- 2023 Poster: Provable Multi-instance Deep AUC Maximization with Stochastic Pooling »
  Dixian Zhu · Bokun Wang · Zhi Chen · Yaxing Wang · Milan Sonka · Xiaodong Wu · Tianbao Yang
- 2023 Poster: Label Distributionally Robust Losses for Multi-class Classification: Consistency, Robustness and Adaptivity »
  Dixian Zhu · Yiming Ying · Tianbao Yang
- 2023 Poster: Generalization Analysis for Contrastive Representation Learning »
  Yunwen Lei · Tianbao Yang · Yiming Ying · Ding-Xuan Zhou
- 2023 Poster: Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization »
  Zi-Hao Qiu · Quanqi Hu · Zhuoning Yuan · Denny Zhou · Lijun Zhang · Tianbao Yang
- 2023 Poster: Learning Unnormalized Statistical Models via Compositional Optimization »
  Wei Jiang · Jiayu Qin · Lingyu Wu · Changyou Chen · Tianbao Yang · Lijun Zhang
- 2023 Poster: Blockwise Stochastic Variance-Reduced Methods with Parallel Speedup for Multi-Block Bilevel Optimization »
  Quanqi Hu · Zi-Hao Qiu · Zhishuai Guo · Lijun Zhang · Tianbao Yang
- 2023 Poster: FeDXL: Provable Federated Learning for Deep X-Risk Optimization »
  Zhishuai Guo · Rong Jin · Jiebo Luo · Tianbao Yang
- 2022 Poster: Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance »
  Zhuoning Yuan · Yuexin Wu · Zi-Hao Qiu · Xianzhi Du · Lijun Zhang · Denny Zhou · Tianbao Yang
- 2022 Poster: A Simple yet Universal Strategy for Online Convex Optimization »
  Lijun Zhang · Guanghui Wang · Jinfeng Yi · Tianbao Yang
- 2022 Oral: A Simple yet Universal Strategy for Online Convex Optimization »
  Lijun Zhang · Guanghui Wang · Jinfeng Yi · Tianbao Yang
- 2022 Spotlight: Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance »
  Zhuoning Yuan · Yuexin Wu · Zi-Hao Qiu · Xianzhi Du · Lijun Zhang · Denny Zhou · Tianbao Yang
- 2022 Poster: GraphFM: Improving Large-Scale GNN Training via Feature Momentum »
  Haiyang Yu · Limei Wang · Bokun Wang · Meng Liu · Tianbao Yang · Shuiwang Ji
- 2022 Poster: Optimal Algorithms for Stochastic Multi-Level Compositional Optimization »
  Wei Jiang · Bokun Wang · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Poster: Finite-Sum Coupled Compositional Stochastic Optimization: Theory and Applications »
  Bokun Wang · Tianbao Yang
- 2022 Spotlight: GraphFM: Improving Large-Scale GNN Training via Feature Momentum »
  Haiyang Yu · Limei Wang · Bokun Wang · Meng Liu · Tianbao Yang · Shuiwang Ji
- 2022 Spotlight: Finite-Sum Coupled Compositional Stochastic Optimization: Theory and Applications »
  Bokun Wang · Tianbao Yang
- 2022 Spotlight: Optimal Algorithms for Stochastic Multi-Level Compositional Optimization »
  Wei Jiang · Bokun Wang · Yibo Wang · Lijun Zhang · Tianbao Yang
- 2022 Poster: When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee »
  Dixian Zhu · Gang Li · Bokun Wang · Xiaodong Wu · Tianbao Yang
- 2022 Spotlight: When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee »
  Dixian Zhu · Gang Li · Bokun Wang · Xiaodong Wu · Tianbao Yang
- 2021 Poster: Stability and Generalization of Stochastic Gradient Methods for Minimax Problems »
  Yunwen Lei · Zhenhuan Yang · Tianbao Yang · Yiming Ying
- 2021 Oral: Stability and Generalization of Stochastic Gradient Methods for Minimax Problems »
  Yunwen Lei · Zhenhuan Yang · Tianbao Yang · Yiming Ying
- 2021 Poster: Federated Deep AUC Maximization for Hetergeneous Data with a Constant Communication Complexity »
  Zhuoning Yuan · Zhishuai Guo · Yi Xu · Yiming Ying · Tianbao Yang
- 2021 Spotlight: Federated Deep AUC Maximization for Hetergeneous Data with a Constant Communication Complexity »
  Zhuoning Yuan · Zhishuai Guo · Yi Xu · Yiming Ying · Tianbao Yang
- 2020 Poster: Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks »
  Zhishuai Guo · Mingrui Liu · Zhuoning Yuan · Li Shen · Wei Liu · Tianbao Yang
- 2020 Poster: Quadratically Regularized Subgradient Methods for Weakly Convex Optimization with Weakly Convex Constraints »
  Runchao Ma · Qihang Lin · Tianbao Yang
- 2020 Poster: Stochastic Optimization for Non-convex Inf-Projection Problems »
  Yan Yan · Yi Xu · Lijun Zhang · Wang Xiaoyu · Tianbao Yang
- 2019 Poster: Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence »
  Yi Xu · Qi Qi · Qihang Lin · Rong Jin · Tianbao Yang
- 2019 Oral: Stochastic Optimization for DC Functions and Non-smooth Non-convex Regularizers with Non-asymptotic Convergence »
  Yi Xu · Qi Qi · Qihang Lin · Rong Jin · Tianbao Yang
- 2019 Poster: Katalyst: Boosting Convex Katayusha for Non-Convex Problems with a Large Condition Number »
  Zaiyi Chen · Yi Xu · Haoyuan Hu · Tianbao Yang
- 2019 Oral: Katalyst: Boosting Convex Katayusha for Non-Convex Problems with a Large Condition Number »
  Zaiyi Chen · Yi Xu · Haoyuan Hu · Tianbao Yang
- 2018 Poster: Dynamic Regret of Strongly Adaptive Methods »
  Lijun Zhang · Tianbao Yang · Rong Jin · Zhi-Hua Zhou
- 2018 Poster: SADAGRAD: Strongly Adaptive Stochastic Gradient Methods »
  Zaiyi Chen · Yi Xu · Enhong Chen · Tianbao Yang
- 2018 Poster: Level-Set Methods for Finite-Sum Constrained Convex Optimization »
  Qihang Lin · Runchao Ma · Tianbao Yang
- 2018 Oral: Level-Set Methods for Finite-Sum Constrained Convex Optimization »
  Qihang Lin · Runchao Ma · Tianbao Yang
- 2018 Oral: SADAGRAD: Strongly Adaptive Stochastic Gradient Methods »
  Zaiyi Chen · Yi Xu · Enhong Chen · Tianbao Yang
- 2018 Oral: Dynamic Regret of Strongly Adaptive Methods »
  Lijun Zhang · Tianbao Yang · Rong Jin · Zhi-Hua Zhou
- 2018 Poster: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate »
  Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang
- 2018 Oral: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate »
  Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang
- 2017 Poster: Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence »
  Yi Xu · Qihang Lin · Tianbao Yang
- 2017 Poster: A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates »
  Tianbao Yang · Qihang Lin · Lijun Zhang
- 2017 Talk: A Richer Theory of Convex Constrained Optimization with Reduced Projections and Improved Rates »
  Tianbao Yang · Qihang Lin · Lijun Zhang
- 2017 Talk: Stochastic Convex Optimization: Faster Local Growth Implies Faster Global Convergence »
  Yi Xu · Qihang Lin · Tianbao Yang