Recently, compositional optimization (CO) has gained popularity because of its applications in distributionally robust optimization (DRO) and many other machine learning problems. Often, (non-smooth) regularization terms are added to an objective to impose structure and/or improve the generalization performance of the learned model. However, when it comes to CO, there is a lack of efficient algorithms for solving regularized CO problems. Moreover, current state-of-the-art methods for such problems rely on computing large batch gradients (with batch sizes depending on the solution accuracy), which is infeasible in most practical settings. To address these challenges, in this work we consider a regularized version of the CO problem that often arises in DRO formulations and develop a proximal algorithm for solving it. We perform a Moreau envelope-based analysis and establish that, without the need to compute large batch gradients, our proposed algorithm achieves $\mathcal{O}(\epsilon^{-2})$ sample complexity, matching the vanilla SGD guarantees for solving non-CO problems. We corroborate our theoretical findings with empirical studies on large-scale DRO problems.
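For context, a generic regularized compositional problem and the Moreau envelope used in such analyses take the following standard forms (a sketch of the general setup, not necessarily the paper's exact formulation):

$$\min_{x \in \mathbb{R}^d} \; F(x) := f\big(g(x)\big) + r(x), \qquad g(x) = \mathbb{E}_{\xi}\big[g_{\xi}(x)\big], \quad f(y) = \mathbb{E}_{\zeta}\big[f_{\zeta}(y)\big],$$

where $r$ is a (possibly non-smooth) regularizer. The Moreau envelope of $F$ with parameter $\lambda > 0$ is $F_{\lambda}(x) := \min_{y} \big\{ F(y) + \tfrac{1}{2\lambda}\|y - x\|^2 \big\}$; near-stationarity is typically measured by $\|\nabla F_{\lambda}(x)\| \le \epsilon$, and an $\mathcal{O}(\epsilon^{-2})$ sample complexity means such a point is reached using on the order of $\epsilon^{-2}$ stochastic samples.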
Author Information
Prashant Khanduri (Wayne State University)
Chengyin Li (Wayne State University)
Rafi Ibn Sultan (Wayne State University)
Yao Qiang (Wayne State University)
Joerg Kliewer (New Jersey Institute of Technology)
Dongxiao Zhu (Wayne State University)
Dongxiao Zhu is currently an Associate Professor in the Department of Computer Science at Wayne State University. He received a B.S. from Shandong University (1996), an M.S. from Peking University (1999), and a Ph.D. from the University of Michigan (2006). His recent research interests are in machine learning and its applications to health informatics, natural language processing, medical imaging, and other data science domains. Dr. Zhu is the Director of the Machine Learning and Predictive Analytics (MLPA) Lab and the Director of the Computer Science Graduate Program at Wayne State University. He has published over 70 peer-reviewed publications and numerous book chapters, and he has served on several editorial boards of scientific journals. His research has been supported by the NIH, the NSF, and private agencies, and he has served on multiple NIH and NSF grant review panels. He has advised numerous students at the undergraduate, graduate, and postdoctoral levels, and his teaching interests lie in programming languages, data structures and algorithms, machine learning, and data science.
Related Events (a corresponding poster, oral, or spotlight)
- 2023 : Proximal Compositional Optimization for Distributionally Robust Learning »
More from the Same Authors
- 2021 : Achieving Optimal Sample and Communication Complexities for Non-IID Federated Learning »
  Prashant Khanduri
- 2022 : Saliency Guided Adversarial Training for Tackling Generalization Gap with Applications to Medical Imaging Classification System »
  Xin Li · Yao Qiang · Chengyin Li · Sijia Liu · Dongxiao Zhu
- 2023 Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning »
  Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Baharan Mirzasoleiman · Sanmi Koyejo
- 2023 Poster: Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach »
  Prashant Khanduri · Ioannis Tsaknakis · Yihua Zhang · Jia Liu · Sijia Liu · Jiawei Zhang · Mingyi Hong
- 2023 Poster: Prometheus: Taming Sample and Communication Complexities in Constrained Decentralized Stochastic Bilevel Learning »
  Zhuqing Liu · Xin Zhang · Prashant Khanduri · Songtao Lu · Jia Liu
- 2023 Poster: FedAvg Converges to Zero Training Loss Linearly for Overparameterized Multi-Layer Neural Networks »
  Bingqing Song · Prashant Khanduri · Xinwei Zhang · Jinfeng Yi · Mingyi Hong
- 2022 Workshop: New Frontiers in Adversarial Machine Learning »
  Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Hima Lakkaraju · Sanmi Koyejo