In this paper, we study mixed continuous-discrete minimax problems in which the minimization is over a continuous variable belonging to Euclidean space and the maximization is over subsets of a given ground set. We introduce the class of convex-submodular minimax problems, in which the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable. Even though such problems appear frequently in machine learning applications, little is known about how to address them from algorithmic and theoretical perspectives. For such problems, we first show that obtaining saddle points is hard up to any approximation, and we therefore introduce new notions of (near-)optimality. We then provide several algorithmic procedures for solving convex and monotone-submodular minimax problems and characterize their convergence rates, computational complexity, and the quality of the final solution. Finally, we provide experiments showcasing the effectiveness of our proposed methods for training robust recommender systems in the presence of poisoning attacks.
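For concreteness, the problem class described in the abstract can be written as follows; the cardinality constraint on the discrete variable is one natural choice of feasible set and is an assumption here, not stated in the abstract:

\min_{x \in \mathcal{X}} \; \max_{S \subseteq V,\ |S| \le k} f(x, S),

where \mathcal{X} \subseteq \mathbb{R}^d is a convex set, V is the ground set, f(\cdot, S) is convex for every fixed S, and f(x, \cdot) is (monotone) submodular for every fixed x, i.e., f(x, S \cup \{e\}) - f(x, S) \ge f(x, T \cup \{e\}) - f(x, T) for all S \subseteq T \subseteq V and e \in V \setminus T.

To illustrate how such a problem might be attacked algorithmically, below is a minimal, hypothetical Python sketch, not the paper's actual method: it alternates the classic greedy algorithm for the inner monotone-submodular maximization with a subgradient step on the continuous variable against the resulting worst-case set. The callables f and grad_x, the set ground_set, and the budget k are user-supplied assumptions.

import numpy as np

def greedy_max(f, x, ground_set, k):
    # Classic greedy for S -> f(x, S) under |S| <= k; for monotone
    # submodular objectives this is a (1 - 1/e)-approximation.
    S = set()
    for _ in range(k):
        candidates = ground_set - S
        if not candidates:
            break
        gains = {e: f(x, S | {e}) - f(x, S) for e in candidates}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S

def alternating_minimax(f, grad_x, x0, ground_set, k, steps=100, lr=0.1):
    # Hypothetical baseline: at each iteration, (approximately) solve the
    # inner discrete maximization greedily, then take a subgradient step
    # on x against that worst-case set, with a diminishing step size.
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        S = greedy_max(f, x, ground_set, k)
        x = x - (lr / np.sqrt(t + 1)) * grad_x(x, S)
    return x, greedy_max(f, x, ground_set, k)

The diminishing step size is the standard choice for subgradient methods on convex objectives. Since greedy only approximates the inner maximization, any guarantee for a scheme of this form would hold with respect to an approximate notion of optimality, consistent with the (near-)optimality notions mentioned in the abstract.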
Author Information
Arman Adibi (University of Pennsylvania)
Aryan Mokhtari (UT Austin)
Hamed Hassani (University of Pennsylvania)

I am an assistant professor in the Department of Electrical and Systems Engineering (as of July 2017). I hold a secondary appointment in the Department of Computer and Information Science. I am also a faculty affiliate of the Warren Center for Network and Data Sciences. Before joining Penn, I was a research fellow at the Simons Institute, UC Berkeley (program: Foundations of Machine Learning). Prior to that, I was a post-doctoral scholar and lecturer in the Institute for Machine Learning at ETH Zürich. I received my Ph.D. degree in Computer and Communication Sciences from EPFL.
More from the Same Authors
- 2021 : Out-of-Distribution Robustness in Deep Learning Compression »
  Eric Lei · Hamed Hassani
- 2022 : Toward Certified Robustness Against Real-World Distribution Shifts »
  Haoze Wu · Teruhiro Tagomori · Alex Robey · Fengjun Yang · Nikolai Matni · George J. Pappas · Hamed Hassani · Corina Pasareanu · Clark Barrett
- 2023 Poster: Demystifying Disagreement-on-the-Line in High Dimensions »
  Donghwan Lee · Behrad Moniri · Xinmeng Huang · Edgar Dobriban · Hamed Hassani
- 2023 Poster: Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods »
  Aleksandr Shevchenko · Kevin Kögler · Hamed Hassani · Marco Mondelli
- 2023 Oral: Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods »
  Aleksandr Shevchenko · Kevin Kögler · Hamed Hassani · Marco Mondelli
- 2022 Poster: MAML and ANIL Provably Learn Representations »
  Liam Collins · Aryan Mokhtari · Sewoong Oh · Sanjay Shakkottai
- 2022 Spotlight: MAML and ANIL Provably Learn Representations »
  Liam Collins · Aryan Mokhtari · Sewoong Oh · Sanjay Shakkottai
- 2022 Poster: Probabilistically Robust Learning: Balancing Average- and Worst-case Performance »
  Alex Robey · Luiz F. O. Chamon · George J. Pappas · Hamed Hassani
- 2022 Spotlight: Probabilistically Robust Learning: Balancing Average- and Worst-case Performance »
  Alex Robey · Luiz F. O. Chamon · George J. Pappas · Hamed Hassani
- 2022 Poster: Sharpened Quasi-Newton Methods: Faster Superlinear Rate and Larger Local Convergence Neighborhood »
  Qiujiang Jin · Alec Koppel · Ketan Rajawat · Aryan Mokhtari
- 2022 Spotlight: Sharpened Quasi-Newton Methods: Faster Superlinear Rate and Larger Local Convergence Neighborhood »
  Qiujiang Jin · Alec Koppel · Ketan Rajawat · Aryan Mokhtari
- 2021 : Minimax Optimization: The Case of Convex-Submodular »
  Hamed Hassani · Aryan Mokhtari · Arman Adibi
- 2021 : Contributed Talk #1 »
  Eric Lei · Hamed Hassani · Shirin Bidokhti
- 2021 Poster: Exploiting Shared Representations for Personalized Federated Learning »
  Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
- 2021 Spotlight: Exploiting Shared Representations for Personalized Federated Learning »
  Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
- 2020 Poster: Quantized Decentralized Stochastic Learning over Directed Graphs »
  Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani
- 2020 Tutorial: Submodular Optimization: From Discrete to Continuous and Back »
  Hamed Hassani · Amin Karbasi
- 2019 Poster: Hessian Aided Policy Gradient »
  Zebang Shen · Alejandro Ribeiro · Hamed Hassani · Hui Qian · Chao Mi
- 2019 Oral: Hessian Aided Policy Gradient »
  Zebang Shen · Alejandro Ribeiro · Hamed Hassani · Hui Qian · Chao Mi
- 2019 Poster: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs »
  Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi
- 2019 Oral: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs »
  Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi
- 2018 Poster: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings »
  Aryan Mokhtari · Hamed Hassani · Amin Karbasi
- 2018 Oral: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings »
  Aryan Mokhtari · Hamed Hassani · Amin Karbasi
- 2018 Poster: Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication »
  Zebang Shen · Aryan Mokhtari · Tengfei Zhou · Peilin Zhao · Hui Qian
- 2018 Oral: Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication »
  Zebang Shen · Aryan Mokhtari · Tengfei Zhou · Peilin Zhao · Hui Qian