One prominent approach toward resolving the adversarial vulnerability of deep neural networks is the two-player zero-sum paradigm of adversarial training, in which predictors are trained against adversarially chosen perturbations of the data. Despite the promise of this approach, algorithms based on this paradigm have not engendered sufficient levels of robustness and suffer from pathological behavior like robust overfitting. To understand this shortcoming, we first show that the surrogate-based relaxation commonly used in adversarial training algorithms voids all guarantees on the robustness of trained classifiers. The identification of this pitfall informs a novel non-zero-sum bilevel formulation of adversarial training, wherein each player optimizes a different objective function. Our formulation naturally yields a simple algorithmic framework that matches and in some cases outperforms state-of-the-art attacks, attains comparable levels of robustness to standard adversarial training algorithms, and does not suffer from robust overfitting.
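For context, standard adversarial training poses the zero-sum min-max problem shown below, in which the attacker and the defender share a single surrogate loss; the non-zero-sum view instead lets each player optimize its own objective. The display that follows is a schematic sketch under stated assumptions: the perturbation set (an \ell_p ball of radius \epsilon) and the abstract attacker objective M are illustrative choices, not the paper's exact construction.

% Zero-sum adversarial training: both players use the same surrogate loss \ell
\min_{\theta} \ \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|_p \le \epsilon} \ell\big(f_\theta(x+\delta),\, y\big) \Big]

% A non-zero-sum bilevel sketch: the attacker maximizes its own objective M,
% while the defender minimizes the classification loss at the attacker's solution
\min_{\theta} \ \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \ell\big(f_\theta(x+\delta^{\star}(\theta)),\, y\big) \Big]
\quad \text{s.t.} \quad \delta^{\star}(\theta) \in \operatorname*{arg\,max}_{\|\delta\|_p \le \epsilon} M\big(f_\theta(x+\delta),\, y\big)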
Author Information
Alex Robey (University of Pennsylvania)
Fabian Latorre (EPFL)
George J. Pappas (University of Pennsylvania)
George J. Pappas is the Joseph Moore Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds secondary appointments in the Departments of Computer and Information Science, and Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center, and he previously served as Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory and, in particular, hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of the IEEE and has received various awards, including the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, the O. Hugo Schuck Best Paper Award, the National Science Foundation PECASE, and the George H. Heilmeier Faculty Excellence Award.
Hamed Hassani (University of Pennsylvania)

I am an assistant professor in the Department of Electrical and Systems Engineering (as of July 2017). I hold a secondary appointment in the Department of Computer and Information Science. I am also a faculty affiliate of the Warren Center for Network and Data Sciences. Before joining Penn, I was a research fellow at the Simons Institute, UC Berkeley (program: Foundations of Machine Learning). Prior to that, I was a post-doctoral scholar and lecturer in the Institute for Machine Learning at ETH Zürich. I received my Ph.D. degree in Computer and Communication Sciences from EPFL.
Volkan Cevher (EPFL)
Related Events (a corresponding poster, oral, or spotlight)
-
2023 : Adversarial Training Should Be Cast as a Non-Zero-Sum Game »
More from the Same Authors
-
2021 : Minimax Optimization: The Case of Convex-Submodular »
Arman Adibi · Aryan Mokhtari · Hamed Hassani -
2021 : Out-of-Distribution Robustness in Deep Learning Compression »
Eric Lei · Hamed Hassani -
2022 : Robustness in deep learning: The width (good), the depth (bad), and the initialization (ugly) »
Zhenyu Zhu · Fanghui Liu · Grigorios Chrysos · Volkan Cevher -
2022 : Sound and Complete Verification of Polynomial Networks »
Elias Abad Rocamora · Mehmet Fatih Sahin · Fanghui Liu · Grigorios Chrysos · Volkan Cevher -
2022 : Toward Certified Robustness Against Real-World Distribution Shifts »
Haoze Wu · Teruhiro Tagomori · Alex Robey · Fengjun Yang · Nikolai Matni · George J. Pappas · Hamed Hassani · Corina Pasareanu · Clark Barrett -
2023 : Text + Sketch: Image Compression at Ultra Low Rates »
Eric Lei · Yigit Berkay Uslu · Hamed Hassani · Shirin Bidokhti -
2023 Poster: Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods »
Aleksandr Shevchenko · Kevin Kögler · Hamed Hassani · Marco Mondelli -
2023 Oral: Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods »
Aleksandr Shevchenko · Kevin Kögler · Hamed Hassani · Marco Mondelli -
2023 Oral: Semi Bandit dynamics in Congestion Games: Convergence to Nash Equilibrium and No-Regret Guarantees. »
Ioannis Panageas · Efstratios Panteleimon Skoulakis · Luca Viano · Xiao Wang · Volkan Cevher -
2023 Poster: When do Minimax-fair Learning and Empirical Risk Minimization Coincide? »
Harvineet Singh · Matthäus Kleindessner · Volkan Cevher · Rumi Chunara · Chris Russell -
2023 Poster: Benign Overfitting in Deep Neural Networks under Lazy Training »
Zhenyu Zhu · Fanghui Liu · Grigorios Chrysos · Francesco Locatello · Volkan Cevher -
2023 Poster: What can online reinforcement learning with function approximation benefit from general coverage conditions? »
Fanghui Liu · Luca Viano · Volkan Cevher -
2023 Poster: Semi Bandit dynamics in Congestion Games: Convergence to Nash Equilibrium and No-Regret Guarantees. »
Ioannis Panageas · Efstratios Panteleimon Skoulakis · Luca Viano · Xiao Wang · Volkan Cevher -
2023 Poster: Variational Autoencoding Neural Operators »
Jacob H. Seidman · Georgios Kissas · George J. Pappas · Paris Perdikaris -
2023 Poster: Demystifying Disagreement-on-the-Line in High Dimensions »
Donghwan Lee · Behrad Moniri · Xinmeng Huang · Edgar Dobriban · Hamed Hassani -
2023 : 1-Path-Norm Regularization of Deep Neural Networks »
Fabian Latorre · Antoine Bonnet · Paul Rolland · Nadav Hallak · Volkan Cevher -
2023 Affinity Workshop: LatinX in AI (LXAI) Workshop »
Laura Montoya · Jose Gallego-Posada · Pablo Rivas · Vinicius Carida · Mateo Espinosa Zarlenga · Carlos Miranda · Andres Marquez · Ramesh Doddaiah · David Alvarez-Melis · Ivan Dario Arraut Guerrero · Mateo Guaman Castro · Ana Maria Quintero-Ossa · Fabian Latorre · Julio Hurtado · Jaime David Acevedo-Viloria · Miguel Felipe Arevalo-Castiblanco -
2022 Poster: Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models »
Paul Rolland · Volkan Cevher · Matthäus Kleindessner · Chris Russell · Dominik Janzing · Bernhard Schölkopf · Francesco Locatello -
2022 Poster: UnderGrad: A Universal Black-Box Optimization Method with Almost Dimension-Free Convergence Rate Guarantees »
Kimon Antonakopoulos · Dong Quan Vu · Volkan Cevher · Kfir Levy · Panayotis Mertikopoulos -
2022 Oral: UnderGrad: A Universal Black-Box Optimization Method with Almost Dimension-Free Convergence Rate Guarantees »
Kimon Antonakopoulos · Dong Quan Vu · Volkan Cevher · Kfir Levy · Panayotis Mertikopoulos -
2022 Oral: Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models »
Paul Rolland · Volkan Cevher · Matthäus Kleindessner · Chris Russell · Dominik Janzing · Bernhard Schölkopf · Francesco Locatello -
2022 Poster: A Natural Actor-Critic Framework for Zero-Sum Markov Games »
Ahmet Alacaoglu · Luca Viano · Niao He · Volkan Cevher -
2022 Poster: Probabilistically Robust Learning: Balancing Average- and Worst-case Performance »
Alex Robey · Luiz F. O. Chamon · George J. Pappas · Hamed Hassani -
2022 Spotlight: A Natural Actor-Critic Framework for Zero-Sum Markov Games »
Ahmet Alacaoglu · Luca Viano · Niao He · Volkan Cevher -
2022 Spotlight: Probabilistically Robust Learning: Balancing Average- and Worst-case Performance »
Alex Robey · Luiz F. O. Chamon · George J. Pappas · Hamed Hassani -
2021 : Minimax Optimization: The Case of Convex-Submodular »
Hamed Hassani · Aryan Mokhtari · Arman Adibi -
2021 : Contributed Talk #1 »
Eric Lei · Hamed Hassani · Shirin Bidokhti -
2021 Poster: Exploiting Shared Representations for Personalized Federated Learning »
Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai -
2021 Spotlight: Exploiting Shared Representations for Personalized Federated Learning »
Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai -
2021 Poster: The Limits of Min-Max Optimization Algorithms: Convergence to Spurious Non-Critical Sets »
Ya-Ping Hsieh · Panayotis Mertikopoulos · Volkan Cevher -
2021 Poster: Regret Minimization in Stochastic Non-Convex Learning via a Proximal-Gradient Approach »
Nadav Hallak · Panayotis Mertikopoulos · Volkan Cevher -
2021 Spotlight: Regret Minimization in Stochastic Non-Convex Learning via a Proximal-Gradient Approach »
Nadav Hallak · Panayotis Mertikopoulos · Volkan Cevher -
2021 Oral: The Limits of Min-Max Optimization Algorithms: Convergence to Spurious Non-Critical Sets »
Ya-Ping Hsieh · Panayotis Mertikopoulos · Volkan Cevher -
2020 Poster: Efficient Proximal Mapping of the 1-path-norm of Shallow Networks »
Fabian Latorre · Paul Rolland · Shaul Nadav Hallak · Volkan Cevher -
2020 Poster: Conditional gradient methods for stochastically constrained convex minimization »
Maria-Luiza Vladarean · Ahmet Alacaoglu · Ya-Ping Hsieh · Volkan Cevher -
2020 Poster: Random extrapolation for primal-dual coordinate descent »
Ahmet Alacaoglu · Olivier Fercoq · Volkan Cevher -
2020 Poster: Double-Loop Unadjusted Langevin Algorithm »
Paul Rolland · Armin Eftekhari · Ali Kavis · Volkan Cevher -
2020 Poster: A new regret analysis for Adam-type algorithms »
Ahmet Alacaoglu · Yura Malitsky · Panayotis Mertikopoulos · Volkan Cevher -
2020 Poster: Quantized Decentralized Stochastic Learning over Directed Graphs »
Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani -
2020 Tutorial: Submodular Optimization: From Discrete to Continuous and Back »
Hamed Hassani · Amin Karbasi -
2020 Affinity Workshop: LatinX in AI Workshop »
Nils Murrugarra-Llerena · Pedro Braga · Walter Mayor · Karla Caballero · Ivan Dario Arraut Guerrero · Leonel Rozo · Juan Banda · Fabian Latorre · Kevin Bello · Leobardo Morales · Angela M Flores-Saravia -
2019 Poster: Almost surely constrained convex optimization »
Olivier Fercoq · Ahmet Alacaoglu · Ion Necoara · Volkan Cevher -
2019 Poster: Finding Mixed Nash Equilibria of Generative Adversarial Networks »
Ya-Ping Hsieh · Chen Liu · Volkan Cevher -
2019 Poster: Hessian Aided Policy Gradient »
Zebang Shen · Alejandro Ribeiro · Hamed Hassani · Hui Qian · Chao Mi -
2019 Poster: Efficient learning of smooth probability functions from Bernoulli tests with guarantees »
Paul Rolland · Ali Kavis · Alexander Niklaus Immer · Adish Singla · Volkan Cevher -
2019 Oral: Finding Mixed Nash Equilibria of Generative Adversarial Networks »
Ya-Ping Hsieh · Chen Liu · Volkan Cevher -
2019 Oral: Hessian Aided Policy Gradient »
Zebang Shen · Alejandro Ribeiro · Hamed Hassani · Hui Qian · Chao Mi -
2019 Oral: Efficient learning of smooth probability functions from Bernoulli tests with guarantees »
Paul Rolland · Ali Kavis · Alexander Niklaus Immer · Adish Singla · Volkan Cevher -
2019 Oral: Almost surely constrained convex optimization »
Olivier Fercoq · Ahmet Alacaoglu · Ion Necoara · Volkan Cevher -
2019 Poster: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs »
Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi -
2019 Poster: On Certifying Non-Uniform Bounds against Adversarial Attacks »
Chen Liu · Ryota Tomioka · Volkan Cevher -
2019 Poster: Conditional Gradient Methods via Stochastic Path-Integrated Differential Estimator »
Alp Yurtsever · Suvrit Sra · Volkan Cevher -
2019 Poster: A Conditional-Gradient-Based Augmented Lagrangian Framework »
Alp Yurtsever · Olivier Fercoq · Volkan Cevher -
2019 Oral: Conditional Gradient Methods via Stochastic Path-Integrated Differential Estimator »
Alp Yurtsever · Suvrit Sra · Volkan Cevher -
2019 Oral: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs »
Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi -
2019 Oral: A Conditional-Gradient-Based Augmented Lagrangian Framework »
Alp Yurtsever · Olivier Fercoq · Volkan Cevher -
2019 Oral: On Certifying Non-Uniform Bounds against Adversarial Attacks »
Chen Liu · Ryota Tomioka · Volkan Cevher -
2018 Poster: A Conditional Gradient Framework for Composite Convex Minimization with Applications to Semidefinite Programming »
Alp Yurtsever · Olivier Fercoq · Francesco Locatello · Volkan Cevher -
2018 Oral: A Conditional Gradient Framework for Composite Convex Minimization with Applications to Semidefinite Programming »
Alp Yurtsever · Olivier Fercoq · Francesco Locatello · Volkan Cevher -
2018 Poster: Let’s be Honest: An Optimal No-Regret Framework for Zero-Sum Games »
Ehsan Asadi Kangarshahi · Ya-Ping Hsieh · Mehmet Fatih Sahin · Volkan Cevher -
2018 Poster: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings »
Aryan Mokhtari · Hamed Hassani · Amin Karbasi -
2018 Poster: Optimal Distributed Learning with Multi-pass Stochastic Gradient Methods »
Junhong Lin · Volkan Cevher -
2018 Oral: Let’s be Honest: An Optimal No-Regret Framework for Zero-Sum Games »
Ehsan Asadi Kangarshahi · Ya-Ping Hsieh · Mehmet Fatih Sahin · Volkan Cevher -
2018 Oral: Optimal Distributed Learning with Multi-pass Stochastic Gradient Methods »
Junhong Lin · Volkan Cevher -
2018 Oral: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings »
Aryan Mokhtari · Hamed Hassani · Amin Karbasi -
2018 Poster: Optimal Rates of Sketched-regularized Algorithms for Least-Squares Regression over Hilbert Spaces »
Junhong Lin · Volkan Cevher -
2018 Oral: Optimal Rates of Sketched-regularized Algorithms for Least-Squares Regression over Hilbert Spaces »
Junhong Lin · Volkan Cevher -
2017 Poster: Robust Submodular Maximization: A Non-Uniform Partitioning Approach »
Ilija Bogunovic · Slobodan Mitrovic · Jonathan Scarlett · Volkan Cevher -
2017 Talk: Robust Submodular Maximization: A Non-Uniform Partitioning Approach »
Ilija Bogunovic · Slobodan Mitrovic · Jonathan Scarlett · Volkan Cevher