We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts. To do so, we bridge the gap between hand-crafted specifications and realistic deployment settings by proposing a novel neural-symbolic verification framework, in which we train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model. A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations, which are fundamental to many state-of-the-art generative models. To address this challenge, we propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement. The key idea is to "lazily" refine the abstraction of sigmoid functions to exclude spurious counter-examples found in the previous abstraction, thus guaranteeing progress in the verification process while keeping the state-space small. Experiments on the MNIST and CIFAR-10 datasets show that our framework significantly outperforms existing methods on a range of challenging distribution shifts.
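The counter-example-guided abstraction refinement (CEGAR) loop described above can be illustrated with a minimal sketch. This is not the paper's verifier: the property (a bound on sigmoid(x) - 0.5x over an interval), the interval-arithmetic abstraction, and all function names are illustrative assumptions chosen to show the lazy-refinement pattern: start coarse, check whether an abstract counter-example is real, and split only the offending segment when it is spurious.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def abstract_upper(a: float, b: float) -> float:
    # Interval over-approximation of f(x) = sigmoid(x) - 0.5*x on [a, b]:
    # sigmoid is increasing, so its max on [a, b] is sigmoid(b); -0.5*x is
    # decreasing, so its max is -0.5*a. Their sum soundly over-approximates
    # the true maximum of f, but may be loose on wide segments.
    return sigmoid(b) - 0.5 * a

def verify_cegar(lo: float, hi: float, bound: float, max_iters: int = 1000):
    """Try to prove f(x) = sigmoid(x) - 0.5*x <= bound for all x in [lo, hi],
    lazily refining only segments whose counter-examples are spurious."""
    segments = [(lo, hi)]  # coarsest abstraction: one segment
    for _ in range(max_iters):
        # Find a segment whose abstraction violates the property.
        bad = next(((a, b) for a, b in segments
                    if abstract_upper(a, b) > bound), None)
        if bad is None:
            return "verified", None  # abstraction satisfies the property
        a, b = bad
        witness = 0.5 * (a + b)  # candidate counter-example from the abstraction
        if sigmoid(witness) - 0.5 * witness > bound:
            return "falsified", witness  # concrete (real) counter-example
        # Spurious counter-example: refine just this segment, keeping the
        # rest of the abstraction intact, so the state-space stays small.
        segments.remove(bad)
        segments += [(a, witness), (witness, b)]
    return "unknown", None

# Example: f(0) = 0.5 and f is nonincreasing on [0, 4], so a bound of 0.51
# should verify, while a bound of 0.4 should be falsified near x = 0.
print(verify_cegar(0.0, 4.0, 0.51))
print(verify_cegar(0.0, 4.0, 0.4))
```

In a real verifier the abstraction would be a set of linear relaxations over the sigmoids of a network and the counter-example check would invoke the concrete network, but the control flow is the same: refinement happens only where the previous abstraction produced a spurious counter-example.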
Author Information
Haoze Wu (Stanford University)
Teruhiro Tagomori (NRI SecureTechnologies)
Alex Robey (University of Pennsylvania)
Fengjun Yang (University of Pennsylvania)
Nikolai Matni (University of Pennsylvania)
George J. Pappas (University of Pennsylvania)
George J. Pappas is the Joseph Moore Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds secondary appointments in the Departments of Computer and Information Science, and Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center. He previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory, in particular hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of the IEEE and has received awards including the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, the O. Hugo Schuck Best Paper Award, the National Science Foundation PECASE, and the George H. Heilmeier Faculty Excellence Award.
Hamed Hassani (University of Pennsylvania)

I am an assistant professor in the Department of Electrical and Systems Engineering (as of July 2017). I hold a secondary appointment in the Department of Computer and Information Systems. I am also a faculty affiliate of the Warren Center for Network and Data Sciences. Before joining Penn, I was a research fellow at the Simons Institute, UC Berkeley (program: Foundations of Machine Learning). Prior to that, I was a post-doctoral scholar and lecturer in the Institute for Machine Learning at ETH Zürich. I received my Ph.D. degree in Computer and Communication Sciences from EPFL.
Corina Pasareanu (Carnegie Mellon University)
Clark Barrett (Stanford University)
More from the Same Authors
- 2021: Minimax Optimization: The Case of Convex-Submodular
  Arman Adibi · Aryan Mokhtari · Hamed Hassani
- 2021: Out-of-Distribution Robustness in Deep Learning Compression
  Eric Lei · Hamed Hassani
- 2022: Bridging Distribution Shift in Imitation Learning via Taylor Expansions
  Daniel Pfrommer · Thomas T. Zhang · Nikolai Matni · Stephen Tu
- 2023: H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
  Zhenyu Zhang · Ying Sheng · Tianyi Zhou · Tianlong Chen · Lianmin Zheng · Ruisi Cai · Zhao Song · Yuandong Tian · Christopher Re · Clark Barrett · Zhangyang "Atlas" Wang · Beidi Chen
- 2023: Text + Sketch: Image Compression at Ultra Low Rates
  Eric Lei · Yigit Berkay Uslu · Hamed Hassani · Shirin Bidokhti
- 2023: Adversarial Training Should Be Cast as a Non-Zero-Sum Game
  Alex Robey · Fabian Latorre · George J. Pappas · Hamed Hassani · Volkan Cevher
- 2023 Poster: Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods
  Aleksandr Shevchenko · Kevin Kögler · Hamed Hassani · Marco Mondelli
- 2023 Poster: The Power of Learned Locally Linear Models for Nonlinear Policy Optimization
  Daniel Pfrommer · Max Simchowitz · Tyler Westenbroek · Nikolai Matni · Stephen Tu
- 2023 Oral: Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods
  Aleksandr Shevchenko · Kevin Kögler · Hamed Hassani · Marco Mondelli
- 2023 Poster: Variational Autoencoding Neural Operators
  Jacob H. Seidman · Georgios Kissas · George J. Pappas · Paris Perdikaris
- 2023 Poster: Demystifying Disagreement-on-the-Line in High Dimensions
  Donghwan Lee · Behrad Moniri · Xinmeng Huang · Edgar Dobriban · Hamed Hassani
- 2022 Workshop: Workshop on Formal Verification of Machine Learning
  Huan Zhang · Leslie Rice · Kaidi Xu · Aditi Raghunathan · Wan-Yi Lin · Cho-Jui Hsieh · Clark Barrett · Martin Vechev · Zico Kolter
- 2022 Poster: Probabilistically Robust Learning: Balancing Average- and Worst-case Performance
  Alex Robey · Luiz F. O. Chamon · George J. Pappas · Hamed Hassani
- 2022 Spotlight: Probabilistically Robust Learning: Balancing Average- and Worst-case Performance
  Alex Robey · Luiz F. O. Chamon · George J. Pappas · Hamed Hassani
- 2021: Minimax Optimization: The Case of Convex-Submodular
  Hamed Hassani · Aryan Mokhtari · Arman Adibi
- 2021: Contributed Talk #1
  Eric Lei · Hamed Hassani · Shirin Bidokhti
- 2021 Poster: Exploiting Shared Representations for Personalized Federated Learning
  Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
- 2021 Spotlight: Exploiting Shared Representations for Personalized Federated Learning
  Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
- 2020 Poster: Quantized Decentralized Stochastic Learning over Directed Graphs
  Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani
- 2020 Tutorial: Submodular Optimization: From Discrete to Continuous and Back
  Hamed Hassani · Amin Karbasi
- 2019 Poster: Hessian Aided Policy Gradient
  Zebang Shen · Alejandro Ribeiro · Hamed Hassani · Hui Qian · Chao Mi
- 2019 Oral: Hessian Aided Policy Gradient
  Zebang Shen · Alejandro Ribeiro · Hamed Hassani · Hui Qian · Chao Mi
- 2019 Poster: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs
  Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi
- 2019 Oral: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs
  Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi
- 2018 Poster: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings
  Aryan Mokhtari · Hamed Hassani · Amin Karbasi
- 2018 Oral: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings
  Aryan Mokhtari · Hamed Hassani · Amin Karbasi