Deep networks for computer vision are unreliable when they encounter adversarial examples. In this paper, we introduce a framework that uses the dense intrinsic constraints in natural images to robustify inference. Imposing these constraints at inference time shifts the burden of robustness from training to testing, allowing the model to adjust dynamically to each image's unique and potentially novel characteristics. Our theoretical results show the importance of having dense constraints at inference time. In contrast to existing single-constraint methods, we propose to use equivariance, which naturally provides dense constraints at a fine-grained level in the feature space. Our experiments show that restoring feature equivariance at inference time defends against worst-case adversarial perturbations, improving adversarial robustness on image recognition, semantic segmentation, and instance segmentation across four datasets (ImageNet, Cityscapes, PASCAL VOC, and MS-COCO).
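To make the inference-time idea concrete, below is a minimal sketch (not the authors' released code) of restoring feature equivariance at test time: a small additive correction is optimized so that features of transformed inputs agree with transformed features, and the corrected image is then classified. The names `backbone`, `head`, the transform set, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of inference-time equivariance restoration (assumptions:
# a PyTorch `backbone` that returns a spatial feature map, a classifier
# `head`, and differentiable spatial transforms such as flips).
import torch
import torch.nn.functional as F

def equivariance_loss(backbone, x, transforms):
    """Penalize disagreement between f(T(x)) and T(f(x)) over a set of
    spatial transforms T, i.e. how far features are from equivariant."""
    feat = backbone(x)  # (B, C, H, W) feature map
    loss = 0.0
    for T in transforms:  # each T acts on both images and feature maps
        loss = loss + F.mse_loss(backbone(T(x)), T(feat))
    return loss / len(transforms)

def restore_and_predict(backbone, head, x, transforms,
                        steps=10, lr=1e-2, eps=8 / 255):
    """Optimize a small correction r so that x + r regains feature
    equivariance, then classify the corrected image."""
    r = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.SGD([r], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = equivariance_loss(backbone, (x + r).clamp(0, 1), transforms)
        loss.backward()
        opt.step()
        with torch.no_grad():
            r.clamp_(-eps, eps)  # keep the correction small
    with torch.no_grad():
        return head(backbone((x + r).clamp(0, 1)))
```

For example, `transforms` could be `[lambda t: torch.flip(t, dims=[-1])]` (horizontal flip), which applies identically to images and to spatial feature maps; denser transform sets give denser constraints, in line with the paper's argument.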
Author Information
Chengzhi Mao (Columbia University)
Lingyu Zhang (Columbia University)
Abhishek Joshi (Columbia University)
Junfeng Yang (Columbia University)
Hao Wang (Rutgers University)
Dr. Hao Wang is currently an assistant professor in the Department of Computer Science at Rutgers University. Previously he was a Postdoctoral Associate at the Computer Science & Artificial Intelligence Lab (CSAIL) of MIT, working with Dina Katabi and Tommi Jaakkola. He received his PhD degree from the Hong Kong University of Science and Technology as the sole recipient of the School of Engineering PhD Research Excellence Award in 2017. He has been a visiting researcher in the Machine Learning Department of Carnegie Mellon University. His research focuses on statistical machine learning, deep learning, and data mining, with broad applications in recommender systems, healthcare, user profiling, social network analysis, text mining, etc. His work on Bayesian deep learning for recommender systems and personalized modeling has inspired hundreds of follow-up works published at top conferences such as AAAI, ICML, IJCAI, KDD, NIPS, SIGIR, and WWW, and has received over 1,000 citations, making it the most cited paper at KDD 2015. In 2015, he was awarded the Microsoft Fellowship in Asia and the Baidu Research Fellowship for his innovation in Bayesian deep learning and its applications to data mining and social network analysis.
Carl Vondrick (Columbia University)
More from the Same Authors
- 2022: Finding Spuriously Correlated Visual Attributes
  Revant Teotia · Chengzhi Mao · Carl Vondrick
- 2022: Doubly Right Object Recognition
  Revant Teotia · Chengzhi Mao · Carl Vondrick
- 2023: Towards Effective Data Poisoning for Imbalanced Classification
  Snigdha Sushil Mishra · Hao He · Hao Wang
- 2023 Oral: Self-Interpretable Time Series Prediction with Counterfactual Explanations
  Jingquan Yan · Hao Wang
- 2023 Poster: Taxonomy-Structured Domain Adaptation
  Tianyi Liu · Zihao Xu · Hao He · Guangyuan Hao · Guang-He Lee · Hao Wang
- 2023 Poster: Self-Interpretable Time Series Prediction with Counterfactual Explanations
  Jingquan Yan · Hao Wang
- 2022 Poster: Domain Adaptation for Time Series Forecasting via Attention Sharing
  Xiaoyong Jin · Youngsuk Park · Danielle Robinson · Hao Wang · Yuyang Wang
- 2022 Spotlight: Domain Adaptation for Time Series Forecasting via Attention Sharing
  Xiaoyong Jin · Youngsuk Park · Danielle Robinson · Hao Wang · Yuyang Wang
- 2021 Poster: STRODE: Stochastic Boundary Ordinary Differential Equation
  Huang Hengguan · Hongfu Liu · Hao Wang · Chang Xiao · Ye Wang
- 2021 Poster: Correcting Exposure Bias for Link Recommendation
  Shantanu Gupta · Hao Wang · Zachary Lipton · Yuyang Wang
- 2021 Spotlight: Correcting Exposure Bias for Link Recommendation
  Shantanu Gupta · Hao Wang · Zachary Lipton · Yuyang Wang
- 2021 Spotlight: STRODE: Stochastic Boundary Ordinary Differential Equation
  Huang Hengguan · Hongfu Liu · Hao Wang · Chang Xiao · Ye Wang
- 2021 Poster: Delving into Deep Imbalanced Regression
  Yuzhe Yang · Kaiwen Zha · Yingcong Chen · Hao Wang · Dina Katabi
- 2021 Oral: Delving into Deep Imbalanced Regression
  Yuzhe Yang · Kaiwen Zha · Yingcong Chen · Hao Wang · Dina Katabi
- 2020 Poster: Deep Graph Random Process for Relational-Thinking-Based Speech Recognition
  Huang Hengguan · Fuzhao Xue · Hao Wang · Ye Wang
- 2019 Workshop: Workshop on Self-Supervised Learning
  Aaron van den Oord · Yusuf Aytar · Carl Doersch · Carl Vondrick · Alec Radford · Pierre Sermanet · Amir Zamir · Pieter Abbeel