Convolutional Neural Networks (CNNs) are known to rely more on local texture than on global shape when making decisions. Recent work also indicates a close relationship between a CNN's texture bias and its robustness against distribution shift, adversarial perturbation, and random corruption. In this work, we attempt to improve various kinds of robustness universally by alleviating CNN's texture bias. Inspired by the human visual system, we propose a lightweight, model-agnostic method, named Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias. Specifically, we discriminate texture from shape based on local self-information in an image, and adopt a Dropout-like algorithm to decorrelate the model output from the local texture. Through extensive experiments, we observe enhanced robustness under various scenarios (domain generalization, few-shot classification, image corruption, and adversarial perturbation). To the best of our knowledge, this work is one of the earliest attempts to improve different kinds of robustness in a unified model, shedding new light on the relationship between shape bias and robustness, as well as on new approaches to trustworthy machine learning algorithms. Code is available at https://github.com/bfshi/InfoDrop.
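To make the idea concrete, below is a minimal PyTorch-style sketch of an InfoDrop-like layer: the self-information of each image patch is approximated by how dissimilar it is from its spatial neighbors (repetitive texture has high patch probability and hence low information), and feature-map activations over low-information regions are zeroed out with higher probability. The function name, neighborhood size, temperature, and drop rate here are illustrative assumptions rather than the authors' exact implementation; see the repository above for the official code.

```python
import torch
import torch.nn.functional as F

def info_dropout(feat, image, patch=3, radius=2, temperature=0.1, drop_rate=0.5):
    """Hypothetical sketch of an InfoDrop-like layer.

    Patch probability is approximated by similarity to neighboring patches
    (a kernel-density estimate up to a constant), so repetitive texture gets
    high probability and low self-information; low-information locations in
    `feat` are then dropped with higher probability during training.
    """
    B, C, H, W = image.shape
    # All local patches laid out on the spatial grid: (B, C*patch*patch, H, W)
    patches = F.unfold(image, patch, padding=patch // 2).view(B, -1, H, W)

    # Average squared distance to shifted neighbors ~ -log p(patch) + const
    neg_log_p = torch.zeros(B, H, W, device=image.device)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            shifted = torch.roll(patches, shifts=(dy, dx), dims=(2, 3))
            neg_log_p = neg_log_p + ((patches - shifted) ** 2).mean(dim=1)
            n += 1
    self_info = neg_log_p / n  # high at edges/shape, low on flat texture

    # Per-location keep score: Boltzmann distribution over space, rescaled to
    # [0, 1] so the most informative (shape) regions are almost always kept
    keep = torch.softmax(self_info.view(B, -1) / temperature, dim=1).view(B, 1, H, W)
    keep = F.interpolate(keep, size=feat.shape[-2:], mode='bilinear', align_corners=False)
    keep = keep / keep.amax(dim=(2, 3), keepdim=True)

    # Bernoulli mask: texture regions (keep ~ 0) are dropped at rate `drop_rate`
    mask = (torch.rand_like(keep) < (1 - drop_rate) + drop_rate * keep).float()
    return feat * mask
```

A typical call site would be right after an early convolution, e.g. `feat = info_dropout(conv1(x), x)`, with the masking applied only during training and disabled at inference time.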
Author Information
Baifeng Shi (Peking University)
Dinghuai Zhang (Peking University)
Qi Dai (Microsoft Research)
Zhanxing Zhu (Peking University)
Yadong Mu (Peking University)
Jingdong Wang (Microsoft)
More from the Same Authors
- 2023 Poster: Trapdoor Normalization with Irreversible Ownership Verification
  Hanwen Liu · Zhenyu Weng · Yuesheng Zhu · Yadong Mu
- 2023 Poster: MonoFlow: Rethinking Divergence GANs via the Perspective of Differential Equations
  Mingxuan Yi · Zhanxing Zhu · Song Liu
- 2021 Workshop: ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI
  Quanshi Zhang · Tian Han · Lixin Fan · Zhanxing Zhu · Hang Su · Ying Nian Wu
- 2021 Poster: Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization
  Zeke Xie · Li Yuan · Zhanxing Zhu · Masashi Sugiyama
- 2021 Spotlight: Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization
  Zeke Xie · Li Yuan · Zhanxing Zhu · Masashi Sugiyama
- 2020 Poster: On Breaking Deep Generative Model-based Defenses and Beyond
  Yanzhi Chen · Renjie Xie · Zhanxing Zhu
- 2020 Poster: On the Noisy Gradient Descent that Generalizes as SGD
  Jingfeng Wu · Wenqing Hu · Haoyi Xiong · Jun Huan · Vladimir Braverman · Zhanxing Zhu
- 2019 Poster: Interpreting Adversarially Trained Convolutional Neural Networks
  Tianyuan Zhang · Zhanxing Zhu
- 2019 Poster: The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Sharp Minima and Regularization Effects
  Zhanxing Zhu · Jingfeng Wu · Bing Yu · Lei Wu · Jinwen Ma
- 2019 Oral: Interpreting Adversarially Trained Convolutional Neural Networks
  Tianyuan Zhang · Zhanxing Zhu
- 2019 Oral: The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Sharp Minima and Regularization Effects
  Zhanxing Zhu · Jingfeng Wu · Bing Yu · Lei Wu · Jinwen Ma