Deep neural networks (DNNs) are known to be vulnerable to adversarial noise: adversarial samples typically mislead them into making wrong predictions. To alleviate this negative effect, in this paper we investigate the dependence between the outputs of the target model and input adversarial samples from the perspective of information theory, and propose an adversarial defense method. Specifically, we measure this dependence by estimating the mutual information (MI) between the outputs and the natural patterns of the inputs (called natural MI) and between the outputs and the adversarial patterns of the inputs (called adversarial MI). We find that, compared with natural samples, adversarial samples usually have larger adversarial MI and smaller natural MI. Motivated by this observation, we propose to enhance adversarial robustness by maximizing the natural MI and minimizing the adversarial MI during training. In this way, the target model is expected to pay more attention to the natural pattern, which carries the objective semantics. Empirical evaluations demonstrate that our method effectively improves adversarial accuracy against multiple attacks.
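The training objective sketched in the abstract — maximize natural MI, minimize adversarial MI — can be illustrated with a minimal NumPy example. This is only a sketch: the paper estimates MI with learned neural estimators, whereas the `mi_estimate` function below is a simple plug-in histogram estimator, and `mi_defense_loss`, the pattern variables, and the weights `lam_nat`/`lam_adv` are hypothetical names introduced here for illustration.

```python
import numpy as np

def mi_estimate(x, y, bins=16):
    """Plug-in (histogram) estimate of mutual information I(X; Y) in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def mi_defense_loss(ce_loss, nat_mi, adv_mi, lam_nat=1.0, lam_adv=1.0):
    """Objective in the spirit of the abstract: standard loss,
    minus natural MI (to maximize it), plus adversarial MI (to minimize it)."""
    return ce_loss - lam_nat * nat_mi + lam_adv * adv_mi

# Toy data: model outputs that track the natural pattern but not the
# (here independent) adversarial pattern.
rng = np.random.default_rng(0)
natural = rng.normal(size=5000)
outputs = natural + 0.1 * rng.normal(size=5000)
adversarial = rng.normal(size=5000)

nat_mi = mi_estimate(outputs, natural)
adv_mi = mi_estimate(outputs, adversarial)
loss = mi_defense_loss(1.0, nat_mi, adv_mi)
```

On this toy data `nat_mi` exceeds `adv_mi`, so the MI terms lower the total loss — the situation the training procedure is meant to encourage; an attack that raises adversarial MI would instead be penalized.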
Author Information
Dawei Zhou (Xidian University)
Nannan Wang (Xidian University)
Xinbo Gao (Chongqing University of Posts and Telecommunications)
Bo Han (HKBU / RIKEN)
Xiaoyu Wang (-)
Yibing Zhan (Jingdong)
Tongliang Liu (The University of Sydney)
Related Events (a corresponding poster, oral, or spotlight)
2022 Poster: Improving Adversarial Robustness via Mutual Information Estimation »
Wed. Jul 20th through Thu. Jul 21st, Room Hall E #317
More from the Same Authors
2022 : Invariance Principle Meets Out-of-Distribution Generalization on Graphs »
Yongqiang Chen · Yonggang Zhang · Yatao Bian · Han Yang · Kaili Ma · Binghui Xie · Tongliang Liu · Bo Han · James Cheng -
2022 : Pareto Invariant Risk Minimization »
Yongqiang Chen · Kaiwen Zhou · Yatao Bian · Binghui Xie · Kaili Ma · Yonggang Zhang · Han Yang · Bo Han · James Cheng -
2023 : Towards Understanding Feature Learning in Out-of-Distribution Generalization »
Yongqiang Chen · Wei Huang · Kaiwen Zhou · Yatao Bian · Bo Han · James Cheng -
2023 : Advancing Counterfactual Inference through Quantile Regression »
Shaoan Xie · Biwei Huang · Bin Gu · Tongliang Liu · Kun Zhang -
2023 Poster: Eliminating Adversarial Noise via Information Discard and Robust Representation Restoration »
Dawei Zhou · Yukun Chen · Nannan Wang · Decheng Liu · Xinbo Gao · Tongliang Liu -
2023 Poster: Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation »
Ruijiang Dong · Feng Liu · Haoang Chi · Tongliang Liu · Mingming Gong · Gang Niu · Masashi Sugiyama · Bo Han -
2023 Poster: Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score »
Shuhai Zhang · Feng Liu · Jiahao Yang · Yifan Yang · Changsheng Li · Bo Han · Mingkui Tan -
2023 Poster: Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability »
Jianing Zhu · Hengzhuang Li · Jiangchao Yao · Tongliang Liu · Jianliang Xu · Bo Han -
2023 Poster: Moderately Distributional Exploration for Domain Generalization »
Rui Dai · Yonggang Zhang · Zhen Fang · Bo Han · Xinmei Tian -
2023 Poster: A Universal Unbiased Method for Classification from Aggregate Observations »
Zixi Wei · Lei Feng · Bo Han · Tongliang Liu · Gang Niu · Xiaofeng Zhu · Heng Tao Shen -
2023 Poster: Exploring Model Dynamics for Accumulative Poisoning Discovery »
Jianing Zhu · Xiawei Guo · Jiangchao Yao · Chao Du · LI He · Shuo Yuan · Tongliang Liu · Liang Wang · Bo Han -
2023 Poster: Evolving Semantic Prototype Improves Generative Zero-Shot Learning »
Shiming Chen · Wenjin Hou · Ziming Hong · Xiaohan Ding · Yibing Song · Xinge You · Tongliang Liu · Kun Zhang -
2023 Poster: Which is Better for Learning with Noisy Labels: The Semi-supervised Method or Modeling Label Noise? »
Yu Yao · Mingming Gong · Yuxuan Du · Jun Yu · Bo Han · Kun Zhang · Tongliang Liu -
2023 Poster: On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation »
Zhanke Zhou · Chenyu Zhou · Xuan Li · Jiangchao Yao · Quanming Yao · Bo Han -
2023 Poster: Phase-aware Adversarial Defense for Improving Adversarial Robustness »
Dawei Zhou · Nannan Wang · Heng Yang · Xinbo Gao · Tongliang Liu -
2023 Poster: Detecting Out-of-distribution Data through In-distribution Class Prior »
Xue Jiang · Feng Liu · Zhen Fang · Hong Chen · Tongliang Liu · Feng Zheng · Bo Han -
2022 Poster: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network »
Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu -
2022 Poster: Contrastive Learning with Boosted Memorization »
Zhihan Zhou · Jiangchao Yao · Yan-Feng Wang · Bo Han · Ya Zhang -
2022 Poster: Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning »
Zhenheng Tang · Yonggang Zhang · Shaohuai Shi · Xin He · Bo Han · Xiaowen Chu -
2022 Spotlight: Contrastive Learning with Boosted Memorization »
Zhihan Zhou · Jiangchao Yao · Yan-Feng Wang · Bo Han · Ya Zhang -
2022 Spotlight: Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning »
Zhenheng Tang · Yonggang Zhang · Shaohuai Shi · Xin He · Bo Han · Xiaowen Chu -
2022 Spotlight: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network »
Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu -
2022 Poster: Understanding Robust Overfitting of Adversarial Training and Beyond »
Chaojian Yu · Bo Han · Li Shen · Jun Yu · Chen Gong · Mingming Gong · Tongliang Liu -
2022 Poster: Modeling Adversarial Noise for Adversarial Training »
Dawei Zhou · Nannan Wang · Bo Han · Tongliang Liu -
2022 Spotlight: Understanding Robust Overfitting of Adversarial Training and Beyond »
Chaojian Yu · Bo Han · Li Shen · Jun Yu · Chen Gong · Mingming Gong · Tongliang Liu -
2022 Spotlight: Modeling Adversarial Noise for Adversarial Training »
Dawei Zhou · Nannan Wang · Bo Han · Tongliang Liu -
2022 Poster: Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack »
Ruize Gao · Jiongxiao Wang · Kaiwen Zhou · Feng Liu · Binghui Xie · Gang Niu · Bo Han · James Cheng -
2022 Poster: To Smooth or Not? When Label Smoothing Meets Noisy Labels »
Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu -
2022 Spotlight: Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack »
Ruize Gao · Jiongxiao Wang · Kaiwen Zhou · Feng Liu · Binghui Xie · Gang Niu · Bo Han · James Cheng -
2022 Oral: To Smooth or Not? When Label Smoothing Meets Noisy Labels »
Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu -
2021 Poster: Towards Defending against Adversarial Examples via Attack-Invariant Features »
Dawei Zhou · Tongliang Liu · Bo Han · Nannan Wang · Chunlei Peng · Xinbo Gao -
2021 Poster: Provably End-to-end Label-noise Learning without Anchor Points »
Xuefeng Li · Tongliang Liu · Bo Han · Gang Niu · Masashi Sugiyama -
2021 Poster: Learning Diverse-Structured Networks for Adversarial Robustness »
Xuefeng Du · Jingfeng Zhang · Bo Han · Tongliang Liu · Yu Rong · Gang Niu · Junzhou Huang · Masashi Sugiyama -
2021 Poster: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks »
Ruize Gao · Feng Liu · Jingfeng Zhang · Bo Han · Tongliang Liu · Gang Niu · Masashi Sugiyama -
2021 Spotlight: Towards Defending against Adversarial Examples via Attack-Invariant Features »
Dawei Zhou · Tongliang Liu · Bo Han · Nannan Wang · Chunlei Peng · Xinbo Gao -
2021 Spotlight: Provably End-to-end Label-noise Learning without Anchor Points »
Xuefeng Li · Tongliang Liu · Bo Han · Gang Niu · Masashi Sugiyama -
2021 Spotlight: Learning Diverse-Structured Networks for Adversarial Robustness »
Xuefeng Du · Jingfeng Zhang · Bo Han · Tongliang Liu · Yu Rong · Gang Niu · Junzhou Huang · Masashi Sugiyama -
2021 Spotlight: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks »
Ruize Gao · Feng Liu · Jingfeng Zhang · Bo Han · Tongliang Liu · Gang Niu · Masashi Sugiyama -
2021 Poster: Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels »
Songhua Wu · Xiaobo Xia · Tongliang Liu · Bo Han · Mingming Gong · Nannan Wang · Haifeng Liu · Gang Niu -
2021 Poster: Confidence Scores Make Instance-dependent Label-noise Learning Possible »
Antonin Berthon · Bo Han · Gang Niu · Tongliang Liu · Masashi Sugiyama -
2021 Spotlight: Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels »
Songhua Wu · Xiaobo Xia · Tongliang Liu · Bo Han · Mingming Gong · Nannan Wang · Haifeng Liu · Gang Niu -
2021 Oral: Confidence Scores Make Instance-dependent Label-noise Learning Possible »
Antonin Berthon · Bo Han · Gang Niu · Tongliang Liu · Masashi Sugiyama -
2020 Poster: Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks »
Yonggang Zhang · Ya Li · Tongliang Liu · Xinmei Tian -
2020 Poster: Learning with Bounded Instance- and Label-dependent Label Noise »
Jiacheng Cheng · Tongliang Liu · Kotagiri Ramamohanarao · Dacheng Tao -
2020 Poster: Label-Noise Robust Domain Adaptation »
Xiyu Yu · Tongliang Liu · Mingming Gong · Kun Zhang · Kayhan Batmanghelich · Dacheng Tao -
2020 Poster: LTF: A Label Transformation Framework for Correcting Label Shift »
Jiaxian Guo · Mingming Gong · Tongliang Liu · Kun Zhang · Dacheng Tao -
2018 Poster: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate »
Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang -
2018 Oral: Fast Stochastic AUC Maximization with $O(1/n)$-Convergence Rate »
Mingrui Liu · Xiaoxuan Zhang · Zaiyi Chen · Xiaoyu Wang · Tianbao Yang