Deep neural networks (DNNs) are vulnerable to adversarial noise, and defenses based on denoising models are a major protection strategy. However, denoising models may fail and even induce negative effects in fully white-box scenarios. In this work, we exploit the inherent latent properties of adversarial samples to overcome these limitations. Rather than solely learning a mapping from adversarial samples to natural samples, we aim to achieve denoising by destroying the spatial characteristics of adversarial noise while preserving the robust features of natural information. Motivated by this, we propose a defense based on information discarding and robust representation restoration. Our method utilizes complementary masks to disrupt adversarial noise and guided denoising models to restore robust, predictive representations from the masked samples. Experimental results show that our method achieves competitive performance against white-box attacks and effectively reverses the negative effects of denoising models.
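The complementary-masking step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `complementary_masks` and the masking ratio are hypothetical, and the guided denoising model that restores representations from the masked views is omitted.

```python
import numpy as np

def complementary_masks(shape, mask_ratio=0.5, seed=0):
    """Generate a random binary mask and its complement.

    Each pixel is kept by exactly one of the two masks, so the two
    masked views jointly cover the whole image while each view
    individually breaks the spatial structure of any additive noise.
    """
    rng = np.random.default_rng(seed)
    mask = (rng.random(shape) < mask_ratio).astype(np.float32)
    return mask, 1.0 - mask

# Apply the masks to an (H, W, C) image.
image = np.ones((4, 4, 3), dtype=np.float32)
m1, m2 = complementary_masks(image.shape)
view1, view2 = image * m1, image * m2

# Because the masks are complementary, the two views sum to the original.
assert np.allclose(view1 + view2, image)
```

In the paper's pipeline, each masked view would then be passed to a guided denoising model that restores a robust, predictive representation; the complementarity ensures no natural information is discarded by both views at once.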
Author Information
Dawei Zhou (Xidian University)
Yukun Chen (Xidian University)
Nannan Wang (Xidian University)
Decheng Liu (Xidian University)
Xinbo Gao (Chongqing University of Posts and Telecommunications)
Tongliang Liu (The University of Sydney)
More from the Same Authors
- 2022: Invariance Principle Meets Out-of-Distribution Generalization on Graphs
  Yongqiang Chen · Yonggang Zhang · Yatao Bian · Han Yang · Kaili Ma · Binghui Xie · Tongliang Liu · Bo Han · James Cheng
- 2023: Advancing Counterfactual Inference through Quantile Regression
  Shaoan Xie · Biwei Huang · Bin Gu · Tongliang Liu · Kun Zhang
- 2023 Poster: Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation
  Ruijiang Dong · Feng Liu · Haoang Chi · Tongliang Liu · Mingming Gong · Gang Niu · Masashi Sugiyama · Bo Han
- 2023 Poster: Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability
  Jianing Zhu · Hengzhuang Li · Jiangchao Yao · Tongliang Liu · Jianliang Xu · Bo Han
- 2023 Poster: A Universal Unbiased Method for Classification from Aggregate Observations
  Zixi Wei · Lei Feng · Bo Han · Tongliang Liu · Gang Niu · Xiaofeng Zhu · Heng Tao Shen
- 2023 Poster: Exploring Model Dynamics for Accumulative Poisoning Discovery
  Jianing Zhu · Xiawei Guo · Jiangchao Yao · Chao Du · Li He · Shuo Yuan · Tongliang Liu · Liang Wang · Bo Han
- 2023 Poster: Evolving Semantic Prototype Improves Generative Zero-Shot Learning
  Shiming Chen · Wenjin Hou · Ziming Hong · Xiaohan Ding · Yibing Song · Xinge You · Tongliang Liu · Kun Zhang
- 2023 Poster: Which is Better for Learning with Noisy Labels: The Semi-supervised Method or Modeling Label Noise?
  Yu Yao · Mingming Gong · Yuxuan Du · Jun Yu · Bo Han · Kun Zhang · Tongliang Liu
- 2023 Poster: Phase-aware Adversarial Defense for Improving Adversarial Robustness
  Dawei Zhou · Nannan Wang · Heng Yang · Xinbo Gao · Tongliang Liu
- 2023 Poster: Detecting Out-of-distribution Data through In-distribution Class Prior
  Xue Jiang · Feng Liu · Zhen Fang · Hong Chen · Tongliang Liu · Feng Zheng · Bo Han
- 2022 Poster: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network
  Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu
- 2022 Spotlight: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network
  Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu
- 2022 Poster: Understanding Robust Overfitting of Adversarial Training and Beyond
  Chaojian Yu · Bo Han · Li Shen · Jun Yu · Chen Gong · Mingming Gong · Tongliang Liu
- 2022 Poster: Modeling Adversarial Noise for Adversarial Training
  Dawei Zhou · Nannan Wang · Bo Han · Tongliang Liu
- 2022 Poster: Improving Adversarial Robustness via Mutual Information Estimation
  Dawei Zhou · Nannan Wang · Xinbo Gao · Bo Han · Xiaoyu Wang · Yibing Zhan · Tongliang Liu
- 2022 Spotlight: Understanding Robust Overfitting of Adversarial Training and Beyond
  Chaojian Yu · Bo Han · Li Shen · Jun Yu · Chen Gong · Mingming Gong · Tongliang Liu
- 2022 Spotlight: Improving Adversarial Robustness via Mutual Information Estimation
  Dawei Zhou · Nannan Wang · Xinbo Gao · Bo Han · Xiaoyu Wang · Yibing Zhan · Tongliang Liu
- 2022 Spotlight: Modeling Adversarial Noise for Adversarial Training
  Dawei Zhou · Nannan Wang · Bo Han · Tongliang Liu
- 2022 Poster: To Smooth or Not? When Label Smoothing Meets Noisy Labels
  Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu
- 2022 Oral: To Smooth or Not? When Label Smoothing Meets Noisy Labels
  Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu
- 2021 Poster: Towards Defending against Adversarial Examples via Attack-Invariant Features
  Dawei Zhou · Tongliang Liu · Bo Han · Nannan Wang · Chunlei Peng · Xinbo Gao
- 2021 Poster: Provably End-to-end Label-noise Learning without Anchor Points
  Xuefeng Li · Tongliang Liu · Bo Han · Gang Niu · Masashi Sugiyama
- 2021 Poster: Learning Diverse-Structured Networks for Adversarial Robustness
  Xuefeng Du · Jingfeng Zhang · Bo Han · Tongliang Liu · Yu Rong · Gang Niu · Junzhou Huang · Masashi Sugiyama
- 2021 Poster: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
  Ruize Gao · Feng Liu · Jingfeng Zhang · Bo Han · Tongliang Liu · Gang Niu · Masashi Sugiyama
- 2021 Spotlight: Towards Defending against Adversarial Examples via Attack-Invariant Features
  Dawei Zhou · Tongliang Liu · Bo Han · Nannan Wang · Chunlei Peng · Xinbo Gao
- 2021 Spotlight: Provably End-to-end Label-noise Learning without Anchor Points
  Xuefeng Li · Tongliang Liu · Bo Han · Gang Niu · Masashi Sugiyama
- 2021 Spotlight: Learning Diverse-Structured Networks for Adversarial Robustness
  Xuefeng Du · Jingfeng Zhang · Bo Han · Tongliang Liu · Yu Rong · Gang Niu · Junzhou Huang · Masashi Sugiyama
- 2021 Spotlight: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
  Ruize Gao · Feng Liu · Jingfeng Zhang · Bo Han · Tongliang Liu · Gang Niu · Masashi Sugiyama
- 2021 Poster: Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels
  Songhua Wu · Xiaobo Xia · Tongliang Liu · Bo Han · Mingming Gong · Nannan Wang · Haifeng Liu · Gang Niu
- 2021 Poster: Confidence Scores Make Instance-dependent Label-noise Learning Possible
  Antonin Berthon · Bo Han · Gang Niu · Tongliang Liu · Masashi Sugiyama
- 2021 Spotlight: Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels
  Songhua Wu · Xiaobo Xia · Tongliang Liu · Bo Han · Mingming Gong · Nannan Wang · Haifeng Liu · Gang Niu
- 2021 Oral: Confidence Scores Make Instance-dependent Label-noise Learning Possible
  Antonin Berthon · Bo Han · Gang Niu · Tongliang Liu · Masashi Sugiyama
- 2020 Poster: Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks
  Yonggang Zhang · Ya Li · Tongliang Liu · Xinmei Tian
- 2020 Poster: Learning with Bounded Instance- and Label-dependent Label Noise
  Jiacheng Cheng · Tongliang Liu · Kotagiri Ramamohanarao · Dacheng Tao
- 2020 Poster: Label-Noise Robust Domain Adaptation
  Xiyu Yu · Tongliang Liu · Mingming Gong · Kun Zhang · Kayhan Batmanghelich · Dacheng Tao
- 2020 Poster: LTF: A Label Transformation Framework for Correcting Label Shift
  Jiaxian Guo · Mingming Gong · Tongliang Liu · Kun Zhang · Dacheng Tao