Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
Training machine learning models that are robust against adversarial inputs poses seemingly insurmountable challenges. To better understand adversarial robustness, we consider the underlying problem of learning robust representations. We develop a notion of representation vulnerability that captures the maximum change in mutual information between the input and output distributions under the worst-case input perturbation. We then prove a theorem establishing a lower bound, in terms of a representation's vulnerability, on the minimum adversarial risk achievable by any downstream classifier. We propose an unsupervised learning method for obtaining intrinsically robust representations by maximizing the worst-case mutual information between the input and output distributions. Experiments on downstream classification tasks support the robustness of the representations found using unsupervised learning with our training principle.
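In symbols, a minimal sketch of the two quantities the abstract describes, with notation assumed here rather than quoted from the paper: an encoder g from a family \mathcal{G}, an input distribution X, mutual information I(\cdot\,;\cdot), and a ball \mathcal{B}(X, \epsilon) of input distributions reachable by \epsilon-bounded perturbations. Representation vulnerability is the worst-case drop in mutual information,

\[ \mathrm{RV}_\epsilon(g) \;=\; \max_{X' \in \mathcal{B}(X,\,\epsilon)} \Big[\, I\big(X;\, g(X)\big) \;-\; I\big(X';\, g(X')\big) \,\Big], \]

and the training principle selects the encoder that maximizes the worst-case mutual information,

\[ \max_{g \in \mathcal{G}} \;\; \min_{X' \in \mathcal{B}(X,\,\epsilon)} \; I\big(X';\, g(X')\big). \]

Since the unperturbed term I(X; g(X)) is fixed once g is chosen, maximizing the inner minimum directly limits how much mutual information a perturbation can destroy, which is what links low representation vulnerability to the lower bound on adversarial risk.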
Author Information
Sicheng Zhu (University of Virginia)
Xiao Zhang (University of Virginia)
David Evans (University of Virginia)
More from the Same Authors
- 2021 : Formalizing Distribution Inference Risks »
  Anshuman Suri · David Evans
- 2022 : Memorization in NLP Fine-tuning Methods »
  FatemehSadat Mireshghallah · Archit Uniyal · Tianhao Wang · David Evans · Taylor Berg-Kirkpatrick
- 2021 Poster: Model-Targeted Poisoning Attacks with Provable Convergence »
  Fnu Suya · Saeed Mahloujifar · Anshuman Suri · David Evans · Yuan Tian
- 2021 Spotlight: Model-Targeted Poisoning Attacks with Provable Convergence »
  Fnu Suya · Saeed Mahloujifar · Anshuman Suri · David Evans · Yuan Tian
- 2019 Workshop: Workshop on the Security and Privacy of Machine Learning »
  Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song
- 2018 Poster: Fast and Sample Efficient Inductive Matrix Completion via Multi-Phase Procrustes Flow »
  Xiao Zhang · Simon Du · Quanquan Gu
- 2018 Oral: Fast and Sample Efficient Inductive Matrix Completion via Multi-Phase Procrustes Flow »
  Xiao Zhang · Simon Du · Quanquan Gu
- 2018 Poster: A Primal-Dual Analysis of Global Optimality in Nonconvex Low-Rank Matrix Recovery »
  Xiao Zhang · Lingxiao Wang · Yaodong Yu · Quanquan Gu
- 2018 Oral: A Primal-Dual Analysis of Global Optimality in Nonconvex Low-Rank Matrix Recovery »
  Xiao Zhang · Lingxiao Wang · Yaodong Yu · Quanquan Gu
- 2017 Poster: A Unified Variance Reduction-Based Framework for Nonconvex Low-Rank Matrix Recovery »
  Lingxiao Wang · Xiao Zhang · Quanquan Gu
- 2017 Talk: A Unified Variance Reduction-Based Framework for Nonconvex Low-Rank Matrix Recovery »
  Lingxiao Wang · Xiao Zhang · Quanquan Gu