Max-Mahalanobis Linear Discriminant Analysis Networks »
A deep neural network (DNN) consists of a nonlinear transformation from an input to a feature representation, followed by a common softmax linear classifier. While much effort has been devoted to designing proper architectures for the nonlinear transformation, little investigation has been done on the classifier part. In this paper, we show that a properly designed classifier can improve robustness to adversarial attacks and lead to better prediction results. Specifically, we define a Max-Mahalanobis distribution (MMD) and theoretically show that if the input follows an MMD, the linear discriminant analysis (LDA) classifier will have the best robustness to adversarial examples. We further propose a novel Max-Mahalanobis linear discriminant analysis (MM-LDA) network, which explicitly maps a complicated data distribution in the input space to an MMD in the latent feature space and then applies LDA to make predictions. Our results demonstrate that MM-LDA networks are significantly more robust to adversarial attacks and perform better on class-biased classification.
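The two ingredients of the abstract can be illustrated concretely: class means spread as far apart as possible (the Max-Mahalanobis idea, realized below by placing unit-norm means at the vertices of a regular simplex), and an LDA classifier in feature space (with a shared identity covariance and equal priors, LDA reduces to nearest-mean classification). This is a minimal NumPy sketch under those simplifying assumptions, not the paper's exact algorithm; `simplex_means` and `lda_predict` are illustrative names.

```python
import numpy as np

def simplex_means(num_classes, dim, scale=1.0):
    # Place unit-norm class means at the vertices of a regular simplex,
    # so every pair has inner product -1/(num_classes - 1); this
    # maximizes the minimum pairwise distance among equal-norm means.
    # Requires dim >= num_classes.
    means = np.zeros((num_classes, dim))
    means[0, 0] = 1.0
    for i in range(1, num_classes):
        for j in range(i):
            # Solve for the j-th coordinate from the target inner
            # product with the already-constructed mean j.
            partial = means[i, :j] @ means[j, :j]
            means[i, j] = (-1.0 / (num_classes - 1) - partial) / means[j, j]
        # Remaining mass goes on a fresh coordinate to keep unit norm
        # (max() guards against tiny negative rounding error).
        means[i, i] = np.sqrt(max(0.0, 1.0 - np.sum(means[i, :i] ** 2)))
    return scale * means

def lda_predict(features, means):
    # LDA with shared identity covariance and equal priors is just
    # nearest-mean classification in the feature space.
    dists = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=-1)
    return np.argmin(dists, axis=1)

M = simplex_means(3, 4)        # 3 classes embedded in a 4-dim feature space
labels = lda_predict(M, M)     # each mean is classified as its own class
```

For 3 classes this recovers an equilateral triangle of means, e.g. (1, 0), (-1/2, √3/2), (-1/2, -√3/2) in the first two coordinates, with all pairwise inner products equal to -1/2.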
Author Information
Tianyu Pang (Tsinghua University)
Chao Du (Tsinghua University)
Jun Zhu (Tsinghua University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Max-Mahalanobis Linear Discriminant Analysis Networks »
  Thu. Jul 12th 03:30 -- 03:40 PM Room A7
More from the Same Authors
- 2021 : Towards Safe Reinforcement Learning via Constraining Conditional Value at Risk »
  Chengyang Ying · Xinning Zhou · Dong Yan · Jun Zhu
- 2021 : Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks »
  Xiao Yang · Yinpeng Dong · Tianyu Pang
- 2021 : Strategically-timed State-Observation Attacks on Deep Reinforcement Learning Agents »
  Xinning Zhou · You Qiaoben · Chengyang Ying · Jun Zhu
- 2021 : Adversarial Semantic Contour for Object Detection »
  Yichi Zhang · Zijian Zhu · Xiao Yang · Jun Zhu
- 2021 : Query-based Adversarial Attacks on Graph with Fake Nodes »
  Zhengyi Wang · Zhongkai Hao · Jun Zhu
- 2022 Poster: NeuralEF: Deconstructing Kernels by Deep Neural Networks »
  Zhijie Deng · Jiaxin Shi · Jun Zhu
- 2022 Spotlight: NeuralEF: Deconstructing Kernels by Deep Neural Networks »
  Zhijie Deng · Jiaxin Shi · Jun Zhu
- 2022 Poster: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition »
  Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan
- 2022 Poster: Fast Lossless Neural Compression with Integer-Only Discrete Flows »
  Siyu Wang · Jianfei Chen · Chongxuan Li · Jun Zhu · Bo Zhang
- 2022 Spotlight: Fast Lossless Neural Compression with Integer-Only Discrete Flows »
  Siyu Wang · Jianfei Chen · Chongxuan Li · Jun Zhu · Bo Zhang
- 2022 Spotlight: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition »
  Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan
- 2022 Poster: Thompson Sampling for (Combinatorial) Pure Exploration »
  Siwei Wang · Jun Zhu
- 2022 Spotlight: Thompson Sampling for (Combinatorial) Pure Exploration »
  Siwei Wang · Jun Zhu
- 2021 Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning »
  Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian
- 2021 Poster: Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models »
  Fan Bao · Kun Xu · Chongxuan Li · Lanqing Hong · Jun Zhu · Bo Zhang
- 2021 Spotlight: Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models »
  Fan Bao · Kun Xu · Chongxuan Li · Lanqing Hong · Jun Zhu · Bo Zhang
- 2020 Poster: Understanding and Stabilizing GANs' Training Dynamics Using Control Theory »
  Kun Xu · Chongxuan Li · Jun Zhu · Bo Zhang
- 2020 Poster: Variance Reduction and Quasi-Newton for Particle-Based Variational Inference »
  Michael Zhu · Chang Liu · Jun Zhu
- 2020 Poster: VFlow: More Expressive Generative Flows with Variational Data Augmentation »
  Jianfei Chen · Cheng Lu · Biqi Chenli · Jun Zhu · Tian Tian
- 2020 Poster: Nonparametric Score Estimators »
  Yuhao Zhou · Jiaxin Shi · Jun Zhu
- 2019 Poster: Improving Adversarial Robustness via Promoting Ensemble Diversity »
  Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu
- 2019 Oral: Improving Adversarial Robustness via Promoting Ensemble Diversity »
  Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu
- 2018 Poster: Message Passing Stein Variational Gradient Descent »
  Jingwei Zhuo · Chang Liu · Jiaxin Shi · Jun Zhu · Ning Chen · Bo Zhang
- 2018 Poster: Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors »
  Yichi Zhou · Jun Zhu · Jingwei Zhuo
- 2018 Oral: Message Passing Stein Variational Gradient Descent »
  Jingwei Zhuo · Chang Liu · Jiaxin Shi · Jun Zhu · Ning Chen · Bo Zhang
- 2018 Oral: Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors »
  Yichi Zhou · Jun Zhu · Jingwei Zhuo
- 2018 Poster: Adversarial Attack on Graph Structured Data »
  Hanjun Dai · Hui Li · Tian Tian · Xin Huang · Lin Wang · Jun Zhu · Le Song
- 2018 Oral: Adversarial Attack on Graph Structured Data »
  Hanjun Dai · Hui Li · Tian Tian · Xin Huang · Lin Wang · Jun Zhu · Le Song
- 2018 Poster: Stochastic Training of Graph Convolutional Networks with Variance Reduction »
  Jianfei Chen · Jun Zhu · Le Song
- 2018 Poster: A Spectral Approach to Gradient Estimation for Implicit Distributions »
  Jiaxin Shi · Shengyang Sun · Jun Zhu
- 2018 Oral: A Spectral Approach to Gradient Estimation for Implicit Distributions »
  Jiaxin Shi · Shengyang Sun · Jun Zhu
- 2018 Oral: Stochastic Training of Graph Convolutional Networks with Variance Reduction »
  Jianfei Chen · Jun Zhu · Le Song
- 2017 Poster: Identify the Nash Equilibrium in Static Games with Random Payoffs »
  Yichi Zhou · Jialian Li · Jun Zhu
- 2017 Talk: Identify the Nash Equilibrium in Static Games with Random Payoffs »
  Yichi Zhou · Jialian Li · Jun Zhu