Max-Mahalanobis Linear Discriminant Analysis Networks
Tianyu Pang · Chao Du · Jun Zhu

Thu Jul 12 09:15 AM -- 12:00 PM (PDT) @ Hall B #38

A deep neural network (DNN) consists of a nonlinear transformation from an input to a feature representation, followed by a common softmax linear classifier. Though many efforts have been devoted to designing a proper architecture for the nonlinear transformation, little investigation has been done on the classifier part. In this paper, we show that a properly designed classifier can improve robustness to adversarial attacks and lead to better prediction results. Specifically, we define a Max-Mahalanobis distribution (MMD) and theoretically show that if the input is distributed as an MMD, the linear discriminant analysis (LDA) classifier will have the best robustness to adversarial examples. We further propose a novel Max-Mahalanobis linear discriminant analysis (MM-LDA) network, which explicitly maps a complicated data distribution in the input space to an MMD in the latent feature space and then applies LDA to make predictions. Our results demonstrate that MM-LDA networks are significantly more robust to adversarial attacks, and have better performance in class-biased classification.
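The MMD places the class means so that the minimal pairwise Mahalanobis distance is maximized; with equal, isotropic class covariances, this is equivalent (up to rotation) to placing the means on a regular simplex, and LDA prediction then reduces to nearest-center classification in feature space. A minimal NumPy sketch of that configuration, assuming the simplex-based placement and the function names `max_mahalanobis_centers` and `lda_predict` (illustrative, not from the paper's code):

```python
import numpy as np

def max_mahalanobis_centers(num_classes, dim, C=1.0):
    """Place num_classes centers of norm C on a regular simplex in a
    dim-dimensional feature space, so all pairwise distances are equal
    and maximal for the given norm (an equivalent Max-Mahalanobis
    configuration, assuming isotropic shared covariance)."""
    L = num_classes
    assert dim >= L, "feature dimension must be at least num_classes here"
    # Rows of sqrt(L/(L-1)) * (I - 11^T / L) are unit vectors with
    # pairwise inner product -1/(L-1), i.e. simplex vertices.
    M = np.sqrt(L / (L - 1)) * (np.eye(L) - np.ones((L, L)) / L)
    centers = np.zeros((L, dim))
    centers[:, :L] = C * M  # embed the simplex into the feature space
    return centers

def lda_predict(features, centers):
    """With a shared isotropic covariance and equal priors, LDA reduces
    to assigning each feature vector to its nearest class center."""
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```

For example, with 10 classes in a 64-dimensional feature space, all centers share the same norm and the same pairwise angle, so no class pair is easier to attack than another.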

Author Information

Tianyu Pang (Tsinghua University)
Chao Du (Tsinghua University)
Jun Zhu (Tsinghua University)
