

Poster in Workshop: Topology, Algebra, and Geometry in Machine Learning

Robust Lp-Norm Linear Discriminant Analysis with Proxy Matrix Optimization

Navya Nagananda · Breton Minnehan · Andreas Savakis


Abstract:

Linear Discriminant Analysis (LDA) is an established supervised dimensionality reduction method that is traditionally based on the L2-norm. However, standard L2-norm LDA is susceptible to outliers in the data, which often cause a drop in accuracy. Using the L1-norm or fractional p-norms makes LDA more robust to outliers, but the resulting objective functions are harder to optimize. In this paper, we leverage the orthogonality constraint of the Grassmann manifold to iteratively obtain the optimal projection matrix that maps the data to a lower-dimensional space. Instead of optimizing the matrix directly on the manifold, we use the proxy matrix optimization (PMO) method, which maintains an auxiliary matrix in the ambient space that is retracted to the closest location on the manifold along the loss-minimizing geodesic. Lp-LDA-PMO learning is based on backpropagation, which allows easy integration into a neural network and the flexibility to change the value of the p-norm. Our experiments on synthetic and real data show that using fractional p-norms for LDA improves accuracy compared to traditional L2-based LDA.
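To make the proxy-matrix idea concrete, below is a minimal PyTorch sketch. The abstract does not give the paper's exact objective or its geodesic retraction, so this is a sketch under stated assumptions: a difference-based Lp scatter objective (within-class minus between-class scatter, each measured with the p-norm) stands in for the paper's Lp-LDA loss, and a polar (SVD) retraction to the nearest orthonormal matrix stands in for the geodesic retraction. The function `lp_lda_loss`, the random data, and all hyperparameters are illustrative, not the authors' implementation.

```python
import torch

def retract(proxy):
    # Project the ambient-space proxy matrix onto the manifold of
    # orthonormal frames: the closest orthonormal matrix to `proxy`
    # (in Frobenius norm) is U @ Vh from its thin SVD.
    U, _, Vh = torch.linalg.svd(proxy, full_matrices=False)
    return U @ Vh

def lp_lda_loss(W, X, y, p=0.5, eps=1e-8):
    # Assumed difference-based Lp-LDA objective: minimize within-class
    # scatter and maximize between-class scatter, each measured with
    # the p-norm of projected deviations instead of the squared L2-norm.
    Z = X @ W                             # project data to d dimensions
    mu = Z.mean(dim=0)                    # global mean in projected space
    within, between = 0.0, 0.0
    for c in y.unique():
        Zc = Z[y == c]
        mu_c = Zc.mean(dim=0)
        within = within + ((Zc - mu_c).abs() + eps).pow(p).sum()
        between = between + len(Zc) * ((mu_c - mu).abs() + eps).pow(p).sum()
    return within - between               # smaller is better

# Illustrative data: X is (n, D) features, y is (n,) class labels.
torch.manual_seed(0)
X = torch.randn(200, 10)
y = torch.randint(0, 3, (200,))

d = 2                                               # target dimensionality
proxy = torch.randn(10, d, requires_grad=True)      # unconstrained proxy matrix
opt = torch.optim.Adam([proxy], lr=1e-2)

for step in range(500):
    opt.zero_grad()
    W = retract(proxy)        # retraction keeps the projection orthonormal
    loss = lp_lda_loss(W, X, y, p=0.5)
    loss.backward()           # gradients flow through the SVD to the proxy
    opt.step()

W_final = retract(proxy).detach()         # learned orthonormal projection
```

The design point this sketch tries to capture is the one the abstract emphasizes: the optimizer never works on the manifold directly. It updates the unconstrained proxy with ordinary backpropagation, and because the SVD-based retraction is differentiable, the orthogonality constraint is enforced inside the forward pass rather than by a constrained optimizer, which also makes it straightforward to change the value of p.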
