

Poster in Workshop: Topology, Algebra, and Geometry in Machine Learning

For Manifold Learning, Deep Neural Networks can be Locality Sensitive Hash Functions

Nishanth Dikkala · Gal Kaplun · Rina Panigrahy


Abstract: It is well established that training deep neural networks yields useful representations that capture essential features of the inputs. However, these representations are poorly understood in both theory and practice. An important question for supervised learning is whether these representations capture features informative for classification while filtering out non-informative noisy ones. We study this question formally by considering a generative process where each class is associated with a high-dimensional manifold and different classes define different manifolds. Each input of a class is produced from two latent vectors: (i) a ``manifold identifier'' $\gamma$, and (ii) a ``transformation parameter'' $\theta$ that shifts examples along the surface of a manifold. For example, $\gamma$ might represent a canonical image of a dog, and $\theta$ might stand for variations in pose or lighting. We provide theoretical evidence that neural representations can be viewed as locality-sensitive hash (LSH)-like functions that map each input to an embedding that is a function solely of the informative $\gamma$ and is invariant to $\theta$, effectively recovering the manifold identifier $\gamma$. We prove that this behavior yields one-shot learning for unseen classes as a desirable consequence.
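To make the setup concrete, the following is a minimal, purely illustrative sketch (not the authors' construction or code). It fakes the generative process with a per-class latent $\gamma$ plus a tangential shift $\theta$, and uses random sign projections (SimHash) as a stand-in for an idealized LSH-like representation; all names, dimensions, and noise scales are hypothetical choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64            # ambient input dimension (illustrative choice)
num_classes = 5
num_hashes = 32

# One canonical latent vector ("manifold identifier" gamma) per class.
gammas = rng.normal(size=(num_classes, d))

def generate_example(class_id, noise_scale=0.1):
    """Toy generative process: x = gamma + tangential shift parameterized by theta."""
    gamma = gammas[class_id]
    theta = rng.normal(size=d) * noise_scale
    # Remove the component of theta along gamma so the shift moves the point
    # "along" the manifold rather than changing its identifier.
    theta -= (theta @ gamma) / (gamma @ gamma) * gamma
    return gamma + theta

# Fixed random projections play the role of the learned representation.
projections = rng.normal(size=(num_hashes, d))

def lsh_like_embedding(x):
    """SimHash-style code: inputs sharing the same gamma (small theta) tend to collide."""
    return np.sign(projections @ x)

# Same class: hash codes agree on most bits; different classes: roughly half.
x1, x2 = generate_example(0), generate_example(0)
y = generate_example(1)
print(np.mean(lsh_like_embedding(x1) == lsh_like_embedding(x2)))  # close to 1.0
print(np.mean(lsh_like_embedding(x1) == lsh_like_embedding(y)))   # around 0.5
```

In the paper's framing, the trained network's embedding would take the place of the random projections here, mapping inputs to codes that depend on $\gamma$ alone and are invariant to $\theta$.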
