For Manifold Learning, Deep Neural Networks can be Locality Sensitive Hash Functions
Nishanth Dikkala · Gal Kaplun · Rina Panigrahy
Fri Jul 22 01:45 PM – 03:00 PM (PDT)
It is well established that training deep neural networks gives useful representations that capture essential features of the inputs. However, these representations are poorly understood in theory and practice. An important question for supervised learning is whether these representations capture features informative for classification, while filtering out non-informative noisy ones. We study this question formally by considering a generative process where each class is associated with a high-dimensional manifold and different classes define different manifolds. Each input of a class is produced using two latent vectors: (i) a "manifold identifier" $\gamma$; and (ii) a "transformation parameter" $\theta$ that shifts examples along the surface of a manifold. E.g., $\gamma$ might represent a canonical image of a dog, and $\theta$ might stand for variations in pose or lighting. We provide theoretical evidence that neural representations can be viewed as LSH-like functions that map each input to an embedding that is a function of solely the informative $\gamma$ and invariant to $\theta$, effectively recovering the manifold identifier $\gamma$. We prove that one-shot learning of unseen classes follows as a desirable consequence of this behavior.
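The generative model and the LSH-like view can be illustrated with a toy linear sketch (an illustrative assumption, not the paper's construction): each input is $x = \gamma + U\theta$, where the columns of a hypothetical matrix $U$ span the directions along which $\theta$ moves points on the manifold. An idealized embedding projects out the $U$-subspace, leaving a quantity that depends only on $\gamma$, and then takes random sign bits, as in classical hyperplane LSH.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_bits = 64, 8, 16  # ambient dim, transformation dim, hash length

# Random orthonormal basis for the "transformation" subspace along
# which theta shifts examples on the manifold (toy assumption).
U, _ = np.linalg.qr(rng.standard_normal((d, k)))

# Random hyperplanes for the sign-bit hash.
proj = rng.standard_normal((n_bits, d))

def sample(gamma):
    """Generate an input from manifold identifier gamma and a random theta."""
    theta = rng.standard_normal(k)
    return gamma + U @ theta

def lsh_embedding(x):
    """Idealized LSH-like representation: remove the theta-component by
    projecting onto the orthogonal complement of U, then take sign bits."""
    x_perp = x - U @ (U.T @ x)  # invariant to theta by construction
    return np.sign(proj @ x_perp)

# Two classes, e.g. "dog" and "cat", each defined by its own gamma.
gamma_dog = 5.0 * rng.standard_normal(d)
gamma_cat = 5.0 * rng.standard_normal(d)

h1 = lsh_embedding(sample(gamma_dog))  # same class, different theta...
h2 = lsh_embedding(sample(gamma_dog))  # ...yields the identical hash
h3 = lsh_embedding(sample(gamma_cat))  # a different class hashes differently
```

Because $(I - UU^\top)U\theta = 0$, two inputs with the same $\gamma$ map to exactly the same code regardless of $\theta$, which is the invariance that makes one-shot recognition of a new class possible from a single example.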
Author Information
Nishanth Dikkala (Google Research)
Gal Kaplun (Harvard)
Rina Panigrahy (Google)
More from the Same Authors

2022 : Provable Hierarchical Lifelong Learning with a Sketch-based Modular Architecture »
ZIHAO DENG · Zee Fryer · Brendan Juba · Rina Panigrahy · Xin Wang 
2022 : A Theoretical View on Sparsely Activated Networks »
Cenk Baykal · Nishanth Dikkala · Rina Panigrahy · Cyrus Rashtchian · Xin Wang 
2022 Poster: Do More Negative Samples Necessarily Hurt In Contrastive Learning? »
Pranjal Awasthi · Nishanth Dikkala · Pritish Kamath 
2022 Oral: Do More Negative Samples Necessarily Hurt In Contrastive Learning? »
Pranjal Awasthi · Nishanth Dikkala · Pritish Kamath 
2021 Poster: Statistical Estimation from Dependent Data »
Vardis Kandiros · Yuval Dagan · Nishanth Dikkala · Surbhi Goel · Constantinos Daskalakis 
2021 Spotlight: Statistical Estimation from Dependent Data »
Vardis Kandiros · Yuval Dagan · Nishanth Dikkala · Surbhi Goel · Constantinos Daskalakis 
2019 Poster: Robust Influence Maximization for Hyperparametric Models »
Dimitrios Kalimeris · Gal Kaplun · Yaron Singer 
2019 Oral: Robust Influence Maximization for Hyperparametric Models »
Dimitrios Kalimeris · Gal Kaplun · Yaron Singer