Identifiability is a desirable property of a statistical model: it implies that the true model parameters may be estimated to any desired precision, given sufficient computational resources and data. We study identifiability in the context of representation learning: discovering nonlinear data representations that are optimal with respect to some downstream task. When parameterized as deep neural networks, such representation functions lack identifiability in parameter space, because they are over-parameterized by design. In this paper, building on recent advances in nonlinear Independent Components Analysis, we aim to rehabilitate identifiability by showing that a large family of discriminative models are in fact identifiable in function space, up to a linear indeterminacy. Many models for representation learning in a wide variety of domains turn out to be identifiable in this sense, including models for text, images, and audio that were state-of-the-art at the time of publication. We derive sufficient conditions for linear identifiability and provide empirical support for the result on both simulated and real-world data.
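As an illustration only (not code from the paper), linear identifiability can be probed empirically by checking whether the representations produced by two independently trained models agree up to a linear map. The minimal sketch below fits that map by least squares and reports the R² of the fit; the function name, dimensions, and synthetic data are assumptions made for the example.

```python
# Illustrative sketch: test whether two sets of representations of the same
# inputs differ only by a linear map, which is what linear identifiability
# (identifiability in function space up to a linear indeterminacy) predicts.
import numpy as np

def linear_fit_r2(z1, z2):
    """Fit z2 ~ z1 @ W by least squares and return the R^2 of the fit.

    z1, z2: (n_samples, dim) arrays of representations of the same inputs
    from two independently trained models. An R^2 close to 1 is consistent
    with the two representations agreeing up to a linear transformation.
    """
    W, *_ = np.linalg.lstsq(z1, z2, rcond=None)
    residual = z2 - z1 @ W
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((z2 - z2.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic example: z2 is an exact linear transform of z1, so the fitted
# R^2 should be (numerically) 1.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(1000, 16))
z2 = z1 @ rng.normal(size=(16, 16))   # a hypothetical linear indeterminacy
print(linear_fit_r2(z1, z2))          # ~ 1.0
```

In practice one would replace the synthetic arrays with representations extracted from two training runs (e.g., different random seeds) of the same discriminative model.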
Author Information
Geoffrey Roeder (Princeton University)
Luke Metz (Google Brain)
Durk Kingma (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: On Linear Identifiability of Learned Representations
  Thu. Jul 22nd 04:00 -- 06:00 PM
More from the Same Authors
- 2022: Discovered Policy Optimisation
  Christopher Lu · Jakub Grudzien Kuba · Alistair Letcher · Luke Metz · Christian Schroeder · Jakob Foerster
- 2021 Poster: Learn2Hop: Learned Optimization on Rough Landscapes
  Amil Merchant · Luke Metz · Samuel Schoenholz · Ekin Dogus Cubuk
- 2021 Spotlight: Learn2Hop: Learned Optimization on Rough Landscapes
  Amil Merchant · Luke Metz · Samuel Schoenholz · Ekin Dogus Cubuk
- 2021 Poster: Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies
  Paul Vicol · Luke Metz · Jascha Sohl-Dickstein
- 2021 Oral: Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies
  Paul Vicol · Luke Metz · Jascha Sohl-Dickstein
- 2019: Spotlight
  Tyler Scott · Kiran Thekumparampil · Jonathan Aigrain · Rene Bidart · Priyadarshini Panda · Dian Ang Yap · Yaniv Yacoby · Raphael Gontijo Lopes · Alberto Marchisio · Erik Englesson · Wanqian Yang · Moritz Graule · Yi Sun · Daniel Kang · Mike Dusenberry · Min Du · Hartmut Maennel · Kunal Menda · Vineet Edupuganti · Luke Metz · David Stutz · Vignesh Srinivasan · Timo Sämann · Vineeth N Balasubramanian · Sina Mohseni · Rob Cornish · Judith Butepage · Zhangyang Wang · Bai Li · Bo Han · Honglin Li · Maksym Andriushchenko · Lukas Ruff · Meet P. Vadera · Yaniv Ovadia · Sunil Thulasidasan · Disi Ji · Gang Niu · Saeed Mahloujifar · Aviral Kumar · SANGHYUK CHUN · Dong Yin · Joyce Xu Xu · Hugo Gomes · Raanan Rohekar
- 2019 Poster: Understanding and correcting pathologies in the training of learned optimizers
  Luke Metz · Niru Maheswaranathan · Jeremy Nixon · Daniel Freeman · Jascha Sohl-Dickstein
- 2019 Poster: Guided evolutionary strategies: augmenting random search with surrogate gradients
  Niru Maheswaranathan · Luke Metz · George Tucker · Dami Choi · Jascha Sohl-Dickstein
- 2019 Oral: Guided evolutionary strategies: augmenting random search with surrogate gradients
  Niru Maheswaranathan · Luke Metz · George Tucker · Dami Choi · Jascha Sohl-Dickstein
- 2019 Oral: Understanding and correcting pathologies in the training of learned optimizers
  Luke Metz · Niru Maheswaranathan · Jeremy Nixon · Daniel Freeman · Jascha Sohl-Dickstein
- 2019 Poster: Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems
  Ted Meeds · Geoffrey Roeder · Paul Grant · Andrew Phillips · Neil Dalchau
- 2019 Oral: Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems
  Ted Meeds · Geoffrey Roeder · Paul Grant · Andrew Phillips · Neil Dalchau