Spotlight
On Linear Identifiability of Learned Representations
Geoffrey Roeder · Luke Metz · Durk Kingma

Thu Jul 22 05:25 AM -- 05:30 AM (PDT)

Identifiability is a desirable property of a statistical model: it implies that the true model parameters may be estimated to any desired precision, given sufficient computational resources and data. We study identifiability in the context of representation learning: discovering nonlinear data representations that are optimal with respect to some downstream task. When parameterized as deep neural networks, such representation functions lack identifiability in parameter space, because they are over-parameterized by design. In this paper, building on recent advances in nonlinear Independent Components Analysis, we aim to rehabilitate identifiability by showing that a large family of discriminative models is in fact identifiable in function space, up to a linear indeterminacy. Many representation learning models across a wide variety of domains, including text, images, and audio, that were state-of-the-art at the time of publication are identifiable in this sense. We derive sufficient conditions for linear identifiability and provide empirical support for the result on both simulated and real-world data.
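Linear identifiability means that two independently learned representation functions f and g should agree up to an invertible linear map, f(x) ≈ A g(x). A minimal sketch (not from the paper; all names and the synthetic setup here are illustrative assumptions) of how this can be checked empirically with NumPy: fit a linear map between two representation matrices by least squares and measure how much variance it explains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two learned representations of the same n inputs.
# If the model family is linearly identifiable, F and Z should be related
# by some invertible matrix A (up to noise).
n, d = 1000, 16
Z = rng.normal(size=(n, d))                    # representation g(x)
A = rng.normal(size=(d, d))                    # unknown linear indeterminacy
F = Z @ A.T + 0.01 * rng.normal(size=(n, d))   # representation f(x), noisy

# Estimate the linear map by least squares and compute an R^2-style score.
A_hat, *_ = np.linalg.lstsq(Z, F, rcond=None)
residual = F - Z @ A_hat
r2 = 1.0 - residual.var() / F.var()
print(round(r2, 3))  # near 1.0: the representations agree up to a linear map
```

On real models, the same check is applied to representations extracted from two networks trained with different random seeds; an R^2 near 1 supports identifiability up to the linear indeterminacy, while a low score indicates the representations differ nonlinearly.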

Author Information

Geoffrey Roeder (Princeton University)
Luke Metz (Google Brain)
Durk Kingma (Google Brain)
