Leveraging Task Structures for Improved Identifiability in Neural Network Representations
Wenlin Chen · Julien Horwood · Juyeon Heo · Jose Miguel Hernandez-Lobato
Event URL: https://openreview.net/forum?id=yGoJ0xQAG3

This work extends the theory of identifiability in supervised learning by considering the consequences of having access to a distribution of tasks. In this setting, we show that identifiability is achievable even for regression, extending prior work that was restricted to single-task classification. Furthermore, we show that the existence of a task distribution that defines a conditional prior over latent variables reduces the equivalence class for identifiability to permutations and scaling, a much stronger and more useful result. When we further assume a causal structure over these tasks, our approach enables simple maximum marginal likelihood optimization and supports downstream applications in causal representation learning. Empirically, we validate that our model outperforms more general unsupervised models in recovering canonical representations for arbitrary non-linear data arising from randomly initialized neural networks.
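The following is a minimal, hypothetical sketch (not the authors' code) of the setup the abstract describes: ground-truth features come from a randomly initialized neural network, each task is a linear regression readout of those features, a shared encoder is trained jointly across tasks, and the learned representation is then compared to the ground truth only up to permutation and scaling via a matched correlation (MCC) score. Plain mean squared error stands in here for the paper's maximum marginal likelihood objective, and all names and hyperparameters are illustrative assumptions.

import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment

torch.manual_seed(0)
d_in, d_latent, n_tasks, n_per_task = 5, 3, 50, 200

# Ground-truth representation: a randomly initialized MLP (frozen).
g_star = nn.Sequential(nn.Linear(d_in, 32), nn.Tanh(), nn.Linear(32, d_latent))
for p in g_star.parameters():
    p.requires_grad_(False)

# Each task t is a linear readout w_t of the ground-truth features plus noise.
task_w = torch.randn(n_tasks, d_latent)
X = torch.randn(n_tasks, n_per_task, d_in)
with torch.no_grad():
    Y = torch.einsum('tnd,td->tn', g_star(X), task_w) \
        + 0.01 * torch.randn(n_tasks, n_per_task)

# Shared encoder plus one linear head per task, trained jointly by MSE
# (an illustrative stand-in for the maximum marginal likelihood objective).
encoder = nn.Sequential(nn.Linear(d_in, 64), nn.Tanh(), nn.Linear(64, d_latent))
heads = nn.Parameter(torch.randn(n_tasks, d_latent))
opt = torch.optim.Adam(list(encoder.parameters()) + [heads], lr=1e-2)

for step in range(2000):
    pred = torch.einsum('tnd,td->tn', encoder(X), heads)
    loss = ((pred - Y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Identifiability check: if the representation is recovered up to permutation
# and scaling, the absolute correlation matrix between learned and true latent
# dimensions is close to a permutation matrix. Report the mean correlation
# coefficient (MCC) after optimal matching.
with torch.no_grad():
    Z_hat = encoder(X).reshape(-1, d_latent).numpy()
    Z_true = g_star(X).reshape(-1, d_latent).numpy()
corr = np.abs(np.corrcoef(Z_true.T, Z_hat.T)[:d_latent, d_latent:])
row, col = linear_sum_assignment(-corr)  # maximize the matched correlations
print("MCC:", corr[row, col].mean())

In this toy setting, an MCC near 1 would indicate that the shared encoder has recovered the ground-truth representation up to the permutation-and-scaling equivalence class discussed in the abstract; this is only a sanity-check construction under the stated assumptions, not a reproduction of the paper's experiments.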

Author Information

Wenlin Chen (University of Cambridge)
Julien Horwood (University of Cambridge)
Juyeon Heo (University of Cambridge)
Jose Miguel Hernandez-Lobato (University of Cambridge)
