Poster
in
Workshop: Spurious correlations, Invariance, and Stability (SCIS)

Learning Switchable Representation with Masked Decoding and Sparse Encoding

Kohei Hayashi · Masanori Koyama

Keywords: [ sparseness ] [ identifiability ] [ domain adaptation ] [ unsupervised representation learning ]


Abstract:

In this study, we explore unsupervised learning based on private/shared factor decomposition, which decomposes the latent space into private factors that vary only within a specific domain and shared factors that vary across all domains. We study when and how the model can be forced to respect the true private/shared factor decomposition underlying the dataset. We show that, by training a masked decoder together with an encoder under sparseness regularization in the latent space, we can identify the true private/shared decomposition up to mixing within each component. We empirically confirm this result and study the efficacy of this training strategy as a representation learning method.
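To make the training strategy concrete, the sketch below illustrates one way masked decoding and sparse encoding could be combined; it is not the authors' implementation, and the latent layout, mask construction, network sizes, and hyperparameters are all illustrative assumptions.

```python
# A minimal sketch (not the authors' code): a masked decoder with an
# L1-sparsity-regularized encoder for private/shared factor decomposition.
# All dimensions, masks, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn


class MaskedAutoencoder(nn.Module):
    def __init__(self, x_dim=20, z_dim=6, n_domains=2, n_shared=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
        # One binary mask per domain: shared coordinates are always on,
        # private coordinates are on only for their own domain (assumed layout).
        n_private = (z_dim - n_shared) // n_domains
        masks = torch.zeros(n_domains, z_dim)
        masks[:, :n_shared] = 1.0
        for d in range(n_domains):
            start = n_shared + d * n_private
            masks[d, start:start + n_private] = 1.0
        self.register_buffer("masks", masks)

    def forward(self, x, domain):
        z = self.encoder(x)
        # Masked decoding: zero out the private factors of other domains.
        z_masked = z * self.masks[domain]
        return self.decoder(z_masked), z


def loss_fn(model, x, domain, l1_weight=1e-2):
    x_hat, z = model(x, domain)
    recon = ((x_hat - x) ** 2).mean()
    # Sparse encoding: L1 penalty on the latent code.
    sparsity = z.abs().mean()
    return recon + l1_weight * sparsity


# Toy usage with placeholder data, alternating between two domains.
model = MaskedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    domain = step % 2
    x = torch.randn(32, 20)  # stand-in for a batch from the current domain
    opt.zero_grad()
    loss = loss_fn(model, x, domain)
    loss.backward()
    opt.step()
```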
