Learning from Nested Data with Ornstein Auto-Encoders
Youngwon Choi · Sungdong Lee · Joong-Ho (Johann) Won

Wed Jul 21 09:00 AM -- 11:00 AM (PDT)

Many real-world datasets, e.g., VGGFace2, a collection of multiple portraits of individuals, come with nested structures due to grouped observations. The Ornstein auto-encoder (OAE) is an emerging framework for representation learning from nested data, based on an optimal transport distance between random processes. An attractive feature of OAE is its ability to generate new variations nested within an observational unit, whether or not the unit is known to the model. A previously proposed algorithm for OAE, termed the random-intercept OAE (RIOAE), showed impressive performance in learning nested representations, yet lacks theoretical justification. In this work, we show that RIOAE minimizes a loose upper bound of the employed optimal transport distance. After identifying several issues with RIOAE, we present the product-space OAE (PSOAE), which minimizes a tighter upper bound of the distance and achieves orthogonality in the representation space. PSOAE alleviates the instability of RIOAE and provides a more flexible representation of nested data. We demonstrate the high performance of PSOAE in three key tasks of generative models: exemplar generation, style transfer, and new concept generation.
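The abstract contrasts a random-intercept latent structure (RIOAE) with a product-space structure (PSOAE). The following toy NumPy sketch illustrates that contrast only in shape terms: a per-unit code is combined additively with per-observation codes in the first case, and kept as a separate (orthogonal) factor via concatenation in the second. All array names and dimensions here are hypothetical illustrations, not the authors' actual architecture or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nested data layout: 3 observational units (e.g., identities),
# 4 observations per unit, latent dimension 2.
n_units, n_obs, d = 3, 4, 2

unit_codes = rng.normal(size=(n_units, d))        # one shared code per unit
obs_codes = rng.normal(size=(n_units, n_obs, d))  # one code per observation

# Random-intercept style: the unit code acts as an additive "intercept"
# shared by all observations within the unit.
z_ri = unit_codes[:, None, :] + obs_codes

# Product-space style: unit and observation codes live in separate
# factors of a product space and are concatenated, not mixed.
z_ps = np.concatenate(
    [np.broadcast_to(unit_codes[:, None, :], obs_codes.shape), obs_codes],
    axis=-1,
)

print(z_ri.shape)  # (3, 4, 2)
print(z_ps.shape)  # (3, 4, 4)
```

In the additive form the two sources of variation share one space, whereas the product-space form keeps them in disjoint coordinates, which is one way to read the abstract's claim of "orthogonality in the representation space."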

Author Information

Youngwon Choi (University of California, Los Angeles)
Sungdong Lee (Seoul National University)
Joong-Ho (Johann) Won (Seoul National University)
