We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data. Traditionally, such a structure is fixed to be a vector space, with data variations represented by translations along individual latent dimensions. We argue that this simple structure is suboptimal, since it requires the model to learn to discard the properties of data variations (e.g., different scales of change, different levels of abstractness), which is extra work beyond equivariance learning. Instead, we propose to encode data variations with groups, a structure that not only can equivariantly represent variations but can also be adaptively optimized to preserve their properties. Since training directly on group structures is difficult, we focus on Lie groups and adopt a parameterization based on the Lie algebra. From this parameterization, several disentanglement learning constraints are naturally derived. A simple model named Commutative Lie Group VAE is introduced to realize group-based disentanglement learning. Experiments show that our model can effectively learn disentangled representations without supervision and can achieve state-of-the-art performance without extra constraints.
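To make the Lie-algebra parameterization above concrete, here is a minimal PyTorch sketch, assuming the standard construction g = exp(Σ_i z_i A_i), where each latent coordinate z_i weights a learned Lie algebra basis matrix A_i, plus a commutativity penalty on the basis. All names here (CommutativeLieGroupLatent, mat_dim, etc.) are illustrative, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class CommutativeLieGroupLatent(nn.Module):
    """Illustrative sketch: map a latent vector z to a group element
    g = exp(sum_i z_i * A_i), where the A_i are learned Lie algebra
    basis matrices (one per latent dimension)."""

    def __init__(self, latent_dim: int, mat_dim: int):
        super().__init__()
        # One learnable Lie algebra basis matrix per latent dimension.
        self.basis = nn.Parameter(torch.randn(latent_dim, mat_dim, mat_dim) * 0.01)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim). Combine basis matrices weighted by z,
        # then map the algebra element to the group via matrix exponential.
        algebra = torch.einsum('bl,lij->bij', z, self.basis)
        return torch.matrix_exp(algebra)

    def commutativity_loss(self) -> torch.Tensor:
        # Penalize non-commuting basis pairs: ||A_i A_j - A_j A_i||^2.
        prod = torch.einsum('aij,bjk->abik', self.basis, self.basis)
        comm = prod - prod.transpose(0, 1)
        return (comm ** 2).sum()
```

In this reading, the commutativity penalty pushes the basis toward commuting one-parameter subgroups, so each latent dimension generates an independent variation; the exact constraints the paper derives from its parameterization may differ from this sketch.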
Author Information
Xinqi Zhu (University of Sydney)
Chang Xu (University of Sydney)
Dacheng Tao (The University of Sydney)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Oral: Commutative Lie Group VAE for Disentanglement Learning
  Fri. Jul 23rd 12:00 -- 12:20 AM
More from the Same Authors
- 2022 Poster: Spatial-Channel Token Distillation for Vision MLPs
  Yanxi Li · Xinghao Chen · Minjing Dong · Yehui Tang · Yunhe Wang · Chang Xu
- 2022 Spotlight: Spatial-Channel Token Distillation for Vision MLPs
  Yanxi Li · Xinghao Chen · Minjing Dong · Yehui Tang · Yunhe Wang · Chang Xu
- 2021 Poster: Learning to Weight Imperfect Demonstrations
  Yunke Wang · Chang Xu · Bo Du · Honglak Lee
- 2021 Poster: K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets
  Xiu Su · Shan You · Mingkai Zheng · Fei Wang · Chen Qian · Changshui Zhang · Chang Xu
- 2021 Spotlight: K-shot NAS: Learnable Weight-Sharing for NAS with K-shot Supernets
  Xiu Su · Shan You · Mingkai Zheng · Fei Wang · Chen Qian · Changshui Zhang · Chang Xu
- 2021 Spotlight: Learning to Weight Imperfect Demonstrations
  Yunke Wang · Chang Xu · Bo Du · Honglak Lee
- 2020 Poster: Deep Streaming Label Learning
  Zhen Wang · Liu Liu · Dacheng Tao
- 2020 Poster: Learning with Bounded Instance- and Label-dependent Label Noise
  Jiacheng Cheng · Tongliang Liu · Kotagiri Ramamohanarao · Dacheng Tao
- 2020 Poster: Neural Architecture Search in A Proxy Validation Loss Landscape
  Yanxi Li · Minjing Dong · Yunhe Wang · Chang Xu
- 2020 Poster: Label-Noise Robust Domain Adaptation
  Xiyu Yu · Tongliang Liu · Mingming Gong · Kun Zhang · Kayhan Batmanghelich · Dacheng Tao
- 2020 Poster: Training Binary Neural Networks through Learning with Noisy Supervision
  Kai Han · Yunhe Wang · Yixing Xu · Chunjing Xu · Enhua Wu · Chang Xu
- 2020 Poster: LTF: A Label Transformation Framework for Correcting Label Shift
  Jiaxian Guo · Mingming Gong · Tongliang Liu · Kun Zhang · Dacheng Tao
- 2019 Poster: LegoNet: Efficient Convolutional Neural Networks with Lego Filters
  Zhaohui Yang · Yunhe Wang · Chuanjian Liu · Hanting Chen · Chunjing Xu · Boxin Shi · Chao Xu · Chang Xu
- 2019 Oral: LegoNet: Efficient Convolutional Neural Networks with Lego Filters
  Zhaohui Yang · Yunhe Wang · Chuanjian Liu · Hanting Chen · Chunjing Xu · Boxin Shi · Chao Xu · Chang Xu