A variational autoencoder (VAE) estimates the posterior parameters (mean and variance) of the latent variables corresponding to each input datum. While VAEs are used for many tasks, the transparency of the model remains an open issue. This paper provides a quantitative understanding of VAE properties through differential-geometric and information-theoretic interpretations of the VAE. According to rate-distortion theory, optimal transform coding is achieved by an orthonormal transform with a PCA basis, where the transform space is isometric to the input space. Drawing on the analogy between transform coding and VAEs, we clarify theoretically and experimentally that a VAE can be mapped to an implicit isometric embedding with a scale factor derived from the posterior parameters. As a result, the data probabilities in the input space can be estimated from the prior, the loss metric, and the corresponding posterior parameters, and furthermore, the quantitative importance of each latent variable can be evaluated like the eigenvalues of PCA.
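To make the last claim concrete, the sketch below ranks the latent dimensions of a trained VAE by their dataset-averaged per-dimension KL divergence to the standard-normal prior, a common proxy for the PCA-eigenvalue-like importance described above. This is a minimal illustration, assuming NumPy arrays mu and logvar collected from the encoder over a dataset; it is not necessarily the exact estimator derived in the paper.

```python
import numpy as np

def latent_importance(mu, logvar):
    """Score each VAE latent dimension by its average KL divergence to the
    N(0, I) prior, then rank dimensions from most to least informative.

    mu, logvar : (N, d) arrays of posterior means and log-variances produced
    by a trained encoder on N data points. A dimension whose posterior
    collapses to the prior (mu -> 0, var -> 1) scores ~0 and carries no
    information, analogous to a near-zero PCA eigenvalue.
    """
    var = np.exp(logvar)
    # KL( N(mu_j, var_j) || N(0, 1) ) per sample and per dimension
    kl = 0.5 * (mu**2 + var - logvar - 1.0)
    score = kl.mean(axis=0)            # dataset-average rate of each dimension
    order = np.argsort(score)[::-1]    # sort like descending PCA eigenvalues
    return score, order
```

Sorting the scores in descending order plays the role of ordering PCA eigenvalues: high-scoring dimensions carry most of the information about the data, while near-zero dimensions are effectively unused.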
Author Information
Akira Nakagawa (Fujitsu Limited)
Keizo Kato (Fujitsu Laboratories Ltd.)
Taiji Suzuki (The University of Tokyo / RIKEN)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding
  Wed. Jul 21st, 04:00 -- 06:00 AM, Virtual Room
More from the Same Authors
- 2021 Poster: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting
  Shunta Akiyama · Taiji Suzuki
- 2021 Spotlight: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting
  Shunta Akiyama · Taiji Suzuki
- 2021 Poster: Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning
  Tomoya Murata · Taiji Suzuki
- 2021 Spotlight: Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning
  Tomoya Murata · Taiji Suzuki
- 2020 Poster: Rate-distortion optimization guided autoencoder for isometric embedding in Euclidean latent space
  Keizo Kato · Jing Zhou · Tomotake Sasaki · Akira Nakagawa
- 2019 Poster: Approximation and non-parametric estimation of ResNet-type convolutional neural networks
  Kenta Oono · Taiji Suzuki
- 2019 Oral: Approximation and non-parametric estimation of ResNet-type convolutional neural networks
  Kenta Oono · Taiji Suzuki
- 2018 Poster: Functional Gradient Boosting based on Residual Network Perception
  Atsushi Nitanda · Taiji Suzuki
- 2018 Oral: Functional Gradient Boosting based on Residual Network Perception
  Atsushi Nitanda · Taiji Suzuki