Poster
GFlowNet-EM for Learning Compositional Latent Variable Models
Edward Hu · Nikolay Malkin · Moksh Jain · Katie Everett · Alexandros Graikos · Yoshua Bengio

Tue Jul 25 02:00 PM -- 04:30 PM (PDT) @ Exhibit Hall 1 #426
Event URL: https://github.com/GFNOrg/GFlowNet-EM

Latent variable models (LVMs) with discrete compositional latents are an important but challenging setting due to a combinatorially large number of possible configurations of the latents. A key tradeoff in modeling the posteriors over latents is between expressivity and tractable optimization. For algorithms based on expectation-maximization (EM), the E-step is often intractable without restrictive approximations to the posterior. We propose the use of GFlowNets, algorithms for sampling from an unnormalized density by learning a stochastic policy for sequential construction of samples, for this intractable E-step. By training GFlowNets to sample from the posterior over latents, we take advantage of their strengths as amortized variational inference algorithms for complex distributions over discrete structures. Our approach, GFlowNet-EM, enables the training of expressive LVMs with discrete compositional latents, as shown by experiments on non-context-free grammar induction and on images using discrete variational autoencoders (VAEs) without conditional independence enforced in the encoder.
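
For intuition, below is a minimal, self-contained sketch of the alternating loop the abstract describes. It is not the authors' implementation (see the repository linked above); it assumes a toy setup in which the latent z is a fixed-length token sequence built left to right, the generative model is a hypothetical Gaussian decoder with a uniform prior, and the GFlowNet is trained with the trajectory-balance objective. All names, shapes, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K, V, X_DIM, H = 4, 8, 16, 64      # latent length, vocab size, obs. dim, hidden width

class Policy(nn.Module):
    """Amortized sampler q(z | x) that emits z one token at a time."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(X_DIM + K * V, H), nn.ReLU(), nn.Linear(H, V))
        self.log_Z = nn.Linear(X_DIM, 1)   # learned estimate of log Z(x), per observation

    def forward(self, x, prefix):
        return self.net(torch.cat([x, prefix], dim=-1))

def sample_z(policy, x):
    """Roll out the policy; return latent tokens and the trajectory log-probability."""
    B = x.shape[0]
    tokens, log_pf = [], torch.zeros(B)
    for t in range(K):
        prefix = torch.zeros(B, K * V)     # one-hot encoding of the partial sequence
        if tokens:
            prefix[:, : t * V] = F.one_hot(torch.stack(tokens, 1), V).float().flatten(1)
        dist = torch.distributions.Categorical(logits=policy(x, prefix))
        a = dist.sample()
        log_pf = log_pf + dist.log_prob(a)
        tokens.append(a)
    return torch.stack(tokens, 1), log_pf

# Hypothetical generative model: uniform prior over z, Gaussian likelihood for x.
decoder = nn.Sequential(nn.Linear(K * V, H), nn.ReLU(), nn.Linear(H, X_DIM))

def log_joint(x, z):
    """log p_theta(x, z) under the toy model above."""
    recon = decoder(F.one_hot(z, V).float().flatten(1))
    return -0.5 * ((x - recon) ** 2).sum(-1) - K * torch.log(torch.tensor(float(V)))

policy = Policy()
opt_e = torch.optim.Adam(policy.parameters(), lr=1e-3)   # E-step optimizer (sampler)
opt_m = torch.optim.Adam(decoder.parameters(), lr=1e-3)  # M-step optimizer (model)

for step in range(1000):
    x = torch.randn(32, X_DIM)     # stand-in for a minibatch of observations

    # E-step: trajectory balance pushes q(z | x) toward p_theta(z | x). Building z
    # left to right gives each z a unique trajectory, so the backward-policy term
    # of trajectory balance is identically zero here.
    z, log_pf = sample_z(policy, x)
    reward = log_joint(x, z).detach()   # the current model is held fixed in the E-step
    tb_loss = ((policy.log_Z(x).squeeze(-1) + log_pf - reward) ** 2).mean()
    opt_e.zero_grad()
    tb_loss.backward()
    opt_e.step()

    # M-step: raise log p_theta(x, z) on latents drawn from the learned sampler.
    with torch.no_grad():
        z, _ = sample_z(policy, x)
    m_loss = -log_joint(x, z).mean()
    opt_m.zero_grad()
    m_loss.backward()
    opt_m.step()
```

The backward policy drops out above only because each latent has a unique construction order in this toy setup; richer compositional latents such as parse trees admit many construction orders, where the full trajectory-balance objective with a learned or fixed backward policy applies.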

Author Information

Edward Hu (Mila)
Nikolay Malkin (Mila / Université de Montréal)
Moksh Jain (Mila / Université de Montréal)
Katie Everett (Google DeepMind, Massachusetts Institute of Technology)
Alexandros Graikos (Stony Brook University)
Yoshua Bengio (Mila - Quebec AI Institute)
