

Emerging Convolutions for Generative Normalizing Flows

Emiel Hoogeboom · Rianne van den Berg · Max Welling

Pacific Ballroom #8

Keywords: [ Unsupervised Learning ] [ Representation Learning ] [ Deep Generative Models ]


Generative flows are attractive because they admit exact likelihood optimization and efficient image synthesis. Recently, Kingma & Dhariwal (2018) demonstrated with Glow that generative flows are capable of generating high-quality images. We generalize the 1 × 1 convolutions proposed in Glow to invertible d × d convolutions, which are more flexible since they operate on both channel and spatial axes. We propose two methods to produce invertible convolutions that have receptive fields identical to standard convolutions: emerging convolutions are obtained by chaining specific autoregressive convolutions, and periodic convolutions are decoupled in the frequency domain. Our experiments show that the flexibility of d × d convolutions significantly improves the performance of generative flow models on galaxy images, CIFAR10 and ImageNet.
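To make the frequency-domain idea concrete, the sketch below (my own minimal single-channel illustration, not the authors' implementation) shows why a periodic (circular) d × d convolution is easy to invert: the DFT diagonalizes circular convolution, so inversion is elementwise division by the kernel's frequency response, and the log-determinant of the Jacobian is the sum of log-magnitudes of that response. The kernel initialization with a dominant tap is an assumption I add to keep the frequency response away from zero.

```python
import numpy as np

def periodic_conv(x, k):
    """Circular (periodic) convolution of a single channel via the FFT.

    Zero-pads the small kernel k to the image size, multiplies in the
    frequency domain, and transforms back.
    """
    K = np.fft.fft2(k, s=x.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(x) * K))

def periodic_conv_inverse(y, k):
    """Invert the circular convolution by elementwise division in frequency.

    Valid whenever the kernel's frequency response K has no zeros.
    """
    K = np.fft.fft2(k, s=y.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(y) / K))

def log_det_jacobian(x_shape, k):
    """log|det J| of the circular convolution: sum of log|K| over frequencies."""
    K = np.fft.fft2(k, s=x_shape)
    return np.sum(np.log(np.abs(K)))

# Demo: a 3x3 kernel with a dominant tap, so |K| > 0 everywhere (assumption).
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))
k = np.zeros((3, 3))
k[0, 0] = 2.0
k += 0.05 * rng.standard_normal((3, 3))

y = periodic_conv(x, k)
x_rec = periodic_conv_inverse(y, k)
```

Here `x_rec` recovers `x` up to floating-point error, illustrating exact invertibility with a full d × d receptive field; the paper's multi-channel case additionally requires inverting a small channel-mixing matrix per frequency.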
