Poster in Workshop: Structured Probabilistic Inference and Generative Modeling
Transformer Neural Autoregressive Flows
Massimiliano Patacchiola · Aliaksandra Shysheya · Katja Hofmann · Richard E Turner
Keywords: [ Density Estimation ] [ Transformers ] [ Neural Autoregressive Flows ] [ Normalizing Flows ]
Density estimation, a central problem in machine learning, can be performed using Normalizing Flows (NFs). NFs comprise a sequence of invertible transformations that turn a complex target distribution into a simple one by exploiting the change-of-variables theorem. Neural Autoregressive Flows (NAFs) and Block Neural Autoregressive Flows (B-NAFs) are arguably the most performant members of the NF family. However, they suffer from scalability issues and training instability due to the constraints imposed on their network structure. In this paper, we propose a novel solution to these challenges by exploiting transformers to define a new class of neural flows called Transformer Neural Autoregressive Flows (T-NAFs). T-NAFs treat each dimension of a random variable as a separate input token, using attention masking to enforce an autoregressive constraint. We take an amortization-inspired approach in which the transformer outputs the parameters of an invertible transformation. Experimental results demonstrate that T-NAFs consistently match or outperform NAFs and B-NAFs across multiple datasets from the UCI benchmark. Additionally, we showcase the effectiveness of T-NAFs on datasets with rich dependencies through time and across variables, such as those encountered in climate and weather modeling and in reinforcement learning. Remarkably, T-NAFs achieve these results using an order of magnitude fewer parameters than previous approaches and without composing multiple flows.
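The abstract describes the core mechanism: each dimension of the random variable becomes a token, a causal attention mask enforces the autoregressive constraint, and the transformer amortizes the parameters of a per-dimension invertible transformation. The PyTorch sketch below illustrates that idea under stated assumptions. The class name `TNAFSketch`, all hyperparameters, and the use of an affine map in place of the paper's neural monotonic transform are hypothetical simplifications, not the authors' implementation.

```python
import torch
import torch.nn as nn


class TNAFSketch(nn.Module):
    """Minimal sketch of a transformer-based autoregressive flow.

    Each of the D scalar dimensions of x is treated as one token. A causal
    attention mask, combined with a one-position right-shift of the inputs,
    guarantees that the transformation parameters for dimension i depend
    only on x_{<i}, so the Jacobian is triangular and its log-determinant
    is a simple sum.
    """

    def __init__(self, dim, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.dim = dim
        self.in_proj = nn.Linear(1, d_model)           # embed each scalar x_i
        self.pos_emb = nn.Parameter(torch.zeros(dim, d_model))
        self.start = nn.Parameter(torch.zeros(1, 1, d_model))  # token for dim 0
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Head producing (shift, log-scale) of an affine monotonic map per
        # dimension. The paper uses a richer neural monotonic transform;
        # an affine one keeps this sketch short while staying invertible.
        self.to_params = nn.Linear(d_model, 2)

    def forward(self, x):
        # x: (batch, dim) -> z: (batch, dim), log|det J|: (batch,)
        b, d = x.shape
        h = self.in_proj(x.unsqueeze(-1)) + self.pos_emb       # (b, d, d_model)
        # Shift right: token i now carries x_{i-1}; dim 0 gets the start token.
        h = torch.cat([self.start.expand(b, -1, -1), h[:, :-1]], dim=1)
        mask = torch.full((d, d), float("-inf"), device=x.device).triu(1)
        h = self.encoder(h, mask=mask)
        shift, log_scale = self.to_params(h).unbind(-1)        # each (b, d)
        z = x * torch.exp(log_scale) + shift
        log_det = log_scale.sum(-1)  # triangular Jacobian: sum of diag terms
        return z, log_det


# Usage: density evaluation via the change-of-variables formula with a
# standard normal base distribution.
flow = TNAFSketch(dim=6)
x = torch.randn(8, 6)
z, log_det = flow(x)
base = torch.distributions.Normal(0.0, 1.0)
log_px = base.log_prob(z).sum(-1) + log_det
```

Because the mask keeps the map autoregressive, the single forward pass yields both the transformed variable and an exact log-determinant, which is what lets a T-NAF-style model be trained by maximum likelihood without composing multiple flows.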