On the difficulty of unbiased alpha divergence minimization

Tomas Geffner · Justin Domke

Session: Probabilistic Methods 3
Thu 22 Jul 7:45 a.m. — 7:50 a.m. PDT

Several approximate inference algorithms have been proposed to minimize an alpha-divergence between an approximating distribution and a target distribution. Many of these algorithms introduce bias, the magnitude of which becomes problematic in high dimensions. Others are unbiased but often appear to suffer from high variance, though little is rigorously known. In this work we study unbiased methods for alpha-divergence minimization through the Signal-to-Noise Ratio (SNR) of the gradient estimator. We study several representative scenarios where strong analytical results are possible, such as fully-factorized or Gaussian distributions. We find that when alpha is not zero, the SNR worsens exponentially in the dimensionality of the problem. This casts doubt on the practicality of these methods. We empirically confirm these theoretical results.
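As a rough illustration of the exponential decay in SNR, consider a minimal sketch under assumptions chosen here for tractability (not necessarily the paper's exact setup): a standard Gaussian target p = N(0, I_d), an approximation q = N(mu, I_d) with a constant per-coordinate offset m, and the importance weight w(z) = (p(z)/q(z))^alpha with z ~ q that unbiased estimators reweight by. In this factorized Gaussian case log w is itself Gaussian, so the SNR of the weight (and of each gradient coordinate, which is proportional to w) has a closed form that shrinks exponentially in d. The function name `weight_snr` and the specific parameter values are illustrative.

```python
import math

def weight_snr(alpha: float, m: float, d: int) -> float:
    """SNR = E[w] / std(w) for w = (p/q)^alpha, where p = N(0, I_d) and
    q = N(mu, I_d) with mu = (m, ..., m).

    Writing z = mu + eps with eps ~ N(0, I_d),
        log(p(z)/q(z)) = -eps.mu - 0.5*||mu||^2,
    so log w ~ N(-0.5*alpha*m^2*d, alpha^2*m^2*d) and the moments of the
    lognormal w are available in closed form.
    """
    c = m * m * d
    mean_w = math.exp(-0.5 * alpha * (1.0 - alpha) * c)          # E[w]
    second_moment = math.exp(alpha * (2.0 * alpha - 1.0) * c)    # E[w^2]
    var_w = second_moment - mean_w ** 2
    return mean_w / math.sqrt(var_w)

# SNR collapses as dimension grows for alpha = 0.5, m = 0.5.
for d in (1, 10, 50, 100):
    print(f"d = {d:3d}  SNR = {weight_snr(alpha=0.5, m=0.5, d=d):.4f}")
```

For alpha = 0.5 the closed form reduces to exp(-0.125 m^2 d) / sqrt(1 - exp(-0.25 m^2 d)), making the exponential dependence on d explicit; only the alpha -> 0 (KL) limit escapes this decay, consistent with the abstract's claim for nonzero alpha.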
