

Cold Analysis of Rao-Blackwellized Straight-Through Gumbel-Softmax Gradient Estimator

Alexander Shekhovtsov

Exhibit Hall 1 #726


Many problems in machine learning require an estimate of the gradient of an expectation over discrete random variables with respect to the sampling distribution. This work is motivated by the development of the Gumbel-Softmax family of estimators, which use a temperature-controlled relaxation of discrete variables. The state of the art in this family, the Gumbel-Rao (GR) estimator, uses extra internal sampling to reduce the variance, which may be costly. We analyze this estimator and show that it possesses a zero-temperature limit with a surprisingly simple closed form. The limit estimator, called ZGR, has favorable bias and variance properties and is easy to implement and computationally inexpensive. It decomposes as the average of the straight-through (ST) estimator and the DARN estimator, two basic estimators that do not perform well on their own. We demonstrate that the simple ST--ZGR family of estimators practically dominates the whole GR family in the bias-variance tradeoff, while also outperforming state-of-the-art unbiased estimators.
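To make the stated decomposition concrete, below is a minimal PyTorch sketch of a single-sample surrogate loss whose gradient with respect to the logits is the average of the ST and DARN estimators, as the abstract describes. The function name `zgr_surrogate` and the specific categorical DARN form used here (a score-function term weighted by the partial derivative of f at the sampled class, which follows from DARN's first-order baseline) are assumptions for illustration, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def zgr_surrogate(logits, f):
    # Single-sample surrogate loss whose gradient w.r.t. `logits` is the
    # average of the straight-through (ST) and DARN estimators, matching
    # the ZGR decomposition stated in the abstract. Illustrative sketch;
    # `f` maps a one-hot vector of shape (K,) to a scalar.
    p = F.softmax(logits, dim=-1)
    log_p = F.log_softmax(logits, dim=-1)

    # Sample a categorical outcome; the sampling step carries no gradient.
    idx = torch.multinomial(p.detach(), 1)                    # shape (1,)
    z = F.one_hot(idx, logits.shape[-1]).to(logits.dtype)[0]  # one-hot, (K,)

    # ST path: forward value equals z; backward routes the gradient
    # through the softmax probabilities p.
    f_st = f(z + p - p.detach())

    # DARN path: weight the score-function term log p(z) by df/dz at the
    # sampled class (this form assumes DARN's first-order baseline).
    z_in = z.clone().requires_grad_(True)
    (df_dz,) = torch.autograd.grad(f(z_in), z_in)
    f_darn = (df_dz * z).sum().detach() * log_p[idx].squeeze()

    return 0.5 * (f_st + f_darn)
```

For example, with `logits = torch.randn(K, requires_grad=True)` and a differentiable test function such as `f = lambda z: (z * torch.arange(K, dtype=z.dtype)).sum() ** 2`, calling `zgr_surrogate(logits, f).backward()` leaves a one-sample ZGR gradient estimate in `logits.grad`; averaging over independent calls estimates the expected gradient.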
