Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization

Hedda Cohen Indelman · Tamir Hazan

Wed 21 Jul 5:30 a.m. — 5:35 a.m. PDT

Direct loss minimization is a popular approach for learning predictors over structured label spaces. The approach is computationally appealing because it replaces integration with optimization and allows gradients to be propagated through a deep net via loss-perturbed prediction. Recently, this technique was extended to generative models by introducing a randomized predictor that samples a structure from a randomly perturbed score function. In this work, we interpolate between these techniques by learning the variance of randomized structured predictors as well as their mean, in order to balance the learned score function against the randomized noise. We demonstrate empirically the effectiveness of learning this balance in structured discrete spaces.
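To make the abstract concrete, the sketch below illustrates the two ingredients it mentions: a randomized predictor that takes the argmax of scores perturbed by noise with a learnable scale, and a direct-loss-minimization gradient estimate formed from the difference between loss-perturbed and unperturbed predictions. This is a minimal illustration, not the authors' implementation; the unstructured (single-argmax) label space, Gumbel noise, and the function names are assumptions made for brevity.

```python
import numpy as np

def perturbed_argmax(scores, sigma, rng):
    # Randomized structured predictor (simplified to a flat label space):
    # perturb the scores with Gumbel noise scaled by a learnable standard
    # deviation sigma, then predict the highest-scoring label.
    gumbel = rng.gumbel(size=scores.shape)
    return int(np.argmax(scores + sigma * gumbel))

def direct_loss_gradient(scores, sigma, loss, eps, rng, n_samples=1000):
    # Monte Carlo sketch of the direct-loss-minimization gradient w.r.t.
    # the scores: for each noise draw, compare the loss-perturbed argmax
    # with the plain argmax and accumulate their difference, scaled by 1/eps.
    grad = np.zeros_like(scores)
    for _ in range(n_samples):
        gumbel = rng.gumbel(size=scores.shape)
        perturbed = scores + sigma * gumbel
        y_hat = int(np.argmax(perturbed))               # prediction
        y_dir = int(np.argmax(perturbed + eps * loss))  # loss-perturbed prediction
        grad[y_dir] += 1.0
        grad[y_hat] -= 1.0
    return grad / (eps * n_samples)
```

Learning sigma alongside the scores is what lets the model balance the learned score function against the injected noise: sigma near zero recovers a deterministic predictor, while larger sigma yields more exploratory sampling.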
