

Poster

LegendreTron: Uprising Proper Multiclass Loss Learning

Kevin H. Lam · Christian Walder · Spiridon Penev · Richard Nock

Exhibit Hall 1 #333

Abstract: Loss functions serve as the foundation of supervised learning and are often chosen prior to model development. To avoid potentially ad hoc choices of losses, statistical decision theory describes a desirable property for losses known as *properness*, which asserts that Bayes' rule is optimal. Recent works have sought to *learn losses* and models jointly. Existing methods do this by fitting an inverse canonical link function which monotonically maps $\mathbb{R}$ to $[0,1]$ to estimate probabilities for binary problems. In this paper, we extend monotonicity to maps between $\mathbb{R}^{C-1}$ and the projected probability simplex $\tilde{\Delta}^{C-1}$ by using monotonicity of gradients of convex functions. We present LegendreTron as a novel and practical method that jointly learns *proper canonical losses* and probabilities for multiclass problems. Tested on a benchmark of domains with up to 1,000 classes, our experimental results show that our method consistently outperforms the natural multiclass baseline under a t-test at 99% significance on all datasets with greater than 10 classes.
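For intuition, here is a minimal sketch (not the authors' implementation) of how the gradient of a convex potential gives a monotone map from $\mathbb{R}^{C-1}$ into the projected probability simplex. Fixing the potential to $\phi(u) = \log(1 + \sum_i e^{u_i})$ recovers the multinomial logistic link, i.e. the natural multiclass baseline the paper compares against; LegendreTron instead *learns* the convex potential jointly with the model. The function name `inverse_link` and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def inverse_link(u):
    """Monotone map from R^{C-1} into the projected probability simplex.

    Computes the gradient of the convex potential
        phi(u) = log(1 + sum_i exp(u_i)),
    which equals softmax([0, u]) with the reference coordinate dropped.
    Because phi is strictly convex, its gradient is a monotone map whose
    range is the interior of the projected simplex (positive entries
    summing to less than 1). LegendreTron learns the potential rather
    than fixing it, but the interface is the same: logits in,
    probabilities out.
    """
    v = np.concatenate(([0.0], u))   # augment with a fixed reference logit
    p = np.exp(v - v.max())          # shift for numerical stability
    p /= p.sum()                     # softmax: full C-vector on the simplex
    return p[1:]                     # drop the reference coordinate

u = np.array([0.5, -1.2, 2.0])               # a point in R^{C-1}, C = 4
q = inverse_link(u)                           # lies in the projected simplex
p = np.concatenate(([1.0 - q.sum()], q))      # recover the full probability vector
```

Replacing the fixed log-sum-exp above with a learned strictly convex function, as the abstract describes, turns this fixed link into a learned proper canonical link.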
