

Poster

Cut your Losses with Squentropy

Like Hui · Misha Belkin · Stephen Wright

Exhibit Hall 1 #741

Abstract:

Nearly all practical neural models for classification are trained using the cross-entropy loss. Yet this ubiquitous choice is supported by little theoretical or empirical evidence. Recent work (Hui & Belkin, 2020) suggests that training using the (rescaled) square loss is often superior in terms of classification accuracy. In this paper we propose the "squentropy" loss, which is the sum of two terms: the cross-entropy loss and the average square loss over the incorrect classes. We provide an extensive set of experiments on multi-class classification problems showing that the squentropy loss outperforms both the pure cross-entropy and rescaled square losses in terms of classification accuracy. We also demonstrate that it provides significantly better model calibration than either of these alternative losses and, furthermore, has less variance with respect to random initialization. Additionally, in contrast to the square loss, the squentropy loss can frequently be trained using exactly the same optimization parameters, including the learning rate, as the standard cross-entropy loss, making it a true "plug-and-play" replacement. Finally, unlike the rescaled square loss, multi-class squentropy contains no parameters that need to be adjusted.
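To make the definition concrete, here is a minimal PyTorch sketch of the loss as described in the abstract: standard cross-entropy plus the square loss averaged over the incorrect classes. It assumes the square term is applied to the raw logits of the incorrect classes, pushing them toward zero; consult the paper for the exact formulation.

```python
import torch
import torch.nn.functional as F


def squentropy_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Squentropy sketch: cross-entropy plus the mean squared logit over incorrect classes.

    logits:  (batch, num_classes) raw model outputs
    targets: (batch,) integer class labels

    Assumption: the square penalty acts on the raw logits of the C - 1
    incorrect classes, with target value 0 for each of them.
    """
    num_classes = logits.size(1)

    # Standard cross-entropy term.
    ce = F.cross_entropy(logits, targets)

    # Zero out the true-class logit, square the remaining (incorrect-class)
    # logits, and average over the C - 1 incorrect classes per sample.
    one_hot = F.one_hot(targets, num_classes).bool()
    sq = logits.masked_fill(one_hot, 0.0).pow(2).sum(dim=1) / (num_classes - 1)

    return ce + sq.mean()


# Example usage with random data:
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = squentropy_loss(logits, targets)
loss.backward()
```

Because the cross-entropy term is unchanged, this drop-in form is consistent with the abstract's claim that squentropy can reuse the same optimization hyperparameters as cross-entropy training.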
