Divisiveness-Consistent Label Distribution Learning
Abstract
Label Distribution Learning (LDL) is an effective learning paradigm for predicting entire conditional label distributions, improving the trustworthiness of predictions in risk-sensitive tasks. Although previous LDL methods achieve satisfactory performance on conventional evaluation metrics, they generally overlook the divisiveness within label distributions, i.e., the propensity of a label distribution to exhibit dissension between semantically opposing labels, which is an essential indicator of practical decision risk. Therefore, we propose a divisiveness-consistent label distribution learning framework to quantify and preserve this divisiveness information. First, we formalize a divisiveness measure satisfying the axiomatic property of polarity monotonicity. Second, we theoretically demonstrate the inconsistency between conventional loss functions and the divisiveness error. Third, to address the adversarial gradient problem arising from directly minimizing the divisiveness error, we propose a pairwise divisiveness loss as an unbiased estimator of the original divisiveness error. Experiments confirm the effectiveness of the proposed method.
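For intuition only, and not the paper's actual definition: given a predicted label distribution $d=(d_1,\dots,d_c)$ and a set $\mathcal{P}$ of semantically opposing label pairs, one candidate polarity-monotone divisiveness measure is $\mathrm{Div}(d)=\sum_{(j,k)\in\mathcal{P}}\min(d_j,d_k)$, which never decreases when the probability mass on both poles of an opposing pair grows; the actual measure and its axioms are specified in the body of the paper.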