

Poster

Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation

Masahiro Kato · Takeshi Teshima

Virtual

Keywords: [ Algorithms ] [ Semi-supervised learning ]


Abstract:

Density ratio estimation (DRE) is at the core of various machine learning tasks, such as anomaly detection and domain adaptation. In the DRE literature, methods based on Bregman divergence (BD) minimization have been studied extensively. However, when BD minimization is applied with highly flexible models, such as deep neural networks, it tends to suffer from what we call train-loss hacking, a source of over-fitting caused by a typical characteristic of empirical BD estimators. In this paper, to mitigate train-loss hacking, we propose a non-negative correction for empirical BD estimators. Theoretically, we confirm the soundness of the proposed method through a generalization error bound. In our experiments, the proposed method shows favorable performance in inlier-based outlier detection.
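To illustrate the idea of train-loss hacking and a non-negative correction, the sketch below fits a small neural density-ratio model with an LSIF-type empirical BD loss. The network architecture, the choice of LSIF as the BD, and the simple per-batch clipping correction are illustrative assumptions, not the paper's actual algorithm; the point is only that the term -mean(r(x_nu)) can be driven arbitrarily negative by a flexible model, and a non-negative correction blocks that failure mode.

```python
# Hypothetical sketch (not the paper's implementation): LSIF-style empirical
# Bregman-divergence loss for density ratio estimation, with a simple
# non-negative (clipping) correction to mitigate train-loss hacking.
import torch
import torch.nn as nn


class RatioNet(nn.Module):
    """Small network producing a non-negative density-ratio estimate r(x)."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus(),  # keep r(x) >= 0
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)


def lsif_bd_loss(r_nu, r_de, non_negative=True):
    """Empirical LSIF-type BD loss (up to an additive constant).

    The -r_nu.mean() term can be made arbitrarily negative by an expressive
    model ("train-loss hacking"). Here we simply clip the batch loss at zero;
    the paper's non-negative correction may take a different, finer-grained form.
    """
    loss = 0.5 * (r_de ** 2).mean() - r_nu.mean()
    if non_negative:
        loss = torch.clamp(loss, min=0.0)
    return loss


if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 5
    x_nu = torch.randn(256, dim) + 0.5   # numerator samples (e.g., inliers)
    x_de = torch.randn(256, dim)         # denominator samples
    model = RatioNet(dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        loss = lsif_bd_loss(model(x_nu), model(x_de))
        opt.zero_grad()
        loss.backward()
        opt.step()
```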
