Poster
Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation
Masahiro Kato · Takeshi Teshima

Thu Jul 22 09:00 AM -- 11:00 AM (PDT) @ Virtual

Density ratio estimation (DRE) is at the core of various machine learning tasks such as anomaly detection and domain adaptation. In the DRE literature, methods based on Bregman divergence (BD) minimization have been studied extensively. However, when BD minimization is applied with highly flexible models, such as deep neural networks, it tends to suffer from what we call train-loss hacking, a source of overfitting caused by a typical characteristic of empirical BD estimators. In this paper, to mitigate train-loss hacking, we propose a non-negative correction for empirical BD estimators. Theoretically, we confirm the soundness of the proposed method through a generalization error bound. In our experiments, the proposed methods show favorable performance in inlier-based outlier detection.
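The train-loss hacking phenomenon and the effect of a non-negative correction can be illustrated with a small sketch. The snippet below uses the LSIF-type BD loss (generator f(t) = (t−1)²/2) on toy Gaussian data; the clipped variant `nn_lsif_bd_loss`, including the constant `C`, is a hypothetical decomposition in the spirit of the paper's correction (and of non-negative PU learning), not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: numerator samples from p(x) = N(0, 1),
# denominator samples from q(x) = N(0.5, 1).
x_nu = rng.normal(0.0, 1.0, 400)
x_de = rng.normal(0.5, 1.0, 400)

def lsif_bd_loss(r_nu, r_de):
    """Empirical BD loss for the LSIF generator f(t) = (t - 1)^2 / 2.

    The second term, -mean(r_nu), is unbounded below: a sufficiently
    flexible model can drive r(x_nu) to infinity and make the training
    loss arbitrarily small without improving the ratio estimate
    ("train-loss hacking").
    """
    return 0.5 * np.mean(r_de**2) - np.mean(r_nu)

def nn_lsif_bd_loss(r_nu, r_de, C=1.0):
    """Non-negatively corrected loss (illustrative sketch only).

    Clipping a term at zero (here via a hypothetical decomposition with
    constant C) bounds the empirical loss from below by
    0.5 * mean(r_de**2) - C, preventing the loss from diverging.
    """
    return 0.5 * np.mean(r_de**2) + max(0.0, C - np.mean(r_nu)) - C

# A "hacking" model that outputs huge ratio values on numerator points:
r_nu_hack = np.full_like(x_nu, 1e6)
r_de_hack = np.zeros_like(x_de)

print(lsif_bd_loss(r_nu_hack, r_de_hack))     # hugely negative
print(nn_lsif_bd_loss(r_nu_hack, r_de_hack))  # bounded below
```

The naive loss rewards the blow-up with an arbitrarily negative value, while the corrected loss cannot fall below a fixed bound, which is the intuition behind the non-negative correction.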

Author Information

Masahiro Kato (CyberAgent)
Takeshi Teshima (The University of Tokyo / RIKEN)
