

Poster

Revisiting precision recall definition for generative modeling

Loic Simon · Ryan Webster · Julien Rabin

Pacific Ballroom #14

Keywords: [ Others ] [ Generative Adversarial Networks ] [ Computer Vision ]


Abstract:

In this article, we revisit the definition of Precision-Recall (PR) curves for generative models proposed by Sajjadi et al. (2018). Rather than providing a single scalar for generative quality, PR curves distinguish mode collapse (poor recall) from poor sample quality (poor precision). We first generalize their formulation to arbitrary measures, thereby removing any restriction to finite support. We also expose a bridge between PR curves and the type I and type II error rates (a.k.a. false detection and false rejection rates) of likelihood-ratio classifiers on the task of discriminating between samples of the two distributions. Building upon this new perspective, we propose a novel algorithm to approximate precision-recall curves that shares some interesting methodological properties with the hypothesis testing technique of Lopez-Paz & Oquab (2017). We demonstrate the advantages of the proposed formulation over the original approach on controlled multi-modal datasets.
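To make the bridge mentioned in the abstract concrete, here is a minimal numerical sketch (not the authors' sample-based estimation algorithm, which trains a classifier). It evaluates the PR frontier in the sense of Sajjadi et al. (2018) for two discrete toy distributions in two ways: directly from the min-based parameterization, and through the type I / type II error rates of a likelihood-ratio test thresholded at λ, checking that the two agree. The function names (pr_frontier, pr_from_lr_test), the toy distributions, and the λ grid are illustrative choices, not taken from the paper.

```python
import numpy as np

def pr_frontier(p, q, lambdas):
    """PR frontier for discrete distributions p (reference) and q (model),
    following the parameterization of Sajjadi et al. (2018):
        alpha(lam) = sum_x min(lam * p(x), q(x))   # precision
        beta(lam)  = sum_x min(p(x), q(x) / lam)   # recall
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    lambdas = np.asarray(lambdas, float)
    alphas = np.array([np.minimum(lam * p, q).sum() for lam in lambdas])
    betas = alphas / lambdas  # since min(p, q/lam) = min(lam*p, q) / lam
    return alphas, betas

def pr_from_lr_test(p, q, lambdas):
    """Same frontier, written via the error rates of the likelihood-ratio test
    that declares 'model' when q(x) >= lam * p(x):
        eps1 = sum_{q >= lam*p} p(x)   # type I: reference sample declared model
        eps2 = sum_{q <  lam*p} q(x)   # type II: model sample declared reference
    Then alpha(lam) = eps2 + lam * eps1 and beta(lam) = eps1 + eps2 / lam,
    which is one way to read the classifier bridge described in the abstract.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    alphas, betas = [], []
    for lam in np.asarray(lambdas, float):
        declare_model = q >= lam * p
        eps1 = p[declare_model].sum()
        eps2 = q[~declare_model].sum()
        alphas.append(eps2 + lam * eps1)
        betas.append(eps1 + eps2 / lam)
    return np.array(alphas), np.array(betas)

if __name__ == "__main__":
    # Toy mode-collapse setting: the reference p is uniform over 10 modes,
    # while the model q puts all of its mass on only 2 of them.
    p = np.full(10, 0.1)
    q = np.array([0.5, 0.5] + [0.0] * 8)
    lambdas = np.logspace(-2, 2, 200)
    a1, b1 = pr_frontier(p, q, lambdas)
    a2, b2 = pr_from_lr_test(p, q, lambdas)
    assert np.allclose(a1, a2) and np.allclose(b1, b2)
    print(f"max precision ~ {a1.max():.2f}, max recall ~ {b1.max():.2f}")
```

On this toy example the frontier reaches precision 1 while recall saturates at 0.2 (2 of the 10 modes covered), which is exactly the mode-collapse signature the abstract refers to: good precision, poor recall.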
