Poster
When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee
Dixian Zhu · Gang Li · Bokun Wang · Xiaodong Wu · Tianbao Yang

Tue Jul 19 03:30 PM -- 05:30 PM (PDT) @ Hall E #416

In this paper, we propose systematic and efficient gradient-based methods for both one-way and two-way partial AUC (pAUC) maximization that are applicable to deep learning. We propose new formulations of pAUC surrogate objectives by using distributionally robust optimization (DRO) to define the loss for each individual positive example. We consider two formulations of DRO: one is based on conditional value at risk (CVaR), which yields a non-smooth but exact estimator for pAUC, and the other is based on a KL-divergence-regularized DRO, which yields a smooth but inexact (soft) estimator for pAUC. For both one-way and two-way pAUC maximization, we propose two algorithms, one for each formulation, and prove their convergence. Experiments on various datasets demonstrate the effectiveness of the proposed algorithms for pAUC maximization in deep learning.
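To illustrate the two DRO constructions described above, the following is a generic sketch rather than the paper's exact notation: write $h_{\mathbf{w}}$ for the model's score, $\mathcal{S}_+$ and $\mathcal{S}_-$ for the positive and negative sets, $\ell$ for a pairwise surrogate loss (e.g., a squared hinge on the score difference), and $k$ and $\lambda$ for generic CVaR and KL-regularization parameters. The CVaR-based per-positive loss is the (non-smooth but exact) average of the $k$ largest pairwise losses over the negatives,
\[
\phi_{\mathrm{CVaR}}(\mathbf{w}; x_i)
= \max_{\mathbf{p} \in \Delta,\; p_j \le 1/k} \sum_{x_j \in \mathcal{S}_-} p_j\, \ell\big(h_{\mathbf{w}}(x_j) - h_{\mathbf{w}}(x_i)\big)
= \frac{1}{k} \sum_{j=1}^{k} \ell\big(h_{\mathbf{w}}(x_{[j]}) - h_{\mathbf{w}}(x_i)\big),
\]
where $x_{[j]}$ denotes the negative example with the $j$-th largest pairwise loss against $x_i$, while the KL-regularized DRO replaces this hard top-$k$ average with a smooth log-sum-exp,
\[
\phi_{\mathrm{KL}}(\mathbf{w}; x_i)
= \lambda \log \frac{1}{|\mathcal{S}_-|} \sum_{x_j \in \mathcal{S}_-} \exp\!\Big( \ell\big(h_{\mathbf{w}}(x_j) - h_{\mathbf{w}}(x_i)\big) / \lambda \Big).
\]
In both cases the one-way pAUC surrogate averages the per-positive loss, $\min_{\mathbf{w}} \frac{1}{|\mathcal{S}_+|} \sum_{x_i \in \mathcal{S}_+} \phi(\mathbf{w}; x_i)$; the CVaR form is exact but non-smooth, whereas the KL form trades exactness for smoothness, which is what makes it amenable to the non-convex convergence guarantees mentioned in the abstract.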

Author Information

Dixian Zhu (The University of Iowa)
Gang Li (The University of Iowa)
Bokun Wang (The University of Iowa)
Xiaodong Wu (The University of Iowa)
Tianbao Yang (The University of Iowa)
