

Poster

Principled learning method for Wasserstein distributionally robust optimization with local perturbations

Yongchan Kwon · Wonyoung Kim · Joong-Ho (Johann) Won · Myunghee Cho Paik

Virtual

Keywords: [ Supervised Learning ] [ Statistical Learning Theory ] [ Robust Statistics and Machine Learning ] [ Learning Theory ]


Abstract:

Wasserstein distributionally robust optimization (WDRO) attempts to learn a model that minimizes the local worst-case risk in the vicinity of the empirical data distribution, defined by a Wasserstein ball. While WDRO has received attention as a promising tool for inference since its introduction, its theoretical understanding has not fully matured. Gao et al. (2017) proposed a minimizer based on a tractable approximation of the local worst-case risk, but without showing risk consistency. In this paper, we propose a minimizer based on a novel approximation theorem and provide the corresponding risk consistency results. Furthermore, we develop WDRO inference for locally perturbed data that includes Mixup (Zhang et al., 2017) as a special case. We show that our approximation and risk consistency results extend naturally to the case where the data are locally perturbed. Numerical experiments on image classification datasets demonstrate the robustness of the proposed method: it achieves significantly higher accuracy than baseline models on noisy datasets.
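For concreteness, the local worst-case risk that WDRO minimizes can be written in its standard form as follows; the symbols used here (loss $\ell$, radius $\delta$, order $p$) are our notation for illustration, not taken from the paper:

$$
\min_{\theta \in \Theta} \; \sup_{Q \,:\, \mathcal{W}_p(Q, \hat{P}_n) \le \delta} \; \mathbb{E}_{Z \sim Q}\big[\ell(\theta; Z)\big],
$$

where $\hat{P}_n$ is the empirical distribution of the $n$ observed samples, $\mathcal{W}_p$ is the order-$p$ Wasserstein distance, and $\delta > 0$ is the radius of the Wasserstein ball around $\hat{P}_n$.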
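As a reference point for the local-perturbation scheme, below is a minimal sketch of Mixup as described by Zhang et al. (2017); the function name and the Beta parameter alpha are illustrative assumptions, not the paper's implementation:

import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    # Draw a mixing weight from a Beta(alpha, alpha) distribution.
    lam = np.random.beta(alpha, alpha)
    # Convexly combine the two inputs to form a locally perturbed example.
    x = lam * x1 + (1.0 - lam) * x2
    # Combine the labels the same way (labels assumed one-hot encoded).
    y = lam * y1 + (1.0 - lam) * y2
    return x, y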
