
Does Distributionally Robust Supervised Learning Give Robust Classifiers?
Weihua Hu · Gang Niu · Issei Sato · Masashi Sugiyama

Thu Jul 12 09:15 AM -- 12:00 PM (PDT) @ Hall B #98

Distributionally Robust Supervised Learning (DRSL) is necessary for building reliable machine learning systems. When machine learning is deployed in the real world, its performance can be significantly degraded because test data may follow a different distribution from training data. DRSL with f-divergences explicitly considers the worst-case distribution shift by minimizing the adversarially reweighted training loss. In this paper, we analyze this DRSL, focusing on the classification scenario. Since the DRSL is explicitly formulated for a distribution shift scenario, we naturally expect it to give a robust classifier that can aggressively handle shifted distributions. However, surprisingly, we prove that the DRSL just ends up giving a classifier that exactly fits the given training distribution, which is too pessimistic. This pessimism comes from two sources: the particular losses used in classification and the fact that the set of distributions to which the DRSL tries to be robust is too broad. Motivated by our analysis, we propose a simple DRSL that overcomes this pessimism and empirically demonstrate its effectiveness.
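To illustrate the "adversarially reweighted training loss" mentioned in the abstract, the following is a minimal sketch, not the authors' code or exact formulation: it uses the common KL-divergence dual form, in which the adversarial weights are a softmax of the per-example losses, and the hypothetical temperature parameter `tau` controls how pessimistic the reweighting is (smaller `tau` concentrates weight on the hardest examples).

```python
# Minimal sketch of an adversarially reweighted classification loss (KL-type DRSL dual).
# All names here (adversarially_reweighted_loss, tau) are illustrative assumptions,
# not the paper's notation.
import torch
import torch.nn.functional as F

def adversarially_reweighted_loss(logits, targets, tau=1.0):
    """Worst-case reweighted loss: up-weights high-loss examples via a softmax."""
    per_example = F.cross_entropy(logits, targets, reduction="none")
    # Detach the weights so gradients flow only through the losses,
    # not through the adversarial reweighting itself.
    weights = torch.softmax(per_example.detach() / tau, dim=0)
    return (weights * per_example).sum()

# Usage: one training step on a toy linear classifier.
model = torch.nn.Linear(10, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
loss = adversarially_reweighted_loss(model(x), y, tau=0.5)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the weights concentrate on the examples the current classifier gets most wrong, this objective behaves like the pessimistic reweighting the paper analyzes; the paper's contribution is to show why, for classification losses, this collapses to fitting the training distribution and how to relax it.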

Author Information

Weihua Hu (The University of Tokyo)
Gang Niu (RIKEN)

Gang Niu is currently an indefinite-term senior research scientist at the RIKEN Center for Advanced Intelligence Project.

Issei Sato (University of Tokyo / RIKEN)
Masashi Sugiyama (RIKEN / The University of Tokyo)

