Oral
Complementary-Label Learning for Arbitrary Losses and Models
Takashi Ishida · Gang Niu · Aditya Menon · Masashi Sugiyama

Thu Jun 13 09:25 AM -- 09:30 AM (PDT) @ Room 103

In contrast to the standard classification paradigm, where the true (or possibly noisy) class is given for each training pattern, complementary-label learning uses training patterns that are each equipped only with a complementary label, which specifies one of the classes the pattern does not belong to. The goal of this paper is to derive a novel framework of complementary-label learning with an unbiased estimator of the classification risk for arbitrary losses and models; all existing methods have failed to achieve this goal. With this framework, model and hyper-parameter selection (through cross-validation) becomes possible without the need for any ordinarily labeled validation data, while using any linear or non-linear models and convex or non-convex loss functions. We further improve the risk estimator by a non-negative correction and a gradient-descent-ascent trick, and demonstrate its superiority through experiments.
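As a concrete illustration of the unbiased risk estimator described above (a minimal sketch, not the authors' released code): assuming each complementary label is drawn uniformly from the K-1 incorrect classes, the class-posterior identity p(y|x) = 1 - (K-1) p_bar(y|x) rewrites the ordinary classification risk as E[ sum_k loss(f(x), k) ] - (K-1) E[ loss(f(x), y_bar) ], which can be estimated from complementarily labeled data alone. The function name complementary_risk and the choice of cross-entropy below are illustrative, not taken from the paper.

import torch
import torch.nn.functional as F

def complementary_risk(logits, comp_labels, num_classes):
    """Unbiased classification-risk estimate from complementary labels only.

    Assumes comp_labels were drawn uniformly over the K-1 wrong classes, so
    p(y|x) = 1 - (K-1) * p_bar(y|x) and therefore
    R(f) = E[ sum_k loss(f(x), k) ] - (K-1) * E[ loss(f(x), y_bar) ].
    """
    K = num_classes
    # Per-example loss against every class label: shape (n, K).
    losses = torch.stack(
        [F.cross_entropy(logits, torch.full_like(comp_labels, k), reduction="none")
         for k in range(K)],
        dim=1)
    sum_all = losses.sum(dim=1)                                   # sum_k loss(f(x), k)
    comp = losses.gather(1, comp_labels.unsqueeze(1)).squeeze(1)  # loss(f(x), y_bar)
    return (sum_all - (K - 1) * comp).mean()

# Usage with any differentiable model (illustrative):
#   risk = complementary_risk(model(x), comp_labels, num_classes=10)
#   risk.backward()

With flexible models this estimate can go negative on a mini-batch; the non-negative correction and gradient-descent-ascent trick mentioned in the abstract address exactly that failure mode and are omitted from this sketch.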

Author Information

Takashi Ishida (The University of Tokyo / RIKEN)
Gang Niu (RIKEN)

Gang Niu is currently a research scientist (indefinite-term) at RIKEN Center for Advanced Intelligence Project. He received the PhD degree in computer science from Tokyo Institute of Technology in 2013. Before joining RIKEN as a research scientist, he was a senior software engineer at Baidu and then an assistant professor at the University of Tokyo. He has published more than 70 journal articles and conference papers, including 14 NeurIPS (1 oral and 3 spotlights), 28 ICML, and 2 ICLR (1 oral) papers. He has served as an area chair 14 times, including ICML 2019--2021, NeurIPS 2019--2021, and ICLR 2021--2022.

Aditya Menon (Australian National University)
Masashi Sugiyama (RIKEN / The University of Tokyo)
