Poster

Disentangling Sampling and Labeling Bias for Learning in Large-output Spaces

Ankit Singh Rawat · Aditya Menon · Wittawat Jitkrittum · Sadeep Jayasumana · Felix Xinnan Yu · Sashank Jakkam Reddi · Sanjiv Kumar

Keywords: [ Supervised Learning ] [ Algorithms ]

Wed 21 Jul 9 a.m. PDT — 11 a.m. PDT
 
Spotlight presentation: Supervised Learning 1
Wed 21 Jul 5 a.m. PDT — 6 a.m. PDT

Abstract:

Negative sampling schemes enable efficient training given a large number of classes, by offering a means to approximate a computationally expensive loss function that takes all labels into account. In this paper, we present a new connection between these schemes and loss modification techniques for countering label imbalance. We show that different negative sampling schemes implicitly trade off performance on dominant versus rare labels. Further, we provide a unified approach to explicitly tackle both sampling bias, arising from working with a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance. We empirically verify our findings on long-tail classification and retrieval benchmarks.
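To make the setting concrete, the sketch below contrasts the exact full-softmax cross-entropy with a sampled-softmax approximation that scores only the positive label and a sampled subset of negatives, applying the standard log-proposal correction to the sampled logits. This is a minimal NumPy illustration of generic negative sampling, not the specific estimators or bias corrections studied in the paper; the function names and the uniform proposal distribution are assumptions for the example.

```python
import numpy as np

def full_softmax_loss(logits, label):
    # Exact cross-entropy over all classes: -log softmax(logits)[label].
    # This is the computationally expensive loss when the output space is large.
    shifted = logits - logits.max()  # numerical stability
    return float(np.log(np.exp(shifted).sum()) - shifted[label])

def sampled_softmax_loss(logits, label, neg_ids, num_classes):
    # Approximate the loss using only the positive label plus sampled
    # negatives. Each scored logit is shifted by -log(m * q(c)), where
    # q is the proposal distribution (uniform here, an assumption for
    # this sketch) and m is the number of sampled negatives. Different
    # choices of q change this shift per label, which is what implicitly
    # re-weights dominant versus rare labels.
    ids = np.concatenate(([label], np.asarray(neg_ids)))
    q = np.full(ids.shape, 1.0 / num_classes)      # uniform proposal
    adjusted = logits[ids] - np.log(len(neg_ids) * q)
    shifted = adjusted - adjusted.max()
    return float(np.log(np.exp(shifted).sum()) - shifted[0])
```

With a uniform proposal and all negatives included, the correction is a constant shift, so the sampled loss recovers the exact one; with fewer negatives or a non-uniform proposal, the two diverge, and that gap is the sampling bias the paper analyzes.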
