Do More Negative Samples Necessarily Hurt In Contrastive Learning?
Pranjal Awasthi · Nishanth Dikkala · Pritish Kamath

Thu Jul 21 01:05 PM -- 01:25 PM (PDT) @ Hall F

Recent investigations in noise contrastive estimation suggest, both empirically and theoretically, that having more "negative samples" in the contrastive loss improves downstream classification performance initially, but beyond a threshold it results in worse downstream classification performance due to a "collision-coverage" tradeoff. But is such a phenomenon inherent in contrastive learning? We show, in a simple framework where positive pairs are generated by sampling from the underlying latent class (introduced by Saunshi et al. (ICML 2019)), that the downstream performance of the representation optimizing the (population) contrastive loss in fact does not degrade with the number of negative samples. Along the way, we give a structural characterization of the optimal representation under such forms of noise contrastive estimation. We also provide empirical support for our observations on the CIFAR-10 and CIFAR-100 datasets.
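To make the object of study concrete, the following is a minimal sketch of a generic noise contrastive loss with k negative samples, of the kind discussed above. It is an illustrative implementation of the standard softmax-based contrastive objective, not necessarily the exact formulation used in the paper; all function and variable names here are our own.

```python
import numpy as np

def contrastive_loss(f_x, f_pos, f_negs):
    """Softmax-based contrastive loss with k negative samples.

    f_x:    representation of the anchor point, shape (d,)
    f_pos:  representation of the positive sample, shape (d,)
    f_negs: representations of the k negative samples, shape (k, d)

    Returns -log( exp(f_x.f_pos) / (exp(f_x.f_pos) + sum_i exp(f_x.f_neg_i)) ),
    i.e. cross-entropy with the positive treated as the correct class
    among 1 + k candidates.
    """
    pos_logit = f_x @ f_pos            # similarity with the positive sample
    neg_logits = f_negs @ f_x          # similarities with the k negatives
    logits = np.concatenate([[pos_logit], neg_logits])
    logits = logits - logits.max()     # shift for numerical stability
    return float(-logits[0] + np.log(np.exp(logits).sum()))
```

Increasing k adds more terms to the denominator, which is where the "collision" concern arises: with many negatives, some will share the anchor's latent class and get pushed away despite being semantically similar.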

Author Information

Pranjal Awasthi (Google)
Nishanth Dikkala (Google Research)
Pritish Kamath (Google Research)
