Poster
Maximum Likelihood with Bias-Corrected Calibration is Hard-To-Beat at Label Shift Adaptation
Amr Mohamed Alexandari · Anshul Kundaje · Avanti Shrikumar

Tue Jul 14 11:00 AM -- 11:45 AM & Tue Jul 14 10:00 PM -- 10:45 PM (PDT) @ Virtual

Label shift refers to the phenomenon where the prior class probability p(y) changes between the training and test distributions, while the conditional probability p(x|y) stays fixed. Label shift arises in settings like medical diagnosis, where a classifier trained to predict disease given symptoms must be adapted to scenarios where the baseline prevalence of the disease is different. Given estimates of p(y|x) from a predictive model, Saerens et al. proposed an efficient maximum likelihood algorithm to correct for label shift that does not require model retraining, but a limiting assumption of this algorithm is that p(y|x) is calibrated, which is not true of modern neural networks. Recently, Black Box Shift Learning (BBSL) and Regularized Learning under Label Shifts (RLLS) have emerged as state-of-the-art techniques to cope with label shift when a classifier does not output calibrated probabilities, but both methods require model retraining with importance weights, and neither has been benchmarked against maximum likelihood. Here we (1) show that combining maximum likelihood with a type of calibration we call bias-corrected calibration outperforms both BBSL and RLLS across diverse datasets and distribution shifts, (2) prove that the maximum likelihood objective is concave, and (3) introduce a principled strategy for estimating source-domain priors that improves robustness to poor calibration. This work demonstrates that maximum likelihood with appropriate calibration is a formidable and efficient baseline for label shift adaptation. Notebooks reproducing the experiments are available at https://github.com/kundajelab/labelshiftexperiments
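The maximum likelihood correction referenced above (Saerens et al., 2002) can be computed as a simple EM iteration over the unlabeled test set: reweight each calibrated posterior by the ratio of the current estimate of the test priors to the source priors, renormalize, and re-estimate the test priors as the average adapted posterior. Below is a minimal sketch of that procedure; the function name `em_label_shift` and the convergence settings are illustrative, not the authors' exact implementation, and the code assumes the input posteriors are already calibrated (e.g., via the bias-corrected calibration the abstract describes).

```python
import numpy as np

def em_label_shift(probs, source_priors, num_iters=100, tol=1e-8):
    """EM re-estimation of test-set class priors (Saerens et al., 2002).

    probs:         (n, k) array of calibrated source-model posteriors p_s(y|x)
                   evaluated on unlabeled test examples.
    source_priors: (k,) array of source-domain class priors p_s(y).
    Returns the estimated test priors q(y) and the adapted posteriors.
    """
    q = source_priors.copy()
    for _ in range(num_iters):
        # E-step: reweight posteriors by the prior ratio and renormalize per example.
        adapted = probs * (q / source_priors)
        adapted /= adapted.sum(axis=1, keepdims=True)
        # M-step: new prior estimate is the mean adapted posterior.
        q_new = adapted.mean(axis=0)
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    # Final adapted posteriors under the converged prior estimate.
    adapted = probs * (q / source_priors)
    adapted /= adapted.sum(axis=1, keepdims=True)
    return q, adapted
```

No model retraining is needed: only the model's posteriors on the test set and the source priors enter the update, which is what makes this baseline cheap compared to importance-weighted retraining approaches like BBSL and RLLS.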

Author Information

Amr Mohamed Alexandari (Stanford University)
Anshul Kundaje (Stanford University)
Avanti Shrikumar (Stanford University)

More from the Same Authors

  • 2021 Workshop: ICML 2021 Workshop on Computational Biology »
    Yubin Xie · Cassandra Burdziak · Amine Remita · Elham Azizi · Abdoulaye Baniré Diallo · Sandhya Prabhakaran · Debora Marks · Dana Pe'er · Wesley Tansey · Julia Vogt · Engelbert MEPHU NGUIFO · Jaan Altosaar · Anshul Kundaje · Sabeur Aridhi · Bishnu Sarker · Wajdi Dhifli · Alexander Anderson
  • 2021 Poster: WILDS: A Benchmark of in-the-Wild Distribution Shifts »
    Pang Wei Koh · Shiori Sagawa · Henrik Marklund · Sang Michael Xie · Marvin Zhang · Akshay Balsubramani · Weihua Hu · Michihiro Yasunaga · Richard Lanas Phillips · Irena Gao · Tony Lee · Etienne David · Ian Stavness · Wei Guo · Berton Earnshaw · Imran Haque · Sara Beery · Jure Leskovec · Anshul Kundaje · Emma Pierson · Sergey Levine · Chelsea Finn · Percy Liang
  • 2021 Oral: WILDS: A Benchmark of in-the-Wild Distribution Shifts »
    Pang Wei Koh · Shiori Sagawa · Henrik Marklund · Sang Michael Xie · Marvin Zhang · Akshay Balsubramani · Weihua Hu · Michihiro Yasunaga · Richard Lanas Phillips · Irena Gao · Tony Lee · Etienne David · Ian Stavness · Wei Guo · Berton Earnshaw · Imran Haque · Sara Beery · Jure Leskovec · Anshul Kundaje · Emma Pierson · Sergey Levine · Chelsea Finn · Percy Liang
  • 2020 Workshop: ICML 2020 Workshop on Computational Biology »
    Workshop CompBio · Delasa Aghamirzaie · Alexander Anderson · Elham Azizi · Abdoulaye Baniré Diallo · Cassandra Burdziak · Jill Gallaher · Anshul Kundaje · Dana Pe'er · Sandhya Prabhakaran · Amine Remita · Mark Robertson-Tessi · Wesley Tansey · Julia Vogt · Yubin Xie
  • 2019 Workshop: ICML 2019 Workshop on Computational Biology »
    Donna Pe'er · Sandhya Prabhakaran · Elham Azizi · Abdoulaye Baniré Diallo · Anshul Kundaje · Barbara Engelhardt · Wajdi Dhifli · Engelbert MEPHU NGUIFO · Wesley Tansey · Julia Vogt · Jennifer Listgarten · Cassandra Burdziak · Workshop CompBio
  • 2017 Poster: Learning Important Features Through Propagating Activation Differences »
    Avanti Shrikumar · Peyton Greenside · Anshul Kundaje
  • 2017 Talk: Learning Important Features Through Propagating Activation Differences »
    Avanti Shrikumar · Peyton Greenside · Anshul Kundaje