Poster
When does Privileged Information Explain Away Label Noise?
Guillermo Ortiz Jimenez · Mark Collier · Anant Nawalgaria · Alexander D'Amour · Jesse Berent · Rodolphe Jenatton · Efi Kokiopoulou

Wed Jul 26 02:00 PM -- 03:30 PM (PDT) @ Exhibit Hall 1 #115

Leveraging privileged information (PI), or features available during training but not at test time, has recently been shown to be an effective method for addressing label noise. However, the reasons for its effectiveness are not well understood. In this study, we investigate the role played by different properties of the PI in explaining away label noise. Through experiments on multiple datasets with real PI (CIFAR-N/H) and a new large-scale benchmark ImageNet-PI, we find that PI is most helpful when it allows networks to easily distinguish clean from noisy data, while enabling a learning shortcut to memorize the noisy examples. Interestingly, when PI becomes too predictive of the target label, PI methods often perform worse than their no-PI baselines. Based on these findings, we propose several enhancements to the state-of-the-art PI methods and demonstrate the potential of PI as a means of tackling label noise. Finally, we show how we can easily combine the resulting PI approaches with existing no-PI techniques designed to deal with label noise.

Author Information

Guillermo Ortiz Jimenez (EPFL)
Mark Collier (Google)
Anant Nawalgaria (Research, Google)
Alexander D'Amour (Google DeepMind)
Jesse Berent (Google)
Rodolphe Jenatton (Google Research)
Efi Kokiopoulou (Google AI)

Efi has been a research scientist at Google since February 2013. She joined Google as a PostDoc researcher in September 2011. Before that she was a postdoctoral research fellow at the Seminar for Applied Mathematics (SAM) at ETH Zurich. She completed her PhD studies in December 2008 at the Signal Processing Laboratory (LTS4) of the Swiss Federal Institute of Technology (EPFL), Lausanne, under the supervision of Prof. Pascal Frossard. Before that she was with the Computer Science & Engineering Department of the University of Minnesota, USA, where she obtained her M.Sc. degree in June 2005 under the supervision of Prof. Yousef Saad. She obtained B.Eng. and M.Sc.Eng. degrees in 2002 and 2003, respectively, at the Computer Engineering and Informatics Department of the University of Patras, Greece.

More from the Same Authors

  • 2022 : Catastrophic overfitting is a bug but also a feature »
    Guillermo Ortiz Jimenez · Pau de Jorge Aranda · Amartya Sanyal · Adel Bibi · Puneet Dokania · Pascal Frossard · Gregory Rogez · Phil Torr
  • 2022 : Fairness and robustness in anti-causal prediction »
    Maggie Makar · Alexander D'Amour
  • 2022 : Plex: Towards Reliability using Pretrained Large Model Extensions »
    Dustin Tran · Andreas Kirsch · Balaji Lakshminarayanan · Huiyi Hu · Du Phan · D. Sculley · Jasper Snoek · Jeremiah Liu · Jie Ren · Joost van Amersfoort · Kehang Han · E. Kelly Buchanan · Kevin Murphy · Mark Collier · Mike Dusenberry · Neil Band · Nithum Thain · Rodolphe Jenatton · Tim G. J Rudner · Yarin Gal · Zachary Nado · Zelda Mariet · Zi Wang · Zoubin Ghahramani
  • 2022 : Adapting to Shifts in Latent Confounders via Observed Concepts and Proxies »
    Matt Kusner · Ibrahim Alabdulmohsin · Stephen Pfohl · Olawale Salaudeen · Arthur Gretton · Sanmi Koyejo · Jessica Schrouff · Alexander D'Amour
  • 2023 : Three Towers: Flexible Contrastive Learning with Pretrained Image Models »
    Jannik Kossen · Mark Collier · Basil Mustafa · Xiao Wang · Xiaohua Zhai · Lucas Beyer · Andreas Steiner · Jesse Berent · Rodolphe Jenatton · Efi Kokiopoulou
  • 2023 Poster: Underspecification Presents Challenges for Credibility in Modern Machine Learning »
    Alexander D'Amour · Katherine Heller · Dan Moldovan · Ben Adlam · Babak Alipanahi · Alex Beutel · Christina Chen · Jonathan Deaton · Jacob Eisenstein · Matthew Hoffman · Farhad Hormozdiari · Neil Houlsby · Shaobo Hou · Ghassen Jerfel · Alan Karthikesalingam · Mario Lucic · Yian Ma · Cory McLean · Diana Mincu · Akinori Mitani · Andrea Montanari · Zachary Nado · Vivek Natarajan · Christopher Nielson · Thomas F. Osborne · Rajiv Raman · Kim Ramasamy · Rory sayres · Jessica Schrouff · Martin Seneviratne · Shannon Sequeira · Harini Suresh · Victor Veitch · Maksym Vladymyrov · Xuezhi Wang · Kellie Webster · Steve Yadlowsky · Taedong Yun · Xiaohua Zhai · D. Sculley
  • 2023 Poster: Scaling Vision Transformers to 22 Billion Parameters »
    Mostafa Dehghani · Josip Djolonga · Basil Mustafa · Piotr Padlewski · Jonathan Heek · Justin Gilmer · Andreas Steiner · Mathilde Caron · Robert Geirhos · Ibrahim Alabdulmohsin · Rodolphe Jenatton · Lucas Beyer · Michael Tschannen · Anurag Arnab · Xiao Wang · Carlos Riquelme · Matthias Minderer · Joan Puigcerver · Utku Evci · Manoj Kumar · Sjoerd van Steenkiste · Gamaleldin Elsayed · Aravindh Mahendran · Fisher Yu · Avital Oliver · Fantine Huot · Jasmijn Bastings · Mark Collier · Alexey Gritsenko · Vighnesh N Birodkar · Cristina Vasconcelos · Yi Tay · Thomas Mensink · Alexander Kolesnikov · Filip Pavetic · Dustin Tran · Thomas Kipf · Mario Lucic · Xiaohua Zhai · Daniel Keysers · Jeremiah Harmsen · Neil Houlsby
  • 2023 Oral: Scaling Vision Transformers to 22 Billion Parameters »
    Mostafa Dehghani · Josip Djolonga · Basil Mustafa · Piotr Padlewski · Jonathan Heek · Justin Gilmer · Andreas Steiner · Mathilde Caron · Robert Geirhos · Ibrahim Alabdulmohsin · Rodolphe Jenatton · Lucas Beyer · Michael Tschannen · Anurag Arnab · Xiao Wang · Carlos Riquelme · Matthias Minderer · Joan Puigcerver · Utku Evci · Manoj Kumar · Sjoerd van Steenkiste · Gamaleldin Elsayed · Aravindh Mahendran · Fisher Yu · Avital Oliver · Fantine Huot · Jasmijn Bastings · Mark Collier · Alexey Gritsenko · Vighnesh N Birodkar · Cristina Vasconcelos · Yi Tay · Thomas Mensink · Alexander Kolesnikov · Filip Pavetic · Dustin Tran · Thomas Kipf · Mario Lucic · Xiaohua Zhai · Daniel Keysers · Jeremiah Harmsen · Neil Houlsby
  • 2022 Poster: Transfer and Marginalize: Explaining Away Label Noise with Privileged Information »
    Mark Collier · Rodolphe Jenatton · Efi Kokiopoulou · Jesse Berent
  • 2022 Spotlight: Transfer and Marginalize: Explaining Away Label Noise with Privileged Information »
    Mark Collier · Rodolphe Jenatton · Efi Kokiopoulou · Jesse Berent
  • 2021 Workshop: The Neglected Assumptions In Causal Inference »
    Niki Kilbertus · Lily Hu · Laura Balzer · Uri Shalit · Alexander D'Amour · Razieh Nabi
  • 2020 Poster: The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks »
    Jakub Swiatkowski · Kevin Roth · Bastiaan Veeling · Linh Tran · Joshua V Dillon · Jasper Snoek · Stephan Mandt · Tim Salimans · Rodolphe Jenatton · Sebastian Nowozin
  • 2020 Poster: How Good is the Bayes Posterior in Deep Neural Networks Really? »
    Florian Wenzel · Kevin Roth · Bastiaan Veeling · Jakub Swiatkowski · Linh Tran · Stephan Mandt · Jasper Snoek · Tim Salimans · Rodolphe Jenatton · Sebastian Nowozin