Properness for supervised losses stipulates that the loss function shapes the learning algorithm towards the true posterior of the data generating distribution. Unfortunately, data in modern machine learning can be corrupted or twisted in many ways. Hence, optimizing a proper loss function on twisted data could perilously lead the learning algorithm towards the twisted posterior, rather than to the desired clean posterior. Many papers cope with specific twists (e.g., label/feature/adversarial noise), but there is a growing need for a unified and actionable understanding atop properness. Our chief theoretical contribution is a generalization of the properness framework with a notion called twist-properness, which delineates loss functions with the ability to "untwist" the twisted posterior into the clean posterior. Notably, we show that a nontrivial extension of a loss function called alpha-loss, which was first introduced in information theory, is twist-proper. We study the twist-proper alpha-loss under a novel boosting algorithm, called PILBoost, and provide formal and experimental results for this algorithm. Our overarching practical conclusion is that the twist-proper alpha-loss outperforms the proper log-loss on several variants of twisted data.
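For intuition on the loss referenced above: in the information-theory literature the alpha-loss is typically written, for the probability p assigned to the true class, as l_alpha(p) = (alpha/(alpha-1))(1 - p^((alpha-1)/alpha)), recovering the log-loss in the limit alpha -> 1. The sketch below is a minimal NumPy illustration of that standard form only; the function name and clipping constant are our own choices, and it is not the twist-proper extension or the PILBoost algorithm introduced in the paper.

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """Pointwise alpha-loss of the probability assigned to the true class.

    Standard form from the information-theory literature:
        l_alpha(p) = (alpha / (alpha - 1)) * (1 - p ** ((alpha - 1) / alpha)),
    with the log-loss recovered in the limit alpha -> 1.
    (Illustrative only; not the twist-proper extension from the paper.)
    """
    p = np.clip(p_true, 1e-12, 1.0)           # avoid log(0) and 0**negative
    if np.isclose(alpha, 1.0):
        return -np.log(p)                     # alpha -> 1 limit: log-loss
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

# A confident-but-wrong prediction (p = 0.1 on the true class) is penalized
# more heavily for small alpha and more gently for large alpha.
for a in (0.5, 1.0, 2.0):
    print(f"alpha={a}: loss={alpha_loss(0.1, a):.3f}")
```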
Author Information
Tyler Sypherd (Arizona State University)
Richard Nock (Google Research)
Lalitha Sankar (Arizona State University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Being Properly Improper
  Tue. Jul 19th through Wed. Jul 20th, Hall E #1213
More from the Same Authors
- 2021: Neural Network-based Estimation of the MMSE
  Mario Diaz · Peter Kairouz · Lalitha Sankar
- 2021: Realizing GANs via a Tunable Loss Function
  Gowtham Raghunath Kurri · Tyler Sypherd · Lalitha Sankar
- 2022: Fair Universal Representations using Adversarial Models
  Monica Welfert · Peter Kairouz · Jiachun Liao · Chong Huang · Lalitha Sankar
- 2022: AugLoss: A Robust, Reliable Methodology for Real-World Corruptions
  Kyle Otstot · John Kevin Cava · Tyler Sypherd · Lalitha Sankar
- 2023: Tunable Dual-Objective GANs for Stable Training
  Monica Welfert · Kyle Otstot · Gowtham Kurri · Lalitha Sankar
- 2023 Poster: Fair Densities via Boosting the Sufficient Statistics of Exponential Families
  Alexander Soen · Hisham Husain · Richard Nock
- 2023 Oral: Random Classification Noise does not defeat All Convex Potential Boosters Irrespective of Model Choice
  Yishay Mansour · Richard Nock · Robert C. Williamson
- 2023 Poster: LegendreTron: Uprising Proper Multiclass Loss Learning
  Kevin H. Lam · Christian Walder · Spiridon Penev · Richard Nock
- 2023 Poster: Random Classification Noise does not defeat All Convex Potential Boosters Irrespective of Model Choice
  Yishay Mansour · Richard Nock · Robert C. Williamson
- 2023 Poster: The Saddle-Point Method in Differential Privacy
  Wael Alghamdi · Felipe Gomez · Shahab Asoodeh · Flavio Calmon · Oliver Kosut · Lalitha Sankar
- 2022 Poster: Neural Network Poisson Models for Behavioural and Neural Spike Train Data
  Moein Khajehnejad · Forough Habibollahi Saatlou · Richard Nock · Ehsan Arabzadeh · Peter Dayan · Amir Dezfouli
- 2022 Spotlight: Neural Network Poisson Models for Behavioural and Neural Spike Train Data
  Moein Khajehnejad · Forough Habibollahi Saatlou · Richard Nock · Ehsan Arabzadeh · Peter Dayan · Amir Dezfouli
- 2022 Poster: Generative Trees: Adversarial and Copycat
  Richard Nock · Mathieu Guillame-Bert
- 2022 Oral: Generative Trees: Adversarial and Copycat
  Richard Nock · Mathieu Guillame-Bert
- 2021: Invited Talk: Lalitha Sankar
  Lalitha Sankar
- 2021 Poster: The Impact of Record Linkage on Learning from Feature Partitioned Data
  Richard Nock · Stephen J Hardy · Wilko Henecka · Hamish Ivey-Law · Jakub Nabaglo · Giorgio Patrini · Guillaume Smith · Brian Thorne
- 2021 Spotlight: The Impact of Record Linkage on Learning from Feature Partitioned Data
  Richard Nock · Stephen J Hardy · Wilko Henecka · Hamish Ivey-Law · Jakub Nabaglo · Giorgio Patrini · Guillaume Smith · Brian Thorne
- 2021 Poster: Generalised Lipschitz Regularisation Equals Distributional Robustness
  Zac Cranko · Zhan Shi · Xinhua Zhang · Richard Nock · Simon Kornblith
- 2021 Spotlight: Generalised Lipschitz Regularisation Equals Distributional Robustness
  Zac Cranko · Zhan Shi · Xinhua Zhang · Richard Nock · Simon Kornblith
- 2020 Poster: Supervised learning: no loss no cry
  Richard Nock · Aditya Menon
- 2019 Poster: Monge blunts Bayes: Hardness Results for Adversarial Training
  Zac Cranko · Aditya Menon · Richard Nock · Cheng Soon Ong · Zhan Shi · Christian Walder
- 2019 Poster: Lossless or Quantized Boosting with Integer Arithmetic
  Richard Nock · Robert C Williamson
- 2019 Oral: Lossless or Quantized Boosting with Integer Arithmetic
  Richard Nock · Robert C Williamson
- 2019 Oral: Monge blunts Bayes: Hardness Results for Adversarial Training
  Zac Cranko · Aditya Menon · Richard Nock · Cheng Soon Ong · Zhan Shi · Christian Walder
- 2019 Poster: Boosted Density Estimation Remastered
  Zac Cranko · Richard Nock
- 2019 Oral: Boosted Density Estimation Remastered
  Zac Cranko · Richard Nock
- 2018 Poster: Variational Network Inference: Strong and Stable with Concrete Support
  Amir Dezfouli · Edwin Bonilla · Richard Nock
- 2018 Oral: Variational Network Inference: Strong and Stable with Concrete Support
  Amir Dezfouli · Edwin Bonilla · Richard Nock
- 2017 Workshop: Human in the Loop Machine Learning
  Richard Nock · Cheng Soon Ong