The vulnerability of machine learning models to spurious correlations has mostly been discussed in the context of supervised learning (SL). However, there is little insight into how spurious correlations affect the performance of popular self-supervised learning (SSL) and auto-encoder-based (AE) models. In this work, we shed light on this question by evaluating these models on both real-world and synthetic distribution-shift datasets. Following observations that the linear head itself can be susceptible to spurious correlations, we develop a new evaluation scheme in which the linear head is trained on out-of-distribution (OOD) data, isolating the performance of the pre-trained models from any bias of the linear head used for evaluation. With this methodology, we show that SSL models are consistently more robust to distribution shifts, and thus better at OOD generalisation, than AE and SL models.
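The evaluation scheme described above amounts to linear probing with the head fit on OOD rather than in-distribution data. A minimal sketch of that idea, using a fixed random projection as a stand-in for a frozen pre-trained encoder and synthetic data (all names, dimensions, and the ridge-regularised head here are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(encoder, x):
    """Stand-in for a frozen pre-trained encoder (here, a fixed linear map)."""
    return x @ encoder

# hypothetical dimensions and data splits
d_in, d_feat, n = 32, 16, 200
encoder = rng.normal(size=(d_in, d_feat))            # frozen encoder weights
x_ood = rng.normal(size=(n, d_in))                   # OOD split for head training
y_ood = rng.integers(0, 2, n)
x_test = rng.normal(size=(n, d_in))                  # held-out evaluation split
y_test = rng.integers(0, 2, n)

# train the linear head on OOD features (ridge-regularised least squares
# on {-1, +1} targets), so a spurious ID shortcut cannot be picked up by the head
z_ood = extract_features(encoder, x_ood)
t = 2.0 * y_ood - 1.0
w = np.linalg.solve(z_ood.T @ z_ood + 1e-3 * np.eye(d_feat), z_ood.T @ t)

# evaluate: accuracy now reflects the frozen representation,
# not a potentially biased linear head
z_test = extract_features(encoder, x_test)
acc = np.mean((z_test @ w > 0) == (y_test == 1))
print(f"linear-probe accuracy (head trained on OOD data): {acc:.2f}")
```

With random features and labels the accuracy is near chance; the point of the sketch is the protocol, where the head is fit and evaluated on data drawn from a different distribution than the one used for pre-training.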
Author Information
Yuge Shi (University of Oxford)
Imant Daunhawer (ETH Zurich)
Julia Vogt (Memorial Sloan Kettering Cancer Center)
Phil Torr (University of Oxford)
Amartya Sanyal (ETH Zürich)
Postdoc at Max Planck Institute for Intelligent Systems, Tübingen · Postdoc at ETH Zurich · D.Phil student at University of Oxford · Research Intern at Facebook AI Research
More from the Same Authors
-
2021 : Combating Adversaries with Anti-Adversaries »
Motasem Alfarra · Juan C Perez · Ali Thabet · Adel Bibi · Phil Torr · Bernard Ghanem -
2021 : Detecting and Quantifying Malicious Activity with Simulation-based Inference »
Andrew Gambardella · Naeemullah Khan · Phil Torr · Atilim Gunes Baydin -
2022 : Make Some Noise: Reliable and Efficient Single-Step Adversarial Training »
Pau de Jorge Aranda · Adel Bibi · Riccardo Volpi · Amartya Sanyal · Phil Torr · Gregory Rogez · Puneet Dokania -
2022 : Catastrophic overfitting is a bug but also a feature »
Guillermo Ortiz Jimenez · Pau de Jorge Aranda · Amartya Sanyal · Adel Bibi · Puneet Dokania · Pascal Frossard · Gregory Rogez · Phil Torr -
2022 : Illusionary Attacks on Sequential Decision Makers and Countermeasures »
Tim Franzmeyer · Joao Henriques · Jakob Foerster · Phil Torr · Adel Bibi · Christian Schroeder -
2022 : How robust are pre-trained models to distribution shift? »
Yuge Shi · Imant Daunhawer · Julia Vogt · Phil Torr · Amartya Sanyal -
2023 : Illusory Attacks: Detectability Matters in Adversarial Attacks on Sequential Decision-Makers »
Tim Franzmeyer · Stephen Mcaleer · Joao Henriques · Jakob Foerster · Phil Torr · Adel Bibi · Christian Schroeder -
2023 : Certified Calibration: Bounding Worst-Case Calibration under Adversarial Attacks »
Cornelius Emde · Francesco Pinto · Thomas Lukasiewicz · Phil Torr · Adel Bibi -
2023 : Certifying Ensembles: A General Certification Theory with S-Lipschitzness »
Aleksandar Petrov · Francisco Eiras · Amartya Sanyal · Phil Torr · Adel Bibi -
2023 : (Un)reasonable Allure of Ante-hoc Interpretability for High-stakes Domains: Transparency Is Necessary but Insufficient for Explainability »
Kacper Sokol · Julia Vogt -
2023 : Deep Generative Clustering with Multimodal Variational Autoencoders »
Emanuele Palumbo · Sonia Laguna · Daphné Chopard · Julia Vogt -
2023 : Tree Variational Autoencoders »
Laura Manduchi · Moritz Vandenhirtz · Alain Ryser · Julia Vogt -
2023 : Uncovering Latent Structure Using Random Partition Models »
Thomas Sutter · Alain Ryser · Joram Liebeskind · Julia Vogt -
2023 : Differentiable Set Partitioning »
Thomas Sutter · Alain Ryser · Joram Liebeskind · Julia Vogt -
2023 : Language Model Tokenizers Introduce Unfairness Between Languages »
Aleksandar Petrov · Emanuele La Malfa · Phil Torr · Adel Bibi -
2023 : Who to imitate: Imitating desired behavior from diverse multi-agent datasets »
Tim Franzmeyer · Jakob Foerster · Edith Elkind · Phil Torr · Joao Henriques -
2023 : Provably Correct Physics-Informed Neural Networks »
Francisco Girbal Eiras · Adel Bibi · Rudy Bunel · Krishnamurthy Dvijotham · Phil Torr · M. Pawan Kumar -
2023 Poster: On the Identifiability and Estimation of Causal Location-Scale Noise Models »
Alexander Immer · Christoph Schultheiss · Julia Vogt · Bernhard Schölkopf · Peter Bühlmann · Alexander Marx -
2023 Poster: Tuning Computer Vision Models With Task Rewards »
André Susano Pinto · Alexander Kolesnikov · Yuge Shi · Lucas Beyer · Xiaohua Zhai -
2023 Poster: Graph Inductive Biases in Transformers without Message Passing »
Liheng Ma · Chen Lin · Derek Lim · Adriana Romero Soriano · Puneet Dokania · Mark Coates · Phil Torr · Ser Nam Lim -
2023 Poster: Certifying Ensembles: A General Certification Theory with S-Lipschitzness »
Aleksandar Petrov · Francisco Eiras · Amartya Sanyal · Phil Torr · Adel Bibi -
2022 : Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS »
Christian Schroeder · Yongchao Huang · Phil Torr · Martin Strohmeier -
2022 Poster: Adversarial Masking for Self-Supervised Learning »
Yuge Shi · Siddharth N · Phil Torr · Adam Kosiorek -
2022 Spotlight: Adversarial Masking for Self-Supervised Learning »
Yuge Shi · Siddharth N · Phil Torr · Adam Kosiorek -
2022 Poster: Communicating via Markov Decision Processes »
Samuel Sokota · Christian Schroeder · Maximilian Igl · Luisa Zintgraf · Phil Torr · Martin Strohmeier · Zico Kolter · Shimon Whiteson · Jakob Foerster -
2022 Spotlight: Communicating via Markov Decision Processes »
Samuel Sokota · Christian Schroeder · Maximilian Igl · Luisa Zintgraf · Phil Torr · Martin Strohmeier · Zico Kolter · Shimon Whiteson · Jakob Foerster -
2018 Poster: TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service »
Amartya Sanyal · Matt Kusner · Adria Gascon · Varun Kanade -
2018 Oral: TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service »
Amartya Sanyal · Matt Kusner · Adria Gascon · Varun Kanade -
2017 Poster: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson -
2017 Talk: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson