Machine learning models often break when deployed in the wild, despite excellent performance on benchmarks. In particular, models can learn to rely on apparently unnatural or irrelevant features. For instance: 1) in detecting lung disease from chest X-rays, models rely on the type of scanner rather than physiological signals; 2) in natural language inference, models rely on the number of words shared between premise and hypothesis rather than the subject’s relationship with the object; 3) in precision medicine, polygenic risk scores for diseases like breast cancer rely on genes prevalent mainly in European populations, and predict poorly in other populations. In these and other examples, the undesirable behavior stems from the model exploiting a spurious correlation. Improper treatment of spurious correlations can discourage the use of ML in the real world and, in extreme cases, lead to catastrophic consequences.

The recent surge of interest in this issue is accordingly welcome and timely: more than 50 closely related papers have been published in ICML 2021, NeurIPS 2021, and ICLR 2022 alone. Yet the most fundamental questions remain unanswered. How should the notion of spurious correlation be made precise? How should one evaluate models in the presence of spurious correlations? In which situations can a given method be expected to work, or to fail? Which notions of invariance are fruitful and tractable?

Further, relevant work has sprung up ad hoc from several distinct communities, with limited interplay between them: invariance- and independence-constrained learning in causality-inspired ML, methods to decorrelate predictions and protected attributes (e.g., race) in algorithmic fairness, and stress-testing procedures to discover unexpected model dependencies in reliable ML. This workshop will bring these communities together to make progress on common foundational problems, and will facilitate their interaction with domain experts to build impactful collaborations.
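To make the failure mode concrete, here is a minimal synthetic sketch in Python. It is purely illustrative and not drawn from any of the accepted papers; the data-generating process, feature names, and parameters are all invented for the example. A nuisance feature agrees with the label 95% of the time during training (standing in for scanner type or word overlap), but that agreement is reversed at test time; a standard logistic regression latches onto the shortcut and collapses under the shift.

```python
# Illustrative sketch of a spurious correlation (all names/parameters invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    # Balanced binary labels; "core" is a weak but stable signal;
    # "spurious" matches the label with probability `spurious_corr`.
    y = rng.integers(0, 2, size=n)
    core = y + rng.normal(0.0, 1.0, size=n)
    agree = rng.random(n) < spurious_corr
    spurious = np.where(agree, y, 1 - y) + rng.normal(0.0, 0.1, size=n)
    return np.column_stack([core, spurious]), y

# Shortcut is nearly perfect at train time, reversed at test time.
X_tr, y_tr = make_data(10_000, spurious_corr=0.95)
X_te, y_te = make_data(10_000, spurious_corr=0.05)

clf = LogisticRegression().fit(X_tr, y_tr)
print(f"train accuracy: {clf.score(X_tr, y_tr):.3f}")  # high: model rides the shortcut
print(f"test accuracy:  {clf.score(X_te, y_te):.3f}")  # typically well below chance
print("weights [core, spurious]:", clf.coef_.round(2))
```

Running the sketch, the learned weight on the spurious feature tends to dominate the weight on the stable one, so the same model that looks excellent in-distribution fails badly once the shortcut no longer tracks the label, mirroring the deployment failures described above.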
| Event | Type |
| --- | --- |
| Introductory Remarks | Presentation |
| Invited Talks 1: Bernhard Schölkopf and David Lopez-Paz | Invited talks |
| Invited Talks 1: Q&A | Q&A session |
| Break | Break |
| Invited Talks 2: Christina Heinze-Deml and Marzyeh Ghassemi | Invited talks |
| Invited Talks 2: Q&A with Christina and Marzyeh | Q&A session |
| Spotlights | |
| Lunch Break | Break |
| Poster Session (in-person only) | In-person poster session |
| Invited Talks 3: Amy Zhang, Rich Zemel, and Liting Sun | Invited talks |
| Invited Talks 3: Q&A with Amy, Rich, and Liting | Live Q&A session |
| SCIS 2022 Panel | Live panel over Zoom |
| Closing Remarks | Presentation |
| Poster Session (in-person only) | In-person poster session |
| Breakout Sessions | Breakout sessions (in-person and virtual) |
| Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty | Poster |
| On the nonlinear correlation of ML performance across data subpopulations | Poster |
| DAFT: Distilling Adversarially Fine-tuned teachers for OOD Robustness | Poster |
| Are Vision Transformers Robust to Spurious Correlations? | Poster |
| Learning Debiased Classifier with Biased Committee | Poster |
| Understanding Rare Spurious Correlations in Neural Networks | Poster |
| How robust are pre-trained models to distribution shift? | Poster |
| Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations | Poster |
| A Unified Causal View of Domain Invariant Representation Learning | Poster |
| Causal Omnivore: Fusing Noisy Estimates of Spurious Correlations | Poster |
| Towards Multi-level Fairness and Robustness on Federated Learning | Poster |
| Towards Group Robustness in the Presence of Partial Group Labels | Poster |
| Improving Group-based Robustness and Calibration via Ordered Risk and Confidence Regularization | Poster |
| Learning Switchable Representation with Masked Decoding and Sparse Encoding | Poster |
| On the Generalization and Adaption Performance of Causal Models | Poster |
| How much Data is Augmentation Worth? | Poster |
| Self-Supervision on Images and Text Reduces Reliance on Visual Shortcut Features | Poster |
| The Importance of Background Information for Out of Distribution Generalization | Poster |
| Using causal modeling to analyze generalization of biomarkers in high-dimensional domains: a case study of adaptive immune repertoires | Poster |
| Causal Discovery using Model Invariance through Knockoff Interventions | Poster |
| Unsupervised Causal Generative Understanding of Images | Poster |
| Characterizing Datapoints via Second-Split Forgetting | Poster |
| "Why did the Model Fail?": Attributing Model Performance Changes to Distribution Shifts | Poster |
| Towards Environment-Invariant Representation Learning for Robust Task Transfer | Poster |
| BARACK: Partially Supervised Group Robustness With Guarantees | Poster |
| A Study of Causal Confusion in Preference-Based Reward Learning | Poster |
| Unsupervised Learning under Latent Label Shift | Poster |
| Representation Learning as Finding Necessary and Sufficient Causes | Poster |
| Policy Architectures for Compositional Generalization in Control | Poster |
| Understanding Generalization and Robustness of Learned Representations of Chaotic Dynamical Systems | Poster |
| Invariance Principle Meets Out-of-Distribution Generalization on Graphs | Poster |
| Selection Bias Induced Spurious Correlations in Large Language Models | Poster |
| Detecting Shortcut Learning using Mutual Information | Poster |
| Probing Classifiers are Unreliable for Concept Removal and Detection | Poster |
| Invariance Discovery for Systematic Generalization in Reinforcement Learning | Poster |
| Modeling the Data-Generating Process is Necessary for Out-of-Distribution Generalization | Poster |
| Towards Domain Adversarial Methods to Mitigate Texture Bias | Poster |
| Evaluating and Improving Robustness of Self-Supervised Representations to Spurious Correlations | Poster |
| Causal Prediction Can Induce Performative Stability | Poster |
| In the Eye of the Beholder: Robust Prediction with Causal User Modeling | Poster |
| Towards Better Understanding of Self-Supervised Representations | Poster |
| Learning to induce causal structure | Poster |
| Finding Spuriously Correlated Visual Attributes | Poster |
| Causal Balancing for Domain Generalization | Poster |
| SimpleSpot and Evaluating Systemic Errors using Synthetic Image Datasets | Poster |
| HyperInvariances: Amortizing Invariance Learning | Poster |
| Diversify and Disambiguate: Learning from Underspecified Data | Poster |
| Optimization-based Causal Estimation from Heterogenous Environments | Poster |
| Causally motivated multi-shortcut identification and removal | Poster |
| Robust Calibration with Multi-domain Temperature Scaling | Poster |
| Invariant and Transportable Representations for Anti-Causal Domain Shifts | Poster |
| Domain Adaptation under Open Set Label Shift | Poster |
| Automated Invariance Testing for Machine Learning Models Using Sparse Linear Layers | Poster |
| Fairness and robustness in anti-causal prediction | Poster |
| Out-of-Distribution Failure through the Lens of Labeling Mechanisms: An Information Theoretic Approach | Poster |
| Enhancing Unit-tests for Invariance Discovery | Poster |
| Latent Variable Models for Bayesian Causal Discovery | Poster |
| Are We Viewing the Problem of Robust Generalisation through the Appropriate Lens? | Poster |
| Conditional Distributional Invariance through Implicit Regularization | Poster |
| Evaluating Robustness to Dataset Shift via Parametric Robustness Sets | Poster |
| Repeated Environment Inference for Invariant Learning | Poster |
| Contrastive Adapters for Foundation Model Group Robustness | Poster |
| Doubly Right Object Recognition | Poster |
| Optimizing maintenance by learning individual treatment effects | Poster |
| SelecMix: Debiased Learning by Mixing up Contradicting Pairs | Poster |
| Linear Connectivity Reveals Generalization Strategies | Poster |
| OOD-Probe: A Neural Interpretation of Out-of-Domain Generalization | Poster |