As machine learning models are introduced into every aspect of our lives, their potential benefits multiply, and so do their potential catastrophic failures. One of the most common failure modes when deploying machine learning models in the wild, and one that can have dire consequences in extreme cases, is a model's reliance on apparently unnatural or irrelevant features.
The issue arises across a wide variety of applications: X-ray diagnostic models that rely on the scanner type or on marks made by hospital technicians, visual question answering models that are sensitive to superficial linguistic variations in the questions, and many more; the list of such undesirable behaviors keeps growing. In examples like these, the undesirable behavior stems from the model exploiting a spurious correlation.
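To make the phenomenon concrete, here is a minimal, purely hypothetical sketch (not drawn from any of the workshop papers): a toy "model" that simply picks whichever single binary feature best predicts the label on the training data. When a spurious feature correlates with the label more strongly than the core feature does in training, the model latches onto it, and its accuracy collapses when that correlation flips at test time.

```python
import random

random.seed(0)

def make_data(n, spurious_agreement):
    """Each example is (core_feature, spurious_feature, label).
    The core feature agrees with the label 80% of the time; the
    spurious feature agrees with probability `spurious_agreement`."""
    data = []
    for _ in range(n):
        y = random.choice([0, 1])
        core = y if random.random() < 0.80 else 1 - y
        spur = y if random.random() < spurious_agreement else 1 - y
        data.append((core, spur, y))
    return data

def accuracy(data, feature_idx):
    # Accuracy of predicting the label directly from one feature.
    return sum(x[feature_idx] == x[2] for x in data) / len(data)

# The spurious cue is stronger in training, but flips at test time.
train = make_data(1000, spurious_agreement=0.95)
test = make_data(1000, spurious_agreement=0.05)

# "Training": pick the feature with the higher training accuracy.
best = max([0, 1], key=lambda i: accuracy(train, i))
print("chosen feature:", "spurious" if best == 1 else "core")
print("train accuracy: %.2f" % accuracy(train, best))
print("test accuracy:  %.2f" % accuracy(test, best))
```

The toy learner prefers the spurious feature (about 95% training accuracy versus 80% for the core feature) and consequently performs far below chance on the shifted test set, which is exactly the failure mode the workshop targets.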
Following last year's workshop on Spurious Correlations, Invariance and Stability (SCIS), it is apparent that work on spurious correlations is a long-term effort spanning communities such as fairness and causality-inspired ML, and domains such as NLP, healthcare, and many others. We hope that this year's workshop, the second edition of SCIS, will help facilitate this long-term effort across communities. The workshop will feature talks by leading experts doing methodological work on spurious correlations, and an extended poster session to allow in-depth discussion of work submitted to the workshop.
Opening Remarks
Distribution Shifts in Generalist and Causal Models (Talk)
Paper Spotlights (Spotlight)
Break
On learning domain general predictors (Talk)
Using Causality to Improve Safety Throughout the AI Lifecycle (Talk)
Lunch Break (Break)
A data-centric view on reliable generalization: From ImageNet to LAION-5B (Talk)
Causal vs Causality-inspired representation learning (Talk)
Poster Session 1 (in-person only) (Poster Session)
SCIS 2023 Panel, The Future of Generalization: Scale, Safety and Beyond (Panel Discussion)
Causal Conversation + Poster Session 2 (Poster Session)
Group Robustness via Adaptive Class-Specific Scaling (Poster)
Bridging the Domain Gap by Clustering-based Image-Text Graph Matching (Poster)
The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-language Models (Poster)
Results on Counterfactual Invariance (Poster)
Shortcut Detection with Variational Autoencoders (Poster)
Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation (Poster)
(Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy (Poster)
Replicable Reinforcement Learning (Poster)
Invariant Causal Set Covering Machines (Poster)
Tackling Shortcut Learning in Deep Neural Networks: An Iterative Approach with Interpretable Models (Poster)
Concept Algebra for Score-based Conditional Model (Poster)
Learning Counterfactually Invariant Predictors (Poster)
Contextual Vision Transformers for Robust Representation Learning (Poster)
Leveraging Task Structures for Improved Identifiability in Neural Network Representations (Poster)
SAFE: Stable Feature Extraction without Environment Labels (Poster)
Stabilizing GNN for Fairness via Lipschitz Bounds (Poster)
Uncertainty-Guided Online Test-Time Adaptation via Meta-Learning (Poster)
Spuriosity Rankings for Free: A Simple Framework for Last Layer Retraining Based on Object Detection (Poster)
Towards Understanding Feature Learning in Out-of-Distribution Generalization (Poster)
Robust Learning with Progressive Data Expansion Against Spurious Correlation (Poster)
Large Dimensional Change Point Detection with FWER Control as Automatic Stopping (Poster)
Approximate Causal Effect Identification under Weak Confounding (Poster)
Deep Neural Networks Extrapolate Cautiously (Most of the Time) (Poster)
Neuro-Causal Factor Analysis (Poster)
Identifiability of Discretized Latent Coordinate Systems via Density Landmarks Detection (Poster)
Learning Linear Causal Representations from Interventions under General Nonlinear Mixing (Poster)
Look Beneath the Surface: Exploiting Fundamental Symmetry for Sample-Efficient Offline Reinforcement Learning (Poster)
Mitigating Simplicity Bias in Deep Learning for Improved OOD Generalization and Robustness (Poster)
Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models (Poster)
Adversarial Data Augmentations for Out-of-Distribution Generalization (Poster)
Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning (Poster)
C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder (Poster)
Delphic Offline Reinforcement Learning under Nonidentifiable Hidden Confounding (Oral)
Towards Modular Learning of Deep Causal Generative Models (Poster)
Towards A Scalable Solution for Compositional Multi-Group Fair Classification (Poster)
Identifiability Guarantees for Causal Disentanglement from Soft Interventions (Poster)
Reviving Shift Equivariance in Vision Transformers (Poster)
A Cosine Similarity-based Method for Out-of-Distribution Detection (Poster)
Provable domain adaptation using privileged information (Oral)
Towards Fair Knowledge Distillation using Student Feedback (Poster)
Learning Independent Causal Mechanisms (Poster)
Antibody DomainBed: Towards robust predictions using invariant representations of biological sequences carrying complex distribution shifts (Oral)
Removing Multiple Biases through the Lens of Multi-task Learning (Poster)
Prediction without Preclusion: Recourse Verification with Reachable Sets (Poster)
Out of the Ordinary: Spectrally Adapting Regression for Covariate Shift (Poster)
Arbitrary Decisions are a Hidden Cost of Differentially Private Training (Poster)
Data Models for Dataset Drift Controls in Machine Learning With Optical Images (Poster)
Group Fairness with Uncertainty in Sensitive Attributes (Poster)
Front-door Adjustment Beyond Markov Equivalence with Limited Graph Knowledge (Poster)
ERM++: An Improved Baseline for Domain Generalization (Poster)
Sharpness-Aware Minimization Enhances Feature Diversity (Poster)
Saving a Split for Last-layer Retraining can Improve Group Robustness without Group Annotations (Poster)
Learning Diverse Features in Vision Transformers for Improved Generalization (Poster)
Robustness through Loss Consistency Regularization (Poster)
Cross-Risk Minimization: Inferring Groups Information for Improved Generalization (Poster)
Which Features are Learned by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression (Poster)
Implications of Gaussian process kernel mismatch for out-of-distribution data (Poster)
Leveraging sparse and shared feature activations for disentangled representation learning (Poster)
Do as your neighbors: Invariant learning through non-parametric neighbourhood matching (Poster)
Separating multimodal modeling from multidimensional modeling for multimodal learning (Poster)
Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks (Poster)
Complementing a Policy with a Different Observation Space (Poster)
Where Does My Model Underperform?: A Human Evaluation of Slice Discovery Algorithms (Oral)
Exploring new ways: Enforcing representational dissimilarity to learn new features and reduce error consistency (Poster)
Feature Selection in the Presence of Monotone Batch Effects (Poster)
Transportable Representations for Out-of-distribution Generalization (Poster)
Understanding the Detrimental Class-level Effects of Data Augmentation (Poster)
Calibrated Propensities for Causal Effect Estimation (Poster)
Confident feature ranking (Poster)
Why is SAM Robust to Label Noise? (Poster)
Spurious Correlations and Where to Find Them (Poster)
Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift (Poster)
Spuriosity Didn't Kill the Classifier: Using Invariant Predictions to Harness Spurious Features (Poster)
Regularizing Adversarial Imitation Learning Using Causal Invariance (Poster)
Regularizing Model Gradients with Concepts to Improve Robustness to Spurious Correlations (Poster)
Temporal Consistency based Test Time Adaptation: Towards Fair and Personalized AI (Poster)
Pruning for Better Domain Generalizability (Poster)
Identifying and Disentangling Spurious Features in Pretrained Image Representations (Poster)
Fairness-Preserving Regularizer: Balancing Core and Spurious Features (Poster)
Weighted Risk Invariance for Density-Aware Domain Generalization (Poster)
Causal-structure Driven Augmentations for Text OOD Generalization (Poster)
Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling (Poster)
Optimization or Architecture: What Matters in Non-Linear Filtering? (Poster)
Shortcut Learning Through the Lens of Training Dynamics (Poster)
Causal Dynamics Learning with Quantized Local Independence Discovery (Poster)
Studying Generalization on Memory-Based Methods in Continual Learning (Poster)
Bias-to-Text: Debiasing Unknown Visual Biases by Language Interpretation (Poster)
Impact of Noise on Calibration and Generalisation of Neural Networks (Poster)
Improve Identity-Robustness for Face Models (Poster)
ModelDiff: A Framework for Comparing Learning Algorithms (Oral)