Workshop
Sat Jul 29 11:50 AM -- 08:00 PM (PDT) @ Meeting Room 316 AB
The Second Workshop on Spurious Correlations, Invariance and Stability
Yoav Wald · Claudia Shi · Aahlad Puli · Amir Feder · Limor Gultchin · Mark Goldstein · Maggie Makar · Victor Veitch · Uri Shalit

Workshop Home Page

As machine learning models are introduced into every aspect of our lives, and potential benefits become abundant, so do possible catastrophic failures. One of the most common failure modes of machine learning models deployed in the wild, and one that can have dire consequences in extreme cases, is reliance on apparently unnatural or irrelevant features.
The issue arises across a variety of applications: X-ray diagnostic models that rely on scanner type or on marks left by hospital technicians, visual question answering models that are sensitive to superficial linguistic variations in the questions, and more; the list of such undesirable behaviors keeps growing. In examples like these, the undesirable behavior stems from the model exploiting a spurious correlation.
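To make the failure mode concrete, here is a minimal synthetic sketch (not from the workshop; all names and parameters are illustrative): a linear classifier is trained on two features, a weak but stable "core" signal and a "spurious" feature whose correlation with the label is strong at training time but reversed at test time. The fitted model leans on the spurious feature and its accuracy collapses under the shift.

```python
# Toy illustration of a model exploiting a spurious correlation.
# "core" carries a weak signal that is stable across train and test;
# "spurious" agrees with the label 95% of the time in training data
# but only 5% of the time at test time (the correlation flips).
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    y = rng.integers(0, 2, n) * 2 - 1              # labels in {-1, +1}
    core = 0.5 * y + rng.normal(0.0, 1.0, n)       # weak, stable feature
    agree = rng.random(n) < spurious_corr          # agree with y w.p. spurious_corr
    spurious = np.where(agree, y, -y) + rng.normal(0.0, 0.1, n)
    X = np.stack([core, spurious], axis=1)
    return X, y

X_tr, y_tr = make_data(5000, spurious_corr=0.95)   # spurious feature tracks y
X_te, y_te = make_data(5000, spurious_corr=0.05)   # correlation reversed

# Least-squares linear model as a stand-in classifier: predict sign(X @ w).
w, *_ = np.linalg.lstsq(X_tr, y_tr.astype(float), rcond=None)

train_acc = np.mean(np.sign(X_tr @ w) == y_tr)
test_acc = np.mean(np.sign(X_te @ w) == y_te)
print(f"weights (core, spurious): {w.round(2)}")
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

The learned weight on the spurious feature dominates the weight on the core feature, so training accuracy is high while test accuracy falls below chance. Much of the work listed below studies how to avoid exactly this behavior.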

Following last year's workshop on Spurious Correlations, Invariance and Stability (SCIS), it is apparent that work on spurious correlations is a long-term effort spanning communities such as fairness and causality-inspired ML, and domains such as NLP, healthcare, and many others. We therefore hope that this year's workshop, the second edition of SCIS, will help facilitate this long-term effort across communities. The workshop will feature talks by top experts doing methodological work on spurious correlations, and an extended poster session to allow extensive discussion of work submitted to the workshop.

Opening Remarks
Distribution Shifts in Generalist and Causal Models (Talk)
Paper Spotlights (Spotlight)
Break
On learning domain general predictors (Talk)
Using Causality to Improve Safety Throughout the AI Lifecycle (Talk)
Lunch Break (Break)
A data-centric view on reliable generalization: From ImageNet to LAION-5B (Talk)
Causal vs Causality-inspired representation learning (Talk)
Poster Session 1 (in-person only) (Poster Session)
SCIS 2023 Panel, The Future of Generalization: Scale, Safety and Beyond (Panel Discussion)
Causal Conversation + Poster Session 2 (Poster Session)
Studying Generalization on Memory-Based Methods in Continual Learning (Poster)
Concept Algebra for Score-based Conditional Model (Poster)
(Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy (Poster)
Antibody DomainBed: Towards robust predictions using invariant representations of biological sequences carrying complex distribution shifts (Oral)
Fairness-Preserving Regularizer: Balancing Core and Spurious Features (Poster)
Temporal Consistency based Test Time Adaptation: Towards Fair and Personalized AI (Poster)
Why is SAM Robust to Label Noise? (Poster)
Identifying and Disentangling Spurious Features in Pretrained Image Representations (Poster)
Prediction without Preclusion: Recourse Verification with Reachable Sets (Poster)
Bridging the Domain Gap by Clustering-based Image-Text Graph Matching (Poster)
Impact of Noise on Calibration and Generalisation of Neural Networks (Poster)
Replicable Reinforcement Learning (Poster)
Learning Diverse Features in Vision Transformers for Improved Generalization (Poster)
Contextual Vision Transformers for Robust Representation Learning (Poster)
Learning Counterfactually Invariant Predictors (Poster)
Group Fairness with Uncertainty in Sensitive Attributes (Poster)
Cross-Risk Minimization: Inferring Groups Information for Improved Generalization (Poster)
Robustness through Loss Consistency Regularization (Poster)
Delphic Offline Reinforcement Learning under Nonidentifiable Hidden Confounding (Oral)
Tackling Shortcut Learning in Deep Neural Networks: An Iterative Approach with Interpretable Models (Poster)
Towards Understanding Feature Learning in Out-of-Distribution Generalization (Poster)
Spuriosity Didn’t Kill the Classifier: Using Invariant Predictions to Harness Spurious Features (Poster)
Approximate Causal Effect Identification under Weak Confounding (Poster)
Causal Dynamics Learning with Quantized Local Independence Discovery (Poster)
Weighted Risk Invariance for Density-Aware Domain Generalization (Poster)
Shortcut Detection with Variational Autoencoders (Poster)
Towards A Scalable Solution for Compositional Multi-Group Fair Classification (Poster)
Large Dimensional Change Point Detection with FWER Control as Automatic Stopping (Poster)
Where Does My Model Underperform?: A Human Evaluation of Slice Discovery Algorithms (Oral)
Learning Independent Causal Mechanisms (Poster)
Identifiability Guarantees for Causal Disentanglement from Soft Interventions (Poster)
Uncertainty-Guided Online Test-Time Adaptation via Meta-Learning (Poster)
Saving a Split for Last-layer Retraining can Improve Group Robustness without Group Annotations (Poster)
Removing Multiple Biases through the Lens of Multi-task Learning (Poster)
Separating multimodal modeling from multidimensional modeling for multimodal learning (Poster)
Exploring new ways: Enforcing representational dissimilarity to learn new features and reduce error consistency (Poster)
Stabilizing GNN for Fairness via Lipschitz Bounds (Poster)
Calibrated Propensities for Causal Effect Estimation (Poster)
Leveraging sparse and shared feature activations for disentangled representation learning (Poster)
Do as your neighbors: Invariant learning through non-parametric neighbourhood matching (Poster)
Front-door Adjustment Beyond Markov Equivalence with Limited Graph Knowledge (Poster)
Arbitrary Decisions are a Hidden Cost of Differentially Private Training (Poster)
Out of the Ordinary: Spectrally Adapting Regression for Covariate Shift (Poster)
Provable domain adaptation using privileged information (Oral)
A Cosine Similarity-based Method for Out-of-Distribution Detection (Poster)
Towards Modular Learning of Deep Causal Generative Models (Poster)
Deep Neural Networks Extrapolate Cautiously (Most of the Time) (Poster)
Spuriosity Rankings for Free: A Simple Framework for Last Layer Retraining Based on Object Detection (Poster)
Results on Counterfactual Invariance (Poster)
Group Robustness via Adaptive Class-Specific Scaling (Poster)
Causal-structure Driven Augmentations for Text OOD Generalization (Poster)
Leveraging Task Structures for Improved Identifiability in Neural Network Representations (Poster)
Adversarial Data Augmentations for Out-of-Distribution Generalization (Poster)
Feature Selection in the Presence of Monotone Batch Effects (Poster)
Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning (Poster)
Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models (Poster)
Robust Learning with Progressive Data Expansion Against Spurious Correlation (Poster)
SAFE: Stable Feature Extraction without Environment Labels (Poster)
Regularizing Model Gradients with Concepts to Improve Robustness to Spurious Correlations (Poster)
Complementary Benefits of Contrastive Learning and Self-Training Under Distribution Shift (Poster)
Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation (Poster)
Optimization or Architecture: What Matters in Non-Linear Filtering? (Poster)
Improve Identity-Robustness for Face Models (Poster)
Reviving Shift Equivariance in Vision Transformers (Poster)
Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling (Poster)
Look Beneath the Surface: Exploiting Fundamental Symmetry for Sample-Efficient Offline Reinforcement Learning (Poster)
Data Models for Dataset Drift Controls in Machine Learning With Optical Images (Poster)
Understanding the Detrimental Class-level Effects of Data Augmentation (Poster)
Transportable Representations for Out-of-distribution Generalization (Poster)
Towards Fair Knowledge Distillation using Student Feedback (Poster)
Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks (Poster)
Which Features are Learned by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression (Poster)
ModelDiff: A Framework for Comparing Learning Algorithms (Oral)
Pruning for Better Domain Generalizability (Poster)
Regularizing Adversarial Imitation Learning Using Causal Invariance (Poster)
ERM++: An Improved Baseline for Domain Generalization (Poster)
Neuro-Causal Factor Analysis (Poster)
Learning Linear Causal Representations from Interventions under General Nonlinear Mixing (Poster)
Shortcut Learning Through the Lens of Training Dynamics (Poster)
The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-language Models (Poster)
Sharpness-Aware Minimization Enhances Feature Diversity (Poster)
Identifiability of Discretized Latent Coordinate Systems via Density Landmarks Detection (Poster)
C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder (Poster)
Mitigating Simplicity Bias in Deep Learning for Improved OOD Generalization and Robustness (Poster)
Spurious Correlations and Where to Find Them (Poster)
Invariant Causal Set Covering Machines (Poster)
Bias-to-Text: Debiasing Unknown Visual Biases by Language Interpretation (Poster)
Complementing a Policy with a Different Observation Space (Poster)
Implications of Gaussian process kernel mismatch for out-of-distribution data (Poster)
Confident feature ranking (Poster)