Workshop
Sat Jul 24 05:40 AM -- 02:40 PM (PDT)
Workshop on Socially Responsible Machine Learning
Chaowei Xiao · Animashree Anandkumar · Mingyan Liu · Dawn Song · Raquel Urtasun · Jieyu Zhao · Xueru Zhang · Cihang Xie · Xinyun Chen · Bo Li

Workshop Home Page

Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems to safety-critical tasks. While the hope is that these ML models improve decision-making accuracy and societal outcomes, concerns have arisen that they can inflict harm if not developed or used with care. It has been well documented that ML models can: (1) inherit pre-existing biases and discriminate against already-disadvantaged or marginalized social groups; (2) be vulnerable to security and privacy attacks that deceive the models and leak sensitive information from the training data; (3) make hard-to-justify predictions that lack transparency. It is therefore essential to build socially responsible ML models that are fair, robust, private, transparent, and interpretable.
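
To make concern (1) concrete, the short sketch below (our illustration, not material from the workshop) computes one common group-fairness diagnostic, the demographic parity gap: the absolute difference in positive-prediction rates between two demographic groups. The function name and data here are hypothetical.

    import numpy as np

    def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Absolute difference in positive-prediction rates between two groups."""
        rate_a = y_pred[group == 0].mean()  # P(y_hat = 1 | group A)
        rate_b = y_pred[group == 1].mean()  # P(y_hat = 1 | group B)
        return abs(rate_a - rate_b)

    # Hypothetical binary predictions for 8 individuals in two demographic groups.
    y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_gap(y_pred, group))  # 0.5: group A favored 75% vs. 25%

A gap of 0 would mean both groups receive positive predictions at the same rate; many of the fairness papers in the program below study stronger or more nuanced criteria than this one.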

Although extensive studies have been conducted to increase trust in ML, many of them either focus on problems that are mathematically well-defined and tractable but hard to adapt to real-world systems, or they mitigate risks in real-world applications without providing theoretical justification. Moreover, most work studies these issues separately, so the connections among them are less well understood. This workshop aims to build those connections by bringing together theoretical and applied researchers from the relevant communities (e.g., machine learning, fairness and ethics, security, and privacy). We aim to synthesize promising ideas and research directions, strengthen cross-community collaborations, and chart important directions for future work. Our advisory committee and confirmed speakers represent the diversity of technical problems in this emerging research field.

Anima Anandkumar. Opening Remarks
Jun Zhu. Understand and Benchmark Adversarial Robustness of Deep Learning (Invited Talk)
Olga Russakovsky. Revealing, Quantifying, Analyzing and Mitigating Bias in Visual Recognition (Invited Talk)
Pin-Yu Chen. Adversarial Machine Learning for Good (Invited Talk)
Tatsu Hashimoto. Not all uncertainty is noise: machine learning with confounders and inherent disagreements (Invited Talk)
Nicolas Papernot. What Does it Mean for ML to be Trustworthy? (Invited Talk)
Contributed Talk-1. Machine Learning API Shift Assessments (Contributed Talk)
Aaron Roth. Better Estimates of Prediction Uncertainty (Invited Talk)
Jun-Yan Zhu. Understanding and Rewriting GANs (Invited Talk)
Kai-Wei Chang. Societal Bias in Language Generation (Invited Talk)
Yulia Tsvetkov. Proactive NLP: How to Prevent Social and Ethical Problems in NLP Systems? (Invited Talk)
Contributed Talk-2. Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions (Contributed Talk)
Contributed Talk-3. FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information (Contributed Talk)
Contributed Talk-4. Auditing AI models for Verified Deployment under Semantic Specifications (Contributed Talk)
Poster Sessions
CrossWalk: Fairness-enhanced Node Representation Learning (Poster)
Margin-distancing for safe model explanation (Poster)
Are You Man Enough? Even Fair Algorithms Conform to Societal Norms (Poster)
Towards Explainable and Fair Supervised Learning (Poster)
Auditing AI models for Verified Deployment under Semantic Specifications (Poster)
Robust Counterfactual Explanations for Privacy-Preserving SVM (Poster)
Stateful Performative Gradient Descent (Poster)
Fairness in Missing Data Imputation (Poster)
Improving Adversarial Robustness in 3D Point Cloud Classification via Self-Supervisions (Poster)
Strategic Instrumental Variable Regression: Recovering Causal Relationships From Strategic Responses (Poster)
Should Altruistic Benchmarks be the Default in Machine Learning? (Poster)
Flexible Interpretability through Optimizable Counterfactual Explanations for Tree Ensembles (Poster)
Detecting and Quantifying Malicious Activity with Simulation-based Inference (Poster)
Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions (Poster)
An Empirical Investigation of Learning from Biased Toxicity Labels (Poster)
Statistical Guarantees for Fairness Aware Plug-In Algorithms (Poster)
Adversarial Stacked Auto-Encoders for Fair Representation Learning (Poster)
Have the Cake and Eat It Too? Higher Accuracy and Less Expense when Using Multi-label ML APIs Online (Poster)
Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates (Poster)
Towards Quantifying the Carbon Emissions of Differentially Private Machine Learning (Poster)
FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information (Poster)
Stateful Strategic Regression (Poster)
Machine Learning API Shift Assessments: Change is Coming! (Poster)
Towards a Unified Framework for Fair and Stable Graph Representation Learning (Poster)
Delving into the Remote Adversarial Patch in Semantic Segmentation (Poster)