Workshop on Socially Responsible Machine Learning
Chaowei Xiao · Animashree Anandkumar · Mingyan Liu · Dawn Song · Raquel Urtasun · Jieyu Zhao · Xueru Zhang · Cihang Xie · Xinyun Chen · Bo Li
Sat 24 Jul, 5:40 a.m. PDT
Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems to safety-critical tasks. While the hope is that these ML models improve decision-making accuracy and societal outcomes, concerns have been raised that they can inflict harm if not developed or used with care. It has been well documented that ML models can: (1) inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups; (2) be vulnerable to security and privacy attacks that deceive the models and leak sensitive information from the training data; (3) make predictions that are hard to justify owing to a lack of transparency. Therefore, it is essential to build socially responsible ML models that are fair, robust, private, transparent, and interpretable.
Although extensive studies have been conducted to increase trust in ML, many of them either focus on well-defined problems that are mathematically tractable but hard to adapt to real-world systems, or focus on mitigating risks in real-world applications without providing theoretical justification. Moreover, most work studies these issues separately; the connections among them are less well understood. This workshop aims to build those connections by bringing together theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, and privacy). We aim to synthesize promising ideas and research directions and to strengthen cross-community collaborations, charting out important directions for future work. Our advisory committee and confirmed speakers have expertise that reflects the diversity of technical problems in this emerging research field.
Schedule
Sat 5:45 a.m. - 5:58 a.m. | Anima Anandkumar. Opening remarks (Opening remarks) | Chaowei Xiao
Sat 5:45 a.m. - 2:00 p.m. | Workshop on Socially Responsible Machine Learning (Poster)
Sat 5:58 a.m. - 6:00 a.m. | Opening remarks (Opening remarks)
Sat 6:00 a.m. - 6:40 a.m. | Jun Zhu. Understand and Benchmark Adversarial Robustness of Deep Learning (Invited Talk) | Chaowei Xiao
Sat 6:40 a.m. - 7:20 a.m. | Olga Russakovsky. Revealing, Quantifying, Analyzing and Mitigating Bias in Visual Recognition (Invited Talk) | Chaowei Xiao
Sat 7:20 a.m. - | Pin-Yu Chen. Adversarial Machine Learning for Good (Invited Talk)
Sat 8:10 a.m. - 8:50 a.m. | Tatsu Hashimoto. Not all uncertainty is noise: machine learning with confounders and inherent disagreements (Invited Talk)
Sat 8:50 a.m. - 9:30 a.m. | Nicolas Papernot. What Does it Mean for ML to be Trustworthy (Invited Talk) | Chaowei Xiao
Sat 10:30 a.m. - 10:50 a.m. | Contributed Talk-1. Machine Learning API Shift Assessments (Contributed Talk) | Chaowei Xiao
Sat 10:50 a.m. - 11:30 a.m. | Aaron Roth. Better Estimates of Prediction Uncertainty (Invited Talk)
Sat 11:30 a.m. - 12:10 p.m. | Jun-Yan Zhu. Understanding and Rewriting GANs (Invited Talk)
Sat 12:20 p.m. - 1:00 p.m. | Kai-Wei Chang. Societal Bias in Language Generation (Invited Talk) | Chaowei Xiao
Sat 1:00 p.m. - 1:40 p.m. | Yulia Tsvetkov. Proactive NLP: How to Prevent Social and Ethical Problems in NLP Systems? (Invited Talk)
Sat 1:40 p.m. - 2:00 p.m. | Contributed Talk-2. Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions (Contributed Talk) | Chaowei Xiao
Sat 2:00 p.m. - 2:20 p.m. | Contributed Talk-3. FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information (Contributed Talk) | Chaowei Xiao
Sat 2:20 p.m. - 2:40 p.m. | Contributed Talk-4. Auditing AI models for Verified Deployment under Semantic Specifications (Contributed Talk) | Chaowei Xiao
Sat 3:00 p.m. - | Poster Sessions
- Detecting and Quantifying Malicious Activity with Simulation-based Inference (Poster) | Andrew Gambardella · Naeemullah Khan · Phil Torr · Atilim Gunes Baydin
- Flexible Interpretability through Optimizable Counterfactual Explanations for Tree Ensembles (Poster) | Ana Lucic · Harrie Oosterhuis · Hinda Haned · Maarten de Rijke
- Should Altruistic Benchmarks be the Default in Machine Learning? (Poster) | Marius Hobbhahn
- Have the Cake and Eat It Too? Higher Accuracy and Less Expense when Using Multi-label ML APIs Online (Poster) | Lingjiao Chen · James Zou · Matei Zaharia
- Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates (Poster) | Dan Ley · Umang Bhatt · Adrian Weller
- Strategic Instrumental Variable Regression: Recovering Causal Relationships From Strategic Responses (Poster) | Keegan Harris · Dung Ngo · Logan Stapleton · Hoda Heidari · Steven Wu
- Stateful Strategic Regression (Poster) | Keegan Harris · Hoda Heidari · Steven Wu
- Machine Learning API Shift Assessments: Change is Coming! (Poster) | Lingjiao Chen · James Zou · Matei Zaharia
- Improving Adversarial Robustness in 3D Point Cloud Classification via Self-Supervisions (Poster) | Jiachen Sun · Yulong Cao · Christopher Choy · Zhiding Yu · Chaowei Xiao · Anima Anandkumar · Zhuoqing Morley Mao
- Fairness in Missing Data Imputation (Poster) | Yiliang Zhang · Qi Long
- Towards a Unified Framework for Fair and Stable Graph Representation Learning (Poster) | Chirag Agarwal · Hima Lakkaraju · Marinka Zitnik
- Stateful Performative Gradient Descent (Poster) | Zachary Izzo · James Zou · Lexing Ying
- FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information (Poster) | Andrew Lowy · Rakesh Pavan · Sina Baharlouei · Meisam Razaviyayn · Ahmad Beirami
- Robust Counterfactual Explanations for Privacy-Preserving SVM (Poster) | Rami Mochaourab · Panagiotis Papapetrou
- Auditing AI models for Verified Deployment under Semantic Specifications (Poster) | Homanga Bharadhwaj · De-An Huang · Chaowei Xiao · Anima Anandkumar · Animesh Garg
- Towards Quantifying the Carbon Emissions of Differentially Private Machine Learning (Poster) | Rakshit Naidu · Harshita Diddee · Ajinkya Mulay · Vardhan Aleti · Krithika Ramesh · Ahmed Zamzam
- Delving into the Remote Adversarial Patch in Semantic Segmentation (Poster) | Yulong Cao · Jiachen Sun · Chaowei Xiao · Qi Chen · Zhuoqing Morley Mao
- Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions (Poster) | Kailas Vodrahalli · James Zou
- Towards Explainable and Fair Supervised Learning (Poster) | Aarshee Mishra · Nicholas Perello · Przemyslaw Grabowicz
- Are You Man Enough? Even Fair Algorithms Conform to Societal Norms (Poster) | Myra Cheng · Maria De-Arteaga · Lester Mackey · Adam Tauman Kalai
- Margin-distancing for safe model explanation (Poster) | Tom Yan · Chicheng Zhang
- CrossWalk: Fairness-enhanced Node Representation Learning (Poster) | Ahmad Khajehnejad · Moein Khajehnejad · Krishna Gummadi · Adrian Weller · Baharan Mirzasoleiman
- An Empirical Investigation of Learning from Biased Toxicity Labels (Poster) | Neel Nanda · Jonathan Uesato · Sven Gowal
- Statistical Guarantees for Fairness Aware Plug-In Algorithms (Poster) | Drona Khurana · Srinivasan Ravichandran · Sparsh Jain · Narayanan Edakunni
- Adversarial Stacked Auto-Encoders for Fair Representation Learning (Poster) | Patrik Joslin Kenfack · Adil Khan · Rasheed Hussain