Workshop on Socially Responsible Machine Learning
Chaowei Xiao · Animashree Anandkumar · Mingyan Liu · Dawn Song · Raquel Urtasun · Jieyu Zhao · Xueru Zhang · Cihang Xie · Xinyun Chen · Bo Li
Sat 24 Jul, 5:40 a.m. PDT
Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems to safety-critical tasks. While the hope is that these ML models will improve decision-making accuracy and societal outcomes, concerns have arisen that they can inflict harm if not developed or used with care. It has been well documented that ML models can: (1) inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups; (2) be vulnerable to security and privacy attacks that deceive the models and leak sensitive information from the training data; (3) make hard-to-justify predictions with a lack of transparency. It is therefore essential to build socially responsible ML models that are fair, robust, private, transparent, and interpretable.
Although extensive studies have been conducted to increase trust in ML, many of them either focus on well-defined problems that are mathematically tractable but hard to adapt to real-world systems, or they mitigate risks in real-world applications without providing theoretical justification. Moreover, most work studies these issues separately, and the connections among them are less well understood. This workshop aims to build those connections by bringing together theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, and privacy). We aim to synthesize promising ideas and research directions, strengthen cross-community collaborations, and chart out important directions for future work. Our advisory committee and confirmed speakers have expertise that reflects the diversity of technical problems in this emerging research field.
Schedule
Sat 5:45 a.m. - 5:58 a.m. | Anima Anandkumar. Opening remarks (Opening remarks) | SlidesLive Video | Chaowei Xiao
Sat 5:45 a.m. - 2:00 p.m. | Workshop on Socially Responsible Machine Learning (Poster)
Sat 5:58 a.m. - 6:00 a.m. | Opening remarks (Opening remarks) | SlidesLive Video
Sat 6:00 a.m. - 6:40 a.m. | Jun Zhu. Understand and Benchmark Adversarial Robustness of Deep Learning (Invited Talk) | SlidesLive Video | Chaowei Xiao
Sat 6:40 a.m. - 7:20 a.m. | Olga Russakovsky. Revealing, Quantifying, Analyzing and Mitigating Bias in Visual Recognition (Invited Talk) | SlidesLive Video | Chaowei Xiao
Sat 7:20 a.m. - | Pin-Yu Chen. Adversarial Machine Learning for Good (Invited Talk) | SlidesLive Video
Sat 8:10 a.m. - 8:50 a.m. | Tatsu Hashimoto. Not all uncertainty is noise: machine learning with confounders and inherent disagreements (Invited Talk) | SlidesLive Video
Sat 8:50 a.m. - 9:30 a.m. | Nicolas Papernot. What Does it Mean for ML to be Trustworthy (Talk) | SlidesLive Video | Chaowei Xiao
Sat 10:30 a.m. - 10:50 a.m. | Contributed Talk-1. Machine Learning API Shift Assessments (Contributed Talk) | SlidesLive Video | Chaowei Xiao
Sat 10:50 a.m. - 11:30 a.m. | Aaron Roth. Better Estimates of Prediction Uncertainty (Invited Talk) | SlidesLive Video
Sat 11:30 a.m. - 12:10 p.m. | Jun-Yan Zhu. Understanding and Rewriting GANs (Invited Talk) | SlidesLive Video
Sat 12:20 p.m. - 1:00 p.m. | Kai-Wei Chang. Societal Bias in Language Generation (Invited Talk) | SlidesLive Video | Chaowei Xiao
Sat 1:00 p.m. - 1:40 p.m. | Yulia Tsvetkov. Proactive NLP: How to Prevent Social and Ethical Problems in NLP Systems? (Invited Talk) | SlidesLive Video
Sat 1:40 p.m. - 2:00 p.m. | Contributed Talk-2. Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions (Contributed Talk) | SlidesLive Video | Chaowei Xiao
Sat 2:00 p.m. - 2:20 p.m. | Contributed Talk-3. FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information (Contributed Talk) | SlidesLive Video | Chaowei Xiao
Sat 2:20 p.m. - 2:40 p.m. | Contributed Talk-4. Auditing AI models for Verified Deployment under Semantic Specifications (Contributed Talk) | SlidesLive Video | Chaowei Xiao
Sat 3:00 p.m. - | Poster Sessions (Poster Sessions)
- Detecting and Quantifying Malicious Activity with Simulation-based Inference (Poster) | Andrew Gambardella · Naeemullah Khan · Phil Torr · Atilim Gunes Baydin
- Flexible Interpretability through Optimizable Counterfactual Explanations for Tree Ensembles (Poster) | Ana Lucic · Harrie Oosterhuis · Hinda Haned · Maarten de Rijke
- Should Altruistic Benchmarks be the Default in Machine Learning? (Poster) | Marius Hobbhahn
- Have the Cake and Eat It Too? Higher Accuracy and Less Expense when Using Multi-label ML APIs Online (Poster) | Lingjiao Chen · James Zou · Matei Zaharia
- Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates (Poster) | Dan Ley · Umang Bhatt · Adrian Weller
- Strategic Instrumental Variable Regression: Recovering Causal Relationships From Strategic Responses (Poster) | Keegan Harris · Dung Ngo · Logan Stapleton · Hoda Heidari · Steven Wu
- Stateful Strategic Regression (Poster) | Keegan Harris · Hoda Heidari · Steven Wu
- Machine Learning API Shift Assessments: Change is Coming! (Poster) | Lingjiao Chen · James Zou · Matei Zaharia
- Improving Adversarial Robustness in 3D Point Cloud Classification via Self-Supervisions (Poster) | Jiachen Sun · Yulong Cao · Christopher Choy · Zhiding Yu · Chaowei Xiao · Anima Anandkumar · Zhuoqing Morley Mao
- Fairness in Missing Data Imputation (Poster) | Yiliang Zhang · Qi Long
- Towards a Unified Framework for Fair and Stable Graph Representation Learning (Poster) | Chirag Agarwal · Hima Lakkaraju · Marinka Zitnik
- Stateful Performative Gradient Descent (Poster) | Zachary Izzo · James Zou · Lexing Ying
- FERMI: Fair Empirical Risk Minimization Via Exponential Rényi Mutual Information (Poster) | Andrew Lowy · Rakesh Pavan · Sina Baharlouei · Meisam Razaviyayn · Ahmad Beirami
- Robust Counterfactual Explanations for Privacy-Preserving SVM (Poster) | Rami Mochaourab · Panagiotis Papapetrou
- Auditing AI models for Verified Deployment under Semantic Specifications (Poster) | Homanga Bharadhwaj · De-An Huang · Chaowei Xiao · Anima Anandkumar · Animesh Garg
- Towards Quantifying the Carbon Emissions of Differentially Private Machine Learning (Poster) | Rakshit Naidu · Harshita Diddee · Ajinkya Mulay · Vardhan Aleti · Krithika Ramesh · Ahmed Zamzam
- Delving into the Remote Adversarial Patch in Semantic Segmentation (Poster) | Yulong Cao · Jiachen Sun · Chaowei Xiao · Qi Chen · Zhuoqing Morley Mao
- Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions (Poster) | Kailas Vodrahalli · James Zou
- Towards Explainable and Fair Supervised Learning (Poster) | Aarshee Mishra · Nicholas Perello · Przemyslaw Grabowicz
- Are You Man Enough? Even Fair Algorithms Conform to Societal Norms (Poster) | Myra Cheng · Maria De-Arteaga · Lester Mackey · Adam Tauman Kalai
- Margin-distancing for safe model explanation (Poster) | Tom Yan · Chicheng Zhang
- CrossWalk: Fairness-enhanced Node Representation Learning (Poster) | Ahmad Khajehnejad · Moein Khajehnejad · Krishna Gummadi · Adrian Weller · Baharan Mirzasoleiman
- An Empirical Investigation of Learning from Biased Toxicity Labels (Poster) | Neel Nanda · Jonathan Uesato · Sven Gowal
- Statistical Guarantees for Fairness Aware Plug-In Algorithms (Poster) | Drona Khurana · Srinivasan Ravichandran · Sparsh Jain · Narayanan Edakunni
- Adversarial Stacked Auto-Encoders for Fair Representation Learning (Poster) | Patrik Joslin Kenfack · Adil Khan · Rasheed Hussain