The designers of a machine learning (ML) system typically have far more power over the system than the individuals who are ultimately impacted by it and its decisions. Recommender platforms shape users’ preferences; the individuals classified by a model often have no means to contest its decisions; and the data required by supervised ML systems demands that the privacy and labour of many yield to the design choices of a few.
The fields of algorithmic fairness and human-centered ML often focus on centralized solutions, granting ever more power to system designers and operators and less to users and affected populations. In response to the growing social-science critique of the power imbalance present in the research, design, and deployment of ML systems, we wish to consider a new set of technical formulations for the ML community on the subject of more democratic, cooperative, and participatory ML systems.
Our workshop aims to explore methods that, by design, enable and encourage the perspectives of those impacted by an ML system to shape the system and its decisions. By involving affected populations in shaping the goals of the overall system, we hope to move beyond just tools for enabling human participation and progress towards a redesign of power dynamics in ML systems.
Fri 6:00 a.m. - 6:15 a.m. | Opening remarks (Live Talk)
Deborah Raji · Angela Zhou · David Madras · Smitha Milli · Bogdan Kulynych
Fri 6:15 a.m. - 6:45 a.m. | AI’s Contradiction: King’s Radical Revolution in Values (Live Talk)
Dr. King called for a radical revolution of values in 1967. He understood that if we did not "begin the shift from a thing-oriented society to a person-oriented society," and prioritize people over machines, computers and profit motives, we would be unable to undo the harms of racism, extreme materialism, and militarism. If we were to take Dr. King's challenge seriously today, how might we deepen our questions, intervene in harmful technologies and slow down innovation for innovation's sake?
Tawana Petty
Fri 6:45 a.m. - 7:15 a.m. | What does it mean for ML to be trustworthy? (Talk)
The attack surface of machine learning is large: training data can be poisoned, predictions manipulated using adversarial examples, models exploited to reveal sensitive information contained in training data, etc. This is in large part due to the absence of security and privacy considerations in the design of ML algorithms. Yet adversaries have clear incentives to target these systems. Thus, there is a need to ensure that computer systems that rely on ML are trustworthy. Fortunately, we are at a turning point where ML is still being adopted, which creates a rare opportunity to address the shortcomings of the technology before it is widely deployed. Designing secure ML requires that we have a solid understanding of what we expect legitimate model behavior to look like. We structure our discussion around three directions, which we believe are likely to lead to significant progress. The first encompasses a spectrum of approaches to verification and admission control, which is a prerequisite to enable fail-safe defaults in machine learning systems. The second seeks to design mechanisms for assembling reliable records of compromise that would help understand the degree to which vulnerabilities are exploited by adversaries, as well as favor the psychological acceptability of machine learning applications. The third pursues formal frameworks for security and privacy in machine learning, which we argue should strive to align machine learning goals such as generalization with security and privacy desiderata like robustness or privacy. We illustrate these directions with recent work on model extraction, privacy-preserving ML, and machine unlearning.
Nicolas Papernot
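The snippet below is a minimal sketch of one item on that attack surface, an adversarial example crafted with the fast gradient sign method (FGSM). It is purely illustrative and not material from the talk; the toy linear model, random input, and perturbation budget are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Minimal FGSM sketch (illustrative only; not code from the talk).
torch.manual_seed(0)
model = nn.Linear(20, 2)                      # toy two-class classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)    # a "clean" input
y = torch.tensor([0])                         # its label
eps = 0.25                                    # perturbation budget

# Gradient of the loss with respect to the input...
loss = loss_fn(model(x), y)
loss.backward()

# ...and a single step in the sign of that gradient, which increases the loss.
x_adv = (x + eps * x.grad.sign()).detach()

with torch.no_grad():
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

With a trained model and a large enough budget, the perturbed input is typically misclassified even though it is nearly indistinguishable from the original to a human.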
Fri 7:15 a.m. - 7:45 a.m. | Turning the tables on Facebook: How we audit Facebook using their own marketing tools (Talk)
While researchers and journalists have found many ways that advertisers can target, or exclude, particular groups of users seeing their ads on Facebook, comparatively little attention has been paid to the implications of the platform's ad delivery process, where the platform decides which users see which ads. In this talk I will show how we audit Facebook's delivery algorithms for potential gender and race discrimination using Facebook's own tools designed to assist advertisers. Following these methods, we find that Facebook delivers different job ads to men and women, as well as to white and Black users, despite inclusive targeting. We also identify how Facebook contributes to creating opinion filter bubbles by steering political ads towards users who already agree with their content.
Piotr Sapiezynski
Fri 7:45 a.m. - 8:30 a.m. | Poster Session 1 (Poster Session)
Please check the details on our website: https://participatoryml.github.io/#poster-sessions Discord server: https://discord.gg/KSAwXKs
Fri 8:30 a.m. - 9:15 a.m. | Breakout Sessions / Break (Discussion Sessions)
Please check the details on our website: https://participatoryml.github.io/#breakout-sessions Discord server: https://discord.gg/KSAwXKs
Fri 9:15 a.m. - 10:00 a.m. | Panel 1 (Discussion Panel)
Deborah Raji · Tawana Petty · Nicolas Papernot · Piotr Sapiezynski · Aleksandra Korolova
Fri 10:00 a.m. - 10:30 a.m. | Affected Community Perspectives on Algorithmic Decision-Making in Child Welfare Services (Talk)
Algorithmic decision-making systems are increasingly being adopted by government public service agencies. Researchers, policy experts, and civil rights groups have all voiced concerns that such systems are being deployed without adequate consideration of potential harms, disparate impacts, and public accountability practices. Yet little is known about the concerns of those most likely to be affected by these systems. In this talk I will discuss what we learned from a series of workshops conducted to better understand the concerns of affected communities in the context of child welfare services. Through these workshops we learned about the perspectives of families involved in the child welfare system, employees of child welfare agencies, and service providers.
Alexandra Chouldechova
Fri 10:30 a.m. - 11:00 a.m. | Actionable Recourse in Machine Learning (Talk)
Machine learning models are often used to automate decisions that affect consumers: whether to approve a loan or a credit card application, or to provide insurance. In such tasks, consumers should have the ability to change the decision of the model. When a consumer is denied a loan by a credit score, for example, they should be able to alter its input variables in a way that guarantees approval. Otherwise, they will be denied the loan so long as the model is deployed and, more importantly, will lack control over a decision that affects their livelihood. In this talk, I will formally discuss these issues in terms of a notion called recourse, i.e., the ability of a person to change the decision of a model by altering actionable input variables. I will describe how machine learning models may fail to provide recourse due to standard practices in model development. I will then describe integer programming tools to verify recourse in linear classification models. I will end with a brief discussion of how recourse can facilitate meaningful consumer protection in modern applications of machine learning. This is joint work with Alexander Spangher and Yang Liu.
Berk Ustun
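As a concrete sketch of what recourse verification can look like for a linear model, the toy example below searches for the smallest change to actionable features that would flip a denial into an approval, and reports that no recourse exists when the search is infeasible. It is a continuous (linear programming) simplification written with the PuLP library, not the integer-programming formulation from the talk, and all weights, features, and bounds are made up.

```python
import pulp

# Hypothetical linear credit model: score(x) = w.x + b, approve if score >= 0.
w = {"income": 0.4, "debt": -0.6, "age": 0.1}
b = -1.0
x = {"income": 1.0, "debt": 2.0, "age": 3.0}          # a denied applicant
actionable = {"income", "debt"}                        # features the person can change
bounds = {"income": (0.0, 5.0), "debt": (0.0, 5.0)}    # feasible ranges for new values

prob = pulp.LpProblem("recourse", pulp.LpMinimize)
new_vals, deltas = {}, {}
for f in w:
    if f in actionable:
        lo, hi = bounds[f]
        new_vals[f] = pulp.LpVariable(f"new_{f}", lowBound=lo, upBound=hi)
        deltas[f] = pulp.LpVariable(f"delta_{f}", lowBound=0)   # |new - old|
        prob += deltas[f] >= new_vals[f] - x[f]
        prob += deltas[f] >= x[f] - new_vals[f]
    else:
        new_vals[f] = x[f]                             # immutable features stay fixed

prob += pulp.lpSum(deltas.values())                    # objective: smallest total change
prob += pulp.lpSum(w[f] * new_vals[f] for f in w) + b >= 0   # modified point is approved

status = prob.solve(pulp.PULP_CBC_CMD(msg=False))
if pulp.LpStatus[status] == "Optimal":
    print("recourse exists; changes:", {f: pulp.value(v) for f, v in deltas.items()})
else:
    print("no recourse: no feasible action flips the decision")
```

The feasibility check is the verification step: if the program is infeasible, no allowed change to actionable features can reverse the decision, which is precisely the failure mode the talk argues models should be audited for.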
Fri 11:00 a.m. - 11:30 a.m. | Beyond Fairness and Ethics: Towards Agency and Shifting Power (Talk)
When we consider power imbalances between those who craft ML systems and those most vulnerable to the impacts of those systems, what often enables that power is the localization of control in the hands of tech companies and technical experts, who consolidate power using claims to perceived scientific objectivity and legal protections of intellectual property. At the same time, there is a legacy in the scientific community of data being wielded as an instrument of oppression, often reinforcing inequality and perpetuating injustice. At Data for Black Lives, we bring together scientists and community-based activists to take collective action, using data to fight bias, build progressive movements, and promote civic engagement. In the ML community, people often take for granted the initial steps in the process of crafting ML systems that involve data collection, storage, and access. Researchers often engage with datasets as if they appeared spontaneously, with no social context. One method of moving beyond fairness metrics and generic discussions of ethics to meaningfully shifting agency to the people most marginalized is to stop ignoring the context, construction, and implications of the datasets we use for research. I offer two considerations for shifting power in this way: intentional data narratives, and data trusts as an alternative to current strategies of data governance.
Jamelle Watson-Daniels
Fri 11:30 a.m. - 12:15 p.m. | Panel 2 (Discussion Panel)
Please check the details on our website: https://participatoryml.github.io/#poster-sessions Discord server: https://discord.gg/JfA55Gv
Deborah Raji · Berk Ustun · Alexandra Chouldechova · Jamelle Watson-Daniels
Fri 12:15 p.m. - 1:00 p.m. | Poster Session 2 (Poster Session)
Please check the details on our website: https://participatoryml.github.io/#poster-sessions Discord server: https://discord.gg/KSAwXKs
Fri 1:00 p.m. - 1:45 p.m. | Breakout Sessions (Breakout Sessions)
Please check the details on our website: https://participatoryml.github.io/#breakout-sessions Discord server: https://discord.gg/KSAwXKs
Author Information
Angela Zhou (Cornell University)
David Madras (University of Toronto)
Deborah Raji (AI Now Institute)
Smitha Milli (UC Berkeley)
Bogdan Kulynych (EPFL)
Richard Zemel (Vector Institute)