Workshop

Participatory Approaches to Machine Learning

Angela Zhou, David Madras, Deborah Raji, Smitha Milli, Bogdan Kulynych, Richard Zemel

Keywords: algorithmic accountability, fairness and equity, interactive machine learning, participatory design, participatory machine learning, user agency, community involvement

Abstract:

The designers of a machine learning (ML) system typically have far more power over the system than the individuals who are ultimately impacted by it and its decisions. Recommender platforms shape their users' preferences; individuals classified by a model often have no means to contest its decisions; and the data required by supervised ML systems necessitates that the privacy and labour of many yield to the design choices of a few.

The fields of algorithmic fairness and human-centered ML often focus on centralized solutions, granting more power to system designers and operators and less to users and affected populations. In response to the growing social-science critique of the power imbalance present in the research, design, and deployment of ML systems, we wish to consider a new set of technical formulations for the ML community on the subject of more democratic, cooperative, and participatory ML systems.

Our workshop aims to explore methods that, by design, enable and encourage the perspectives of those impacted by an ML system to shape the system and its decisions. By involving affected populations in shaping the goals of the overall system, we hope to move beyond just tools for enabling human participation and progress towards a redesign of power dynamics in ML systems.



Schedule

Fri 6:00 a.m. - 6:15 a.m.
Opening remarks (Live Talk)
Deborah Raji, Angela Zhou, David Madras, Smitha Milli, Bogdan Kulynych
Fri 6:15 a.m. - 6:45 a.m.

Dr. King called for a radical revolution of values in 1967. He understood that if we did not "begin the shift from a thing-oriented society to a person-oriented society," and prioritize people over machines, computers and profit motives, we would be unable to undo the harms of racism, extreme materialism, and militarism. If we were to take Dr. King's challenge seriously today, how might we deepen our questions, intervene in harmful technologies and slow down innovation for innovation's sake?

Tawana Petty
Fri 6:45 a.m. - 7:15 a.m.

The attack surface of machine learning is large: training data can be poisoned, predictions can be manipulated using adversarial examples, models can be exploited to reveal sensitive information contained in training data, and so on. This is in large part due to the absence of security and privacy considerations in the design of ML algorithms. Yet adversaries have clear incentives to target these systems. Thus, there is a need to ensure that computer systems that rely on ML are trustworthy.

Fortunately, we are at a turning point where ML is still being adopted, which creates a rare opportunity to address the shortcomings of the technology before it is widely deployed. Designing secure ML requires that we have a solid understanding of what we expect legitimate model behavior to look like. We structure our discussion around three directions, which we believe are likely to lead to significant progress.

The first encompasses a spectrum of approaches to verification and admission control, which is a prerequisite to enable fail-safe defaults in machine learning systems. The second seeks to design mechanisms for assembling reliable records of compromise that would help us understand the degree to which vulnerabilities are exploited by adversaries, as well as favor the psychological acceptability of machine learning applications. The third pursues formal frameworks for security and privacy in machine learning, which we argue should strive to align machine learning goals such as generalization with security and privacy desiderata like robustness or privacy. We illustrate these directions with recent work on model extraction, privacy-preserving ML and machine unlearning.
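To make the model-extraction threat mentioned above concrete, here is a minimal sketch, not taken from the speaker's work: it assumes Python with NumPy and scikit-learn and a synthetic "victim" model, and shows how an attacker with only query access can train a surrogate that imitates a deployed classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# A hypothetical "victim" model trained on private data; the attacker never sees it.
X_private = rng.normal(size=(500, 5))
y_private = (X_private @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)

# Attacker: label synthetic queries with the victim's black-box predictions,
# then train a surrogate model that imitates its decision boundary.
X_queries = rng.normal(size=(2000, 5))
y_stolen = victim.predict(X_queries)
surrogate = DecisionTreeClassifier(max_depth=5).fit(X_queries, y_stolen)

# Measure how often the stolen copy agrees with the victim on fresh inputs.
X_test = rng.normal(size=(1000, 5))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate agrees with the victim on {agreement:.1%} of test inputs")
```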

Nicolas Papernot
Fri 7:15 a.m. - 7:45 a.m.

While researchers and journalists have found many ways that advertisers can target, or exclude, particular groups of users from seeing their ads on Facebook, comparatively little attention has been paid to the implications of the platform's ad delivery process, in which the platform itself decides which users see which ads. In this talk I will show how we audit Facebook's delivery algorithms for potential gender and race discrimination using Facebook's own tools designed to assist advertisers. Following these methods, we find that Facebook delivers different job ads to men and women, as well as to white and Black users, despite inclusive targeting. We also identify how Facebook contributes to creating opinion filter bubbles by steering political ads towards users who already agree with their content.
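As a rough illustration of the statistical core of such an audit, the sketch below compares the fraction of two identically targeted audiences that actually received an ad. All counts are hypothetical, and the real audits rely on Facebook's advertiser-facing reporting tools and more careful controls for confounders; this only shows the basic comparison.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(n_a, reached_a, n_b, reached_b):
    """Test whether the ad-delivery rate differs between audiences A and B."""
    p_a, p_b = reached_a / n_a, reached_b / n_b
    p_pool = (reached_a + reached_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical job ad run against two identically targeted audiences of 10,000 users.
z, p = two_proportion_z_test(n_a=10_000, reached_a=4_200,   # e.g. users recorded as men
                             n_b=10_000, reached_b=3_100)   # e.g. users recorded as women
print(f"z = {z:.1f}, p = {p:.3g}")  # a large |z| and tiny p suggest skewed delivery
```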

Piotr Sapiezynski
Fri 7:45 a.m. - 8:30 a.m.

Please check the details on our website: https://participatoryml.github.io/#poster-sessions Discord server: https://discord.gg/KSAwXKs

Fri 8:30 a.m. - 9:15 a.m.

Please check the details on our website: https://participatoryml.github.io/#breakout-sessions Discord server: https://discord.gg/KSAwXKs

Fri 9:15 a.m. - 10:00 a.m.
Panel 1 (Discussion Panel)
Deborah Raji, Tawana Petty, Nicolas Papernot, Piotr Sapiezynski, Aleksandra Korolova
Fri 10:00 a.m. - 10:30 a.m.

Algorithmic decision-making systems are increasingly being adopted by government public service agencies. Researchers, policy experts, and civil rights groups have all voiced concerns that such systems are being deployed without adequate consideration of potential harms, disparate impacts, and public accountability practices. Yet little is known about the concerns of those most likely to be affected by these systems. In this talk I will discuss what we learned from a series of workshops conducted to better understand the concerns of affected communities in the context of child welfare services. Through these workshops we learned about the perspectives of families involved in the child welfare system, employees of child welfare agencies, and service providers.

Alexandra Chouldechova
Fri 10:30 a.m. - 11:00 a.m.

Machine learning models are often used to automate decisions that affect consumers: whether to approve a loan or a credit card application, or whether to provide insurance. In such tasks, consumers should have the ability to change the decision of the model. When a consumer is denied a loan by a credit score, for example, they should be able to alter the score's input variables in a way that guarantees approval. Otherwise, they will be denied the loan for as long as the model is deployed and, more importantly, lack control over a decision that affects their livelihood. In this talk, I will formally discuss these issues in terms of a notion called recourse, i.e., the ability of a person to change the decision of a model by altering actionable input variables. I will describe how machine learning models may fail to provide recourse due to standard practices in model development. I will then describe integer programming tools to verify recourse in linear classification models. I will end with a brief discussion on how recourse can facilitate meaningful consumer protection in modern applications of machine learning. This is joint work with Alexander Spangher and Yang Liu.
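As a rough illustration of the idea, the sketch below (plain NumPy; the weights, costs, and feature names are all hypothetical) finds the cheapest single-feature change that flips a denied decision of a linear classifier. The tools described in the talk instead formulate recourse verification as an integer program over discrete action sets, so treat this as a toy version of the concept, not the method itself.

```python
import numpy as np

def single_feature_recourse(w, b, x, actionable, costs, margin=1e-6):
    """Cheapest single-feature change that flips a negative decision of
    f(x) = sign(w @ x + b); returns (feature_index, change, cost) or None."""
    score = w @ x + b
    if score >= 0:
        return None  # already approved; no recourse needed
    best = None
    for j in actionable:
        if w[j] == 0:
            continue  # this feature cannot move the score
        delta = (margin - score) / w[j]  # change that pushes the score just past 0
        cost = costs[j] * abs(delta)
        if best is None or cost < best[2]:
            best = (j, delta, cost)
    return best

# Hypothetical credit example: features are [income, debt, years_on_file].
w = np.array([0.5, -0.8, 0.3])
b = -2.0
x = np.array([3.0, 2.0, 1.0])   # denied applicant: score = -1.8
actionable = [0, 1]             # income and debt are actionable; credit history is not
costs = {0: 1.0, 1: 1.5}        # per-unit effort of changing each feature

print(single_feature_recourse(w, b, x, actionable, costs))
# -> roughly (1, -2.25, 3.375): reducing debt by 2.25 units flips the decision
```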

Berk Ustun
Fri 11:00 a.m. - 11:30 a.m.

When we consider power imbalances between those who craft ML systems and those most vulnerable to the impacts of those systems, what often enables that power is the localization of control in the hands of tech companies and technical experts, who consolidate power through claims to perceived scientific objectivity and the legal protections of intellectual property. At the same time, there is a legacy in the scientific community of data being wielded as an instrument of oppression, often reinforcing inequality and perpetuating injustice. At Data for Black Lives, we bring together scientists and community-based activists to take collective action, using data to fight bias, build progressive movements, and promote civic engagement. In the ML community, people often take for granted the initial steps in the process of crafting ML systems, which involve data collection, storage, and access. Researchers often engage with datasets as if they appeared spontaneously, with no social context. One method of moving beyond fairness metrics and generic discussions of ethics, towards meaningfully shifting agency to the people most marginalized, is to stop ignoring the context, construction, and implications of the datasets we use for research. I offer two considerations for shifting power in this way: intentional data narratives, and data trusts as an alternative to current strategies of data governance.

Jamelle Watson-Daniels
Fri 11:30 a.m. - 12:15 p.m.

Please check the details on our website: https://participatoryml.github.io/#poster-sessions Discord server: https://discord.gg/JfA55Gv

Deborah Raji, Berk Ustun, Alexandra Chouldechova, Jamelle Watson-Daniels
Fri 12:15 p.m. - 1:00 p.m.

Please check the details on our website: https://participatoryml.github.io/#poster-sessions Discord server: https://discord.gg/KSAwXKs

Fri 1:00 p.m. - 1:45 p.m.

Please check the details on our website: https://participatoryml.github.io/#breakout-sessions Discord server: https://discord.gg/KSAwXKs