Affinity Workshop
Queer in AI @ ICML 2022 Affinity Workshop
Huan Zhang · Arjun Subramonian · Sharvani Jha · William Agnew · Krunoslav Lehman Pavasovic
Room 337 - 338
Queer in AI’s demographic survey reveals that most queer scientists in our community do not feel completely welcome at conferences and in their work environments, the main reasons being a lack of queer community and role models. Over the past years, Queer in AI has worked towards these goals, yet we have observed that the voices of underrepresented queer communities, especially transgender and non-binary folks and queer BIPOC folks, have been neglected. The purpose of this workshop is to highlight issues that these communities face by featuring talks and panel discussions on the inclusiveness of non-Western non-binary identities and of Black, Indigenous, and Pacific Islander non-cis folks. Additionally, the workshop addresses making virtual/hybrid conferences more inclusive of queer folks.
Schedule
Sat 6:15 a.m. - 6:30 a.m. | Opening Remarks
Sat 6:30 a.m. - 7:00 a.m. | Invited Talk 1 (Sarthak Arora and Satyam Yadav): South Asian Feminism/s in the 21st Century: Notes on Paradoxes and Possibilities
Unpacking the term ‘South Asia,’ this talk candidly explores links between nationalism, state, identity, and gender and their significance in understanding feminist politics and its impacts on structures of queer inclusivity in the region. Examining cyberspaces ranging from Pakistani feminist blogs to queer art communities in India, it seeks to locate the feminist, intersectional unfoldings in the political economy of everyday life.
Sat 7:00 a.m. - 7:30 a.m. | Break
Sat 7:30 a.m. - 8:00 a.m. | Invited Talk 2 (Jay Cunningham): Potentials of Community Participation in Machine Learning Research
This talk explores the potential of participatory design approaches within machine learning (ML) research and design, toward developing more responsible, equitable, and sustainable experiences among underrepresented user communities. ML scholars and technologists are expressing emerging interest in the domain of participatory ML, seeking to extend collaborative research traditions in human-computer interaction, health equity, and community development. The talk takes the firm position that participatory approaches, which treat ML and AI system developers and their stakeholders more equally in a democratic, iterative design process, present opportunities for a fairer and more equitable future of intelligent systems. It urges more ML/AI research that employs participatory techniques, and research on those techniques themselves, while providing background, scenarios, and impacts of such approaches on vulnerable and underrepresented users. It ends by discussing existing frameworks for community participation that promote collective decision-making in problem solving, selecting data for modeling, defining solution success criteria, and ensuring solutions have sustainably mutual benefits for all stakeholders.
Sat 8:00 a.m. - 9:00 a.m. | Sponsor Events
Sat 9:00 a.m. - 10:30 a.m. | Lunch Break
Sat 10:30 a.m. - 11:00 a.m. | Invited Talk 3 (Kyra Yee): A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by English Marginal Abuse Modeling on Twitter
Harmful content detection models tend to have higher false positive rates for content from marginalized groups. Such disproportionate penalization poses the risk of reduced visibility, where marginalized communities lose the opportunity to voice their opinion online. Current approaches to algorithmic harm mitigation are often ad hoc and subject to human bias. We make two main contributions in this paper. First, we design a novel methodology, which provides a principled approach to detecting the severity of potential harms associated with a text-based model. Second, we apply our methodology to audit Twitter’s English marginal abuse model. Without utilizing demographic labels or dialect classifiers, which pose substantial privacy and ethical concerns, we are still able to detect and measure the severity of issues related to the over-penalization of the speech of marginalized communities, such as the use of reclaimed speech, counterspeech, and identity-related terms. In order to mitigate the associated harms, we experiment with adding additional true negative examples to the training data. We find that doing so provides improvements to our fairness metrics without large degradations in model performance. Lastly, we discuss challenges to marginal abuse modeling on social media in practice.
Sat 11:00 a.m. - 11:30 a.m. | Poster Session
Sat 11:30 a.m. - 12:00 p.m. | Online Social Event (Social Event)