Queer in AI’s demographic survey reveals that most queer scientists in our community do not feel completely welcome at conferences and in their work environments, with the main reasons being a lack of queer community and role models. Over the past years, Queer in AI has worked to address these issues, yet we have observed that the voices of underrepresented queer communities, especially transgender and non-binary folks and queer BIPOC folks, have been neglected. The purpose of this workshop is to highlight issues that these communities face by featuring talks and panel discussions on the inclusiveness of non-Western non-binary identities, and of Black, Indigenous, and Pacific Islander non-cis folks. Additionally, the workshop outlines ways to make virtual/hybrid conferences more inclusive of queer folks.
Sat 6:15 a.m. - 6:30 a.m. | Opening Remarks | SlidesLive Video »
Sat 6:30 a.m. - 7:00 a.m. | Invited Talk 1 (Sarthak Arora and Satyam Yadav): South Asian Feminism/s in the 21st Century: Notes on Paradoxes and Possibilities (Talk) | SlidesLive Video »

Unpacking the term ‘South Asia,’ this talk candidly explores links between nationalism, state, identity, and gender and their significance in understanding feminist politics and its impacts on structures of queer inclusivity in the region. Examining cyberspaces ranging from Pakistani feminist blogs to queer art communities in India, it seeks to locate the feminist, intersectional unfoldings in the political economy of everyday life.

Speaker: Sarthak Arora
Sat 7:00 a.m. - 7:30 a.m. | Break
Sat 7:30 a.m. - 8:00 a.m. | Invited Talk 2 (Jay Cunningham): Potentials of Community Participation in Machine Learning Research (Talk) | SlidesLive Video »

This talk explores the potential of participatory design approaches within machine learning (ML) research and design, toward developing more responsible, equitable, and sustainable experiences among underrepresented user communities. ML scholars and technologists are expressing emerging interest in the domain of participatory ML, seeking to extend collaborative research traditions in human-computer interaction, health equity, and community development. The talk takes the firm position that participatory approaches, which treat ML and AI systems developers and their stakeholders more equally in a democratic, iterative design process, present opportunities for a more fair and equitable future of intelligent systems. It urges more ML/AI research that employs participatory techniques, as well as research on those techniques themselves, while providing background, scenarios, and impacts of such approaches on vulnerable and underrepresented users. It ends by discussing existing frameworks for community participation that promote collective decision making in problem solving, selecting data for modeling, defining solution success criteria, and ensuring solutions have sustainable, mutual benefits for all stakeholders.

Speaker: Jay Cunningham
Sat 8:00 a.m. - 9:00 a.m. | Sponsor Events
Sat 9:00 a.m. - 10:30 a.m. | Lunch Break
Sat 10:30 a.m. - 11:00 a.m. | Invited Talk 3 (Kyra Yee): A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by English Marginal Abuse Modeling on Twitter (Talk) | SlidesLive Video »

Harmful content detection models tend to have higher false positive rates for content from marginalized groups. Such disproportionate penalization poses the risk of reduced visibility, where marginalized communities lose the opportunity to voice their opinion online. Current approaches to algorithmic harm mitigation are often ad hoc and subject to human bias. We make two main contributions in this paper. First, we design a novel methodology, which provides a principled approach to detecting the severity of potential harms associated with a text-based model. Second, we apply our methodology to audit Twitter’s English marginal abuse model. Without utilizing demographic labels or dialect classifiers, which pose substantial privacy and ethical concerns, we are still able to detect and measure the severity of issues related to the over-penalization of the speech of marginalized communities, such as the use of reclaimed speech, counterspeech, and identity-related terms. In order to mitigate the associated harms, we experiment with adding additional true negative examples to the training data. We find that doing so provides improvements to our fairness metrics without large degradations in model performance. Lastly, we discuss challenges to marginal abuse modeling on social media in practice.

Speaker: Kyra Yee
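As a rough illustration of the audit and mitigation described in the abstract above, the sketch below measures a classifier's false positive rate on benign posts that contain identity-related keywords versus benign posts that do not, then adds keyword-bearing true negatives to the training data and re-measures. Everything here is a placeholder assumption: the keyword list, the toy posts, and the scikit-learn pipeline stand in for the curated terms, production data, and marginal abuse model used in the actual work.

```python
# Illustrative sketch only: a stand-in classifier and made-up data, not
# Twitter's marginal abuse model or the paper's curated keyword list.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical identity-related / reclaimed keywords (placeholder list).
KEYWORDS = {"queer", "gay"}

# Tiny toy training set: (text, label), label 1 = abusive, 0 = benign.
train = [
    ("you are awful and should leave", 1),
    ("get out of here you idiot", 1),
    ("queer people are terrible", 1),
    ("i love this community", 0),
    ("great talk today, thanks", 0),
]

# Known-benign posts used for the audit.
benign_posts = [
    "proud to be queer",
    "the gay community organized a great event",
    "nice weather today",
    "looking forward to the workshop",
]

def contains_keyword(text):
    return any(token in KEYWORDS for token in text.lower().split())

def fit(pairs):
    texts, labels = zip(*pairs)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    return model.fit(list(texts), list(labels))

def false_positive_rate(model, benign_texts):
    # Every text here is benign, so any positive prediction is a false positive.
    preds = model.predict(benign_texts)
    return float(sum(preds)) / len(preds)

with_kw = [t for t in benign_posts if contains_keyword(t)]
without_kw = [t for t in benign_posts if not contains_keyword(t)]

baseline = fit(train)
print("FPR on benign posts with keywords:   ", false_positive_rate(baseline, with_kw))
print("FPR on benign posts without keywords:", false_positive_rate(baseline, without_kw))

# Mitigation from the abstract: add true negatives that use the keywords in a
# non-abusive way, retrain, and re-audit the keyword group.
extra_true_negatives = [
    ("queer artists showcased their work", 0),
    ("happy to support my gay friends", 0),
]
mitigated = fit(train + extra_true_negatives)
print("FPR with keywords after augmentation:", false_positive_rate(mitigated, with_kw))
```

The point of the sketch is only the shape of the audit: group known-benign posts by keyword presence, compare false positive rates across the two groups, and check whether targeted augmentation with true negatives narrows the gap without hurting overall performance.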
Sat 11:00 a.m. - 11:30 a.m. | Poster Session
Sat 11:30 a.m. - 12:00 p.m. | Online Social Event (Social Event) | link »
Author Information
Huan Zhang (CMU)
Arjun Subramonian (University of California, Los Angeles)
Hello! I am a PhD student at UCLA conducting machine learning research, advised by Prof. Yizhou Sun. I also work with Prof. Kai-Wei Chang. My research is broadly about graph representation learning, self-supervised learning, and fairness.
Sharvani Jha (University of California, Los Angeles)
William Agnew (University of Washington)
Krunoslav Lehman Pavasovic (ETH Zurich)
MSc student in Statistics at ETH Zurich; organizer at Queer in AI; researcher at INRIA - Sierra
More from the Same Authors
- 2021 : Empirical robustification of pre-trained classifiers
  Mohammad Sadegh Norouzzadeh · Wan-Yi Lin · Leonid Boytsov · Leslie Rice · Huan Zhang · Filipe Condessa · Zico Kolter
- 2021 : Fast Certified Robust Training with Short Warmup
  Zhouxing Shi · Yihan Wang · Huan Zhang · Jinfeng Yi · Cho-Jui Hsieh
- 2021 : Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification
  Shiqi Wang · Huan Zhang · Kaidi Xu · Xue Lin · Suman Jana · Cho-Jui Hsieh · Zico Kolter
- 2021 : Automating Power Networks: Improving RL Agent Robustness with Adversarial Training
  Alexander Pan · Yongkyun Lee · Huan Zhang
- 2021 : Extended Abstract: The Affective Growth of Computer Vision
  David Crandall · Norman Su · Arjun Subramonian
- 2021 : Feature Genuinization based Residual Squeeze-and-Excitation for Audio Anti-Spoofing in Sound AI
  Ruchira Ray · Arjun Subramonian
- 2021 : Posthuman Intelligence OR The Importance of Digital Agenthood
  gripp prime · Arjun Subramonian
- 2021 : Crowdsourcing a Corpus of Dogwhistle Transphobia
  Paige Treebridge · Arjun Subramonian
- 2021 : Is digital colonisation redefining the understanding of agency, bodily autonomy and being human?
  Brindaalakshmi K · Arjun Subramonian
- 2021 : Towards Understanding and Building a Multilingual and Inclusive Web
  John Samuel · Arjun Subramonian
- 2021 : Examining narratives of "progress" in AI
  Leif Hancox-Li · Arjun Subramonian
- 2021 : Corrective Discrimination in Repeated Bank-Lending Simulations
  Emma Ling · Arjun Subramonian · Emilia Cabrera
- 2021 : Dichotomistic Bias and the Shared Teleology of Mathematics and Music
  MaryLena Bleile · Arjun Subramonian
- 2021 : Lips
  Valerie Elefante · Annie Brown · Arjun Subramonian
- 2021 : QUEERING CATEGORISATIONS IN AI FOR TALENT ACQUISITION
  Eleanor Drage · Arjun Subramonian
- 2021 : Can You Explain That, Better? Comprehensible Text Analytics for SE Applications
  Quang Huy Tu · Arjun Subramonian
- 2021 : Complicating Narratives of Afro-Asian Solidarity: A “Distant Reading” of the 1955 Bandung Conference Proceedings
  Nikhil Dharmaraj · Arjun Subramonian
- 2021 : The Gender Panopticon: AI, Gender, and Design Justice
  Jessica Jung · Arjun Subramonian
- 2021 : Automatic Gender Recognition: Perspectives from Phenomenology and Hermeneutics
  Yanan Long · Arjun Subramonian
- 2023 : Formal Verification for Neural Networks with General Nonlinearities via Branch-and-Bound
  Zhouxing Shi · Qirui Jin · Huan Zhang · Zico Kolter · Suman Jana · Cho-Jui Hsieh
- 2023 Workshop: 2nd Workshop on Formal Verification of Machine Learning
  Mark Müller · Brendon G. Anderson · Leslie Rice · Zhouxing Shi · Shubham Ugare · Huan Zhang · Martin Vechev · Zico Kolter · Somayeh Sojoudi · Cho-Jui Hsieh
- 2023 Poster: Towards Robust and Safe Reinforcement Learning with Benign Off-policy Data
  Zuxin Liu · Zijian Guo · Zhepeng Cen · Huan Zhang · Yihang Yao · Hanjiang Hu · Ding Zhao
- 2022 : Paper 15: On the Robustness of Safe Reinforcement Learning under Observational Perturbations
  Zuxin Liu · Zhepeng Cen · Huan Zhang · Jie Tan · Bo Li · Ding Zhao
- 2022 : Characterizing Neural Network Verification for Systems with NN4SysBench
  Haoyu He · Tianhao Wei · Huan Zhang · Changliu Liu · Cheng Tan
- 2022 Workshop: Workshop on Formal Verification of Machine Learning
  Huan Zhang · Leslie Rice · Kaidi Xu · aditi raghunathan · Wan-Yi Lin · Cho-Jui Hsieh · Clark Barrett · Martin Vechev · Zico Kolter
- 2022 Poster: A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks
  Huan Zhang · Shiqi Wang · Kaidi Xu · Yihan Wang · Suman Jana · Cho-Jui Hsieh · Zico Kolter
- 2022 Poster: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
  Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang
- 2022 Spotlight: A Branch and Bound Framework for Stronger Adversarial Attacks of ReLU Networks
  Huan Zhang · Shiqi Wang · Kaidi Xu · Yihan Wang · Suman Jana · Cho-Jui Hsieh · Zico Kolter
- 2022 Spotlight: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
  Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang
- 2022 Social: Black in AI and Queer in AI Joint Social Event
  Victor Silva · Huan Zhang · Nathaniel Rose · Arjun Subramonian · Krunoslav Lehman Pavasovic · Ana Da Hora
- 2021 : Queer in AI Inclusive Conference Guide & Code of Conduct Reminder
  MaryLena Bleile · Arjun Subramonian
- 2021 : Small-Group Discussion: Gender Across Cultures
  Sharvani Jha
- 2021 Affinity Workshop: Queer in AI Workshop
  Arjun Subramonian · Sharvani Jha · Vishakha Agrawal · Juan Pajaro Velasquez · MaryLena Bleile · Michelle Julia Ng
- 2021 : Panel Discussion: Gender Across Cultures
  Sharvani Jha · Shubha Chacko · Juan Pajaro Velasquez
- 2020 : Amodal 3D Reconstruction for Robotic Manipulation via Stability and Connectivity
  William Agnew
- 2020 Poster: On Lp-norm Robustness of Ensemble Decision Stumps and Trees
  Yihan Wang · Huan Zhang · Hongge Chen · Duane Boning · Cho-Jui Hsieh
- 2020 Affinity Workshop: Queer in AI
  ST John · William Agnew · Anja Meunier · Alex Markham · Manu Saraswat · Andrew McNamara · Raphael Gontijo Lopes
- 2019 Workshop: Generative Modeling and Model-Based Reasoning for Robotics and AI
  Aravind Rajeswaran · Emanuel Todorov · Igor Mordatch · William Agnew · Amy Zhang · Joelle Pineau · Michael Chang · Dumitru Erhan · Sergey Levine · Kimberly Stachenfeld · Marvin Zhang
- 2019 Poster: Robust Decision Trees Against Adversarial Examples
  Hongge Chen · Huan Zhang · Duane Boning · Cho-Jui Hsieh
- 2019 Oral: Robust Decision Trees Against Adversarial Examples
  Hongge Chen · Huan Zhang · Duane Boning · Cho-Jui Hsieh
- 2018 Poster: Towards Fast Computation of Certified Robustness for ReLU Networks
  Tsui-Wei Weng · Huan Zhang · Hongge Chen · Zhao Song · Cho-Jui Hsieh · Luca Daniel · Duane Boning · Inderjit Dhillon
- 2018 Oral: Towards Fast Computation of Certified Robustness for ReLU Networks
  Tsui-Wei Weng · Huan Zhang · Hongge Chen · Zhao Song · Cho-Jui Hsieh · Luca Daniel · Duane Boning · Inderjit Dhillon
- 2017 Poster: Gradient Boosted Decision Trees for High Dimensional Sparse Output
  Si Si · Huan Zhang · Sathiya Keerthi · Dhruv Mahajan · Inderjit Dhillon · Cho-Jui Hsieh
- 2017 Talk: Gradient Boosted Decision Trees for High Dimensional Sparse Output
  Si Si · Huan Zhang · Sathiya Keerthi · Dhruv Mahajan · Inderjit Dhillon · Cho-Jui Hsieh