

Affinity Workshop

Queer in AI

ST John · William Agnew · Anja Meunier · Alex Markham · Manu Saraswat · Andrew McNamara · Raphael Gontijo Lopes


Abstract:

The quickly advancing field of machine learning is exciting but raises complex ethical and social questions. How can we best apply AI across a wide range of applications while avoiding discrimination against and insensitivity toward its users? Queer users of machine learning systems, in particular, can fall victim to discriminatory, biased, and insensitive algorithms. In addition, there is a fundamental tension between the queer community, which defies categorization and reduction, and the current ubiquitous use of machine learning to categorize and reduce people. We want to raise awareness of these issues among the research community. But in order to do so, we need to make sure that the queer community is comfortable among their peers, both in the lab and at conferences.

Our survey data show that well over half of the queer attendees at ICML and NeurIPS are not publicly out. While we see a slow improvement in how welcome queer attendees feel, we want this encouraging trend to continue so that queer researchers feel they can bring their whole selves to these conferences. The most commonly cited obstacles to this were a lack of community and a lack of role models. We have been working with conference organizers and the queer community to move towards these goals. By organizing this workshop, we will give queer people at ICML a visible community as well as highlight role models in the form of openly queer speakers in high-profile, senior roles.

We focus on two topics. First, the struggles of queer researchers are compounded for those who are also members of Black and minority ethnic communities and/or from non-"Western" countries; we want to discuss how to engage with and stand in solidarity with global queer communities.

Second, we believe the first step toward creating more diverse and inclusive algorithms is talking about these problems and increasing the visibility of queer people in the machine learning community. By bringing together queer people and allies, we can start conversations about biases in data and the negative impact these algorithms can have on the queer community, and discuss the intersection of AI policy and queer privacy.
