



ICML 2020 Affinity Events

Zhen Xu · Sparkle Russell-Puleri · Zhengying Liu · Sinead A Williamson · Matthias W Seeger · Wei-Wei Tu · Samy Bengio · Isabelle Guyon

Is this your first time at a top conference? Have you ever wanted your work to be recognized by this large and active community? Do you run into difficulties polishing your ideas, experiments, or paper writing? Then this session is exactly for you!

This year, we are organizing the special New In ML workshop, co-located with ICML 2020 and aimed primarily at junior researchers. We have invited top researchers to share their experience on many aspects of research. Our main goal is to help you publish papers at next year's top conferences (e.g., ICML, NeurIPS) and, more generally, to provide the guidance you need to contribute to ML research fully and effectively!

Tatjana Chavdarova · Caroline Weis · Amy Zhang · Fariba Yousefi · Mandana Samiei · Larissa Schiavo

Women in Machine Learning will be organizing the first “un-workshop” at ICML 2020. This is a new event format that encourages interaction between participants. The un-workshop is based on the concept of an “un-conference”, a form of discussion on a pre-selected topic that is primarily driven by participants. Different from the long-running WiML Workshop, the un-workshop’s main focus is topical breakout sessions, with short invited talks and casual, informal poster presentations.

Nils Murrugarra-Llerena · Pedro Braga · Walter Mayor · Karla Caballero · Ivan Dario Arraut Guerrero · Leonel Rozo · Juan Banda · Fabian Latorre · Kevin Bello · Leobardo Morales · Angela M Flores-Saravia

AI is already perpetuating social bias and prejudice, in part because of the lack of representation of LatinX professionals in the AI industry. Machine learning algorithms can encode discriminatory bias when trained on real-world data in which underrepresented groups are not properly characterized or represented. A question quickly emerges: how can we make sure machine learning does not discriminate against people from minority groups because of the color of their skin, gender, ethnicity, or historically unbalanced power structures in society?

Moreover, because the tech industry does not reflect the entire population, groups underrepresented in computing, such as Hispanics, women, African-Americans, and Native Americans, have limited influence over the direction of machine learning breakthroughs. As an ethnicity, the Latinx population is an interesting case study for this research, as its members span all skin tones and have a wide regional distribution across the world.

In this session, we argue that it is our responsibility to advance machine learning by increasing the presence of members of our minority group who can build solutions and algorithms, steering the field toward a direction in which AI is used to solve problems in our communities while bias …

Affinity Workshop

This live poster session will be held in Gather.town; see here for the listing of posters that will be presented.

Please note:

ICML registration required to enter.

Entry is first-come, first-served.

If you are not able to enter, please check back again later, as people will be coming in and out of the Gather.town space, just like any in-person space.

Besides this live poster session in Gather.town, each WiML poster has a channel in the WiML Slack that is active for the duration of ICML, and some posters also have pre-recorded 5-minute talks on SlidesLive.

ST John · William Agnew · Anja Meunier · Alex Markham · Manu Saraswat · Andrew McNamara · Raphael Gontijo Lopes

The rapidly advancing field of machine learning is exciting, but it raises complex ethical and social questions. How can we best use AI across its many applications while avoiding discrimination and insensitivity toward its users? Queer users of machine learning systems, in particular, can fall victim to discriminatory, biased, and insensitive algorithms. There is also a fundamental tension between the queer community, which defies categorization and reduction, and the now-ubiquitous use of machine learning to categorize and reduce people. We want to raise awareness of these issues among the research community, but to do so, we need to make sure that the queer community feels comfortable among their peers, both in the lab and at conferences.

Our survey data show that well over half of the queer attendees at ICML and NeurIPS are not publicly out. While we see a slow improvement in how welcome queer attendees feel, we want this encouraging trend to continue so that queer researchers feel they can bring their whole selves to these conferences. The most commonly cited obstacles were a lack of community and a lack of role models. We have been working with …