Women in Machine Learning Un-Workshop
Women in Machine Learning is organizing the first “un-workshop” at ICML 2020. This is a new event format that encourages interaction between participants. The un-workshop is based on the concept of an “un-conference”, a form of discussion on a pre-selected topic that is driven primarily by the participants. Unlike the long-running WiML Workshop, the un-workshop focuses mainly on topical breakout sessions, complemented by short invited talks and casual, informal poster presentations.
New In ML
Is this your first time attending a top conference? Have you ever wanted your work to be recognized by this large and active community? Do you struggle with polishing your ideas, running experiments, or writing papers? Then this session is exactly for you!
This year, we are organizing the special New In ML workshop, co-located with ICML 2020. We are targeting primarily junior researchers, and we have invited top researchers to share their experience on diverse aspects of ML research. Our biggest goal is to help you publish papers at next year's top conferences (e.g., ICML, NeurIPS) and, more generally, to give you the guidance you need to contribute to ML research fully and effectively!
Test of Time: Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence-based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
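The upper-confidence rule behind GP-UCB can be sketched in a few lines: at each round, fit a GP posterior to the observations so far and query the point maximizing mean plus a scaled posterior standard deviation. The sketch below is illustrative only — the RBF kernel, lengthscale, noise level, exploration weight beta, and the discretized 1-D domain are all assumptions, not choices made in the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential (RBF) covariance between 1-D point sets A and B."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X_obs, y_obs, X_query, noise=1e-2, lengthscale=0.2):
    """GP posterior mean and variance at X_query given noisy observations."""
    K = rbf_kernel(X_obs, X_obs, lengthscale) + noise * np.eye(len(X_obs))
    k_star = rbf_kernel(X_obs, X_query, lengthscale)   # (n_obs, n_query)
    K_inv = np.linalg.inv(K)
    mu = k_star.T @ K_inv @ y_obs
    # Prior variance is 1 for the RBF kernel; subtract the data-driven reduction.
    var = 1.0 - np.einsum("ij,ik,kj->j", k_star, K_inv, k_star)
    return mu, np.maximum(var, 1e-12)

def gp_ucb(f, domain, n_iters=20, beta=2.0, noise=1e-2):
    """GP-UCB: repeatedly query the point maximizing mu + sqrt(beta) * sigma."""
    rng = np.random.default_rng(0)
    X_obs = [domain[rng.integers(len(domain))]]          # random first query
    y_obs = [f(X_obs[0]) + noise * rng.standard_normal()]
    for _ in range(n_iters):
        mu, var = gp_posterior(np.array(X_obs), np.array(y_obs), domain, noise)
        x_next = domain[np.argmax(mu + np.sqrt(beta * var))]
        X_obs.append(x_next)
        y_obs.append(f(x_next) + noise * rng.standard_normal())
    return X_obs, y_obs
```

In practice the exploration parameter beta is scheduled over rounds (the paper's regret bounds depend on such a schedule); a constant is used here only to keep the sketch short.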
LatinX in AI Workshop
AI is already perpetuating social bias and prejudice, in part because LatinX professionals are underrepresented in the AI industry. Machine learning algorithms can encode discriminative bias when trained on real-world data in which underrepresented groups are not properly characterized or represented. A question quickly emerges: how can we make sure machine learning does not discriminate against people from minority groups because of the color of their skin, gender, ethnicity, or historically unbalanced power structures in society?
Moreover, because the tech industry does not represent the entire population, groups underrepresented in computing, such as Hispanics, women, African-Americans, and Native Americans, have limited control over the direction of machine learning breakthroughs. As an ethnicity, the LatinX population is an interesting case study for this research, as its members span all skin tones and have a wide regional distribution across the world.
In this session, we claim that it is our responsibility to advance the progress of machine learning by increasing the presence of members of our minority group who can build solutions and algorithms that steer the field toward AI that solves problems in our communities while bias and unfairness are properly addressed. As the number of Hispanic- and LatinX-identifying AI practitioners grows, it is also imperative that we have opportunities to share our work at international AI and machine learning conferences, which yield new possibilities for collaboration, funding, and job prospects that we would not have access to otherwise. The benefits will accrue not only to the LatinX community and other minority groups but to the AI community in general: a multiplicity of cultures and backgrounds is connected to greater creativity in problem solving, and applying this to AI will bring positive results in the long term.
Queer in AI
The quickly advancing field of machine learning is exciting but raises complex ethical and social questions. How can we best use AI across a wide range of applications while avoiding discrimination and insensitivity toward its users? In particular, queer users of machine learning systems can fall victim to discriminatory, biased, and insensitive algorithms. In addition, there is a fundamental tension between the queer community, which defies categorization and reduction, and the current ubiquitous use of machine learning to categorize and reduce people. We want to raise awareness of these issues among the research community. But in order to do so, we need to make sure that the queer community is comfortable among their peers, both in the lab and at conferences.
Our survey data shows that well over half of the queer attendees at ICML and NeurIPS are not publicly out, and while we can see a slow improvement in how welcome queer attendees are feeling, we want to see this encouraging trend continue and make queer researchers feel that they can bring their whole selves to these conferences. The most commonly cited obstacles to this were lack of community and lack of role models. We have been working with conference organizers and the queer community to move towards these goals. By organizing this workshop we will give queer people at ICML a visible community as well as highlight role models in the form of openly queer speakers in high-profile, senior roles.
We focus on two topics. First, the struggles of queer researchers are compounded for those who are also members of black and minority ethnic communities and/or from non-"Western" countries, and we want to focus on how we can engage with and act in solidarity with global queer communities.
Second, we believe the first step toward creating more diverse and inclusive algorithms is talking about the problems and increasing the visibility of queer people in the machine learning community. By bringing together queer people and allies, we can start conversations around biases in data and the negative impact these algorithms can have on the queer community, and we want to discuss the intersection of AI policy and queer privacy.
The live poster session will be held here; the posters to be presented are listed below.
Please note:
ICML registration required to enter.
Entry is first-come, first-served.
If you are not able to enter, please check back again later, as people will be coming in and out of the Gather.town space, just like any in-person space.
Besides this live poster session in Gather.town, each WiML poster has a Slack channel in the WiML Slack that is active for the duration of ICML, and some posters also have pre-recorded 5-minute talks on SlidesLive.