LatinX in AI Social
Launched in January 2018 by leaders from academia and industry spanning artificial intelligence, education, research, finance, and community and social impact nonprofits, the group is focused on “Creating Opportunity for LatinX in AI.”
Artificial Intelligence has the potential to displace workers from marginalized populations, including those of Latinx origin. AI already perpetuates social bias and prejudice, in part because LatinX professionals are underrepresented in the AI industry. Machine learning algorithms can encode discriminatory bias when trained on real-world data in which underrepresented groups are not properly characterized or represented. A question quickly emerges: how can we make sure machine learning does not discriminate against people from minority groups because of the color of their skin, gender, ethnicity, or historically unbalanced power structures in society?
Moreover, because the tech industry does not represent the entire population, groups underrepresented in computing such as Hispanics, women, African-Americans, and Native Americans have limited control over the direction of machine learning breakthroughs. As an ethnicity, the Latinx population is an interesting case study for this research, as its members span all skin tones and are widely distributed across the world.
In this session, we argue that it is our responsibility to advance machine learning by increasing the presence of members of our minority group who can build solutions and algorithms that steer the field toward using AI to solve problems in our communities while properly addressing bias and unfairness. As the number of Hispanic- and Latinx-identifying AI practitioners grows, it is also imperative that we be able to share our work at international AI and machine learning conferences, which yield opportunities for collaboration, funding, and jobs that we would not have access to otherwise.
Privacy in learning: Basics and the interplay
In the real world, more and more customers view privacy as a concern when using an AI service, especially when the customer content consists of sensitive data. Recent research demonstrates that large language models such as GPT-2 can memorize training content, which an adversary can then extract. This poses a high privacy risk in deployed scenarios where models are trained on customer data. Differential privacy is widely recognized as the gold standard of privacy protection due to its mathematical rigor, and many works have studied machine learning with differential privacy guarantees to alleviate these concerns. It is time to clarify the challenges and opportunities of learning with differential privacy. In this tutorial, we first describe the potential privacy risks in machine learning models, introduce the background of differential privacy, and present popular approaches for guaranteeing differential privacy in machine learning. In the rest of the tutorial, we highlight the interplay between learning and privacy. In the second section, we show how to exploit learning properties to improve the utility of private learning, including recent advances that leverage the correlation across data points and the low-rank structure of deep learning models. In the third section, we present the other direction of research: using tools from differential privacy to tackle the classical generalization problem, along with concrete scenarios in which ideas from differential privacy help resist attacks on machine learning models.
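As a flavor of the approaches the tutorial covers, the following is a minimal NumPy sketch of one common recipe for differentially private training, in the style of DP-SGD: clip each example's gradient to bound its influence, then add Gaussian noise to the averaged update. All names, shapes, and default values here are illustrative assumptions, not the tutorial's actual material.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD-style update: per-example gradient clipping
    plus Gaussian noise on the averaged gradient (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale each example's gradient so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise scale is proportional to the clipping bound and shrinks with batch size.
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The clipping bound and noise multiplier together determine the privacy budget spent per step; production libraries additionally track the cumulative privacy loss with a privacy accountant, which is omitted here.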
Self-Attention for Computer Vision
The tutorial will be about the application of self-attention mechanisms in computer vision. Self-Attention has been widely adopted in NLP, with the fully attentional Transformer model having largely replaced RNNs and now being used in state-of-the-art language understanding models like GPT, BERT, XLNet, T5, Electra, and Meena. Thus, there has been a tremendous interest in studying whether self-attention can have a similarly big and far-reaching impact in computer vision. However, vision tasks have different properties compared to language tasks, so a lot of research has been devoted to exploring the best way to apply self-attention to visual models. This tutorial will cover many of the different applications of self-attention in vision in order to give the viewer a broad and precise understanding of this subfield.
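To make the mechanism discussed above concrete, here is a minimal NumPy sketch of scaled dot-product self-attention applied to a sequence of token embeddings (e.g., flattened image patches). The function name, shapes, and single-head formulation are illustrative assumptions for exposition, not the tutorial's code.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (illustrative sketch).
    x: (n_tokens, d) token embeddings, e.g. flattened image patches.
    Wq, Wk: (d, d_head) query/key projections; Wv: (d, d_out) value projection."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Pairwise similarity of every token with every other token.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax over tokens, computed stably by subtracting the row max.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is an attention-weighted mix of all value vectors.
    return weights @ v
```

Unlike a convolution's fixed local window, every output token here can aggregate information from the entire input, which is the property vision models exploit; practical models add multiple heads, positional information, and learned projections.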
Lapsed Physicists Wine-and-Cheese
"Lapsed" (a.k.a. former) physicists are plentiful in the machine learning community. Inspired by the Wine and Cheese seminars held at many institutions, this BYOWC (Bring Your Own Wine and Cheese) event is an informal opportunity to connect with members of the community. Hear how others made the transition between fields. Discuss how your physics training prepared you to switch fields, or which synergies between physics and machine learning excite you the most. Share your favorite physics jokes your computer science colleagues don't get, and just meet other cool people. Open to everyone, not only physicists; you'll just have to tolerate our humor. Wine and cheese encouraged, but not required.
Queer in AI Workshop
Queer in AI’s demographic survey reveals that most queer scientists in our community do not feel completely welcome at conferences or in their work environments, with the main reasons being a lack of queer community and role models. Over the past years, Queer in AI has worked to build that community and visibility, yet we have observed that the voices of marginalized queer communities - especially transgender and non-binary folks and queer BIPOC folks - have been neglected. The purpose of this workshop is to highlight the issues these communities face by featuring talks and panel discussions on the inclusion of non-Western non-binary identities, and of Black, Indigenous, and Pacific Islander non-cis folks.
Rethinking Drug Discovery in the Era of Digital Biology
Modern medicine has given us effective tools to treat some of the most significant and burdensome diseases. At the same time, it is becoming consistently more challenging and more expensive to develop new therapeutics. A key factor in this trend is that the drug development process involves multiple steps, each of which involves a complex and protracted experiment that often fails. We believe that, for many of these phases, it is possible to develop machine learning models to help predict the outcome of these experiments, and that those models, while inevitably imperfect, can outperform predictions based on traditional heuristics. To achieve this goal, we are bringing together high-quality data from human cohorts, while also developing cutting edge methods in high throughput biology and chemistry that can produce massive amounts of in vitro data relevant to human disease and therapeutic interventions. Those are then used to train machine learning models that make predictions about novel targets, coherent patient segments, and the clinical effect of molecules. Our ultimate goal is to develop a new approach to drug development that uses high-quality data and ML models to design novel, safe, and effective therapies that help more people, faster, and at a lower cost.
The ICML Debate: Should AI Research and Development Be Controlled by a Regulatory Body or Government Oversight?
Come and watch experts debate whether AI research and development should be controlled by a regulatory body or government oversight, with Charles Isbell (Georgia Tech), Michael Kearns (UPenn), Rich Sutton (Alberta), Steve Roberts (Oxford), Ti John (Finnish Center for Artificial Intelligence / Aalto), Suchi Saria (Johns Hopkins), Shakir Mohamed (DeepMind), and Martha White (Alberta).
AI has found its way into our everyday life, from healthcare to customs control, credit checks to autonomous driving. Its power is continuously growing, and it is gradually becoming easier for organisations and individuals to access. This naturally raises the question of this debate.
Enjoy an entertaining social event with 8 leading AI/ML academics and researchers debating the topic in British Parliamentary style. You are welcome to share your opinion on the topic in a poll before the debate. We will also host live votes right before and after the debate to see whether our debaters have convinced you. Do join us for a fun and thought-provoking social.
Improving Global Research Collaboration & Communication
Come learn and share best practices collaborating with researchers around the world, and discuss how to bridge the remote work, cultural, and social divides.
Black in AI Social
For over four years, Black in AI has been a place for sharing ideas, fostering collaborations, and discussing initiatives to increase the presence of Black people in the field of Artificial Intelligence. If you are in AI and either self-identify as Black, African, Diaspora or an ally, please join us at ICML21 to discuss interests, challenges, opportunities, collaborations, and other related issues. We plan to gather for a one-hour town hall and Q&A session. We'll then continue with informal socializing for the remaining hour.
Town Hall
The ICML town hall is primarily a chance for the community to interact with the ICML organizers and give feedback. We cover various details of this ICML and future plans, with the bulk of the time devoted to discussion.