
Queer in AI Social: Storytelling: Intersectional Queer Experiences Around the World Fri 23 Jul 06:00 p.m.  

Vishakha Agrawal · Shubha Chacko

Share stories with and listen to anecdotes from fellow queer + trans folks from around the world. Register for the socials here.


ICML Workshop on Human in the Loop Learning (HILL) Sat 24 Jul 04:15 a.m.  

Trevor Darrell · Xin Wang · Li Erran Li · Fisher Yu · Zeynep Akata · Wenwu Zhu · Pradeep Ravikumar · Shiji Zhou · Shanghang Zhang · Kalesha Bullard

Recent years have witnessed a rising need for machine learning systems that can interact with humans in the learning loop. Such systems can be applied to computer vision, natural language processing, robotics, and human-computer interaction. Creating and running such systems calls for interdisciplinary research spanning artificial intelligence, machine learning, and software engineering design, which we abstract as Human in the Loop Learning (HILL). The HILL workshop aims to bring together researchers and practitioners working on the broad areas of HILL, ranging from interactive/active learning algorithms for real-world decision-making systems (e.g., autonomous vehicles, robotic systems), to lifelong learning systems that retain knowledge from different tasks and selectively transfer that knowledge to learn new tasks over a lifetime, to models with strong explainability, as well as interactive system designs (e.g., data visualization, annotation systems). The HILL workshop continues previous efforts to provide a platform for researchers from interdisciplinary areas to share their recent research. A special feature of this year's workshop is a debate between HILL and label-efficient learning: are these two learning paradigms contradictory, or can they be organically combined to create a more powerful learning system? We believe the theme of the workshop will be of interest to a broad range of ICML attendees, especially those interested in interdisciplinary study.
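As a concrete illustration of the interactive/active learning paradigm mentioned above, the following is a minimal sketch of pool-based uncertainty sampling, in which a model repeatedly queries a human for the label it is least confident about. The synthetic dataset and scikit-learn logistic regression are illustrative assumptions, not material from the workshop.

    # Minimal pool-based active learning sketch: uncertainty (least-confidence) sampling.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X), size=10, replace=False))    # small seed set
    pool = [i for i in range(len(X)) if i not in labeled]         # unlabeled pool

    model = LogisticRegression(max_iter=1000)
    for _ in range(20):                          # each round = one query to the human
        model.fit(X[labeled], y[labeled])
        probs = model.predict_proba(X[pool])
        uncertainty = 1.0 - probs.max(axis=1)    # least-confident examples first
        query = pool.pop(int(np.argmax(uncertainty)))
        labeled.append(query)                    # the "human" supplies y[query]

    print(f"accuracy after {len(labeled)} labels:", model.score(X, y))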


ICML Workshop on Algorithmic Recourse Sat 24 Jul 04:45 a.m.  

Stratis Tsirtsis · Amir-Hossein Karimi · Ana Lucic · Manuel Gomez Rodriguez · Isabel Valera · Hima Lakkaraju

Machine learning is increasingly used to inform decision-making in sensitive situations where decisions have consequential effects on individuals' lives. In these settings, in addition to requiring models to be accurate and robust, socially relevant values such as fairness, privacy, accountability, and explainability play an important role in the adoption and impact of these technologies. In this workshop, we focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavourably treated by automated decision-making systems. Specifically, we plan to facilitate workshop interactions that shed light on the following three questions: (i) What are the practical, legal, and ethical considerations that decision-makers need to account for when providing recourse? (ii) How do humans understand and act on recourse explanations, from a psychological and behavioral perspective? (iii) What are the main technical advances in explainability and causality in ML required to achieve recourse? Our ultimate goal is to foster conversations that will help bridge the gaps arising from the interdisciplinary nature of algorithmic recourse and contribute towards the wider adoption of such methods.
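To make the notion of recourse concrete, the sketch below greedily searches for a small change to an individual's actionable features that flips a toy classifier's decision. The linear model, step size, and choice of actionable features are illustrative assumptions, not a method endorsed by the workshop.

    # Minimal recourse sketch: find a small, actionable feature change that flips
    # a toy linear classifier's decision from "deny" to "approve".
    import numpy as np

    weights = np.array([1.5, -2.0, 0.5])              # illustrative linear scoring model
    bias = -0.2
    def approve(x):                                    # decision rule: score >= 0
        return float(np.dot(weights, x) + bias) >= 0.0

    x = np.array([0.1, 0.6, 0.2])                      # individual who is currently denied
    actionable = [0, 2]                                # features the person can change
    step = 0.05

    recourse = x.copy()
    for _ in range(200):
        if approve(recourse):
            break
        # nudge each actionable feature in the direction that increases the score
        recourse[actionable] += step * np.sign(weights[actionable])

    print("original decision:", approve(x))
    print("suggested change:", np.round(recourse - x, 3))
    print("decision after recourse:", approve(recourse))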


Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning Sat 24 Jul 04:45 a.m.  

Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian

Adversarial machine learning is an emerging set of technologies that aims to study the vulnerabilities of ML approaches and detect malicious behaviors in adversarial settings. An adversarial agent can deceive an ML classifier, significantly altering its response with imperceptible perturbations to the inputs. Without being alarmist, researchers in machine learning have a responsibility to preempt attacks and build safeguards, especially when the task is critical to information security or human lives. We need to deepen our understanding of machine learning in adversarial environments.
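As a small illustration of the kind of imperceptible perturbation described above, the sketch below applies a one-step, FGSM-style attack to a toy linear classifier; the weights, input, and perturbation budget are assumptions chosen for illustration, not part of the workshop description.

    # Minimal adversarial-perturbation sketch (FGSM-style, L_inf budget) on a toy linear model.
    import numpy as np

    w = np.array([0.8, -1.2, 0.5, 0.3])         # illustrative model weights
    b = 0.1
    def predict(x):                              # label = 1 if score >= 0 else 0
        return 1 if np.dot(w, x) + b >= 0 else 0

    x = np.array([0.3, 0.3, 0.1, 0.1])           # clean input, predicted as class 1
    eps = 0.05                                    # small per-feature perturbation budget

    # For a linear model the gradient of the score w.r.t. the input is just w,
    # so step each feature against the current prediction by at most eps.
    direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
    x_adv = x + eps * direction

    print("clean prediction:", predict(x))
    print("adversarial prediction:", predict(x_adv))
    print("max per-feature change:", np.abs(x_adv - x).max())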

While the negative implications of this nascent technology have been widely discussed, researchers in machine learning have yet to explore its positive opportunities in numerous respects. The positive impacts of adversarial machine learning are not limited to boosting the robustness of ML models but cut across several other domains.

Since there are both positive and negative applications of adversarial machine learning, steering the field in the right direction requires a framework that embraces the positives. This workshop aims to bring together researchers and practitioners from various communities (e.g., machine learning, computer security, data privacy, and ethics) to synthesize promising ideas and research directions, and to foster and strengthen cross-community collaborations on both theoretical studies and practical applications. Unlike previous workshops on adversarial machine learning, our workshop seeks to explore the prospects of the field in addition to reducing the unintended risks it poses to sophisticated ML models.

This is a one-day workshop, planned with a 10-minute opening, 11 invited keynotes, about 9 contributed talks, 2 poster sessions, and 2 special sessions for panel discussion about the prospects and perils of adversarial machine learning.

The workshop is kindly sponsored by RealAI Inc. and Bosch.


Workshop on Socially Responsible Machine Learning Sat 24 Jul 05:40 a.m.  

Chaowei Xiao · Animashree Anandkumar · Mingyan Liu · Dawn Song · Raquel Urtasun · Jieyu Zhao · Xueru Zhang · Cihang Xie · Xinyun Chen · Bo Li

Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems to safety-critical tasks. While the hope is to improve decision-making accuracy and societal outcomes with these ML models, concerns have been raised that they can inflict harm if not developed or used with care. It has been well documented that ML models can: (1) inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups; (2) be vulnerable to security and privacy attacks that deceive the models and leak sensitive information from the training data; and (3) make hard-to-justify predictions that lack transparency. It is therefore essential to build socially responsible ML models that are fair, robust, private, transparent, and interpretable.
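As a concrete example of the first concern listed above, the sketch below computes a simple group-fairness diagnostic, the demographic parity gap between two groups' positive-prediction rates. The synthetic predictions and group labels are placeholders for a real model's outputs.

    # Minimal fairness-check sketch: demographic parity gap between two groups.
    import numpy as np

    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)                  # 0/1 protected attribute
    # Synthetic predictions with a deliberate bias against group 1.
    y_pred = (rng.random(1000) < np.where(group == 0, 0.6, 0.4)).astype(int)

    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    print("positive rate, group 0:", round(float(rate_0), 3))
    print("positive rate, group 1:", round(float(rate_1), 3))
    print("demographic parity gap:", round(float(abs(rate_0 - rate_1)), 3))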

Although extensive studies have been conducted to increase trust in ML, many of them either focus on well-defined problems that are mathematically tractable but hard to adapt to real-world systems, or focus on mitigating risks in real-world applications without providing theoretical justification. Moreover, most work studies these issues separately, and the connections among them are less well understood. This workshop aims to build those connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, and privacy). We aim to synthesize promising ideas and research directions, strengthen cross-community collaborations, and chart out important directions for future work. We have an advisory committee and confirmed speakers whose expertise represents the diversity of the technical problems in this emerging research field.


ICML 2021 Workshop on Computational Biology Sat 24 Jul 05:43 a.m.  

Yubin Xie · Cassandra Burdziak · Amine Remita · Elham Azizi · Abdoulaye Baniré Diallo · Sandhya Prabhakaran · Debora Marks · Dana Pe'er · Wesley Tansey · Julia Vogt · Engelbert Mephu Nguifo · Jaan Altosaar · Anshul Kundaje · Sabeur Aridhi · Bishnu Sarker · Wajdi Dhifli · Alexander Anderson

The ICML Workshop on Computational Biology will highlight how machine learning approaches can be tailored to make discoveries with biological data. Practitioners at the intersection of computation, machine learning, and biology are in a unique position to frame problems in biomedicine, from drug discovery to vaccination risk scores, and the workshop will showcase such recent research. Commodity lab techniques have led to a proliferation of large, complex datasets and call for new methods to interpret these collections of high-dimensional biological data, such as genetic sequences, cellular features, protein structures, and imaging datasets. These data can be used to predict clinical response, uncover new biology, or aid drug discovery.
This workshop aims to bring together interdisciplinary researchers working at the intersection of machine learning and biology, spanning areas such as computational genomics, neuroscience, metabolomics, proteomics, bioinformatics, cheminformatics, pathology, radiology, evolutionary biology, population genomics, phenomics, ecology, cancer biology, causality, and representation learning and disentanglement, to present recent advances and open questions to the machine learning community.
The workshop is a sequel to the WCB workshops we organized at ICML over the last five years, which had excellent line-ups of talks and were well received by the community. Every year, we received 60+ submissions. After multiple rounds of rigorous reviewing, around 50 submissions were selected, from which the best papers were chosen for contributed talks and spotlights and the rest were invited for poster presentations. We have a steadfast and growing base of reviewers making up the Program Committee. For two of the previous editions, a special issue of the Journal of Computational Biology was released with extended versions of a selected set of accepted papers.


Workshop on Computational Approaches to Mental Health @ ICML 2021 Sat 24 Jul 06:20 a.m.  

Niranjani Prasad · Caroline Weis · Shems Saleh · Rosanne Liu · Jake Vasilakes · Agni Kumar · Tianlin Zhang · Ida Momennejad · Danielle Belgrave

The rising prevalence of mental illness poses a growing global burden: one in four people are adversely affected at some point in their lives, accounting for 32.4% of years lived with disability. This has only been exacerbated during the current pandemic; while acute-care capacity has been significantly increased in response to the crisis, many mental health services have at the same time been scaled back. These pressures, together with advances in machine learning (ML), have motivated exploration of how ML methods can be applied to more effective and efficient mental healthcare: continual monitoring of individual mental health; identification of mental health issues through inferences about behaviours on social media, online searches, or mobile apps; predictive models for early diagnosis and intervention; understanding disease progression or recovery; and the personalization of therapies.

This workshop aims to bring together clinicians, behavioural scientists, and machine learning researchers working in various facets of mental health and care provision, to identify the key opportunities and challenges in developing solutions for this domain, and to discuss the progress made.