Peng Xu · Tingting Zhu · Pengkai Zhu · Tianrui Chen · David Clifton · Danielle Belgrave · Yuanting Zhang

Over the past two years, the COVID-19 pandemic has continued to disrupt the world and has changed most aspects of human life. Healthcare AI has a mission to help tackle the problems caused by COVID-19, e.g., COVID-19 vaccine-related prediction and COVID-19 medical imaging diagnosis. As the epidemic develops, the virus keeps mutating, and the related research evolves alongside it. As a result, ever more understanding, observations, and policies become part of daily life. All of these factors bring new challenges and opportunities to scientific research, including Healthcare AI. The goal of this workshop is to bring together perspectives from multiple disciplines (e.g., Healthcare AI, Machine Learning, Medical Image ML, Bioinformatics, Genomics, Epidemiology, Public Health, Health Policy, Computer Vision, Deep Learning, Cognitive Science) to highlight major open questions and to identify collaboration opportunities for addressing outstanding challenges in COVID-19-related Healthcare AI.

Cassandra Burdziak · Yubin Xie · Amine Remita · Mauricio Tec · Achille O R Nazaret · Pascal Notin · Mafalda Dias · Steffan Paul · Cameron Park · Dana Pe'er · Debora Marks · Alexander Anderson · Elham Azizi · Abdoulaye Baniré Diallo · Wesley Tansey · Julia Vogt · Sandhya Prabhakaran

Machine learning advances are used in self-driving cars, speech recognition systems, and translation software. However, the COVID-19 pandemic has highlighted the urgency of translating such advances to the domain of biomedicine. Such a pivot requires new machine learning methods to build long-term vaccines and therapeutic strategies, predict immune avoidance, and better repurpose small molecules as drugs. The ICML Workshop on Computational Biology (WCB) will highlight how machine learning approaches can be tailored to making both translational and basic scientific discoveries with biological data. Practitioners at the intersection of computation, machine learning, and biology are in a unique position to frame problems in biomedicine, from drug discovery to vaccination risk scores, and WCB will showcase such recent research. Commodity lab techniques have led to a proliferation of large, complex datasets and require new methods to interpret these collections of high-dimensional biological data, such as genetic sequences, cellular features, protein structures, and imaging datasets. These data can be used to make new predictions towards clinical response, uncover new biology, or aid in drug discovery. This workshop aims to bring together interdisciplinary machine learning researchers working in areas such as computational genomics; neuroscience; metabolomics; proteomics; bioinformatics; cheminformatics; pathology; radiology; evolutionary biology; population genomics; phenomics; ecology, …

Mojmir Mutny · Willie Neiswanger · Ilija Bogunovic · Stefano Ermon · Yisong Yue · Andreas Krause

Whether in robotics, protein design, or the physical sciences, one often faces decisions about which data to collect or which experiments to perform. There is thus a pressing need for algorithms and sampling strategies that make intelligent decisions about data collection, allowing for data-efficient learning. Experimental design and active learning have been major research focuses within machine learning and statistics, addressing both the theoretical and algorithmic aspects of efficient data collection schemes. The goal of this workshop is to identify the missing links that hinder the direct application of these principled research ideas to practically relevant solutions.
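As an illustrative sketch of the active-learning principle described above, the toy below implements uncertainty sampling: query the unlabeled point whose prediction under the current model is closest to chance. The 1-D logistic model and the pool values are hypothetical, chosen only for illustration.

```python
import math

def predict_proba(w, b, x):
    """Logistic model P(y=1|x) for a single 1-D feature (illustrative only)."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def uncertainty_sampling(pool, w, b):
    """Pick the pool point whose predicted probability is closest to 0.5,
    i.e. the one the current model is least certain about."""
    return min(pool, key=lambda x: abs(predict_proba(w, b, x) - 0.5))

# Toy pool: with w=1, b=0, the most uncertain point is the one nearest 0.
pool = [-3.0, -1.0, 0.2, 2.5]
chosen = uncertainty_sampling(pool, w=1.0, b=0.0)
```

In a full loop, the chosen point would be labeled, the model refit, and the selection repeated, which is the data-efficient collection scheme the workshop description refers to.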

Aahlad Puli · Maggie Makar · Victor Veitch · Yoav Wald · Mark Goldstein · Limor Gultchin · Angela Zhou · Uri Shalit · Suchi Saria

Machine learning models often break when deployed in the wild, despite excellent performance on benchmarks. In particular, models can learn to rely on apparently unnatural or irrelevant features. For instance, 1) in detecting lung disease from chest X-rays, models rely on the type of scanner rather than physiological signals; 2) in natural language inference, models rely on the number of shared words rather than the subject’s relationship with the object; 3) in precision medicine, polygenic risk scores for diseases like breast cancer rely on genes prevalent mainly in European populations, and predict poorly in other populations. In examples like these and others, the undesirable behavior stems from the model exploiting a spurious correlation. Improper treatment of spurious correlations can discourage the use of ML in the real world and lead to catastrophic consequences in extreme cases. The recent surge of interest in this issue is accordingly welcome and timely: more than 50 closely related papers have been published in ICML 2021, NeurIPS 2021, and ICLR 2022 alone. However, the most fundamental questions remain unanswered, e.g.: how should the notion of spurious correlations be made precise? How should one evaluate models in the presence of spurious correlations? In which situations can …
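The failure mode described above can be made concrete with a minimal synthetic sketch (entirely hypothetical data, not from any of the cited studies): a classifier that learned the shortcut feature looks excellent while the spurious correlation holds and collapses when it flips at test time.

```python
import random

random.seed(0)

def make_data(n, spurious_corr):
    """Each example: (core_feature, spurious_feature, label).
    The core feature always matches the label; the spurious feature
    matches it only with probability `spurious_corr`."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        spurious = y if random.random() < spurious_corr else 1 - y
        data.append((y, spurious, y))
    return data

def shortcut_classifier(example):
    """A model that learned the shortcut: predict from the spurious feature."""
    _, spurious, _ = example
    return spurious

def accuracy(data):
    return sum(shortcut_classifier(ex) == ex[2] for ex in data) / len(data)

train_like = make_data(10_000, spurious_corr=0.95)  # correlation holds
shifted = make_data(10_000, spurious_corr=0.05)     # correlation flips
acc_iid, acc_ood = accuracy(train_like), accuracy(shifted)
```

The gap between `acc_iid` (near 0.95) and `acc_ood` (near 0.05) is exactly the benchmark-versus-deployment discrepancy the paragraph describes.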

Francois Lanusse · Marc Huertas-Company · Vanessa Boehm · Brice Menard · Xavier Prochaska · Uros Seljak · Francisco Villaescusa-Navarro · Ashley Villar

As modern astrophysical surveys deliver an unprecedented amount of data, from the imaging of hundreds of millions of distant galaxies to the mapping of cosmic radiation fields at ultra-high resolution, conventional data analysis methods are reaching their limits in both computational complexity and optimality. Deep Learning has rapidly been adopted by the astronomical community as a promising way of exploiting these forthcoming big-data datasets and of extracting the physical principles that underlie these complex observations. This has led to an unprecedented exponential growth of publications, with about 500 astrophysics papers in the last year alone mentioning deep learning or neural networks in their abstract. Yet, many of these works remain at an exploratory level and have not been translated into real scientific breakthroughs. The goal of this workshop is to bring together Machine Learning researchers and domain experts in the field of Astrophysics to discuss the key open issues which hamper the use of Deep Learning for scientific discovery. Rather than focusing on the benefits of deep learning for astronomy, the proposed workshop aims at overcoming its limitations. Topics that we aim to cover include, but are not limited to, high-dimensional Bayesian inference, simulation-based inference, uncertainty quantification and robustness to covariate shifts, …
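Simulation-based inference, one of the topics listed above, can be illustrated in a few lines with the classic rejection-ABC algorithm: draw parameters from the prior, run the simulator, and keep draws whose simulated summary lands close to the observed one. The Gaussian forward model, prior range, and tolerance below are hypothetical toy choices.

```python
import random

random.seed(0)

def simulate(theta, n=100):
    """Hypothetical forward model: mean of n Gaussian draws centred on theta."""
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

def abc_rejection(observed_summary, n_draws=5000, tol=0.1):
    """Approximate Bayesian computation: sample theta from a uniform prior,
    keep it if the simulated summary is within `tol` of the observation."""
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(-5, 5)
        if abs(simulate(theta) - observed_summary) < tol:
            accepted.append(theta)
    return accepted

posterior = abc_rejection(observed_summary=2.0)
estimate = sum(posterior) / len(posterior)
```

The accepted samples approximate the posterior over `theta` without ever evaluating a likelihood, which is the appeal of these methods for expensive astrophysical simulators.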

John Emanuello · Andy Applebaum · William Arbaugh · Jack Davidson · Joseph Edappully · H. Howie Huang · Andrew Golczynski · Nicole Nichols · Tejas Patel · Ahmad Ridley · Vance Wong

Following a series of crippling cyber-attacks that targeted major parts of the public and social sectors, including schools, hospitals, critical infrastructure, and private businesses, the global community has increased its attention on the wider societal impacts of major cyber security events, forming task forces like the UN Open Ended Working Group on Cyber Security and undertaking policy efforts to mitigate these impacts. These actions are important, but policy changes represent only one side of the solution. On the other side are technical developments, within which machine learning has been proposed as a key component of future cyber defense tools, requiring rapid development to provide the speed and scale needed to detect and respond to new and emerging cyber security threats. Cybersecurity is inherently a systems problem, and piece-wise application of off-the-shelf ML tools leaves critical gaps in both the sophistication and the interpretable context needed for comprehensive security systems. To successfully develop ML-based cybersecurity defenses, a greater degree of cross-pollination across the ML and cybersecurity communities is needed, because both are highly specialized technical domains. Moreover, the requisite ML topics needed to successfully leverage ML for cybersecurity — such as time series analytics, game theory, deep learning, reinforcement learning, representation learning, …

Lora Aroyo · Newsha Ardalani · Colby Banbury · Gregory Diamos · William Gaviria Rojas · Tzu-Sheng Kuo · Mark Mazumder · Peter Mattson · Praveen Paritosh

This workshop proposal builds on the success of the 1st Data-Centric AI Workshop organized at NeurIPS 2021, which attracted more than 160 submissions and close to 200 participants. It expands that effort by engaging the community with the active interdisciplinary MLCommons community of practitioners, researchers, and engineers from both academia and industry, presenting the current state of the art, work in progress, and a set of open problems in the field of benchmarking data for ML. Many of these areas are in a nascent stage, and we hope to further their development by knitting them together into a coherent whole. We seek to drive progress in addressing these core problems by promoting the creation of a set of benchmarks for data quality and data-related algorithms. We want to bring together work that pushes forward this new view of data-centric ML benchmarks, e.g., the initiatives at MLCommons, a non-profit that operates the MLPerf benchmarks that have become the standard for AI chip speed, but also others including Dynabench, OpenML, the Data-Centric AI hub, etc. We envision MLCommons as providing a framework and resources for the evolution of benchmarks in this space, and our workshop as showcasing the best innovations revealed by those benchmarks and providing a …

Tegan Emerson · Tim Doster · Henry Kvinge · Alexander Cloninger · Sarah Tymochko

Much of the data that is fueling current rapid advances in machine learning is high dimensional, structurally complex, and strongly nonlinear. This poses challenges for researchers' intuition when they ask (i) how and why current algorithms work and (ii) what tools will lead to the next big breakthrough. Mathematicians working in topology, algebra, and geometry have more than a hundred years' worth of finely-developed machinery whose purpose is to give structure to, help build intuition about, and generally better understand spaces and structures beyond those that we can naturally understand. This workshop will showcase work which brings methods from topology, algebra, and geometry to bear on challenging questions in machine learning. With this workshop we will create a vehicle for disseminating machine learning techniques that utilize rich mathematics and address core challenges described in the ICML call for papers. Additionally, this workshop creates an opportunity for the presentation of approaches which may address critical, domain-specific ML challenges but do not necessarily demonstrate improved performance on mainstream, data-rich benchmarks. To this end, our workshop will open up ICML to new researchers who in the past were not able to discuss their novel but dataset-dependent analysis methods. We interpret topology, …

Huan Zhang · Leslie Rice · Kaidi Xu · aditi raghunathan · Wan-Yi Lin · Cho-Jui Hsieh · Clark Barrett · Martin Vechev · Zico Kolter

Formal verification of machine learning-based building blocks is important for complex and critical systems, such as autonomous vehicles, medical devices, or cybersecurity systems, where guarantees on safety, fault tolerance, and correctness are essential. Formal verification of machine learning is an emerging and interdisciplinary field, intersecting with computer-aided verification, programming languages, robotics, computer security, and optimization, with many challenging open problems. This workshop aims to raise awareness of the importance of formal verification methods in the machine learning community and to bring together researchers and practitioners interested in this emerging field from a broad range of disciplines and backgrounds. Organizers of this workshop include pioneering proponents of machine learning verification, and the six confirmed invited speakers have done solid work in this field and bring diverse research and demographic backgrounds. The workshop includes posters, contributed talks, and a panel to encourage novel contributed work and interdisciplinary discussions on open challenges.

Zenna Tavares · Emily Mackevicius · Elias Bingham · Nan Rosemary Ke · Talia Ringer · Armando Solar-Lezama · Nada Amin · John Krakauer · Robert O Ness · Alexis Avedisian

A long-standing objective of AI research has been to discover theories of reasoning that are general: accommodating various forms of knowledge and applicable across a diversity of domains. The last two decades have brought steady advances toward this goal, notably in the form of mature theories of probabilistic and causal inference, and in the explosion of reasoning methods built upon the deep learning revolution. However, these advances have only further exposed gaps in our basic understanding of reasoning and limitations in the flexibility and composability of automated reasoning technologies. This workshop aims to reinvigorate work on the grand challenge of developing a computational foundation for reasoning in minds, brains, and machines.

Maithra Raghu · Urvashi Khandelwal · Chiyuan Zhang · Matei Zaharia · Alexander Rush

In just the past couple of years, we have seen significant advances in the capabilities of (Large) Language Models. One of the most striking capabilities of these systems is knowledge retrieval: Language Models can answer a diverse set of questions, which differ substantially in the domain knowledge needed for their responses and in their input structure. The precise methods for knowledge retrieval vary from the language model directly generating a response (parametric approaches), to a combination of generation and referencing an external knowledge corpus, e.g., retrieval-augmented generation, to primarily using an external knowledge corpus with language model embeddings (semi-parametric approaches). Despite the rapid advances, there remain many pressing open questions on the limits of knowledge retrieval with language models, and on the connections between these different approaches. How factual are generated responses, and how does this vary with question complexity, model scale, and, importantly, different methods of knowledge retrieval? How important is the role of (self-supervised/supervised) pretraining? What are the tradeoffs between few-shot (prompt-based) approaches and finetuning when adapting to novel domains? And relatedly, to what extent do different knowledge retrieval approaches generalize to unseen settings? This workshop seeks to bring together a diverse set of researchers across NLP, Machine …
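The retrieval-augmented pattern mentioned above can be sketched end to end with toy components: retrieve the corpus passage nearest the query, then condition generation on it by prepending it to the prompt. The bag-of-words "embedding", the three-document corpus, and the prompt template are all hypothetical stand-ins for a learned encoder and a real knowledge base.

```python
from collections import Counter
import math

corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Transformers use self-attention over token embeddings.",
    "Differential privacy adds calibrated noise to query answers.",
]

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would use a learned encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Semi-parametric step: find the k most similar corpus passages."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query):
    """Condition the (parametric) generator on the retrieved evidence."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("Where is the Eiffel Tower located?")
```

A language model completing `prompt` can then ground its answer in the retrieved passage rather than in its parameters alone, which is the distinction between the parametric and semi-parametric approaches the paragraph contrasts.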

Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Hima Lakkaraju · Sanmi Koyejo

Adversarial machine learning (AdvML), which aims at tricking ML models by providing deceptive inputs, has been identified as a powerful method to improve various trustworthiness metrics (e.g., adversarial robustness, explainability, and fairness) and to advance versatile ML paradigms (e.g., supervised and self-supervised learning, and static and continual learning). As a consequence of the proliferation of AdvML-inspired research works, the proposed workshop, New Frontiers in AdvML, aims to identify the challenges and limitations of current AdvML methods and to explore new prospective and constructive views of AdvML across the full theory/algorithm/application stack. The workshop will explore the new frontiers of AdvML from the following perspectives: (1) advances in foundational AdvML research, (2) principles and practice of scalable AdvML, and (3) AdvML for good. This will be a full-day workshop that accepts full paper submissions (up to 6 pages) as well as “blue sky” extended abstract submissions (up to 2 pages).

Jonathan Francis · Bingqing Chen · Hitesh Arora · Xinshuo Weng · Siddha Ganju · Daniel Omeiza · Jean Oh · Erran Li · Sylvia Herbert · Eric Nyberg

We propose the 1st ICML Workshop on Safe Learning for Autonomous Driving (SL4AD) as a venue for researchers in artificial intelligence to discuss research problems in autonomous driving, with a specific focus on safe learning. While there have been significant advances in vehicle autonomy (e.g., perception, trajectory forecasting, planning and control, etc.), it is of paramount importance for autonomous systems to adhere to safety specifications, as any safety infraction in urban and highway driving, or high-speed racing, could lead to catastrophic failures. We envision the workshop bringing together regulators, researchers, and industry practitioners from different AI subfields, to work towards safer and more robust autonomous technology. This workshop aims to: (i) highlight open questions about safety issues when autonomous agents must operate in uncertain and dynamically complex real-world environments; (ii) bring together researchers and industrial practitioners in autonomous driving with control theoreticians in safety analysis, dependability, and verification; (iii) provide a strong AI benchmark, where the joint evaluation of the safety, performance, and generalisation capabilities of AD perception and control algorithms is systematically performed; (iv) provide a forum for discussion among researchers, industrial practitioners, and regulators on the core challenges, promising solution strategies, fundamental limitations, and regulatory realities involved in deploying …

Rachel Manzelli · Brian Kulis · Sadie Allen · Sander Dieleman · Yu Zhang

The 1st Machine Learning for Audio Synthesis workshop at ICML will attempt to cover the space of novel methods and applications of audio generation via machine learning. These include, but are not limited to: methods of speech modeling, environmental sound generation or other forms of ambient sound, novel generative models, music generation in the form of raw audio, and text-to-speech methods. Audio synthesis plays a significant and fundamental role in many audio-based machine learning systems, including smart speakers and voice-based interaction systems, real-time voice modification systems, and music or other content generation systems. We plan to solicit original workshop papers in these areas, some of which will be presented as contributed talks and spotlights. Alongside these presentations will be talks from invited speakers, a poster session and interactive live demo session, and an invited speaker panel. We believe that a machine learning workshop focused on generation in the audio domain provides a good opportunity to bring together both practitioners of audio generation tools and core machine learning researchers interested in audio, in order to forge new directions in this important area of research.

Gautam Kamath · Audra McMillan

Differential privacy is a promising approach to privacy-preserving data analysis. It has been the subject of a decade of intense scientific study, and has now been deployed in products at government agencies such as the U.S. Census Bureau and companies like Microsoft, Apple, and Google. MIT Technology Review named differential privacy one of 10 breakthrough technologies of 2020. Since data privacy is a pervasive concern, differential privacy has been studied by researchers from many distinct communities, including machine learning, statistics, algorithms, computer security, cryptography, databases, data mining, programming languages, social sciences, and law. We believe that this combined effort across a broad spectrum of computer science is essential for differential privacy to realize its full potential. To this end, our workshop will stimulate discussion among participants about both the state-of-the-art in differential privacy and the future challenges that must be addressed to make differential privacy more practical.
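For readers new to the area, the canonical building block of differential privacy is the Laplace mechanism, sketched below: add noise scaled to the query's sensitivity divided by the privacy parameter epsilon. The counting-query example values are hypothetical.

```python
import math
import random

random.seed(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a query answer with Laplace noise of scale sensitivity/epsilon,
    which satisfies epsilon-differential privacy for that query."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                  # Uniform(-1/2, 1/2)
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-transform sample from Laplace(0, scale).
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# A counting query changes by at most 1 when one record changes, so
# its sensitivity is 1; smaller epsilon means more noise, more privacy.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

The same trade-off (epsilon down, noise up) is what the deployments at the Census Bureau and elsewhere must tune in practice.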

Andrew Spielberg · Caitlin Mueller · Lydia Chilton · Rafael Gomez-Bombarelli · Vladimir Kim · Daniel Ritchie · Wengong Jin

Recent years have seen a proliferation of models, algorithms, and infrastructure well-suited to complex problems in computational design, from virtual design problems in geometry, program synthesis, and web design to tangible design of molecules, materials, robots, architecture, carpentry, 3D printed models, and other domains. This workshop provides an opportunity for researchers and practitioners to discuss shared problems and solutions in computational design and bridge the gaps between (and within) theory and practice. The workshop will be highly interactive, featuring long talks, short talks, poster sessions, discussion panels, and demos of multiple forms. This is the first workshop of its kind at ICML; we hope that this event will set the stage for many follow-on workshops to come.

Evgenii Nikishin · Pierluca D'Oro · Doina Precup · Andre Barreto · Amir-massoud Farahmand · Pierre-Luc Bacon

The goal of reinforcement learning (RL) is to maximize a reward signal by taking optimal decisions. An RL system typically contains several moving components, possibly including a policy, a value function, and a model of the environment. We refer to decision awareness as the notion that each of the components, and their combination, should be explicitly trained to help the agent improve the total amount of collected reward. To better understand decision awareness, consider as an example a model-based method. For environments with rich observations (e.g., pixel-based ones), the world model is complex, and standard approaches would need a large number of samples and a high-capacity function approximator to learn a reasonable approximation of the dynamics. However, a decision-aware agent might recognize that modeling all the granular complexity of the environment is neither feasible nor necessary to learn an optimal policy, and instead focus on modeling aspects that are important for decision making. Decision awareness goes beyond the model learning aspect. In actor-critic algorithms, a critic is trained to predict the expected return and is later used to aid policy learning. Is return prediction an optimal strategy for critic learning? And, in general, what is the best way to learn each component …

Roland S. Zimmermann · Julian Bitterwolf · Evgenia Rusak · Steffen Schneider · Matthias Bethge · Wieland Brendel · Matthias Hein

Deep vision models are prone to shortcut learning and are vulnerable to adversarial attacks as well as to natural and synthetic image corruptions. While OOD test sets have been proposed to measure the vulnerability of DNNs to distribution shifts of different kinds, it has been shown that performance on popular OOD test sets such as ImageNet-C or ObjectNet is strongly correlated with performance on clean ImageNet. Since performance on clean ImageNet clearly tests IID but not OOD generalization, this calls for new, challenging OOD datasets testing different aspects of generalization. Our goal is to bring the robustness, domain adaptation, and out-of-distribution detection communities together to work on a new broad-scale benchmark that tests diverse aspects of current computer vision models and guides the way towards the next generation of models. Submissions to this workshop will contain novel datasets, metrics, and evaluation settings.

Tomasz Trzcinski · marco levorato · Simone Scardapane · Bradley McDanel · Andrea Banino · Carlos Riquelme Ruiz

Deep networks have shown outstanding scaling properties both in terms of data and model sizes: larger does better. Unfortunately, the computational cost of current state-of-the-art methods is prohibitive. A number of new techniques have recently arisen to address and improve this fundamental quality-cost trade-off. For instance, methods like conditional computation, adaptive computation, dynamic model sparsification, and early-exit approaches are all aimed at addressing this trade-off. This workshop explores such exciting and practically relevant research avenues. More specifically, as part of contributed content we will invite high-quality papers on the following topics: dynamic routing, mixture-of-experts models, early-exit methods, conditional computation, capsules and object-oriented learning, reusable components, online network growing and pruning, online neural architecture search, and applications of dynamic networks (continual learning, wireless/embedded devices, and similar). The workshop is planned as a whole-day event and will feature 2 keynote talks, a mix of panel discussion, contributed and invited talks, and a poster session. The invited speakers cover a diverse range of research fields (machine learning, computer vision, neuroscience, natural language processing) and backgrounds (academic, industry) and include speakers from underrepresented groups. All speakers confirmed their talks and the list ranges from senior faculty members (Gao Huang, Tinne Tuytelaars) to applied and …
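The early-exit idea among the topics above can be sketched in a few lines: run classifier heads in order of cost and return as soon as one head is confident enough, so easy inputs skip the expensive computation. The two lambda "heads" and their logits below are hypothetical stand-ins for real network stages.

```python
import math

def softmax(logits):
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_exit_predict(x, stages, threshold=0.9):
    """Evaluate stages in order of cost; stop at the first stage whose
    top-class confidence clears `threshold`, skipping the rest."""
    probs = None
    for depth, stage in enumerate(stages, start=1):
        probs = softmax(stage(x))
        if max(probs) >= threshold:
            break
    return probs.index(max(probs)), depth

# Hypothetical heads: a cheap, uncertain one and an expensive, confident one.
cheap_head = lambda x: [0.1, 0.2]    # near-uniform logits -> low confidence
deep_head = lambda x: [4.0, -4.0]    # confident logits
label, exited_at = early_exit_predict([0.0], [cheap_head, deep_head])
```

Returning the exit depth alongside the label makes the quality-cost trade-off explicit: average depth over a dataset is a direct proxy for compute saved.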

Christian Schroeder · Yang Zhang · Anisoara Calinescu · Dylan Radovic · Prateek Gupta · Jakob Foerster

Many of the world's most pressing issues, such as climate change, pandemics, financial market stability, and fake news, are emergent phenomena that result from the interaction between a large number of strategic or learning agents. Understanding these systems is thus a crucial frontier for scientific and technological development that has the potential to permanently improve the safety and living standards of humanity. Agent-Based Modelling (ABM), also known as individual-based modelling, is an approach to creating simulations of these types of complex systems by explicitly modelling the actions and interactions of the individual agents contained within. However, current methodologies for calibrating and validating ABMs rely on human expert domain knowledge and hand-coded behaviours for individual agents and environment dynamics. Recent progress in AI has the potential to offer exciting new approaches to learning, calibrating, validating, analysing, and accelerating ABMs. This interdisciplinary workshop is meant to bring together practitioners and theorists to boost ABM method development in AI, and to stimulate novel applications across disciplinary boundaries, making ICML the ideal venue. Our inaugural workshop will be organised along two axes. First, we seek to provide a venue where ABM researchers from a variety of domains can introduce AI researchers to their respective domain …
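To make the ABM idea concrete for readers coming from ML, here is a minimal agent-based SIR epidemic: each agent carries its own state, and macro-level dynamics (the outbreak) emerge from local, stochastic interactions. All parameter values are hypothetical.

```python
import random

random.seed(1)

def sir_abm(n_agents=500, n_steps=60, contacts=5, p_infect=0.05, p_recover=0.1):
    """Minimal agent-based SIR model: each step, every infected agent meets
    a few random agents, may infect susceptible ones, and may recover."""
    state = ["S"] * n_agents
    for i in range(5):                     # seed a handful of infections
        state[i] = "I"
    for _ in range(n_steps):
        infected = [i for i, s in enumerate(state) if s == "I"]
        for i in infected:
            for _ in range(contacts):
                j = random.randrange(n_agents)
                if state[j] == "S" and random.random() < p_infect:
                    state[j] = "I"
            if random.random() < p_recover:
                state[i] = "R"
    return {s: state.count(s) for s in ("S", "I", "R")}

counts = sir_abm()
```

Calibrating parameters like `p_infect` against observed data, by hand today, is exactly the step where the workshop hopes learned methods can replace hand-coding.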

Gonçalo Mordido · Yoshua Bengio · Ghouthi BOUKLI HACENE · Vincent Gripon · François Leduc-Primeau · Vahid Partovi Nia · Julie Grollier

To reach top-tier performance, deep learning models usually require a large number of parameters and operations, using considerable power and memory. Several methods have been proposed to tackle this problem by leveraging quantization of parameters, pruning, clustering of parameters, decompositions of convolutions, or distillation. However, most of these works focus mainly on improving efficiency at inference time, disregarding the training cost. In practice, though, most of the energy footprint of deep learning results from training. Hence, this workshop focuses on reducing the training complexity of deep neural networks. Our aim is to encourage submissions specifically concerning the reduction in energy, time, or memory usage at training time. Topics of interest include but are not limited to: (i) compression methods for memory and complexity reduction during training, (ii) energy-efficient hardware architectures, (iii) energy-efficient training algorithms, (iv) novel energy models or energy-efficiency training benchmarks, (v) practical applications of low-energy training.
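The quantization technique mentioned above can be sketched with a simple symmetric fake-quantization routine: weights are rounded to an int8-style grid (as done in quantization-aware training) while float storage is kept. The weight values below are hypothetical.

```python
def fake_quantize(weights, num_bits=8):
    """Simulate low-precision weights: map each float onto a symmetric
    integer grid with 2**num_bits levels, then scale back to float."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for int8
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / qmax                  # one scale per tensor
    return [round(w / scale) * scale for w in weights]

weights = [0.51, -1.27, 0.004, 1.27]
quantized = fake_quantize(weights)
```

Inserting such a rounding step into the forward pass (with a straight-through gradient) is one of the training-time compression approaches listed under topic (i); the rounding error per weight is bounded by half the grid spacing `scale`.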

Rémy Degenne · Pierre Gaillard · Wouter Koolen · Aadirupa Saha

While online learning has become one of the most successful and widely studied approaches in machine learning, particularly in reinforcement learning, online learning algorithms still interact with their environments in a very simple way. The complexity and diversity of the feedback coming from the environment in real applications is often reduced to the observation of a scalar reward. More and more researchers now seek to exploit the available feedback fully to allow faster and more human-like learning. This workshop aims to present a broad overview of the feedback types being actively researched, highlight recent advances, and provide a networking forum for researchers and practitioners.

Huaxiu Yao · Hugo Larochelle · Percy Liang · Colin Raffel · Jian Tang · Ying WEI · Saining Xie · Eric Xing · Chelsea Finn

The past five years have seen rapid progress in large-scale pre-trained models across a variety of domains, such as computer vision, natural language processing, robotics, bioinformatics, etc. Leveraging a huge number of parameters, large-scale pre-trained models are capable of encoding rich knowledge from labeled and/or unlabeled examples. Supervised and self-supervised pre-training have been the two most representative paradigms, through which pre-trained models have demonstrated large benefits on a wide spectrum of downstream tasks. There are also other pre-training paradigms, e.g., meta-learning for few-shot learning, where pre-trained models are trained so that they quickly adapt to solve new tasks. However, there are still many remaining challenges and new opportunities ahead for pre-training. In this workshop, we propose to have the following two foci: (1) Which pre-training methods transfer across different applications/domains, which ones don't, and why? (2) In what settings should we expect pre-training to be effective, compared to learning from scratch?

Umang Bhatt · Katie Collins · Maria De-Arteaga · Bradley Love · Adrian Weller

Machine learning (ML) approaches can support decision-making in key societal settings including healthcare and criminal justice, empower creative discovery in mathematics and the arts, and guide educational interventions. However, deploying such human-machine teams in practice raises critical questions, such as how a learning algorithm may know when to defer to a human teammate and broader systemic questions of when and which tasks to dynamically allocate to a human versus a machine, based on complementary strengths while avoiding dangerous automation bias. Effective synergistic teaming necessitates a prudent eye towards explainability and offers exciting potential for personalisation in interaction with human teammates while considering real-world distribution shifts. In light of these opportunities, our workshop offers a forum to focus and inspire core algorithmic developments from the ICML community towards efficacious human-machine teaming, and an open environment to advance critical discussions around the issues raised by human-AI collaboration in practice.

Ayush Sekhari · Gautam Kamath · Jayadev Acharya

In modern ML domains, state-of-the-art performance is attained by highly overparameterized models that are expensive to train, costing weeks of time and millions of dollars. At the same time, after deploying the model, the learner may realize issues such as leakage of private data or vulnerability to adversarial examples. The learner may also wish to impose additional constraints post-deployment, for example, to ensure fairness for different subgroups. Retraining the model from scratch to incorporate additional desiderata would be expensive. As a consequence, one would instead prefer to update the model, which can yield significant savings of resources such as time, computation, and memory over retraining from scratch. Some instances of this principle in action include the emerging field of machine unlearning, and the celebrated paradigm of fine-tuning pretrained models. The goal of our workshop is to provide a platform to stimulate discussion about both the state-of-the-art in updatable ML and future challenges in the field.
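The unlearning idea described above is easiest to see in a model with explicit sufficient statistics, where deleting one user's contribution is exact and far cheaper than retraining. The running-mean "model" below is a deliberately minimal, hypothetical example of that principle.

```python
class MeanModel:
    """A trivial 'model' (a running mean) supporting exact unlearning:
    a point's contribution is subtracted from sufficient statistics
    instead of retraining on the remaining data from scratch."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def add(self, x):
        self.total += x
        self.count += 1

    def unlearn(self, x):
        """Remove x's contribution; identical to retraining without x."""
        self.total -= x
        self.count -= 1

    @property
    def mean(self):
        return self.total / self.count

model = MeanModel()
for x in [1.0, 2.0, 3.0, 10.0]:
    model.add(x)
model.unlearn(10.0)   # remove one record without retraining
```

For deep networks no such exact update generally exists, which is precisely why approximate unlearning and efficient fine-tuning are open problems for this workshop.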

Alice Baird · Panagiotis Tzirakis · Kory Mathewson · Gauthier Gidel · Eilif Muller · Bjoern Schuller · Erik Cambria · Dacher Keltner · Alan Cowen

The ICML Expressive Vocalizations (ExVo) Workshop and Competition 2022 introduces, for the first time in a competition setting, the machine learning problem of understanding and generating vocal bursts – a wide range of emotional non-linguistic utterances. Participants of ExVo are presented with three tasks that utilize a single dataset. The dataset and three tasks draw attention to new innovations in emotion science and capture 10 dimensions of emotion reliably perceived in distinct vocal bursts: Awe, Excitement, Amusement, Awkwardness, Fear, Horror, Distress, Triumph, Sadness and Surprise. Of particular interest to the ICML community, these tasks highlight the need for advanced machine learning techniques for multi-task learning, audio generation, and personalized few-shot learning of nonverbal expressive style.

With studies of vocal emotional expression often relying on datasets far too small to apply the latest machine learning innovations, the ExVo competition and workshop provide an unprecedented platform for developing and discussing novel strategies for understanding vocal bursts, and will enable unique collaborations among leading researchers from diverse disciplines. Organized by leading researchers in emotion science and machine learning, the following three tasks are proposed: the Multi-task High-Dimensional Emotion, Age & Country Task (ExVo Multi-Task); the Generative Emotional Vocal Burst …

Virginie Do · Thorsten Joachims · Alessandro Lazaric · Joelle Pineau · Matteo Pirotta · Harsh Satija · Nicolas Usunier

Algorithmic decision-making systems are increasingly used in sensitive applications such as advertising, resume reviewing, employment, credit lending, policing, criminal justice, and beyond. The long-term promise of these approaches is to automate, augment, and eventually improve on human decisions, which can be biased or unfair, by leveraging the potential of machine learning to make decisions supported by historical data. Unfortunately, there is a growing body of evidence showing that current machine learning technology is vulnerable to privacy or security attacks, lacks interpretability, or reproduces (and even exacerbates) historical biases or discriminatory behaviors against certain social groups.

Most of the literature on building socially responsible algorithmic decision-making systems focuses on a static scenario where algorithmic decisions do not change the data distribution. However, real-world applications involve nonstationarities and feedback loops that must be taken into account to measure and mitigate unfairness in the long term. These feedback loops may involve the learning process itself, which can be biased by insufficient exploration, or changes in the environment's dynamics due to strategic responses of the various stakeholders. From a machine learning perspective, these sequential processes are primarily studied through counterfactual analysis and reinforcement learning.

The purpose of this workshop is to bring together researchers …

Elan Rosenfeld · Saurabh Garg · Shibani Santurkar · Jamie Morgenstern · Hossein Mobahi · Zachary Lipton · Andrej Risteski

The importance of robust predictions continues to grow as machine learning models are increasingly relied upon in high-stakes settings. Ensuring reliability in real-world applications remains an enormous challenge, particularly because data in the wild frequently differs substantially from the data on which models were trained. This phenomenon, broadly known as “distribution shift”, has become a major recent focus of the research community.

With the growing interest in addressing this problem has come growing awareness of the multitude of possible meanings of “distribution shift” and the importance of understanding the distinctions between them: which types of shift occur in the real world, and under which of these is generalization feasible? Negative results seem just as common as positive ones; where provable generalization is possible, it often depends on strong structural assumptions whose likelihood of holding in reality is questionable. Existing approaches often lack rigor and clarity with regard to the precise problem they are trying to solve. Some work has been done to precisely define distribution shift and to produce benchmarks which properly reflect real-world distribution shift, but overall there seems to be little communication between the communities tackling foundations and applications respectively. Recent strides have been made to move beyond …

Yuanqi Du · Tianfan Fu · Wenhao Gao · Kexin Huang · Shengchao Liu · Ziming Liu · Hanchen Wang · Connor Coley · Le Song · Linfeng Zhang · Marinka Zitnik

Machine learning (ML) has revolutionized a wide array of scientific disciplines, including chemistry, biology, physics, materials science, neuroscience, earth science, cosmology, electronics, and mechanical science. It has helped solve scientific challenges never solved before, e.g., predicting 3D protein structure, imaging black holes, and automating drug discovery. Despite this promise, several critical gaps stifle algorithmic and scientific innovation in AI for Science: (1) under-explored theoretical analysis, (2) unrealistic methodological assumptions or directions, (3) overlooked scientific questions, (4) limited exploration at the intersections of multiple disciplines, (5) science of science, and (6) responsible use and development of AI for science. However, very little work has been done to bridge these gaps, mainly because of the missing link between distinct scientific communities. While many workshops focus on AI for specific scientific disciplines, they are all concerned with the methodological advances within a single discipline (e.g., biology) and are thus unable to examine the crucial questions mentioned above. This workshop will fulfill this unmet need and facilitate community building; with hundreds of ML researchers beginning projects in this area, the workshop will bring them together to consolidate the fast-growing area of AI for Science into a recognized field.

Mihaela Rosca · Chongli Qin · Julien Mairal · Marc Deisenroth

In machine learning, discrete-time approaches such as gradient descent algorithms and discrete building layers for neural architectures have traditionally dominated. Recently, we have seen that by bridging these discrete systems with their continuous counterparts we can not only develop new insights but also construct novel and competitive ML approaches. By leveraging time, we can tap into centuries of research on dynamical systems, numerical integration, and differential equations, and continue enhancing what is possible in ML. The workshop aims to disseminate knowledge about the use of continuous-time methods in ML; to create a discussion forum and build a vibrant community around the topic; to provide a preview of what dynamical-systems methods might further bring to ML; to identify the biggest hurdles in using continuous-time systems in ML and steps to alleviate them; and to showcase how continuous-time methods can enable ML to have a large impact in certain application domains, such as climate prediction and the physical sciences. Recent work has shown that continuous-time approaches can be useful in ML, but their applicability can be extended by increasing the visibility of these methods, fostering collaboration and an interdisciplinary approach to ensure their long-lasting impact. We thus …
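The discrete-continuous bridge described above has a classic instance: gradient descent is the explicit Euler discretization of the gradient-flow ODE dx/dt = -∇f(x). The sketch below (the quadratic objective is an assumption chosen so the continuous solution has a closed form) compares Euler steps against the exact flow:

```python
import math

# Gradient descent as explicit Euler on the gradient flow dx/dt = -f'(x).
# Illustrative objective (an assumption): f(x) = 0.5 * x**2, whose
# gradient flow has the closed-form solution x(t) = x0 * exp(-t).

def grad(x):
    return x          # f'(x) for f(x) = 0.5 * x**2

x0 = 1.0
h, steps = 0.01, 100  # step size h plays the role of the time increment dt
x = x0
for _ in range(steps):
    x -= h * grad(x)  # one Euler step == one gradient descent update

exact = x0 * math.exp(-h * steps)  # continuous-time solution at t = h * steps
print(round(x, 4), round(exact, 4))
```

Viewing step size as discretization error is exactly what lets tools from numerical integration (stability analysis, higher-order and implicit schemes) inform the design of optimizers and of continuous-depth architectures.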

George Cybenko · Ludmilla Huntsman · Steve Huntsman · Paul Vines

The Disinformation Countermeasures and Machine Learning (DisCoML) workshop at ICML 2022 in Baltimore will address machine learning techniques to counter disinformation. Today, disinformation is an important challenge that all governments and their citizens face, affecting politics, public health, financial markets, and elections. Specific examples such as lynchings catalyzed by disinformation spread over social media highlight that the threat it poses crosses social scales and boundaries. This threat even extends into the realm of military combat, as a recent NATO StratCom experiment highlighted. Machine learning plays a central role in the production and propagation of disinformation. Bad actors scale disinformation operations by using ML-enabled bots, deepfakes, cloned websites, and forgeries. The situation is exacerbated by proprietary algorithms of search engines and social media platforms, driven by advertising models, that can effectively isolate internet users from alternative information and viewpoints. In fact, social media's business model, with its behavioral tracking algorithms, is arguably optimized for launching a global pandemic of cognitive hacking. Machine learning is also essential for identifying and inhibiting the spread of disinformation at internet speed and scale, but DisCoML welcomes approaches that contribute to countering disinformation in a broad sense. While the "cybersecurity paradox" -- i.e., increased technology spending has not …

Ramin Zabih · S. Kevin Zhou · Weina Jin · Yuyin Zhou · Ipek Oguz · Xiaoxiao Li · Yifan Peng · Zongwei Zhou · Yucheng Tang

Applying machine learning (ML) in healthcare is gaining momentum rapidly. However, the black-box nature of existing ML approaches inevitably limits the interpretability and verifiability of clinical predictions. To enhance the interpretability of medical intelligence, it is critical to develop methodologies that explain predictions, as these systems are pervasively being introduced into the healthcare domain, which requires a higher level of safety and security. Such methodologies would make medical decisions more trustworthy and reliable for physicians, which could ultimately facilitate deployment. In addition, it is essential to develop more interpretable and transparent ML systems. For instance, by exploiting structured knowledge or prior clinical information, one can design models that learn representations more aligned with clinical reasoning. This may also help mitigate biases in the learning process, or identify more relevant variables for making medical decisions. In this workshop, we aim to bring together researchers in ML, computer vision, healthcare, medicine, NLP, public health, computational biology, biomedical informatics, and clinical fields to facilitate discussion of related challenges, definitions, formalisms, and evaluation protocols for interpretable medical machine intelligence. The workshop appeals to ICML audiences as interpretability is a major challenge for deploying ML in critical domains such as healthcare. …

Anastasios Angelopoulos · Stephen Bates · Yixuan Li · Ryan Tibshirani · Aaditya Ramdas

While improving prediction accuracy has been the focus of machine learning in recent years, this alone does not suffice for reliable decision-making. Deploying learning systems in consequential settings also requires calibrating and communicating the uncertainty of predictions. A recent line of work we call distribution-free predictive inference (i.e., conformal prediction and related methods) has developed methods that give finite-sample statistical guarantees for any (possibly incorrectly specified) predictive model and any (unknown) underlying distribution of the data, ensuring reliable uncertainty quantification (UQ) for many prediction tasks. This line of work represents a promising new approach to UQ with complex prediction systems but is relatively unknown in the applied machine learning community. Moreover, much remains to be done to integrate distribution-free methods with existing approaches to modern machine learning in computer vision, natural language, reinforcement learning, and so on -- little work has been done to bridge these two worlds. To advance emerging work on distribution-free methods, the workshop has two goals. First, to bring together researchers in distribution-free methods with researchers specializing in applications of machine learning to catalyze work at this interface. Second, to bring together the existing community of distribution-free uncertainty quantification research, as no …
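The finite-sample guarantee described above can be made concrete with split conformal prediction, the simplest method in this family. In the sketch below (the data-generating process and the deliberately misspecified predictor are synthetic assumptions for illustration), a held-out calibration set turns any point predictor into prediction intervals with marginal coverage at least 1 - alpha:

```python
import numpy as np

# Split conformal prediction (a sketch): wrap ANY point predictor with a
# finite-sample marginal coverage guarantee, with no assumptions on the
# model or the data distribution beyond exchangeability.
rng = np.random.default_rng(1)

def model(x):
    return 0.8 * x        # deliberately misspecified: true relation is y = x + noise

# Calibration set: compute nonconformity scores |y - model(x)|.
x_cal = rng.uniform(0, 10, size=500)
y_cal = x_cal + rng.normal(0, 1, size=500)
scores = np.abs(y_cal - model(x_cal))

alpha = 0.1
n = len(scores)
# Quantile level with the finite-sample correction ceil((n+1)(1-alpha))/n.
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

# Intervals [model(x) - q, model(x) + q] cover y with probability >= 1 - alpha,
# despite the model's bias.
x_test = rng.uniform(0, 10, size=2000)
y_test = x_test + rng.normal(0, 1, size=2000)
covered = np.abs(y_test - model(x_test)) <= q
print(covered.mean())   # empirically close to 0.9
```

Note the guarantee is marginal over test points, not conditional on each x; sharpening toward conditional validity is one of the open directions this line of work studies.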