


Workshops
Alessandra Tosi · Nathan Korda · Michael A Osborne · Stephen Roberts · Andrei Paleyes · Fariba Yousefi

Until recently, many industrial Machine Learning applications have been the remit of consulting academics, data scientists within larger companies, and a number of dedicated Machine Learning research labs within a few of the world’s most innovative tech companies. Over the last few years, we have seen the dramatic rise of companies dedicated to providing Machine Learning software-as-a-service tools, with the aim of democratizing access to the benefits of Machine Learning. All these efforts have revealed major hurdles to ensuring the continual delivery of good performance from deployed Machine Learning systems. These hurdles range from challenges in MLOps, to fundamental problems with deploying certain algorithms, to the legal and ethical issues that arise when algorithms make decisions for a business.

This workshop will invite papers related to the challenges in deploying and monitoring ML systems. It will encourage submissions on subjects including: MLOps for deployed ML systems; the ethics of deploying ML systems; useful tools and programming languages for deploying ML systems; specific challenges relating to deploying reinforcement learning in ML systems, performing continual learning, and providing continual delivery in ML systems; and, finally, data challenges for deployed ML systems.

We will also invite the submission of …

Chin-Wei Huang · David Krueger · Rianne Van den Berg · George Papamakarios · Ricky T. Q. Chen · Danilo J. Rezende

Normalizing flows are explicit likelihood models (ELM) characterized by a flexible invertible reparameterization of high-dimensional probability distributions. Unlike other ELMs, they offer both exact and efficient likelihood computation and data generation. Since their recent introduction, flow-based models have seen a significant resurgence of interest in the machine learning community. As a result, powerful flow-based models have been developed, with successes in density estimation, variational inference, and generative modeling of images, audio and video.
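To make the invertible-reparameterization idea concrete, below is a minimal sketch (Python/NumPy, with illustrative parameters rather than anything proposed at the workshop) of the change-of-variables computation for a single affine flow layer, showing why both likelihood evaluation and sampling are exact and cheap:

    import numpy as np

    # Toy affine flow: z = f(x) = (x - shift) / scale, with a standard normal base density.
    # The change-of-variables formula gives
    #   log p(x) = log p_base(f(x)) + log |det df/dx|,
    # which is exact because f is invertible and its Jacobian is easy to compute.

    def affine_flow_logpdf(x, shift, log_scale):
        z = (x - shift) * np.exp(-log_scale)                  # invertible map x -> z
        log_det_jacobian = -np.sum(log_scale)                 # log |det df/dx| for an elementwise affine map
        log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi))    # standard normal base density
        return log_base + log_det_jacobian

    def affine_flow_sample(shift, log_scale, rng):
        z = rng.standard_normal(shift.shape)                  # sample from the base distribution
        return shift + np.exp(log_scale) * z                  # exact sampling via the inverse map

    rng = np.random.default_rng(0)
    shift, log_scale = np.array([1.0, -2.0]), np.array([0.5, 0.0])
    x = affine_flow_sample(shift, log_scale, rng)
    print(affine_flow_logpdf(x, shift, log_scale))            # exact log-likelihood of the sample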

As the field is moving forward, the main goal of the workshop is to consolidate recent progress and connect ideas from related fields. Over the past few years, we’ve seen that normalizing flows are deeply connected to latent variable models, autoregressive models, and more recently, diffusion-based generative models. This year, we would like to further push the forefront of these explicit likelihood models through the lens of invertible reparameterization. We encourage researchers to use these models in conjunction, exploiting their benefits at once, and to work together to resolve some common issues of likelihood-based methods, such as mis-calibration of out-of-distribution uncertainty.

Thang Doan · Bogdan Mazoure · Amal Rannen Triki · Rahaf Aljundi · Vincenzo Lomonaco · Xu He · Arslan Chaudhry

Machine learning systems are commonly applied to isolated tasks (such as image recognition or playing chess) or narrow domains (such as control over similar robotic bodies). It is further assumed that the learning system has simultaneous access to all annotated data points of the tasks at hand. In contrast, Continual Learning (CL), also referred to as Lifelong or Incremental Learning, studies the problem of learning from a stream of data from changing domains, each connected to a different learning task. The objective of CL is to quickly adapt to new situations or tasks by exploiting previously acquired knowledge, while protecting previous learning from being erased.
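As a concrete, if simplistic, picture of how "protecting previous learning" is often operationalized, here is a sketch of rehearsal with a small replay buffer; the model.train_step call is a hypothetical placeholder, and this is only one of many CL strategies:

    import random

    # Minimal rehearsal/replay loop for continual learning: alongside each new batch,
    # replay a few stored examples from earlier tasks so old knowledge keeps receiving
    # training signal. `model.train_step` is a hypothetical placeholder.

    class ReplayBuffer:
        def __init__(self, capacity=1000):
            self.capacity, self.data, self.seen = capacity, [], 0

        def add(self, example):
            # Reservoir sampling keeps a uniform sample over the whole stream.
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(example)
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = example

        def sample(self, k):
            return random.sample(self.data, min(k, len(self.data)))

    def continual_training(model, task_streams, replay_size=32):
        buffer = ReplayBuffer()
        for task in task_streams:                        # tasks arrive sequentially
            for batch in task:
                replayed = buffer.sample(replay_size)
                model.train_step(list(batch) + replayed)  # hypothetical update on new + old data
                for example in batch:
                    buffer.add(example)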

Significant advances have been made in CL over the past few years, mostly through empirical investigations and benchmarking. However, theoretical understanding is still lagging behind. For instance, while Catastrophic Forgetting (CF) is a recurring failure mode that most works try to tackle, the literature offers little theoretical understanding of it. Many real-life applications share common assumptions and settings with CL: what are the convergence guarantees when deploying a certain method? If memory capacity is an important constraint for replay methods, how can we select the minimal examples such that CF …

Hari Prasanna Das · Katarzyna Tokarska · Maria João Sousa · Meareg Hailemariam · David Rolnick · Xiaoxiang Zhu · Yoshua Bengio

The focus of this workshop is on the use of machine learning to help in addressing climate change, encompassing mitigation efforts (reducing the severity of climate change), adaptation measures (preparing for unavoidable consequences), and climate science (our understanding of the climate and future climate predictions). Topics within the scope of this workshop include climate-relevant applications of machine learning to the power sector, buildings and transportation infrastructure, agriculture and land use, extreme event prediction, disaster response, climate policy, and climate finance. The goals of the workshop are: (1) to showcase high-impact applications of ML to climate change mitigation, adaptation, and climate science, (2) to demonstrate that the associated ML methods are interesting in their own right, (3) to encourage fruitful collaboration between the ML community and a diverse set of researchers and practitioners from climate change-related fields, and (4) to promote dialogue with decision-makers in the private and public sectors, ensuring that the works presented in this workshop have impact on the thoughtful deployment of ML in climate solutions. Building on our previous workshops in this series, this workshop will have a particular focus on ML for the assessment and implementation of objectives set under the Paris Agreement, though submitted works …

Quanshi Zhang · Tian Han · Lixin Fan · Zhanxing Zhu · Hang Su · Ying Nian Wu

The proposed workshop pays special attention to the theoretical foundations, limitations, and new application trends within the scope of XAI. These issues reflect new bottlenecks in the future development of XAI, for example: (1) there is no theoretical definition of XAI and no solid, widely used formulation for even a specific explanation task; (2) there is no rigorous formulation of the essence of the "semantics" encoded in a DNN; (3) how to bridge the gap between connectionism and symbolism in AI research has not been thoroughly explored; (4) how to evaluate the correctness and trustworthiness of an explanation result is still an open problem; (5) how to connect intuitive explanations (e.g., attribution/importance-based explanations) with a DNN's representation capacity (e.g., its generalization power) remains a significant challenge; and (6) using explanations to guide architecture design or substantially boost a DNN's performance is a bottleneck. Therefore, this workshop aims to bring together researchers, engineers, and industrial practitioners who are concerned about the interpretability, safety, and reliability of artificial intelligence. Through a broad discussion of the above bottleneck issues, we hope to explore new critical and constructive views on the future development of XAI. Research outcomes are …

Feryal Behbahani · Joelle Pineau · Lerrel Pinto · Roberta Raileanu · Aravind Srinivas · Denis Yarats · Amy Zhang

Unsupervised learning has recently begun to deliver on its promise, with tremendous progress in natural language processing and computer vision, where large-scale unsupervised pre-training has enabled fine-tuning on downstream supervised learning tasks with limited labeled data. This is particularly encouraging and appealing in the context of reinforcement learning, considering that it is expensive to perform rollouts in the real world with annotations, either in the form of reward signals or human demonstrations. We therefore believe that a workshop at the intersection of unsupervised and reinforcement learning is timely, and we hope to bring together researchers with diverse views on how to make further progress in this exciting and open-ended subfield.

Besmira Nushi · Adish Singla · Sebastian Tschiatschek

A key challenge for the successful deployment of many real-world, human-facing automated sequential decision-making systems is the need for human-AI collaboration. Effective collaboration ensures that the complementary abilities and skills of the human users and the AI system are leveraged to maximize utility. This is important, for instance, in applications such as autonomous driving, in which a human user’s skill might be required in safety-critical situations, or virtual personal assistants, in which a human user can perform real-world physical interactions that the AI system cannot. Facilitating such collaboration requires cooperation, coordination, and communication, e.g., in the form of accountability, teaching interactions, and provision of feedback. Without effective human-AI collaboration, the utility of automated sequential decision-making systems can be severely limited. Thus, there is a surge of interest in academia and industry in better facilitating human-AI collaboration. Most existing research has focused only on basic approaches to human-AI collaboration, with little attention to long-term interactions and the breadth needed for next-generation applications. In this workshop we bring together researchers to advance this important topic, focusing on the following three directions: (a) accountability and trust; (b) adaptive behavior for long-term collaboration; (c) robust collaboration under mismatch.

Senthil Kumar · Sameena Shah · Joan Bruna · Tom Goldstein · Erik Mueller · Oleg Rokhlenko · Hongxia Yang · Jianpeng Xu · Oluwatobi O Olabiyi · Charese Smiley · C. Bayan Bruss · Saurabh H Nagrecha · Svitlana Vyetrenko

One of the fundamental promises of deep learning is its ability to build increasingly meaningful representations of data from complex but raw inputs. These techniques demonstrate remarkable efficacy on high-dimensional data with distinctive proximity structures (images, natural language, graphs).

Not only are these types of data prevalent in financial services and e-commerce, but they also often capture extremely interesting aspects of social and economic behavior. For example, financial transactions and online purchases can be viewed as edges on graphs of economic activity. To date, these graphs have been far less studied than social networks, though they provide a unique look at behavior, social structures, and risk. Meanwhile, activity or transaction sequences, usually determined by user sessions, can reflect users’ long-term and short-term interests; they can be modeled with sequential models and used to predict a user’s future activities. Although language models have been explored for session data modeling, how to effectively reuse the representations learned on one task for another remains an open question.
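As an illustration of the sequential-modeling viewpoint described above, here is a schematic next-item prediction model over session sequences (PyTorch, with placeholder vocabulary and embedding sizes; not a system used by any of the companies mentioned):

    import torch
    import torch.nn as nn

    # Schematic next-item prediction over a user's activity/transaction sequence:
    # embed item ids, run a GRU over the session, and score the next item.
    # All sizes below are illustrative placeholders.

    class NextItemModel(nn.Module):
        def __init__(self, num_items=10_000, dim=64):
            super().__init__()
            self.embed = nn.Embedding(num_items, dim)
            self.gru = nn.GRU(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, num_items)

        def forward(self, item_ids):                # item_ids: (batch, seq_len) of int64
            h, _ = self.gru(self.embed(item_ids))
            return self.out(h[:, -1])               # logits over the next item

    model = NextItemModel()
    session = torch.randint(0, 10_000, (2, 5))      # two toy sessions of length 5
    logits = model(session)                         # (2, num_items) next-item scores
    loss = nn.functional.cross_entropy(logits, torch.randint(0, 10_000, (2,)))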

Our goal is to bring together researchers from different domains to discuss the application of representation learning to financial services and e-commerce. For the first time, four major e-commerce companies (Amazon, …

Yuxi Li · Minmin Chen · Omer Gottesman · Lihong Li · Zongqing Lu · Rupam Mahmood · Niranjani Prasad · Zhiwei (Tony) Qin · Csaba Szepesvari · Matthew Taylor

Reinforcement learning (RL) is a general learning, prediction, and decision-making paradigm that applies broadly across many disciplines, including science, engineering, and the humanities. RL has seen prominent successes in many problems, such as games, robotics, and recommender systems. However, applying RL in the real world remains challenging, and a natural question is:

Why isn’t RL used even more often and how can we improve this?

The main goals of the workshop are to: (1) identify key research problems that are critical for the success of real-world applications; (2) report progress on addressing these critical issues; and (3) have practitioners share their success stories of applying RL to real-world problems, and the insights gained from such applications.

We invite paper submissions successfully applying RL algorithms to real-life problems and/or addressing practically relevant RL issues. Our topics of interest are general, including (but not limited to): 1) practical RL algorithms, covering all algorithmic challenges of RL, especially those that directly address challenges faced by real-world applications; 2) practical issues: generalization, sample efficiency, exploration, reward, scalability, model-based learning, prior knowledge, safety, accountability, interpretability, reproducibility, hyper-parameter tuning; and 3) applications.

We have 6 premier panel discussions and 70+ great papers/posters. Welcome!

Gresa Shala · Frank Hutter · Joaquin Vanschoren · Marius Lindauer · Katharina Eggensperger · Colin White · Erin LeDell

Machine learning (ML) has achieved considerable success in recent years, but this success often relies on human experts, who construct appropriate features, design learning architectures, set their hyperparameters, and develop new learning algorithms. Driven by the demand for robust, off-the-shelf ML methods from an ever-growing community, the research area of AutoML targets the progressive automation of machine learning, aiming to make effective methods available to everyone. Hence, the workshop targets a broad audience, ranging from core ML researchers in different fields connected to AutoML, such as neural architecture search (NAS), hyperparameter optimization, meta-learning, and learning-to-learn, to domain experts aiming to apply ML to new types of problems.

Balaji Lakshminarayanan · Dan Hendrycks · Yixuan Li · Jasper Snoek · Silvia Chiappa · Sebastian Nowozin · Thomas Dietterich

There has been growing interest in ensuring that deep learning systems are robust and reliable. Challenges arise when models receive samples drawn from outside the training distribution. For example, a neural network tasked with classifying handwritten digits may assign high confidence predictions to cat images. Anomalies are frequently encountered when deploying ML models in the real world. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving vehicles and medical diagnosis systems. Generalization to unseen and worst-case inputs is also essential for robustness to distributional shift. In order to have ML models safely deployed in open environments, we must deepen technical understanding in the following areas:

(1) Learning algorithms that can detect changes in data distribution (e.g. out-of-distribution examples) and improve out-of-distribution generalization (e.g. temporal, geographical, hardware, adversarial shifts);
(2) Mechanisms to estimate and calibrate confidence produced by neural networks in typical and unforeseen scenarios;
(3) Methods that guide learning toward an understanding of the underlying causal mechanisms, which can guarantee robustness under distribution shift.

In order to achieve these goals, it is critical to dedicate substantial effort to
(4) Creating benchmark datasets and protocols for evaluating model performance under distribution shift
(5) Studying key applications …

Yuyin Zhou · Xiaoxiao Li · Vicky Yao · Pengtao Xie · DOU QI · Nicha Dvornek · Julia Schnabel · Judy Wawira · Yifan Peng · Ronald Summers · Alan Karthikesalingam · Lei Xing · Eric Xing

Applying machine learning (ML) in healthcare is gaining momentum rapidly. However, the black-box nature of existing ML approaches inevitably limits the interpretability and verifiability of clinical predictions. To enhance the interpretability of medical intelligence, it becomes critical to develop methodologies that explain predictions, as these systems are being pervasively introduced into the healthcare domain, which requires a high level of safety and security. Such methodologies would make medical decisions more trustworthy and reliable for physicians, which could ultimately facilitate deployment. On the other hand, it is also essential to develop more interpretable and transparent ML systems. For instance, by exploiting structured knowledge or prior clinical information, one can design models to learn aspects more coherent with clinical reasoning. This may also help mitigate biases in the learning process, or identify more relevant variables for making medical decisions.

In this workshop, we aim to bring together researchers in ML, computer vision, healthcare, medicine, NLP, and clinical fields to facilitate discussions of the challenges, definitions, formalisms, and evaluation protocols for interpretable medical machine intelligence. Additionally, we will introduce possible solutions such as logic and symbolic reasoning over medical knowledge graphs, uncertainty quantification, composition models, etc. We hope that the …

Rachel Cummings · Gautam Kamath

Differential privacy is a promising approach to privacy-preserving data analysis. It has been the subject of a decade of intense scientific study, and has now been deployed in products at government agencies such as the U.S. Census Bureau and companies like Microsoft, Apple, and Google. MIT Technology Review named differential privacy one of 10 breakthrough technologies of 2020.
Since data privacy is a pervasive concern, differential privacy has been studied by researchers from many distinct communities, including machine learning, statistics, algorithms, computer security, cryptography, databases, data mining, programming languages, social sciences, and law. We believe that this combined effort across a broad spectrum of computer science is essential for differential privacy to realize its full potential. To this end, our workshop will stimulate discussion among participants about both the state-of-the-art in differential privacy and the future challenges that must be addressed to make differential privacy more practical.
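For readers new to the area, the following is a minimal sketch of the Laplace mechanism, the textbook way to answer a counting query with epsilon-differential privacy; the data and query below are toy placeholders:

    import numpy as np

    # Laplace mechanism: to release a count query with epsilon-differential privacy,
    # add noise drawn from Laplace(sensitivity / epsilon). A counting query has
    # sensitivity 1, since adding or removing one record changes the count by at most 1.

    def private_count(records, predicate, epsilon, rng):
        true_count = sum(predicate(r) for r in records)
        noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    rng = np.random.default_rng(0)
    ages = [23, 35, 41, 29, 62, 57]
    print(private_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng))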

Zhiting Hu · Li Erran Li · Willie Neiswanger · Benedikt Boecking · Yi Xu · Belinda Zeng

As the use of machine learning (ML) becomes ubiquitous, there is a growing understanding and appreciation for the role that data plays for building successful ML solutions. Classical ML research has been primarily focused on learning algorithms and their guarantees. Recent progress has shown that data is playing an increasingly central role in creating ML solutions, such as the massive text data used for training powerful language models, (semi-)automatic engineering of weak supervision data that enables applications in few-labels settings, and various data augmentation and manipulation techniques that lead to performance boosts on many real world tasks. On the other hand, data is one of the main sources of security, privacy, and bias issues in deploying ML solutions in the real world. This workshop will focus on the new perspective of machine learning for data --- specifically how ML techniques can be used to facilitate and automate a range of data operations (e.g. ML-assisted labeling, synthesis, selection, augmentation), and the associated challenges of quality, security, privacy and fairness for which ML techniques can also enable solutions.

Niki Kilbertus · Lily Hu · Laura Balzer · Uri Shalit · Alexander D'Amour · Razieh Nabi

As causality enjoys increasing attention in various areas of machine learning, this workshop turns the spotlight on the assumptions behind the successful application of causal inference techniques. It is well known that answering causal queries from observational data requires strong and sometimes untestable assumptions. On the theoretical side, a whole host of settings has been established in which causal effects are identifiable and consistently estimable under a set of assumptions by now considered "standard". While these can be reasonable in specific scenarios, they were often at least partially motivated by rendering estimation theoretically feasible. Such assumptions tell us what we would need to assert about the data-generating process in order to be able to answer causal queries. Unfortunately, in applications we often find them taken for granted as properties that can safely be assumed to hold without further scrutiny. This starts with fundamentally untestable assumptions such as the stable unit treatment value assumption or ignorability, continues with no interference, faithfulness, positivity or overlap, and no unobserved confounding, and even reaches blanket one-size-fits-all assumptions on the linearity of structural equations or the additivity of noise. This situation may lead practitioners to either believe that well-founded causal inference is …

Trevor Darrell · Xin Wang · Li Erran Li · Fisher Yu · Zeynep Akata · Wenwu Zhu · Pradeep Ravikumar · Shiji Zhou · Shanghang Zhang · Kalesha Bullard

Recent years have witnessed a rising need for machine learning systems that can interact with humans in the learning loop. Such systems can be applied to computer vision, natural language processing, robotics, and human-computer interaction. Creating and running such systems calls for interdisciplinary research in artificial intelligence, machine learning, and software engineering design, which we abstract as Human-in-the-Loop Learning (HILL). The HILL workshop aims to bring together researchers and practitioners working on the broad areas of HILL, ranging from interactive/active learning algorithms for real-world decision-making systems (e.g., autonomous driving vehicles, robotic systems), to lifelong learning systems that retain knowledge from different tasks and selectively transfer knowledge to learn new tasks over a lifetime, to models with strong explainability, as well as interactive system designs (e.g., data visualization, annotation systems). The HILL workshop continues the previous effort to provide a platform for researchers from interdisciplinary areas to share their recent research. In this year’s workshop, a special feature is to encourage a debate between HILL and label-efficient learning: are these two learning paradigms contradictory, or can they be organically combined to create a more powerful learning system? We believe the theme …

Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian

Adversarial machine learning is a new gamut of technologies that aim to study the vulnerabilities of ML approaches and detect malicious behaviors in adversarial settings. Adversarial agents can deceive an ML classifier by significantly altering its response with imperceptible perturbations of the inputs. Without being alarmist, researchers in machine learning have a responsibility to preempt attacks and build safeguards, especially when the task is critical for information security or human lives. We need to deepen our understanding of machine learning in adversarial environments.
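To make the notion of an imperceptible perturbation concrete, here is a sketch of the well-known fast gradient sign method (FGSM) in PyTorch; the classifier, input, and epsilon below are placeholders, and this is just one standard attack among many:

    import torch
    import torch.nn as nn

    # Fast Gradient Sign Method: perturb the input in the direction of the sign of the
    # loss gradient, bounded by a small epsilon, so the change is hard to perceive but
    # can flip the classifier's prediction. `model` is any differentiable classifier.

    def fgsm_attack(model, x, label, epsilon=0.03):
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), label)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + epsilon * x_adv.grad.sign()   # one signed-gradient step
            x_adv = x_adv.clamp(0.0, 1.0)                 # stay in the valid pixel range
        return x_adv.detach()

    # Toy usage with a placeholder linear "image" classifier.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    label = torch.tensor([3])
    x_adv = fgsm_attack(model, x, label)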

While the negative implications of this nascent technology have been widely discussed, researchers in machine learning have yet to explore its positive opportunities in numerous respects. The positive impacts of adversarial machine learning are not limited to boosting the robustness of ML models; they cut across several other domains.

Since there are both positive and negative applications of adversarial machine learning, steering adversarial learning in the right direction requires a framework that embraces the positives. This workshop aims to bring together researchers and practitioners from various communities (e.g., machine learning, computer security, data privacy, and ethics) to synthesize promising ideas and research directions and to foster and strengthen cross-community collaborations on …

Stratis Tsirtsis · Amir-Hossein Karimi · Ana Lucic · Manuel Gomez-Rodriguez · Isabel Valera · Hima Lakkaraju

Machine learning is increasingly used to inform decision-making in sensitive situations where decisions have consequential effects on individuals' lives. In these settings, in addition to requiring models to be accurate and robust, socially relevant values such as fairness, privacy, accountability, and explainability play an important role in the adoption and impact of said technologies. In this workshop, we focus on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavourably treated by automated decision-making systems. Specifically, we plan to facilitate workshop interactions that will shed light on the following three questions: (i) What are the practical, legal and ethical considerations that decision-makers need to account for when providing recourse? (ii) How do humans understand and act based on recourse explanations from a psychological and behavioral perspective? (iii) What are the main technical advances in explainability and causality in ML required for achieving recourse? Our ultimate goal is to foster conversations that will help bridge the gaps arising from the interdisciplinary nature of algorithmic recourse and contribute towards the wider adoption of such methods.

Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Shiqiang Wang · Han Yu

Training machine learning models in a centralized fashion often faces significant challenges due to regulatory and privacy concerns in real-world use cases. These challenges include training data distributed across sites, the computational resources needed to create and maintain a central data repository, and regulatory guidelines (GDPR, HIPAA) that restrict sharing sensitive data. Federated learning (FL) is a new paradigm in machine learning that can mitigate these challenges by training a global model using distributed data, without the need for data sharing. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data necessitates familiarization with and adoption of this relevant and timely topic among the scientific community.
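The following sketch illustrates the basic federated-averaging (FedAvg) pattern behind this paradigm: clients run local updates on their private data, and the server only aggregates model parameters. The linear model, synthetic data, and hyperparameters are illustrative assumptions, not a production FL system:

    import numpy as np

    # Sketch of federated averaging (FedAvg) for a linear model: each client runs a few
    # local gradient steps on its private data, and the server aggregates only the
    # resulting parameter vectors (weighted by local data size); raw data never leaves
    # the clients. Data here is synthetic and purely illustrative.

    def local_update(w, X, y, lr=0.1, steps=5):
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)    # gradient of mean squared error
            w = w - lr * grad
        return w

    def fedavg_round(w_global, clients):
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
        return np.average(local_weights, axis=0, weights=sizes)

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(3):                               # three clients with private data
        X = rng.normal(size=(50, 2))
        clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

    w = np.zeros(2)
    for _ in range(20):                              # communication rounds
        w = fedavg_round(w, clients)
    print(w)                                         # should approach true_w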

Despite the advantages of FL, and its successful application in certain industry-based cases, this field is still in its infancy due to new challenges that are imposed by limited visibility of the training data, potential lack of trust among participants training a single model, potential privacy inferences, and in some cases, limited or unreliable connectivity.

The goal of this workshop is to bring together researchers and practitioners interested in FL. This day-long event will facilitate interaction among students, scholars, and industry professionals from around the world to understand the topic, identify technical …

Chaowei Xiao · Animashree Anandkumar · Mingyan Liu · Dawn Song · Raquel Urtasun · Jieyu Zhao · Xueru Zhang · Cihang Xie · Xinyun Chen · Bo Li

Machine learning (ML) systems have been increasingly used in many applications, ranging from decision-making systems to safety-critical tasks. While the hope is to improve decision-making accuracy and societal outcomes with these ML models, concerns have been raised that they can inflict harm if not developed or used with care. It has been well documented that ML models can: (1) inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups; (2) be vulnerable to security and privacy attacks that deceive the models and leak sensitive information from the training data; (3) make hard-to-justify predictions with a lack of transparency. Therefore, it is essential to build socially responsible ML models that are fair, robust, private, transparent, and interpretable.

Although extensive studies have been conducted to increase trust in ML, many of them either focus on well-defined problems that enable nice tractability from a mathematical perspective but are hard to adapt to real-world systems, or they mainly focus on mitigating risks in real-world applications without providing theoretical justifications. Moreover, most work studies those issues separately; the connections among them are less well-understood. This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness …

Yubin Xie · Cassandra Burdziak · Amine Remita · Elham Azizi · Abdoulaye Baniré Diallo · Sandhya Prabhakaran · Debora Marks · Dana Pe'er · Wesley Tansey · Julia Vogt · Engelbert MEPHU NGUIFO · Jaan Altosaar · Anshul Kundaje · Sabeur Aridhi · Bishnu Sarker · Wajdi Dhifli · Alexander Anderson

The ICML Workshop on Computational Biology will highlight how machine learning approaches can be tailored to making discoveries with biological data. Practitioners at the intersection of computation, machine learning, and biology are in a unique position to frame problems in biomedicine, from drug discovery to vaccination risk scores, and the Workshop will showcase such recent research. Commodity lab techniques have led to the proliferation of large, complex datasets and require new methods to interpret these collections of high-dimensional biological data, such as genetic sequences, cellular features or protein structures, and imaging datasets. These data can be used to make new predictions of clinical response, to uncover new biology, or to aid in drug discovery.
This workshop aims to bring together interdisciplinary machine learning researchers working at the intersection of machine learning and biology, in areas such as computational genomics; neuroscience; metabolomics; proteomics; bioinformatics; cheminformatics; pathology; radiology; evolutionary biology; population genomics; phenomics; ecology; cancer biology; causality; and representation learning and disentanglement, to present recent advances and open questions to the machine learning community.
The workshop is a sequel to the WCB workshops we organized in the last five years at ICML, which had excellent line-ups of talks and were well-received by the …

Rishabh Iyer · Abir De · Ganesh Ramakrishnan · Jeff Bilmes

A growing number of machine learning problems involve finding subsets of data points. Examples range from selecting subsets of labeled or unlabeled data points, to subsets of features or model parameters, to subsets of pixels, keypoints, or sentences in image segmentation, correspondence, and summarization problems. The workshop will encompass a wide variety of topics, ranging from theoretical aspects of subset selection (e.g., coresets, submodularity, determinantal point processes) to practical applications, e.g., time- and energy-efficient learning, learning under resource constraints, active learning, human-assisted learning, feature selection, model compression, and feature induction.
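As a small worked example of this theory-meets-practice flavor, here is the classic greedy algorithm for maximizing a facility-location function, a monotone submodular objective for subset selection; the similarity matrix and budget below are toy placeholders:

    import numpy as np

    # Greedy maximization of a facility-location function, a classic submodular objective
    # for subset selection: f(S) = sum_i max_{j in S} sim(i, j). For monotone submodular
    # objectives under a cardinality constraint, greedy enjoys a (1 - 1/e) approximation
    # guarantee. Similarities here are synthetic placeholders.

    def greedy_facility_location(sim, budget):
        n = sim.shape[0]
        selected, coverage = [], np.zeros(n)
        for _ in range(budget):
            # Marginal gain of adding each candidate j to the current set.
            gains = np.maximum(sim, coverage[:, None]).sum(axis=0) - coverage.sum()
            gains[selected] = -np.inf                # never re-select a chosen point
            j = int(np.argmax(gains))
            selected.append(j)
            coverage = np.maximum(coverage, sim[:, j])
        return selected

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    sim = np.exp(-d2 / X.shape[1])                   # RBF similarities in (0, 1]
    print(greedy_facility_location(sim, budget=5))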

We believe that this workshop is very timely since (a) subset selection is naturally emerging and has often been considered in isolation in many of the above applications, and (b) by connecting researchers working on both the theoretical and application domains above, we can foster a much-needed discussion on reusing several technical innovations across these subareas and applications. Furthermore, we would also like to connect researchers working on the theoretical foundations of subset selection (in areas such as coresets and submodularity) with researchers working in applications (such as feature selection, active learning, data-efficient learning, model compression, and human-assisted machine learning).

Niranjani Prasad · Caroline Weis · Shems Saleh · Rosanne Liu · Jake Vasilakes · Agni Kumar · Tianlin Zhang · Ida Momennejad · Danielle Belgrave

The rising prevalence of mental illness has posed a growing global burden, with one in four people adversely affected at some point in their lives, accounting for 32.4% of years lived with disability. This has only been exacerbated during the current pandemic, and while the capacity of acute care has been significantly increased in response to the crisis, it has at the same time led to the scaling back of many mental health services. This, together with the advances in the field of machine learning (ML), has motivated exploration of how machine learning methods can be applied to the provision of more effective and efficient mental healthcare, from varied approaches to continual monitoring of individual mental health or identification of mental health issues through inferences about behaviours on social media, online searches or mobile apps, to predictive models for early diagnosis and intervention, understanding disease progression or recovery, and the personalization of therapies.

This workshop aims to bring together clinicians, behavioural scientists and machine learning researchers working in various facets of mental health and care provision, to identify the key opportunities and challenges in developing solutions for this domain, and to discuss the progress made.

Ahmad Beirami · Flavio Calmon · Berivan Isik · Haewon Jeong · Matthew Nokleby · Cynthia Rush

The empirical success of state-of-the-art machine learning (ML) techniques has outpaced their theoretical understanding. Deep learning models, for example, perform far better than classical statistical learning theory predicts, leading to their widespread use by industry and government. At the same time, the deployment of ML systems that are not fully understood often leads to unexpected and detrimental individual-level impact. Finally, the large-scale adoption of ML means that ML systems are now critical infrastructure on which millions rely. In the face of these challenges, there is a critical need for theory that provides rigorous performance guarantees for practical ML models; guides the responsible deployment of ML in applications of social consequence; and enables the design of reliable ML systems in large-scale, distributed environments.

For decades, information theory has provided a mathematical foundation for the systems and algorithms that fuel the current data science revolution. Recent advances in privacy, fairness, and generalization bounds demonstrate that information theory will also play a pivotal role in the next decade of ML applications: information-theoretic methods can sharpen generalization bounds for deep learning, provide rigorous guarantees for compression of neural networks, promote fairness and privacy in ML training and deployment, and shed light on the limits …

Albert S Berahas · Anastasios Kyrillidis · Fred Roosta · Amir Gholaminejad · Michael Mahoney · Rachael Tappenden · Raghu Bollapragada · Rixon Crane · J. Lyle Kim

Optimization lies at the heart of many exciting developments in machine learning, statistics and signal processing. As models become more complex and datasets get larger, finding efficient, reliable and provable methods is one of the primary goals in these fields.

In the last few decades, much effort has been devoted to the development of first-order methods. These methods enjoy a low per-iteration cost and have optimal complexity, are easy to implement, and have proven to be effective for most machine learning applications. First-order methods, however, have significant limitations: (1) they require fine hyper-parameter tuning, (2) they do not incorporate curvature information, and thus are sensitive to ill-conditioning, and (3) they are often unable to fully exploit the power of distributed computing architectures.

Higher-order methods, such as Newton, quasi-Newton, and adaptive gradient descent methods, are extensively used in many scientific and engineering domains. At least in theory, these methods possess several nice features: they exploit local curvature information to mitigate the effects of ill-conditioning, they avoid or diminish the need for hyper-parameter tuning, and they have enough concurrency to take advantage of distributed computing environments. Researchers have even developed stochastic versions of higher-order methods that feature speed and scalability by incorporating …
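For illustration, the sketch below applies plain Newton steps to a regularized logistic-regression objective, showing how curvature information removes the need for step-size tuning on this toy problem; the data and regularization constant are placeholder assumptions, and the stochastic variants discussed at the workshop go well beyond this:

    import numpy as np

    # Newton's method on regularized logistic regression: the Hessian captures local
    # curvature, so steps are well-scaled even when the problem is ill-conditioned,
    # and no learning rate needs to be tuned here. Data is synthetic and illustrative.

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def newton_logreg(X, y, reg=1e-2, iters=10):
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(iters):
            p = sigmoid(X @ w)
            grad = X.T @ (p - y) / n + reg * w
            H = (X.T * (p * (1 - p))) @ X / n + reg * np.eye(d)   # curvature information
            w -= np.linalg.solve(H, grad)                         # Newton step
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X @ rng.normal(size=5) + 0.3 * rng.normal(size=200) > 0).astype(float)
    print(newton_logreg(X, y))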

Anastasios Angelopoulos · Stephen Bates · Yixuan Li · Aaditya Ramdas · Ryan Tibshirani

Visit https://sites.google.com/berkeley.edu/dfuq21/ for details!

While improving prediction accuracy has been the focus of machine learning in recent years, this alone does not suffice for reliable decision-making. Deploying learning systems in consequential settings also requires calibrating and communicating the uncertainty of predictions. A recent line of work we call distribution-free predictive inference (i.e., conformal prediction and related methods) has developed a set of methods that give finite-sample statistical guarantees for any (possibly incorrectly specified) predictive model and any (unknown) underlying distribution of the data, ensuring reliable uncertainty quantification (UQ) for many prediction tasks. This line of work represents a promising new approach to UQ with complex prediction systems but is relatively unknown in the applied machine learning community. Moreover, much remains to be done integrating distribution-free methods with existing approaches to UQ via calibration (e.g. with temperature scaling) -- little work has been done to bridge these two worlds. To facilitate the emerging topic of distribution-free methods, the proposed workshop has two goals. First, to bring together researchers in distribution-free methods with researchers specializing in calibration techniques, to catalyze work at this interface. Second, to introduce distribution-free methods to a wider ML audience. Given the important recent emphasis on the reliable …
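As a pointer for readers new to the topic, here is a minimal sketch of split conformal prediction for regression intervals, one instance of the distribution-free guarantees described above; the polynomial model, data, and alpha are illustrative placeholders:

    import numpy as np

    # Split conformal prediction for regression: fit any model on a training split,
    # compute absolute residuals on a held-out calibration split, and use their
    # (1 - alpha)-quantile as the half-width of prediction intervals. The resulting
    # intervals have finite-sample coverage >= 1 - alpha for exchangeable data,
    # regardless of whether the model is well specified.

    def split_conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
        scores = np.abs(y_cal - predict(X_cal))                  # conformity scores
        n = len(scores)
        q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
        preds = predict(X_test)
        return preds - q, preds + q

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + 0.2 * rng.normal(size=500)
    X_tr, y_tr, X_cal, y_cal = X[:300], y[:300], X[300:], y[300:]

    # Placeholder model: least-squares polynomial fit on the training split.
    coef = np.polyfit(X_tr[:, 0], y_tr, deg=3)
    predict = lambda Z: np.polyval(coef, Z[:, 0])

    lo, hi = split_conformal_interval(predict, X_cal, y_cal, X[:5])
    print(np.c_[lo, hi])                                         # interval per test point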

Pengtao Xie · Shanghang Zhang · Ishan Misra · Pulkit Agrawal · Katerina Fragkiadaki · Ruisi Zhang · Tassilo Klein · Asli Celikyilmaz · Mihaela van der Schaar · Eric Xing

Self-supervised learning (SSL) is an unsupervised approach for representation learning that does not rely on human-provided labels. It creates auxiliary tasks on unlabeled input data and learns representations by solving these tasks. SSL has demonstrated great success on images, text, robotics, and more. On a wide variety of tasks, SSL without human-provided labels achieves performance that is close to fully supervised approaches. Existing SSL research mostly focuses on perception tasks such as image classification, speech recognition, and text classification. SSL for reasoning tasks (e.g., symbolic reasoning on graphs, relational reasoning in computer vision, multi-hop reasoning in NLP) is largely ignored. In this workshop, we aim to bridge this gap. We bring together SSL-interested researchers from various domains to discuss how to develop SSL methods for reasoning tasks, such as how to design pretext tasks for symbolic reasoning, how to develop contrastive learning methods for relational reasoning, and how to develop SSL approaches that bridge reasoning and perception. Unlike previous SSL-related workshops, which focus on perception tasks, our workshop focuses on promoting SSL research for reasoning.
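For concreteness, here is a compact sketch of one common SSL pretext objective, an InfoNCE/SimCLR-style contrastive loss in which two augmented views of the same input serve as positives; the encoder and the "augmentations" are toy placeholders (PyTorch):

    import torch
    import torch.nn.functional as F

    # InfoNCE / SimCLR-style contrastive objective: two augmented "views" of the same
    # input should have similar embeddings, while views of other inputs in the batch
    # act as negatives. The encoder and the augmentation are illustrative placeholders.

    def info_nce_loss(z1, z2, temperature=0.1):
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.T / temperature               # (batch, batch) similarity matrix
        targets = torch.arange(z1.shape[0])            # positives sit on the diagonal
        return F.cross_entropy(logits, targets)

    encoder = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16))
    x = torch.randn(8, 32)                             # a batch of unlabeled inputs
    view1 = x + 0.1 * torch.randn_like(x)              # toy "augmentations"
    view2 = x + 0.1 * torch.randn_like(x)
    loss = info_nce_loss(encoder(view1), encoder(view2))
    loss.backward()                                    # representations learned without labels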

Yian Ma · Ehi Nosakhare · Yuyang Wang · Scott Yang · Rose Yu

Time series is one of the fastest growing and richest types of data. In a variety of domains including dynamical systems, healthcare, climate science and economics, there have been increasing amounts of complex dynamic data due to a shift away from parsimonious, infrequent measurements to nearly continuous real-time monitoring and recording. This burgeoning amount of new data calls for novel theoretical and algorithmic tools and insights.

The goals of our workshop are to: (1) highlight the fundamental challenges that underpin learning from time series data (e.g. covariate shift, causal inference, uncertainty quantification), (2) discuss recent developments in theory and algorithms for tackling these problems, and (3) explore new frontiers in time series analysis and their connections with emerging fields such as causal discovery and machine learning for science. In light of the recent COVID-19 outbreak, we also plan to have a special emphasis on non-stationary dynamics, causal inference, and their applications to public health at our workshop.

Time series modeling has a long tradition of inviting novel approaches from many disciplines including statistics, dynamical systems, and the physical sciences. This has led to broad impact and a diverse range of applications, making it an ideal topic for the rapid dissemination …

Shipra Agrawal · Simon Du · Niao He · Csaba Szepesvari · Lin Yang

While over many years we have witnessed numerous impressive demonstrations of the power of various reinforcement learning (RL) algorithms, and while much progress has been made on the theoretical side as well, the theoretical understanding of the challenges that underlie RL is still rather limited. The best-studied problem settings, such as learning and acting in finite state-action Markov decision processes or simple linear control systems, fail to capture the essential characteristics of seemingly more practically relevant problem classes, where the size of the state-action space is often astronomical, the planning horizon is huge, the dynamics are complex, interaction with the controlled system is not permitted, or learning has to happen based on heterogeneous offline data. To tackle these diverse issues, more and more theoreticians with a wide range of backgrounds have come to study RL and have proposed numerous new models along with exciting novel developments in both algorithm design and analysis. The workshop's goal is to highlight advances in theoretical RL and bring together researchers from different backgrounds to discuss RL theory from different perspectives: modeling, algorithms, analysis, etc.

Yasaman Bahri · Quanquan Gu · Amin Karbasi · Hanie Sedghi

Modern machine learning models are often highly over-parameterized. The prime examples are neural network architectures that achieve state-of-the-art performance while having many more parameters than training examples. While these models can empirically perform very well, they are not well understood: worst-case theories of learnability do not explain their behavior. Indeed, over-parameterized models sometimes exhibit "benign overfitting", i.e., they have the power to perfectly fit training data (even data modified to have random labels), yet they achieve good performance on the test data. There is evidence that over-parameterization may be helpful both computationally and statistically, although attempts to use phenomena like double/multiple descent to explain why over-parameterization helps achieve small test error remain controversial. Besides benign overfitting and double/multiple descent, many other interesting phenomena arise from over-parameterization, and many more may have yet to be discovered. Many of these effects depend on the properties of data, but we have only simplistic tools to measure, quantify, and understand data. In light of rapid progress and rapidly shifting understanding, we believe that the time is ripe for a workshop focusing on understanding over-parameterization from multiple angles.

Gathertown room1 link: https://eventhosts.gather.town/DbHJbA5ArXpTIoap/icml-oppo-2021
Gathertown room2 link: https://eventhosts.gather.town/UtqQ3jSJ7wnN0anj/icml-oppo-2021-room-2