


Tutorials
Nina Balcan · Tuomas Sandholm · Ellen Vitercik

[ A9 ]

Mechanism design is a field of game theory with tremendous real-world impact, encompassing areas such as pricing and auction design. A powerful approach in this field is automated mechanism design, which uses machine learning and optimization to design mechanisms based on data. This automated approach helps overcome challenges faced by traditional, manual approaches to mechanism design, which have been stuck for decades due to inherent computational complexity: the revenue-maximizing mechanism is not known even for just two items for sale! In this tutorial, we cover the rapidly growing area of automated mechanism design for revenue maximization. This encompasses both the foundations of batch and online learning (including statistical guarantees and optimization procedures) and real-world success stories.
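As a toy illustration of the data-driven flavor of this area (our own sketch, not material from the tutorial), the snippet below grid-searches a reserve price for a single-item second-price auction on sampled bids; the two-bidder setup and the uniform value distribution are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def revenue(bids, reserve):
    """Revenue of a second-price auction with a reserve price: the
    highest bidder wins only if she clears the reserve, and then pays
    the larger of the reserve and the second-highest bid."""
    top, second = np.sort(bids)[::-1][:2]
    if top < reserve:
        return 0.0
    return max(second, reserve)

# Sampled auctions: two bidders with values drawn i.i.d. from U[0, 1]
# (an assumed value distribution for this sketch).
samples = rng.uniform(0, 1, size=(10000, 2))

# Empirical revenue maximization: pick the reserve with the highest
# average revenue on the sampled data.
grid = np.linspace(0, 1, 101)
avg_rev = [np.mean([revenue(b, r) for b in samples]) for r in grid]
best = grid[int(np.argmax(avg_rev))]
print(best)  # for U[0,1] values the theoretically optimal reserve is 0.5
```

With enough samples the data-driven reserve lands near the known optimum; the batch and online guarantees covered in the tutorial quantify exactly how many samples this kind of procedure needs.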

Website: https://sites.google.com/view/amdtutorial

Yisong Yue · Hoang Le

[ Victoria ]

In this tutorial, we aim to present to researchers and industry practitioners a broad overview of imitation learning techniques and recent applications. Imitation learning is a powerful and practical alternative to reinforcement learning for learning sequential decision-making policies. Also known as learning from demonstrations or apprenticeship learning, imitation learning has benefited from recent progress in core learning techniques, increased availability and fidelity of demonstration data, and the computational advances brought on by deep learning. We expect the tutorial to be highly relevant for researchers and practitioners interested in reinforcement learning, structured prediction, planning, and control. The ideal audience member should have familiarity with basic supervised learning concepts. No knowledge of reinforcement learning techniques will be assumed.
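To make the "supervised learning on demonstrations" idea concrete, here is a minimal behavioral-cloning sketch of our own (not the tutorial's code); the one-dimensional task, the expert rule, and the linear policy are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert on a 1-D task: for state s in [-1, 1], the
# expert's action is +1 if s < 0 and -1 otherwise (push s toward 0).
states = rng.uniform(-1, 1, size=(500, 1))
actions = (states[:, 0] < 0).astype(float) * 2 - 1

# Behavioral cloning = plain supervised learning on (state, action)
# pairs; here a least-squares linear policy fit in closed form.
X = np.hstack([states, np.ones((500, 1))])  # add a bias feature
w, *_ = np.linalg.lstsq(X, actions, rcond=None)

def policy(s):
    """Imitation policy: sign of the learned linear score."""
    return np.sign(w[0] * s + w[1])

print(policy(-0.5), policy(0.5))  # matches the expert: 1.0 -1.0
```

The core limitation the tutorial addresses starts exactly here: a cloned policy is only trained on the expert's state distribution, and errors compound once it drifts to states the expert never visited.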

Website: https://sites.google.com/view/icml2018-imitation-learning/

Manuel Gomez-Rodriguez · Isabel Valera

[ K1 + K2 ]

In recent years, there has been an increasing number of machine learning models, inference methods and control algorithms using temporal point processes. They have been particularly popular for understanding, predicting, and enhancing the functioning of social and information systems, where they have achieved unprecedented performance. This tutorial aims to introduce temporal point processes to the machine learning community at large. In the first part of the tutorial, we will first provide an introduction to the basic theory of temporal point processes, then revisit several types of point processes, and finally introduce advanced concepts such as marks and dynamical systems with jumps. In the second and third parts of the tutorial, we will explain how temporal point processes have been used in developing a variety of recent machine learning models and control algorithms, respectively. Therein, we will revisit recent advances related to, e.g., deep learning, Bayesian nonparametrics, causality, stochastic optimal control and reinforcement learning. In each of the above parts, we will highlight open problems and directions for future work, to encourage further research on temporal point processes within the machine learning community.
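For readers who want a concrete starting point, the following sketch (ours, not the tutorial's) samples an inhomogeneous temporal point process by thinning, the standard rejection-style simulation scheme; the particular intensity function is an assumption made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def thinning(intensity, lam_max, horizon):
    """Sample a temporal point process on [0, horizon] by thinning:
    propose candidate times from a homogeneous Poisson process with
    rate lam_max, then keep each with probability intensity(t)/lam_max.
    Requires intensity(t) <= lam_max on the whole horizon."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > horizon:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:
            events.append(t)

# An assumed oscillating intensity averaging 2 events per unit time.
events = thinning(lambda t: 2.0 + np.sin(t), lam_max=3.0, horizon=100.0)
print(len(events))  # roughly 200 events in expectation
```

The same thinning idea extends to self-exciting models such as Hawkes processes, where the intensity itself depends on the history of past events.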

Website: http://learning.mpi-sws.org/tpp-icml18/

Sam Corbett-Davies · Sharad Goel

[ K1 + K2 ]

Machine learning algorithms are increasingly used to guide decisions by human experts, including judges, doctors, and managers. Researchers and policymakers, however, have raised concerns that these systems might inadvertently exacerbate societal biases. To measure and mitigate such potential bias, there has recently been an explosion of competing mathematical definitions of what it means for an algorithm to be fair. But there’s a problem: nearly all of the prominent definitions of fairness suffer from subtle shortcomings that can lead to serious adverse consequences when used as an objective. In this tutorial, we illustrate these problems that lie at the foundation of this nascent field of algorithmic fairness, drawing on ideas from machine learning, economics, and legal theory. In doing so we hope to offer researchers and practitioners a way to advance the area.
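As a concrete example of one such competing definition (a sketch of ours, not an endorsement: the tutorial's point is precisely that criteria like this one have subtle shortcomings), the snippet below computes the demographic-parity gap on synthetic decisions; the two groups and their decision rates are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary decisions for two groups (assumed rates: group 0
# receives positive decisions ~60% of the time, group 1 ~40%).
group = rng.integers(0, 2, size=1000)
decision = rng.uniform(size=1000) < np.where(group == 0, 0.6, 0.4)

def demographic_parity_gap(decision, group):
    """Absolute difference in positive-decision rates between groups,
    one of the many competing fairness criteria."""
    rate0 = decision[group == 0].mean()
    rate1 = decision[group == 1].mean()
    return abs(rate0 - rate1)

gap = demographic_parity_gap(decision, group)
print(round(gap, 2))
```

A criterion like this is easy to measure but, as the tutorial discusses, enforcing it as an objective can itself harm the very groups it is meant to protect when base rates legitimately differ.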

Website: https://policylab.stanford.edu/projects/defining-and-designing-fair-algorithms.html

Sanjoy Dasgupta · Samory Kpotufe

[ A9 ]

Nearest-neighbor methods are among the oldest and most ubiquitous approaches in machine learning and other areas of data analysis. They are often used directly as predictive tools, or indirectly as integral parts of more sophisticated modern approaches (e.g. recent uses that exploit deep representations, uses in geometric graphs for clustering, integrations into time-series classification, or uses in ensemble methods for matrix completion). Furthermore, they have strong connections to other tools such as classification and regression trees, and even kernel machines, which are all (more sophisticated) forms of local prediction. Interestingly, our understanding of these methods is still evolving, with many recent results shedding new light on performance under various settings that describe a range of modern uses and application domains. Our aim is to cover such new perspectives on k-NN and, in particular, to translate new theoretical insights (with practical implications) to a broader audience.
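As a reminder of how simple the basic predictor is, here is a minimal from-scratch k-NN classifier (our own sketch; the toy clusters are invented for illustration).

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training
    points under Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Two well-separated toy clusters with labels 0 and 1.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.15, 0.1])))  # -> 0
print(knn_predict(X, y, np.array([1.05, 0.95])))  # -> 1
```

Essentially all of the theory the tutorial covers concerns this rule's behavior as k, the sample size, and the intrinsic dimension of the data vary, and how to trade accuracy for speed with approximate neighbor search.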

Website: http://www.princeton.edu/~samory/Documents/ICML-kNN-Tutorial.pdf

Sanjeev Arora

[ Victoria ]

We survey progress in recent years toward developing a theory of deep learning. Recent works have started to address issues such as: (a) the effect of architecture choices on the optimization landscape, training speed, and expressiveness; (b) quantifying the true "capacity" of the net, as a step towards understanding why nets with hugely more parameters than training examples nevertheless do not overfit; (c) understanding the inherent power and limitations of deep generative models, especially (various flavors of) generative adversarial nets (GANs); and (d) understanding properties of simple RNN-style language models and some of their solutions (word embeddings and sentence embeddings).

While these are early results, they help illustrate what kind of theory may ultimately arise for deep learning.
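As a loose linear analogue of point (b) above (our own toy example, not the tutorial's material), the snippet below fits a model with five times more parameters than training examples; the minimum-norm least-squares solution interpolates the training data exactly, which is the simplest setting in which "more parameters than examples" fails to behave the way classical intuition predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 training examples but 100 features: far more parameters than data.
n, p = 20, 100
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# For p > n, np.linalg.lstsq returns the minimum-norm solution among
# the infinitely many weight vectors that fit the data exactly.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
train_error = np.max(np.abs(X @ w - y))
print(train_error)  # essentially zero: the model interpolates
```

Which of the many interpolating solutions an algorithm selects, and why that implicit bias controls generalization, is one of the questions the surveyed theory tries to answer for actual deep nets.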

Website: http://unsupervised.cs.princeton.edu/deeplearningtutorial.html

Tamara Broderick

[ Victoria ]

Bayesian methods exhibit a number of desirable properties for modern data analysis, including (1) coherent quantification of uncertainty, (2) a modular modeling framework able to capture complex phenomena, (3) the ability to incorporate prior information from an expert source, and (4) interpretability. In practice, though, Bayesian inference necessitates approximation of a high-dimensional integral, and some traditional algorithms for this purpose can be slow, notably at data scales of current interest. The tutorial will cover modern tools for fast, approximate Bayesian inference at scale. One increasingly popular framework is provided by "variational Bayes" (VB), which formulates Bayesian inference as an optimization problem. We will examine key benefits and pitfalls of using VB in practice, with a focus on the widespread "mean-field variational Bayes" (MFVB) subtype. We will highlight properties that anyone working with VB, from the data analyst to the theoretician, should be aware of. In addition to VB, we will cover recent data summarization techniques for scalable Bayesian inference that come equipped with finite-data theoretical guarantees on quality. We will motivate our exploration throughout with practical data analysis examples and point to a number of open problems in the field.
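One well-known MFVB pitfall can be seen in closed form: for a correlated Gaussian target, the mean-field optimum matches the means but underestimates the marginal variances. The sketch below (ours, using the standard result that the optimal factorized Gaussian approximation to N(0, Σ) has variance 1/Λ_ii, where Λ = Σ⁻¹) makes this concrete; the correlation value 0.9 is an arbitrary choice.

```python
import numpy as np

# Target: a 2-D Gaussian with strong correlation rho.
rho = 0.9
Sigma = np.array([[1.0, rho], [rho, 1.0]])
Lambda = np.linalg.inv(Sigma)  # precision matrix

true_var = np.diag(Sigma)          # true marginal variances: [1, 1]
mfvb_var = 1.0 / np.diag(Lambda)   # mean-field variances: 1 - rho^2
print(true_var, mfvb_var)          # [1, 1] vs approximately [0.19, 0.19]
```

The optimization converges and the marginal means are exact, yet each factor's variance is 1 - ρ² of the truth; this kind of overconfident uncertainty estimate is exactly the sort of property the tutorial argues practitioners must be aware of.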

Website: http://www.tamarabroderick.com/tutorial2018icml.html

Benjamin Recht

[ A9 ]

Given the dramatic successes in machine learning over the past half decade, there has been a resurgence of interest in applying learning techniques to continuous control problems in robotics, self-driving cars, and unmanned aerial vehicles. Though such applications appear to be straightforward generalizations of reinforcement learning, it remains unclear which machine learning tools are best equipped to handle decision making, planning, and actuation in highly uncertain dynamic environments.

This tutorial will survey the foundations required to build machine learning systems that reliably act upon the physical world. The primary technical focus will be on numerical optimization tools at the interface of statistical learning and dynamical systems. We will investigate how to learn models of dynamical systems, how to use data to achieve objectives in a timely fashion, how to balance model specification and system controllability, and how to safely acquire new information to improve performance. We will close by listing several exciting open problems that must be solved before we can build robust, reliable learning systems that interact with an uncertain environment.
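As a minimal instance of "learning models of dynamical systems" (our own sketch, with a toy system invented for the example), the snippet below identifies linear dynamics from rollout data by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed ground-truth linear dynamics x_{t+1} = A x_t + B u_t + noise.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])

# Roll out the system under random exploratory inputs.
T = 2000
X = np.zeros((T + 1, 2))
U = rng.normal(size=(T, 1))
for t in range(T):
    X[t + 1] = A @ X[t] + B @ U[t] + 0.01 * rng.normal(size=2)

# System identification: regress x_{t+1} on [x_t, u_t] jointly.
Z = np.hstack([X[:-1], U])                       # (T, 3) regressors
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta[:2].T, Theta[2:].T
print(np.max(np.abs(A_hat - A)), np.max(np.abs(B_hat - B)))
```

With persistently exciting inputs the estimates converge to the true (A, B); how much data this requires, and how estimation error propagates into control performance, are among the questions the tutorial's optimization-centric treatment addresses.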

Danielle Belgrave · Konstantina Palla · Lamiae Azizi

[ K1 + K2 ]

Machine learning advances are opening new routes to more precise healthcare, from the discovery of disease subtypes for patient stratification to the development of personalised interactions and interventions. As medicine pivots from treating diagnoses to treating mechanisms, there is an increasing need for personalised health through more intelligent feature extraction and phenotyping. This offers an exciting opportunity for machine learning techniques to impact healthcare in a meaningful way, by putting patients at the centre of research. Health presents some of the most challenging and under-investigated domains of machine learning research. This tutorial presents a timely opportunity to engage the machine learning community with the unique challenges of the healthcare domain, and to provide motivation for meaningful collaborations within it. We will evaluate the current drivers of machine learning in healthcare and present machine learning strategies for personalised health. The challenges we will address include, but are not limited to: integrating heterogeneous types of data to understand disease subtypes; causal inference to understand underlying disease mechanisms; learning from “small” labelled data; and striking a balance between privacy, transparency, interpretability, and model performance. This tutorial will be targeted towards a broad machine learning audience with various skill …