Invited Talks
Weinan E

Given a machine learning model, what is the class of functions that can be approximated efficiently by this model, in the sense that the convergence rates for the approximation, estimation, and optimization errors do not deteriorate as the dimensionality goes up? We address this question for three classes of machine learning models: the random feature model, two-layer neural networks, and the residual neural network model. Along the way, we will also summarize the current status of the theoretical foundations of deep learning and discuss some of the key open questions.

Regina Barzilay

The first-generation models for drug discovery and clinical applications were mostly direct modifications of algorithms developed for NLP, computer vision, and other well-established application areas. However, deployment of these models revealed a significant mismatch between their basic assumptions and the needs of these new life sciences applications. Examples include challenging generalization scenarios, unknown biases in the collected data, and the inability of domain experts to validate model predictions. In my talk, I will illustrate some of these problems and introduce our initial solutions to them.

Guido Imbens

There is a fast-growing literature in econometrics on estimating causal effects in settings with panel or longitudinal data, building on the recent difference-in-differences and synthetic control literatures and driven by an empirical literature with many applications. These range from settings with few cross-sectional units and many or few time periods to settings with many cross-sectional units; sometimes there is only a single treated unit, and sometimes there are many. I will review some of this recent literature, including some of the examples, focusing in particular on the synthetic difference-in-differences estimator and its relation to the matrix completion literature. I will also discuss implications for randomized experiments, as well as some of the remaining challenges in the literature.
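As background for the estimators the abstract builds on, a minimal sketch of the canonical two-group, two-period difference-in-differences estimate on a simulated panel. The data-generating process (effect size, fixed effects, noise level) is entirely hypothetical; the synthetic difference-in-differences estimator discussed in the talk generalizes this basic contrast.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated panel: 2 * n_units units observed in a pre and a post period.
n_units, tau = 200, 2.0                     # units per group, true treatment effect
unit_fe = rng.normal(size=2 * n_units)      # unit fixed effects
treated = np.arange(2 * n_units) < n_units  # first half of units is treated

def outcome(post):
    """Outcome = unit effect + common time shock + treatment effect + noise."""
    time_fe = 1.5 if post else 0.0          # time shock common to both groups
    effect = tau * treated * post           # effect only for treated units, post period
    return unit_fe + time_fe + effect + rng.normal(scale=0.1, size=2 * n_units)

y_pre, y_post = outcome(False), outcome(True)

# DiD: (treated post - treated pre) minus (control post - control pre).
# Differencing removes both the unit fixed effects and the common time shock.
did = ((y_post[treated].mean() - y_pre[treated].mean())
       - (y_post[~treated].mean() - y_pre[~treated].mean()))
print(did)
```

The estimate recovers the true effect of 2.0 up to sampling noise; the panel settings in the talk (few treated units, many periods) are exactly where this simple two-by-two contrast needs the synthetic control and matrix completion machinery.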