Léon Bottou – Two High-Stakes Challenges in Machine Learning
Facebook AI Research
Léon Bottou received the Diplôme d’Ingénieur de l’École Polytechnique (X84) in 1987, the Magistère de Mathématiques Fondamentales et Appliquées et d’Informatique from the École Normale Supérieure in 1988, and a Ph.D. in Computer Science from the Université de Paris-Sud in 1991. His research career took him to AT&T Bell Laboratories, AT&T Labs Research, NEC Labs America, and Microsoft. He joined Facebook AI Research in 2015.
The long-term goal of Léon’s research is to understand how to build human-level intelligence. Although reaching this goal requires conceptual advances that cannot be anticipated at this point, it certainly entails clarifying how to learn and how to reason. Léon Bottou’s best-known contributions are his work on “deep” neural networks in the 1990s, his work on large-scale learning in the 2000s, and perhaps his more recent work on causal inference in learning systems. Léon is also known for the DjVu document compression technology.
Abstract: This presentation describes and discusses two serious challenges:
Machine learning technologies are increasingly used in complex software systems, such as those underlying internet services today or self-driving vehicles tomorrow. Despite famous successes, there is mounting evidence that machine learning components tend to disrupt established software engineering practices. I will present examples and offer an explanation of this annoying and often very costly effect. Our first high-stakes challenge is therefore to formulate sound and efficient engineering principles for machine learning applications.
Machine learning research can often be viewed as an empirical science. Unlike nearly all other empirical sciences, progress in machine learning has largely been driven by a single experimental paradigm: fitting a training set and reporting performance on a testing set. Three forces may terminate this convenient state of affairs: the first is the engineering challenge outlined above; the second arises from the statistics of large-scale datasets; and the third is our growing ambition to address more serious AI tasks. Our second high-stakes challenge is therefore to enrich our experimental repertoire and redefine our scientific processes while still maintaining our pace of progress.
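To make the paradigm in question concrete, here is a minimal sketch of the train/test protocol, using scikit-learn with an arbitrary dataset and model as placeholders (none of these choices come from the talk):

    # Minimal sketch of the dominant experimental paradigm in machine learning:
    # fit a model on a training set, report its performance on a held-out test set.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)                            # fit the training set
    print("test accuracy:", model.score(X_test, y_test))   # report test performance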
Jon Kleinberg – Social Interaction in Global Networks
Cornell University
Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on issues at the interface of networks and information, with an emphasis on the social and information networks that underpin the Web and other on-line media.
He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences; and he is the recipient of research fellowships from the MacArthur, Packard, Simons, and Sloan Foundations, as well as awards including the Nevanlinna Prize, the Harvey Prize, the ACM SIGKDD Innovation Award, and the ACM-Infosys Foundation Award in the Computing Sciences.
Abstract: With an increasing amount of social interaction taking place in the digital domain, and often in public on-line settings, we are accumulating enormous amounts of data about phenomena that were once essentially invisible to us: the collective behavior and social interactions of hundreds of millions of people, recorded at unprecedented levels of scale and resolution. Analyzing this data computationally offers new insights into the design of on-line applications, as well as a new perspective on fundamental questions in the social sciences. We will review some of the basic issues around these developments; these include the problem of designing information systems in the presence of complex social feedback effects, and the emergence of a growing research interface between computing and the social sciences, facilitated by the availability of large new datasets on human interaction.
Susan Murphy – Learning Treatment Policies in Mobile Health
University of Michigan
Susan Murphy’s research focuses on improving sequential, individualized decision making in health, in particular on clinical trial design and data analysis to inform the development of adaptive interventions (i.e., treatment policies). She is a leading developer of the Sequential Multiple Assignment Randomized Trial (SMART) design, which has been, and continues to be, used by clinical researchers to develop adaptive interventions for depression, alcoholism, ADHD, substance abuse, HIV treatment, obesity, diabetes, autism, and drug court programs. She collaborates with clinical scientists, computer scientists, and engineers, and mentors young clinical scientists on developing adaptive interventions.
Susan is currently working as part of several interdisciplinary teams to develop clinical trial designs and learning algorithms for settings in which patient information is collected in real time (e.g., via smartphones or other wearable devices) and sequences of interventions can thus be individualized online. She is a Fellow of the College on Problems of Drug Dependence, a former editor of the Annals of Statistics, and a 2013 MacArthur Fellow.
Abstract: We describe a sequence of steps that facilitate the effective learning of treatment policies in mobile health. These include a clinical trial design with an associated sample-size calculator and data-analytic methods. An off-policy Actor-Critic algorithm is developed for learning a treatment policy from the resulting clinical trial data. Open problems abound in this area, including the development of a variety of online predictors of health risk, and the handling of missing data and disengagement.
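The abstract names the off-policy Actor-Critic approach without spelling it out. Purely for orientation, here is a minimal generic sketch of such an update, assuming a contextual-bandit setting, a softmax actor, a linear critic used as a baseline, and importance weighting for the off-policy correction; every modeling choice below is an illustrative assumption, not the algorithm developed in the talk:

    # Generic off-policy actor-critic sketch (numpy only); NOT the talk's algorithm.
    # Learn a policy from logged data gathered under a known behavior policy,
    # using importance weights and a learned critic as a baseline.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_actions = 5, 2                      # state dimension; binary treatment
    theta = np.zeros((n_actions, d))         # actor: softmax policy parameters
    w = np.zeros(d)                          # critic: linear value parameters

    def pi(theta, s):
        """Softmax policy over actions for state s."""
        z = theta @ s
        p = np.exp(z - z.max())
        return p / p.sum()

    # Logged data: (state, action, reward, behavior-policy probability).
    # In a micro-randomized trial the treatment probabilities are known by design;
    # here they are simulated placeholders.
    batch = [(rng.normal(size=d), int(rng.integers(n_actions)), rng.normal(), 0.5)
             for _ in range(1000)]

    alpha, beta = 0.01, 0.05                 # actor / critic step sizes
    for s, a, r, b_prob in batch:
        p = pi(theta, s)
        rho = p[a] / b_prob                  # importance weight: off-policy correction
        delta = r - w @ s                    # critic error; critic serves as a baseline
        w += beta * rho * delta * s          # critic update
        grad_logp = -np.outer(p, s)          # score function of the softmax policy:
        grad_logp[a] += s                    #   d/dtheta_k log pi(a|s) = (1[k=a]-p_k) s
        theta += alpha * rho * delta * grad_logp   # actor (policy gradient) update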