

July 26, 2023, 12:30 p.m.

Jennifer A. Doudna, Innovative Genomics Institute, Howard Hughes Medical Institute and University of California Berkeley & UCSF/Gladstone Institutes

Machine learning will have profound impacts on biological research in ways that are just beginning to be imagined. The intersection of ML and CRISPR provides exciting examples of the opportunities and challenges in fields ranging from healthcare to climate change. CRISPR-Cas programmable proteins can edit specific DNA sequences in cells and organisms, generating new biological insights as well as approved therapeutics and improved crops. I will discuss how ML may accelerate and perhaps fundamentally alter our use of CRISPR genome editing in both humans and microbes.


Jennifer Doudna

Jennifer Doudna, PhD is a biochemist at the University of California, Berkeley. Her groundbreaking development of CRISPR-Cas9 — a genome engineering technology that allows researchers to edit DNA — with collaborator Emmanuelle Charpentier earned the two the 2020 Nobel Prize in Chemistry and forever changed the course of human and agricultural genomics research. She is also the Founder of the Innovative Genomics Institute, the Li Ka Shing chancellor’s chair in Biomedical and Health Sciences, and a member of the Howard Hughes Medical Institute, Lawrence Berkeley National Lab, Gladstone Institutes, the National Academy of Sciences, and the American Academy of Arts and Sciences. She is a leader in the global public debate on the responsible use of CRISPR and has co-founded and serves on the advisory panel of several companies that use the technology in unique ways. Doudna is the co-author of “A Crack in Creation,” a personal account of her research and the societal and ethical implications of gene editing. Learn more at innovativegenomics.org/jennifer-doudna.

July 25, 2023, 7 p.m.

This talk has a single objective: to advocate for machine learning infused with social purpose. Social purpose here is an invitation to deepen our inquiries as investigators and inventors into the relationships between machine learning, our planet, and each other. In this way, social purpose transforms our field of machine learning: into something that is both technical and social. And my belief is that machine learning with social purpose will provide the passion and momentum for the contributions that are needed in overcoming the myriad of global challenges and in achieving our global goals. To make this all concrete, the talk will have three parts: machine learning for the Earth systems, sociotechnical AI, and strengthening global communities. And we’ll cover topics on generative models; evaluations and experts; healthcare and climate; fairness, ethics and safety; and bias and global inclusion. By the end, I hope we’ll have set the scene for a rich discussion on our responsibility and agency as researchers, and new ways of driving machine learning with social purpose.


Shakir Mohamed

Shakir Mohamed works on technical and sociotechnical questions in machine learning research, spanning machine learning principles, applied problems in healthcare and the environment, and ethics and diversity. Shakir is a Director for Research at DeepMind in London, an Associate Fellow at the Leverhulme Centre for the Future of Intelligence, and an Honorary Professor at University College London. Shakir is also a founder and trustee of the Deep Learning Indaba, a grassroots charity whose work is to build pan-African capacity and leadership in AI. Amongst other roles, Shakir served as the senior programme chair for ICLR 2021 and as the General Chair for NeurIPS 2022. Shakir also serves on the Board of Directors for some of the leading conferences in the field of machine learning and AI (ICML, ICLR, NeurIPS), is a member of the Royal Society diversity and inclusion committee, and sits on the international scientific advisory committee for the pan-Canadian AI strategy. Shakir is from South Africa, completed a postdoc at the University of British Columbia, received his PhD from the University of Cambridge, and received his masters and undergraduate degrees in Electrical and Information Engineering from the University of the Witwatersrand, Johannesburg.

July 25, 2023, 12:15 p.m.

Machine learning in health has made impressive progress in recent years, powered by an increasing availability of health-related data and high-capacity models. While many models in health now perform at, or above, humans in a range of tasks across the human lifespan, models also learn societal biases and may replicate or expand them. In this talk, Dr. Marzyeh Ghassemi will focus on the need for machine learning researchers and model developers to create robust models that can be ethically deployed in health settings, and beyond. Dr. Ghassemi's talk will span issues in data collection, outcome definition, algorithm development, and deployment considerations.


Marzyeh Ghassemi

Dr. Marzyeh Ghassemi is an Assistant Professor at MIT in Electrical Engineering and Computer Science (EECS) and the Institute for Medical Engineering & Science (IMES), and a Vector Institute faculty member holding a Canadian CIFAR AI Chair and a Canada Research Chair. She holds MIT affiliations with the Jameel Clinic and CSAIL.

Professor Ghassemi holds a Herman L. F. von Helmholtz Career Development Professorship, and was named a CIFAR Azrieli Global Scholar and one of MIT Tech Review’s 35 Innovators Under 35. Previously, she was a Visiting Researcher with Alphabet’s Verily. She is currently on leave from the University of Toronto Departments of Computer Science and Medicine. Prior to her PhD in Computer Science at MIT, she received an MSc. degree in biomedical engineering from Oxford University as a Marshall Scholar, and B.S. degrees in computer science and electrical engineering as a Goldwater Scholar at New Mexico State University.

July 27, 2023, 12:30 p.m.

Proxy objectives are a fundamental concept in machine learning. That is, there's a true objective that we care about, but it's hard to compute or estimate, so instead we construct a locally-valid approximation and optimize that. I will examine reinforcement learning from human feedback through this lens, as a chain of approximations, each of which can widen the gap between the desired and achieved result.


John Schulman

John now leads a team working on ChatGPT and RL from Human Feedback at OpenAI, where he was a cofounder. His recent published work includes combining language models with retrieval (WebGPT) and scaling laws of RL and alignment. Earlier he developed some of the foundational methods of deep RL (TRPO, PPO). Before OpenAI, John got a PhD from UC Berkeley, advised by Pieter Abbeel. In his free time, he enjoys running, jazz piano, and raising chickens.

July 28, 2023, 12:10 p.m.



Quentin Berthet

Invited Talk: Aaron Wagner

July 29, 2023, 6:45 p.m.


July 29, 2023, 12:50 p.m.

Learning algorithms are often top-down and prescriptive, directly descending the gradient of a prescribed loss function. This includes backpropagation, its more localized approximations such as Equilibrium Propagation or Predictive Coding, as well as local self-supervised objectives, as in the Forward-Forward algorithm. Other algorithms could instead be characterized as emergent or descriptive, where network-wide function is learned from the bottom up, from mere descriptions of processes in synapses (i.e. connections) and neuronal units. This latter type of learning, which results e.g. from so-called Hebbian plastic synapses, spike timing-dependent plasticity (STDP), and short-term plasticity, fully satisfies the constraints of biological and neuromorphic circuitry, because neuroscience textbook mechanisms local to each synapse are the entire seeding premise. However, such emergent learning rules have struggled to be useful in tasks that are difficult by modern machine learning standards. In contrast, our recent work shows that learning resulting from plasticity is applicable to previously unattainable problem settings and can even outperform global loss-driven networks under certain conditions. Specifically, the talk will focus on short-term STDP, short-term plasticity neurons (STPN), SoftHebb, i.e. our version of Hebbian learning in circuits with soft competition, and on their advantages in sequence modelling, adversarial robustness, learning speed, and unsupervised deep learning. The picture will be completed with a mention of our related works on neuromorphic nanodevices that emulate the biophysics of plastic synapses through the physics of analog electronics and photonics.
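
As a concrete illustration of the kind of local, bottom-up rule discussed above, the sketch below implements a generic soft-competitive Hebbian update in Python: an Oja-style rule with softmax competition among units. It is written only to make the idea of purely local plasticity tangible and is not the exact SoftHebb or STPN formulation presented in the talk.

    import numpy as np

    def soft_hebbian_step(W, x, lr=0.01, temperature=1.0):
        """One illustrative soft-competitive Hebbian update (not the talk's exact rule).
        W: (n_units, n_inputs) weights; x: (n_inputs,) input sample."""
        pre = W @ x                                   # local pre-activations
        y = np.exp(pre / temperature)
        y = y / y.sum()                               # soft competition via softmax
        # Oja-style term keeps weights bounded; the update uses only x, y, and W itself
        W += lr * y[:, None] * (x[None, :] - y[:, None] * W)
        return W

Every quantity in the update is available locally (the synapse's input, its unit's activity, and its own weight), which is the sense in which such rules satisfy the constraints of biological and neuromorphic circuitry.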


July 28, 2023, 12:10 p.m.

Bayesian neural networks (BNNs), a family of neural networks with a probability distribution placed on their weights, have the advantage of being able to reason about uncertainty in their predictions as well as data. Their deployment in safety-critical applications demands rigorous robustness guarantees. This paper summarises recent progress in developing algorithmic methods to ensure certifiable safety and robustness guarantees for BNNs, with the view to support design automation for systems incorporating BNN components.


Marta Kwiatkowska

July 28, 2023, 6:15 p.m.



Tzu-Mao Li

July 28, 2023, 4:30 p.m.



Marin Vlastelica

July 28, 2023, 5:15 p.m.

The ability of machine learning techniques to process rich sensory inputs such as vision makes them highly appealing for use in robotic systems (e.g., micro aerial vehicles and robotic manipulators). However, the increasing adoption of learning-based components in the robotics perception and control pipeline poses an important challenge: how can we guarantee the safety and performance of such systems? As an example, consider a micro aerial vehicle that learns to navigate using a thousand different obstacle environments or a robotic manipulator that learns to grasp using a million objects in a dataset. How likely are these systems to remain safe and perform well on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize to environments that our robot has not previously encountered? Unfortunately, existing approaches either do not provide such guarantees or do so only under very restrictive assumptions.

In this talk, I will present our group’s work on developing a principled theoretical and algorithmic framework for learning control policies for robotic systems with formal guarantees on generalization to novel environments. The key technical insight is to leverage and extend powerful techniques from PAC-Bayes theory. We apply our techniques on problems including vision-based navigation and manipulation in order to demonstrate the ability to provide strong generalization guarantees on robotic systems with nonlinear or hybrid dynamics, rich sensory inputs, and neural network-based control policies.
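
For readers unfamiliar with the underlying tool, a standard McAllester-style PAC-Bayes bound (stated generically here, not in the talk's robotics-specific form) reads: with a prior P over policies fixed before seeing the N training environments, for any posterior Q and costs in [0, 1], with probability at least 1 - delta over the sampled environments,

    \mathbb{E}_{E\sim\mathcal{D}}\,\mathbb{E}_{w\sim Q}\big[C(w;E)\big]
    \;\le\;
    \frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{w\sim Q}\big[C(w;E_i)\big]
    \;+\;
    \sqrt{\frac{\mathrm{KL}(Q\,\|\,P)+\ln\frac{2\sqrt{N}}{\delta}}{2N}} .

Optimizing the right-hand side over the posterior Q is what yields a certificate on expected cost in novel environments drawn from the same distribution.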


Anirudha Majumdar

Anirudha Majumdar is an Assistant Professor in the Mechanical and Aerospace Engineering (MAE) department at Princeton University. He also holds a part-time visiting research scientist position at the Google AI Lab in Princeton. Majumdar received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2016, and a B.S.E. in Mechanical Engineering and Mathematics from the University of Pennsylvania in 2011. Subsequently, he was a postdoctoral scholar at Stanford University from 2016 to 2017 at the Autonomous Systems Lab in the Aeronautics and Astronautics department. He is a recipient of the Sloan Fellowship, ONR Young Investigator Program (YIP) award, the NSF CAREER award, the Google Faculty Research Award (twice), the Amazon Research Award (twice), the Young Faculty Researcher Award from the Toyota Research Institute, the Paper of the Year Award from the International Journal of Robotics Research (IJRR), the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA), the Alfred Rheinstein Faculty Award (Princeton), and the Excellence in Teaching Award (Princeton SEAS).

July 29, 2023, 4:10 p.m.

Reactions such as gestures, facial expressions, and vocalizations are an abundant, naturally occurring channel of information that humans provide during interactions. A robot or other agent could leverage an understanding of such implicit human feedback to improve its task performance at no cost to the human. This approach contrasts with common agent teaching methods based on demonstrations, critiques, or other guidance that need to be attentively and intentionally provided. In this talk, we first define the general problem of learning from implicit human feedback and then propose to address this problem through a novel data-driven framework, EMPATHIC. This two-stage method consists of (1) mapping implicit human feedback to relevant task statistics such as rewards, optimality, and advantage; and (2) using such a mapping to learn a task. We instantiate the first stage and three second-stage evaluations of the learned mapping. To do so, we collect a dataset of human facial reactions while participants observe an agent execute a sub-optimal policy for a prescribed training task. We train a deep neural network on this data and demonstrate its ability to (1) infer relative reward ranking of events in the training task from prerecorded human facial reactions; (2) improve the policy of an agent in the training task using live human facial reactions; and (3) transfer to a novel domain in which it evaluates robot manipulation trajectories.


July 29, 2023, 4:35 p.m.

Modern reinforcement learning has been in large part shaped by three dogmas. The first is what I call the environment spotlight, which refers to our focus on environments rather than agents. The second is our implicit treatment of learning as finding a solution, rather than endless adaptation. The last is the reward hypothesis, which states that all goals and purposes can be well thought of as maximization of a reward signal. In this talk I discuss how these dogmas have shaped our views on learning. I argue that, when agents learn from human feedback, we ought to dispense entirely with the first two dogmas, while we must recognize and embrace the nuance implicit in the third.


July 29, 2023, 5 p.m.

Contextual bandits are highly practical, but the need to specify a scalar reward limits their adoption. This motivates study of contextual bandits where a latent reward must be inferred from post-decision observables, aka Interactive Grounded Learning. An information theoretic argument indicates the need for additional assumptions to succeed, and I review some sufficient conditions from the recent literature. I conclude with speculation about composing IGL with active learning.


July 29, 2023, 5:15 p.m.

Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors. However, there is a lack of an efficient and generalized training method for deep SNNs, especially for deployment on analog computing substrates. In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL). The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN. By decoupling the learning of network layers and leveraging highly informative supervisor signals, we demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity. Our experimental results have also shown that the SNNs thus trained can achieve comparable accuracies to their teacher ANNs on CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. Moreover, the proposed LTL rule is hardware friendly. It can be easily implemented on-chip to perform fast parameter calibration and provide robustness against the notorious device non-ideality issues. It, therefore, opens up a myriad of opportunities for training and deployment of SNNs on ultra-low-power mixed-signal neuromorphic computing chips.
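
A minimal sketch of the teacher-student idea described above, assuming the pre-trained ANN's layer activations and the corresponding SNN layers' firing-rate estimates are already available; spike coding, surrogate gradients, and on-chip calibration are omitted, so this is an illustration rather than the paper's algorithm.

    import torch.nn.functional as F

    def local_tandem_losses(ann_activations, snn_rates):
        """One local loss per layer: each SNN layer's firing rates are regressed onto
        the pre-trained ANN's activations at the same depth. Targets are detached, and
        each loss is intended to update only its own layer (no end-to-end backprop)."""
        return [F.mse_loss(rate, act.detach())
                for rate, act in zip(snn_rates, ann_activations)]

Because every layer receives its own highly informative target, training can converge quickly even though no global error signal is propagated through the network.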


Qu Yang

July 29, 2023, 6:30 p.m.

Robots deployed in the wild can improve their performance by using input from human teachers. Furthermore, both robots and humans can benefit when robots adapt to and learn from the people around them. However, real people can act in imperfect ways, and can often be unable to provide input in large quantities. In this talk, I will discuss some of the past research I have conducted toward addressing these issues, which has focused on creating learning algorithms that can learn from imperfect teachers. I will also talk about my current work on the Robot-Assisted Feeding project in the Personal Robotics Lab at the University of Washington, which I am approaching through a similar lens of working with real teachers and possibly imperfect information.


July 28, 2023, 12:15 p.m.

This presentation considers the learning of logical (Boolean) functions with focus on the generalization on the unseen (GOTU) setting, a strong case of out-of-distribution generalization. This is motivated by the fact that the rich combinatorial nature of data in certain reasoning tasks (e.g., arithmetic/logic) makes representative data sampling challenging, and learning successfully under GOTU gives a first vignette of an 'extrapolating' or 'reasoning' learner. We then study how different network architectures trained by (S)GD perform under GOTU and provide both theoretical and experimental evidence that for a class of network models including instances of Transformers, random features models, and diagonal linear networks, a min-degree-interpolator (MDI) is learned on the unseen. We also provide evidence that other instances with larger learning rates or mean-field networks reach leaky MDIs. These findings lead to two implications: (1) we provide an explanation to the length generalization problem (e.g., Anil et al. 2022); (2) we introduce a curriculum learning algorithm called Degree-Curriculum that learns monomials more efficiently by incrementing supports.
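
To make the curriculum idea concrete, here is a hedged sketch of staging Boolean training data by support size, reflecting the description of "incrementing supports"; the exact schedule and details of the Degree-Curriculum algorithm may differ.

    import numpy as np

    def degree_curriculum_stages(X, y, support_schedule):
        """Yield training subsets of increasing input support.
        X: (n, d) array in {0, 1}; support_schedule: e.g. [1, 2, 4, 8, X.shape[1]]."""
        support = X.sum(axis=1)                   # Hamming weight = support size
        for w in support_schedule:
            idx = np.where(support <= w)[0]
            yield X[idx], y[idx]                  # one curriculum stage

Intuitively, on low-support inputs only low-degree monomials are active (in a {0,1} encoding), so the learner can fit them first before larger supports are introduced.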


July 28, 2023, 5:30 p.m.

Recent large neural models have shown impressive performance on various data modalities, including natural language, vision, programming language and molecules. However, they still show a surprising deficiency (near-random performance) in acquiring certain types of knowledge, such as structured knowledge and action knowledge. In this talk I propose a two-way knowledge acquisition framework to make symbolic and neural learning approaches mutually enhance each other. In the first stage, we will elicit and acquire explicit symbolic knowledge from large neural models. In the second stage, we will leverage the acquired symbolic knowledge to augment and enhance these big models. I will present two recent case studies to demonstrate this framework:

(1) The first task is to induce event schemas (stereotypical structures of events and their connections) from large language models by incremental prompting and verification [Li et al., ACL2023], and apply the induced schemas to enhance event extraction and event prediction.

(2) In the second task, we noticed that current large video-language models rely on object recognition abilities as a shortcut for action understanding. We utilize a Knowledge Patcher network to elicit new action knowledge from the current models and a Knowledge Fuser component to integrate the Patcher into frozen video-language models.


July 28, 2023, 6:30 p.m.


Yisong Yue

Yisong Yue is a Professor of Computing and Mathematical Sciences at Caltech and (via sabbatical) a Principal Scientist at Latitude AI. His research interests span both fundamental and applied pursuits, from novel learning-theoretic frameworks all the way to deep learning deployed in autonomous driving on public roads. His work has been recognized with multiple paper awards and nominations, including in robotics, computer vision, sports analytics, machine learning for health, and information retrieval. At Latitude AI, he is working on machine learning approaches to motion planning for autonomous driving.

Invited Talk: Cihang Xie

July 28, 2023, 5:30 p.m.


July 29, 2023, 4:30 p.m.

Text reasoning and generation in practice often needs to meet complex objectives, integrate diverse contextual constraints, and ground in logical structures for consistency. Current large LMs can produce fluent text and follow human instructions, but they still struggle to effectively optimize toward specific objectives. The discrete nature of text poses one of the key challenges to the optimization. In this talk, I will present our work on optimizing text reasoning and generation with continuous and discrete methods. I will first introduce COLD, a unified energy-based framework that empowers any off-the-shelf LMs to reason with any objectives in a continuous space. This approach brings forward differentiable reasoning over discrete text, thus improving efficiency. Following this, I will discuss Maieutic prompting, a method that enhances the logical consistency of neural reasoning in a discrete space by integrating with logical structures.
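
A rough sketch of the continuous-space idea behind COLD, assuming a user-supplied energy_fn that scores a soft (relaxed) token sequence by combining fluency and constraint terms: Langevin-style updates are applied to a sequence of token logits, and the projection back to fluent discrete text used in the actual method is reduced here to a greedy argmax. Treat this as an illustration, not the authors' implementation.

    import torch

    def cold_style_decode(energy_fn, seq_len, vocab_size, steps=200, lr=0.1, noise=0.01):
        """Energy-based text generation sketch: gradient descent plus Gaussian noise
        on a soft token sequence, where lower energy means more fluent / constraint-satisfying."""
        y = torch.zeros(seq_len, vocab_size, requires_grad=True)
        for _ in range(steps):
            energy = energy_fn(torch.softmax(y, dim=-1))
            grad, = torch.autograd.grad(energy, y)
            with torch.no_grad():
                y -= lr * grad                        # move toward lower energy
                y += noise * torch.randn_like(y)      # Langevin noise term
        return torch.softmax(y, dim=-1).argmax(dim=-1)   # naive readout of discrete tokens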


Invited Talk: Johannes Ballé

July 29, 2023, 12:05 p.m.


July 28, 2023, 1:40 p.m.



Swarat Chaudhuri

July 29, 2023, 12:35 p.m.

Pretrained language models (PTLM) are "all the rage" right now. From the perspective of folks who have been working at the intersection of language, vision, and robotics since before it was cool, the noticeable impact is that researchers outside NLP feel like they should plug language into their work. However, these models are exclusively trained on text data, usually only for next word prediction, and potentially for next word prediction but under a fine-tuned words-as-actions policy with thousands of underpaid human annotators in the loop (e.g., RLHF). Even when a PTLM is "multimodal" that usually means "training also involved images and their captions, which describe the literal content of the image." What meaning can we hope to extract from those kinds of models in the context of embodied, interactive systems? In this talk, I'll cover some applications our lab has worked through in the space of language and embodied systems with a broader lens towards open questions about the limits and (in)appropriate applications of current PTLMs with those systems.


July 28, 2023, 1:30 p.m.

In this talk, we study safety properties of a dynamical system in feedback with a neural network controller from a reachability perspective. We first embed the closed-loop dynamics into a larger system using the theory of mixed monotone dynamical systems with favorable control theoretic properties. In particular, we show that hyper-rectangular over-approximations of the reachable sets are efficiently computed using a single trajectory of the embedding system. Numerically computing this trajectory requires bounds on the input-output behavior of the neural network controller, which we obtain via carefully selected and infrequent queries to an oracle. We assume the oracle provides these input-output bounds as intervals or as affine bounding functions, which is common for many state-of-the-art methods. Moreover, we show that, if this embedding system is constructed in a certain way, the contraction rate of the embedding system is the same as the original closed-loop system. Thus, this embedding provides a scalable approach for reachability analysis of the neural network control loop while preserving the nonlinear structure of the system. We design an algorithm to leverage this computational advantage through partitioning strategies, improving our reachable set estimates while balancing its runtime with tunable parameters.
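
For context, the generic mixed monotone construction behind this embedding, stated in its standard form without the neural-network oracle bounds that are the talk's contribution: given dynamics \dot{x} = f(x) and a decomposition function d(x, \hat{x}) with d(x, x) = f(x) and the usual monotonicity conditions (each component nondecreasing in the off-diagonal entries of x and nonincreasing in \hat{x}), the embedding system

    \frac{d}{dt}\begin{bmatrix}\underline{x}\\ \overline{x}\end{bmatrix}
    =
    \begin{bmatrix} d(\underline{x},\overline{x})\\ d(\overline{x},\underline{x})\end{bmatrix}

evolves a single trajectory of lower and upper corners, and the reachable set from the hyperrectangle [\underline{x}_0, \overline{x}_0] stays inside [\underline{x}(t), \overline{x}(t)] for all t. This is the "single trajectory of the embedding system" computation referred to above, with the neural network controller entering through the oracle-provided input-output bounds.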


Samuel Coogan

July 29, 2023, 2:25 p.m.


July 28, 2023, 7 p.m.


Chi Jin

July 28, 2023, 4:30 p.m.


Jiajun Wu

Jiajun Wu is a Visiting Faculty Researcher at Google Research, New York City. In July 2020, he will join Stanford University as an Assistant Professor of Computer Science. He studies machine perception, reasoning, and its interaction with the physical world, drawing inspiration from human cognition.

July 29, 2023, 2:10 p.m.

The mean-field Langevin dynamics (MFLD) is a nonlinear generalization of the gradient Langevin dynamics (GLD) that minimizes an entropy regularized convex function defined on the space of probability distributions, and it naturally arises from the optimization of two-layer neural networks via (noisy) gradient descent. In this talk, I will present the convergence result of MFLD and explain how the convergence of MFLD is connected to its dual objective. Indeed, its convergence is characterized by the log-Sobolev inequality of the so-called proximal Gibbs measure corresponding to the current solution. Based on this duality principle, we can construct several optimization methods with convergence guarantees including the particle dual averaging method and particle stochastic dual coordinate ascent method. Finally, I will provide a general framework to prove a uniform-in-time propagation of chaos for MFLD that takes into account the errors due to finite-particle approximation, time-discretization, and stochastic gradient approximation.
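
To fix notation for readers new to the topic (this follows the standard formulation and may differ in minor details from the talk), MFLD minimizes an entropy-regularized objective over probability distributions,

    \min_{\mu}\;\mathcal{F}(\mu) \;=\; F(\mu) + \lambda \int \mu(x)\log\mu(x)\,dx,

by evolving particles according to

    dX_t \;=\; -\nabla \frac{\delta F}{\delta \mu}(\mu_t)(X_t)\,dt + \sqrt{2\lambda}\,dW_t,
    \qquad \mu_t = \mathrm{Law}(X_t),

which reduces to the gradient Langevin dynamics when F is linear in \mu. The proximal Gibbs measure mentioned above is p_{\mu}(x) \propto \exp\!\big(-\tfrac{1}{\lambda}\,\tfrac{\delta F}{\delta \mu}(\mu)(x)\big), and its log-Sobolev constant is what governs the exponential convergence rate.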


July 28, 2023, 5:05 p.m.



Mathieu Blondel

July 28, 2023, 12:10 p.m.


Karen Ullrich

July 29, 2023, 12:10 p.m.

In this talk, I will discuss the role of language in learning from interactions with humans. I will first talk about how language instructions along with latent actions can enable shared autonomy in robotic manipulation problems. I will then talk about creative ways of tapping into the rich context of large models to enable more aligned AI agents. Specifically, I will discuss a few vignettes about how we can leverage LLMs and VLMs to learn human preferences, allow for grounded social reasoning, or enable teaching humans using corrective feedback. I will finally conclude the talk by discussing how large models can be effective pattern machines that can identify patterns in a token invariant fashion and enable pattern transformation, extrapolation, and even show some evidence of pattern optimization for solving control problems.


July 29, 2023, 1:30 p.m.

I will propose a 2x2 matrix to position interactive learning systems and argue that the 4th corner of that space is yet to be fully explored by our research efforts. By positioning recent work on that matrix, I hope to highlight a possible research direction and expose barriers to be overcome. In that effort, I will attempt a live demonstration of IFTT-PIN, a self-calibrating interface we developed that permits a user to control an interface using signals whose meanings are initially unknown.


July 29, 2023, 1:55 p.m.

Human feedback is often incomplete, suboptimal, biased, and ambiguous, leading to misidentification of the human's true reward function and suboptimal agent behavior. I will discuss these pitfalls as well as some of our recent work that seeks to overcome these problems via techniques that calibrate to user biases, learn from multiple feedback types, use human feedback to align robot feature representations, and enable interpretable reward learning.


July 28, 2023, 12:40 p.m.

Real-world adoption of deep neural networks (DNNs) in critical applications requires ensuring strong generalization beyond testing datasets. Unfortunately, the standard practice of measuring DNN performance on a finite set of test inputs cannot ensure DNN safety on inputs in the wild. In this talk, I will focus on how certified AI can be leveraged as a service to bridge this gap by building DNNs with strong generalization on an infinite set of unseen inputs. In the process, I will discuss some of our recent work for building trust and safety in diverse domains such as vision, systems, finance, and more. I will also describe a path toward making certified AI more scalable, easy to develop, and accessible to DNN developers lacking formal backgrounds.


Gagandeep Singh

I am an Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC). I also hold an Affiliated Researcher position with VMware Research. My current focus is on combining ideas from Formal Logic, Machine Learning, and Systems research to construct intelligent compute systems with formal guarantees about their behavior and safety. I obtained a PhD in Computer Science from ETH Zurich in 2020 working with Prof. Markus Püschel and Prof. Martin Vechev. During my PhD, I designed scalable and precise automated reasoning methods and tools for programs and deep neural networks. I co-received the ACM SIGPLAN Doctoral Dissertation Award given annually to the best dissertations in the area of Programming Languages. Before that, I completed a Masters in Computer Science at ETH in 2014 receiving the ETH Master Medal and Bachelors in Computer Science and Engineering from IIT Patna in 2012 receiving the President of India Gold Medal.

July 28, 2023, 4:30 p.m.

State density distribution, in contrast to worst-case reachability, can be leveraged for safety-related problems to better quantify the likelihood of the risk for potentially hazardous situations. We developed a data-driven method to compute the density distribution of reachable states for nonlinear and even black-box systems. Our approach can estimate the set of all possible future states as well as their density. Moreover, we could perform online safety verification with probability ranges for unsafe behaviors to occur. We show that our approach can learn the density distribution of the reachable set more accurately with less data and quantify risks less conservatively and flexibly compared with worst-case analysis. We also study the use of such an approach in combination with model predictive control for verifiable safe path planning under uncertainties.


Chuchu Fan

July 28, 2023, 5 p.m.

Researchers have demonstrated that the machine-learning pipeline is susceptible to attacks both at training and inference time -- poisoning, backdoor, and evasion attacks. In this talk, we will describe new results on holistic approaches for certifying robustness. Our techniques draw upon ideas from test-time certification and ensembling to simultaneously establish formal robustness guarantees for both training and inference.


Aws Albarghouthi

Yuhao Zhang

First-year PhD student at madPL, UW-Madison

July 29, 2023, 2 p.m.

Animals use afferent feedback to rapidly correct ongoing movements in the presence of a perturbation. Repeated exposure to a predictable perturbation leads to behavioural adaptation that counteracts its effects. Primary motor cortex (M1) is intimately involved in both processes, integrating inputs from various sensorimotor brain regions to update the motor output. Here, we investigate whether feedback-based motor control and motor adaptation may share a common implementation in M1 circuits. We trained a recurrent neural network to control its own output through an error feedback signal, which allowed it to recover rapidly from external perturbations. Implementing a biologically plausible plasticity rule based on this same feedback signal also enabled the network to learn to counteract persistent perturbations through a trial-by-trial process, in a manner that reproduced several key aspects of human adaptation. Moreover, the resultant network activity changes were also present in neural population recordings from monkey M1. Online movement correction and longer-term motor adaptation may thus share a common implementation in neural circuits.


Claudia Clopath

July 28, 2023, 12:05 p.m.


Vineet Goyal

July 29, 2023, 5:20 p.m.


Alison Gopnik

Alison Gopnik is a professor of psychology and affiliate professor of philosophy at the University of California at Berkeley. She received her BA from McGill University and her PhD from Oxford University. She is an internationally recognized leader in the study of cognitive science and of children’s learning and development and was one of the founders of the field of “theory of mind”, an originator of the “theory theory” of children’s development and more recently introduced the idea that probabilistic models and Bayesian inference could be applied to children’s learning. She has held a Center for Advanced Studies in the Behavioral Sciences Fellowship, the Moore Distinguished Scholar fellowship at the California Institute of Technology, the All Souls College Distinguished Visiting Fellowship at Oxford, and King’s College Distinguished Visiting Fellowship at Cambridge. She is an elected member of the Society of Experimental Psychologists, and the American Academy of Arts and Sciences and a fellow of the Cognitive Science Society and the American Association for the Advancement of Science. She has been continuously supported by the NSF and was PI on a 2.5 million dollar interdisciplinary collaborative grant on causal learning from the McDonnell Foundation.

She is the author or coauthor of over 100 journal articles and several books including “Words, thoughts and theories” MIT Press, 1997, and the bestselling and critically acclaimed popular books “The Scientist in the Crib” William Morrow, 1999, “The Philosophical Baby; What children’s minds tell us about love, truth and the meaning of life”, and “The Gardener and the Carpenter”, Farrar, Strauss and Giroux, the latter two won the Cognitive Development Society Best Book Prize in 2009 and 2016. She has also written widely about cognitive science and psychology for The New York Times, The Atlantic, The New Yorker, Science, Scientific American, The Times Literary Supplement, The New York Review of Books, New Scientist and Slate, among others. Her TED talk on her work has been viewed more than 3 and a half million times. And she has frequently appeared on TV and radio including “The Charlie Rose Show” and “The Colbert Report”. Since 2013 she has written the Mind and Matter column for the Wall Street Journal. She lives in Berkeley with her husband Alvy Ray Smith, and has three children and three grandchildren.

July 28, 2023, 4:30 p.m.

Advances in machine learning and the explosion of clinical data have demonstrated immense potential to fundamentally improve clinical care and deepen our understanding of human health. However, algorithms for medical interventions and scientific discovery in heterogeneous patient populations are particularly challenged by the complexities of healthcare data. Not only are clinical data noisy, missing, and irregularly sampled, but questions of equity and fairness also raise grave concerns and create additional computational challenges. In this talk, I examine how to incorporate differences in access to care into the modeling step. Using a deep generative model, we examine the task of disease phenotyping in heart failure and Parkinson's disease. The talk concludes with a discussion about how to rethink the entire machine learning pipeline with an ethical lens to building algorithms that serve the entire patient population.


July 28, 2023, 12:30 p.m.

We have seen in the last year an incredible pace of progress in large AI models, with increasing abilities to generate high quality images, videos, text, sound and more. The best of these models display signs of creativity, reasoning, generalization and plasticity beyond what we could imagine just a few years ago. Yet many challenges and open questions remain, both on the technological aspects and the societal impact of these models. Further progress, especially on mitigating the social risks of these models, is hampered by a lack of transparency and reproducibility. In this talk, Joelle will describe ongoing efforts to strengthen best practices for the responsible training and deployment of AI research systems, drawing on her experience with the ML reproducibility program and the recent release of several state-of-the-art large models.


July 28, 2023, 1:30 p.m.

Jennifer Doudna will discuss her professional and personal journey working on CRISPR technology, from its genesis to its applications today and focusing on ethical challenges that mirror challenges with AI/ML.


July 28, 2023, 1:50 p.m.


Diederik Kingma

July 28, 2023, 5 p.m.


July 28, 2023, 5:40 p.m.


Stefano Ermon

July 29, 2023, 12:15 p.m.

Generative flow networks (GFlowNets) are generative policies trained to sample proportionally to a given reward function. If the reward function is a prior distribution times a likelihood, then the GFlowNet learns to sample from the corresponding posterior. Unlike MCMC, a GFlowNet does not suffer from the problem of mixing between modes, but like RL methods, it needs an exploratory training policy in order to discover modes. This can be conveniently done without any kind of importance weighting because the training objectives for GFlowNets can all be correctly applied in an off-policy fashion without reweighting. One can view GFlowNets also as extensions of amortized variational inference with this off-policy advantage. We show how training the GFlowNet sampler also learns how to marginalize over the target distribution or part of it, at the same time as it learns to sample from it, which makes it possible to train amortized posterior predictives. Finally, we show examples of application of GFlowNets for Bayesian inference over causal graphs, discuss open problems and how scaling up such methodologies opens the door to system 2 deep learning to discover explanatory theories and form Bayesian predictors, with the approximation error asymptotically going to zero as we increase the size and training time of the neural network.
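
As one concrete example of a GFlowNet training objective that can be applied off-policy (the trajectory balance loss; the abstract does not commit to a particular objective), for a complete trajectory \tau = (s_0 \to \cdots \to s_n = x) one minimizes

    \mathcal{L}_{\mathrm{TB}}(\tau) \;=\;
    \left(\log\frac{Z_\theta\,\prod_{t=0}^{n-1} P_F(s_{t+1}\mid s_t;\theta)}
                   {R(x)\,\prod_{t=0}^{n-1} P_B(s_t\mid s_{t+1};\theta)}\right)^{2},

where P_F is the forward (sampling) policy, P_B a backward policy, and Z_\theta a learned normalizing constant. Driving this loss to zero on all trajectories makes the terminal distribution proportional to R(x), and since the loss is well defined for trajectories drawn from any full-support behavior policy, no importance weighting is needed, which is the off-policy property highlighted above.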


July 29, 2023, 12:45 p.m.


July 29, 2023, 6:30 p.m.


Invited Talk: PAC-Bayes Tutorial

July 28, 2023, 12:15 p.m.


July 28, 2023, 6:20 p.m.

Deep learning models have been demonstrated to have superhuman performance for prediction of features that are not obvious to human readers. For example, AI can predict patients' self-reported race, age, sex, diagnosis, and insurance. While some of these features are biological, most are social constructs, and given the black box nature of models it remains difficult to assess how this ability is achieved. In this session, we will review some of the approaches, both technical and non-technical, to understanding the performance of these models, which has an impact on real-world deployment of AI.


July 28, 2023, 2:15 p.m.

We present a unified framework for deriving PAC-Bayesian generalization bounds. Unlike most previous literature on this topic, our bounds are anytime-valid (i.e., time-uniform), meaning that they hold at all stopping times, not only for a fixed sample size. Our approach combines four tools in the following order: (a) nonnegative supermartingales or reverse submartingales, (b) the method of mixtures, (c) the Donsker-Varadhan formula (or other convex duality principles), and (d) Ville's inequality. Our main result is a PAC-Bayes theorem which holds for a wide class of discrete stochastic processes. We show how this result implies time-uniform versions of well-known classical PAC-Bayes bounds, such as those of Seeger, McAllester, Maurer, and Catoni, in addition to many recent bounds. We also present several novel bounds.
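
The ingredients (c) and (d) can be stated compactly. The Donsker-Varadhan formula gives, for any measurable f and distributions Q \ll P,

    \mathbb{E}_{Q}[f(X)] \;\le\; \mathrm{KL}(Q\,\|\,P) + \log \mathbb{E}_{P}\big[e^{f(X)}\big],

and Ville's inequality states that any nonnegative supermartingale (M_t)_{t\ge 0} satisfies

    \Pr\!\left(\exists\, t \ge 0:\; M_t \ge \frac{\mathbb{E}[M_0]}{\delta}\right) \;\le\; \delta.

Applying the first to a supermartingale built via the method of mixtures and then invoking the second is what upgrades a fixed-sample PAC-Bayes bound to one holding simultaneously at all stopping times.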


Aaditya Ramdas

Aaditya Ramdas is an assistant professor in the Departments of Statistics and Machine Learning at Carnegie Mellon University.

These days, he has 3 major directions of research: 1. selective and simultaneous inference (interactive, structured, post-hoc control of false discovery/coverage rate,…), 2. sequential uncertainty quantification (confidence sequences, always-valid p-values, bias in bandits,…), and 3. assumption-free black-box predictive inference (conformal prediction, calibration,…).

July 28, 2023, 4:30 p.m.

To navigate the exploration-exploitation trade-off in interactive learning we often rely on the uncertainty estimates of a probabilistic or Bayesian model. A key challenge is to correctly specify the prior of our model so that its epistemic uncertainty estimates are reliable. In this talk, we explore how we can harness related datasets or previous experience to meta-learn priors in a data-driven way. We study this problem through the lens of PAC-Bayesian theory and derive practical and scalable meta-learning algorithms. In particular, we discuss how to make sure that the meta-learned priors yield confidence intervals that are not overconfident so that our interactive learners explore sufficiently. Overall, the proposed meta-learning framework allows us to significantly speed up interactive learning through transfer from previous tasks/runs.


Jonas Rothfuss

July 28, 2023, 1:50 p.m.


Thorsten Joachims

July 28, 2023, 2:20 p.m.


Vincent Conitzer

July 28, 2023, 4:15 p.m.


Sanmi Koyejo

July 28, 2023, 4:45 p.m.


July 28, 2023, 12:45 p.m.



Dami Choi

July 28, 2023, 12:05 p.m.


July 28, 2023, 3:10 p.m.


July 28, 2023, 2:10 p.m.

Interpretability enriches what can be gleaned from a good predictive model. Techniques that learn-to-explain have arisen because they require only a single evaluation of a model to provide an interpretation. I will discuss a flaw with several methods that learn-to-explain: the optimal explainer makes the prediction rather than highlighting the inputs that are useful for prediction, and I will discuss how to correct this flaw. Along the way, I will develop evaluations grounded in the data and convey why interpretability techniques need to be quantitatively evaluated before their use.

References:

Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations: https://arxiv.org/pdf/2103.01890.pdf
FastSHAP: Real-Time Shapley Value Estimation: https://arxiv.org/pdf/2107.07436.pdf
Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation: https://arxiv.org/pdf/2302.12893.pdf
New-Onset Diabetes Assessment Using Artificial Intelligence-Enhanced Electrocardiography: https://arxiv.org/pdf/2205.02900.pdf


July 29, 2023, 4 p.m.

Discrete sampling is a challenging and important problem. Despite much research, we still lack generic methods for sampling from discrete distributions without considerable knowledge of the structure of the target distribution. Conversely, in continuous settings much more successful generic methods exist. These methods exploit the gradients of the distribution's log-likelihood function to approximate the distribution's local structure, which is used to parameterize fast-mixing Markov transition kernels. A number of approaches have attempted to apply these methods to discrete problems with varying levels of success. Typically we create a related continuous distribution, sample from it using continuous methods, and map these continuous samples back into the original discrete space. Recently, a new class of approaches has emerged which utilize gradient information in a different way. These approaches stay completely in the original discrete space but utilize gradient information to define Markov transition kernels which propose discrete transitions. These approaches have been shown to scale better and are widely applicable. In this talk I will discuss the development of these methods, starting with Gibbs-With-Gradients, further work improving or expanding upon these ideas, and new directions for further research.
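
A condensed sketch of the Gibbs-With-Gradients idea for binary variables, assuming a differentiable log_prob that accepts a relaxed {0,1}-valued tensor; it uses a single chain and a single flip per step and none of the original engineering, so it is illustrative rather than a faithful reimplementation.

    import torch

    def gwg_step(x, log_prob):
        """One gradient-informed Metropolis-Hastings step on x in {0,1}^d.
        A first-order Taylor expansion of log_prob estimates the effect of flipping
        each coordinate; the proposal flips one coordinate drawn from a softmax over
        these estimates, and an MH correction keeps the chain exact."""
        x = x.detach().requires_grad_(True)
        grad, = torch.autograd.grad(log_prob(x), x)
        with torch.no_grad():
            d = -(2.0 * x - 1.0) * grad               # estimated change in log p per flip
            q_fwd = torch.softmax(d / 2.0, dim=-1)    # proposal over coordinates to flip
            i = torch.multinomial(q_fwd, 1).item()
            x_prop = x.clone()
            x_prop[i] = 1.0 - x_prop[i]
        x_prop = x_prop.detach().requires_grad_(True)
        grad_prop, = torch.autograd.grad(log_prob(x_prop), x_prop)
        with torch.no_grad():
            q_rev = torch.softmax(-(2.0 * x_prop - 1.0) * grad_prop / 2.0, dim=-1)
            log_alpha = (log_prob(x_prop) - log_prob(x)
                         + torch.log(q_rev[i]) - torch.log(q_fwd[i]))
            accept = torch.rand(()) < log_alpha.exp().clamp(max=1.0)
        return (x_prop if accept else x).detach()

Because the proposal concentrates on coordinates whose flip is estimated to raise the log-probability, such chains tend to mix far faster than uniform single-site samplers on high-dimensional targets, which is the scaling advantage described above.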


July 29, 2023, 5:30 p.m.

With the eyes of the AI world pointed at the alignment of large language models, another revolution has been more silently---yet intensely---taking place: the algorithmic alignment of neural networks. After briefly surveying how we got here, I'll present some of the interesting 2023 works I've had the pleasure to co-author, many of which were presented at this year's ICML.


July 28, 2023, 4 p.m.

LLMs are on track to reverse what seemed like an inexorable shift of AI from explicit to tacit knowledge tasks. Trained as they are on everything ever written on the web, LLMs exhibit "approximate omniscience"--they can provide answers to all sorts of queries, with nary a guarantee. This could herald a new era for knowledge-based AI systems--with LLMs taking the role of (blowhard?) experts. But first, we have to stop confusing the impressive form of the generated knowledge for correct content, and resist the temptation to ascribe reasoning powers to approximate retrieval by these n-gram models on steroids. We have to focus instead on LLM-Modulo techniques that complement the unfettered idea generation of LLMs with careful vetting by model-based AI systems. In this talk, I will reify this vision and attendant caveats in the context of the role of LLMs in planning tasks.


Subbarao Kambhampati

Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He served as the president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, the chair of AAAS Section T (Information, Communication and Computation), and a founding board member of Partnership on AI. Kambhampati’s research as well as his views on the progress and societal impacts of AI have been featured in multiple national and international media outlets. He can be followed on Twitter @rao2z.

July 28, 2023, 6:50 p.m.

Learning representations that combine program structure with neural components can help provide a path to learned systems with stronger safety guarantees. In this talk, I will describe some of our recent work on learning with program representations and its connection to safety and verification.


Armando Solar-Lezama

July 28, 2023, 12:50 p.m.


July 28, 2023, 4:30 p.m.

In the talk, Rihab will share her personal journey as a mid-career woman coming from Africa in the field of Artificial Intelligence (AI) and highlight the remarkable experiences she has gained working at an African AI startup. With a focus on both technical accomplishments and driving forces that have propelled her forward, she aims to inspire the audience while providing valuable insights into her professional growth - particularly to women who aspire to build their careers in AI.


Rihab Gorsane

July 28, 2023, 12:35 p.m.

Aligning robot objectives with human preferences is a key challenge in robot learning. In this talk, I will start with discussing how active learning of human preferences can effectively query humans with the most informative questions to learn their preference reward functions. I will discuss some of the limitations of prior work, and how approaches such as few-shot learning can be integrated with active preference based learning for the goal of reducing the number of queries to a human expert and allowing for truly bringing in humans in the loop of learning neural reward functions. I will then talk about how we could go beyond active learning from a single human, and tap into large language models (LLMs) as another source of information to capture human preferences that are hard to specify. I will discuss how LLMs can be queried within a reinforcement learning loop and help with reward design. Finally I will discuss how the robot can also provide useful information to the human and be more transparent about its learning process. We demonstrate how the robot’s transparent behavior would guide the human to provide compatible demonstrations that are more useful and informative for learning.


Dorsa Sadigh

July 28, 2023, 12:45 p.m.

Many expect that AI will go from powering chatbots to providing mental health services. That it will go from advertisement to deciding who is given bail. The expectation is that AI will solve society’s problems by simply being more intelligent than we are. Implicit in this bullish perspective is the assumption that AI will naturally learn to reason from data: that it can form trains of thought that “make sense”, similar to how a mental health professional or judge might reason about a case, or more formally, how a mathematician might prove a theorem. This talk will investigate the question whether this behavior can be learned from data, and how we can design the next generation of AI techniques that can achieve such capabilities, focusing on constrained language generation, neuro-symbolic learning and tractable deep generative models.


July 29, 2023, 5:40 p.m.

This presentation summarizes the main idea from the interlocking backpropagation paper and discusses the intriguing intersection of large language models and local learning, investigating the unique challenges and opportunities. Additionally, we'll touch on state-of-the-art LLM training paradigms and the potential of local learning.


Stephen Gou

Invited Talk: Suchi Saria - TBD

July 29, 2023, 5 p.m.


July 28, 2023, 12:30 p.m.


July 28, 2023, 5 p.m.

Only the bravest machine learners have dared to tackle problems in medicine. Why? The most important reason is that the end users of ML models in medicine are skeptics of ML, and therefore one must jump through a multitude of hoops in order to deploy ML solutions. The common approach in the field is to focus on interpretability and force our ML solutions to be white box. However, this handcuffs the potential of our ML models from the start, and medicine is already a challenging enough space to model since data is hard to collect, the data one gets is always messy, and the tasks one must achieve in medicine are often not as intuitive as working on images or text.

Is there another way? Yes! Our approach is to embrace black box ML solutions, but deploy them carefully in clinical trials by rigorously controlling the risk exposure from trusting the ML solutions. I will use Alzheimer’s disease as an example to dive into our state of the art deep time series neural networks. Once I have explained our black box as best as a human reasonably can, I will detail how the outputs of the deep nets can be used in different clinical trials. In these applications, the end user prespecifies their risk tolerance, which leads to different contexts of use for the ML models. Our work demonstrates that we can embrace black box solutions by focusing on developing rigorous deployment methods.


Invited Talk: Tsachy Weissman

July 29, 2023, 12:35 p.m.


July 29, 2023, 12:15 p.m.


July 29, 2023, 12:10 p.m.


Mihaela van der Schaar

July 29, 2023, 7:10 p.m.


Invited Talk: Jimeng Sun

July 28, 2023, 1 p.m.


July 28, 2023, 2:40 p.m.


Invited Talk: Giacomo Zanella

July 29, 2023, 12:45 p.m.


July 29, 2023, 1:20 p.m.


Invited Talk: Hyeji Kim

July 29, 2023, 3:55 p.m.


July 28, 2023, 1:30 p.m.

While two systems of reasoning have been a useful abstraction, emergent reasoning (in humans and LLMs) seems to be more intertwined. I'll start by presenting some work highlighting the challenges of interpreting emergent reasoning as two distinct systems and present a few directions for unifying the systems -- focusing on using soft supervision signals from system 2 sources, toward improving traditionally system 1 agents.


Invited Talk: Yan Lu

July 29, 2023, 4:45 p.m.


July 29, 2023, 1:40 p.m.


July 29, 2023, 4:30 p.m.


July 29, 2023, 6:45 p.m.