Invited Talks
Invited Talk
[ West Exhibition Hall C ]
Abstract
Many different threads in recent work on generative AI address the simultaneous challenge of evaluating an AI system's explicit behavior at one level and its implicit representations of the world at another. Such distinctions become crucial as we interact with powerful AI systems, where a mismatch between the system's model of the world and our model of the world can lead to measurable situations in which the system has inadvertently 'set us up to fail' through our interaction with it. We explore these questions through the lens of generation, drawing on examples from game-playing, geographic navigation, and other complex tasks: When we train a model to win chess games, what happens when we pair it with a weaker partner who makes some of the moves? When we train a model to find shortest paths, what happens when it has to deal with unexpected detours? The picture we construct is further complicated by theoretical results indicating that successful generation can be achieved even by agents that are provably incapable of identifying the model they're generating from.
The talk will include joint work with Ashton Anderson, Karim Hamade, Reid McIlroy-Young, Siddhartha Sen, Justin Chen, Sendhil Mullainathan, Ashesh Rambachan, Keyon Vafa, and Fan …
Invited Talk
[ West Exhibition Hall C ]
Abstract
The development of generative AI models has understandably caused considerable excitement among machine learning professionals. Few have paid attention to the potential copyright implications of using massive amounts of data publicly available on the Internet to train these models. Commercial developers in the U.S. have expressed confidence that the copyright doctrine of fair use would shield them from liability. In the EU, recently adopted text and data mining exceptions seemed to legalize generative AI training. Israel and Japan have similar rules. But with more than forty copyright-related lawsuits pending against the largest generative AI developers in the U.S. and now a few in Canada, and with the EU and UK aiming to require compliance with their laws, copyright is looming large in the future of generative AI developers. While it is seemingly impossible to create a global licensing regime that would cover all uses of all in-copyright works as training data, proposals to establish collective licensing regimes are under discussion in the EU, UK, and U.S. The machine learning community needs to understand enough about these copyright debates to participate meaningfully in shaping legal environments that will foster innovation in this field, support scientific research, create socially valuable tools, and …
Invited Talk
[ West Exhibition Hall C ]
Abstract
As artificial intelligence systems become deeply embedded in our institutions, economies, and personal lives, the challenge of alignment—ensuring AI acts in accordance with human values and societal norms—has become both urgent and complex.
But what exactly should these systems be aligned to—and how do we know we're getting it right? To address this, we turn to a long-standing body of work: how societies have historically measured public preferences and moral norms—and what often goes wrong in the process.
The talk will introduce underutilized datasets—from decades of survey archives to international value studies—that could serve as empirical benchmarks for aligning AI systems with lived human norms. In addition to highlighting valuable data sources, we will examine how lessons from social science can inform the design of human feedback loops in AI. These insights help avoid common pitfalls in capturing human intentions and preferences—such as measurement error, framing effects, and unrepresentative sampling—that have plagued opinion research for decades.
We'll close by addressing the fluid and evolving nature of societal norms, emphasizing the need for alignment strategies that are adaptive to cultural and temporal change. Achieving this kind of adaptability requires not just better data, but durable collaborations between social scientists and machine …
Invited Talk
[ West Exhibition Hall C ]
Abstract
Moving losses down, and rewards and metrics up: from a robot's arm motion in my PhD, to the policy of a virtual assistant or a self-driving car in my Berkeley lab and later at Waymo, to the Gemini model today at Google DeepMind, that's been the name of the game. But throughout it all, what I cared about more was what those losses, rewards, and metrics ought to be in the first place. What started as an intuition in grad school, that what to optimize is a deeper and harder question than how to optimize, became a central pursuit when I became faculty, as my lab and I sought to understand the ins and outs of how agents can accomplish what we want without unintended side effects. Now, at the heart of frontier AI development, that experience is coming in handy as we work to make Gemini a useful and safe collaborator for humanity.
Invited Talk
[ West Exhibition Hall C ]
Abstract
How can we accelerate scientific discovery when experiments are costly and uncertainty is high? From protein engineering to robotics, data efficiency is critical—but advances in lab automation and the rise of foundation models are creating rich new opportunities for intelligent exploration. In this talk, I’ll share recent work toward closing the loop between learning and experimentation, drawing on active learning, Bayesian optimization, and reinforcement learning. I’ll show how we can guide exploration in complex, high-dimensional spaces; how meta-learned generative priors enable rapid adaptation from simulation to reality; and how even foundation models can be adaptively steered at test time to reduce their epistemic uncertainty. I’ll conclude by highlighting key challenges and exciting opportunities for machine learning to drive optimization and discovery across science and engineering.
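To make the "closing the loop between learning and experimentation" idea concrete, here is a minimal sketch, not taken from the talk itself, of one standard instantiation: Bayesian optimization with a Gaussian-process surrogate and an expected-improvement acquisition rule. The objective `run_experiment`, the candidate grid, and all parameters are hypothetical stand-ins for a costly lab measurement.

```python
# Minimal Bayesian-optimization loop (illustrative sketch, not the speaker's code).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def run_experiment(x):
    # Hypothetical expensive, noisy objective standing in for a lab experiment.
    return -np.sin(3 * x) - x**2 + 0.7 * x + rng.normal(scale=0.05)

candidates = np.linspace(-1.0, 2.0, 200).reshape(-1, 1)  # design space
X = rng.uniform(-1.0, 2.0, size=(3, 1))                  # initial experiments
y = np.array([run_experiment(x.item()) for x in X])

for _ in range(10):  # fixed experiment budget
    # Fit a probabilistic surrogate to all experiments run so far.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  alpha=1e-3, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)

    # Expected improvement over the best observation: trades off the
    # surrogate's mean prediction against its epistemic uncertainty.
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    # Run the most promising experiment and fold the result back in.
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next.item()))

print("best observed:", X[np.argmax(y)].item(), y.max())
```

The loop structure, fit a surrogate, score candidates by an acquisition function, run the chosen experiment, repeat, is the data-efficient exploration pattern the abstract describes; richer variants swap in meta-learned priors or foundation-model surrogates.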