Invited Talks
Many different threads in recent work on generative AI address the simultaneous challenge of evaluating an AI system's explicit behavior at one level and its implicit representations of the world at another. Such distinctions become crucial as we interact with powerful AI systems, where a mismatch between the system's model of the world and our model of the world can lead to measurable situations in which the system has inadvertently "set us up to fail" through our interaction with it. We explore these questions through the lens of generation, drawing on examples from game-playing, geographic navigation, and other complex tasks: When we train a model to win chess games, what happens when we pair it with a weaker partner who makes some of the moves? When we train a model to find shortest paths, what happens when it has to deal with unexpected detours? The picture we construct is further complicated by theoretical results indicating that successful generation can be achieved even by agents that are provably incapable of identifying the model they're generating from.
The talk will include joint work with Ashton Anderson, Karim Hamade, Reid McIlroy-Young, Siddhartha Sen, Justin Chen, Sendhil Mullainathan, Ashesh Rambachan, Keyon Vafa, and Fan Wei.

Jon Kleinberg
I am a professor at Cornell University. My research focuses on algorithms and networks, the roles they play in large-scale social and information systems, and their broader societal implications. My work has been supported by an NSF CAREER Award, an ONR Young Investigator Award, a MacArthur Foundation Fellowship, a Packard Foundation Fellowship, a Simons Investigator Award, a Sloan Foundation Fellowship, a Vannevar Bush Faculty Fellowship, and grants from Facebook, Google, Yahoo, the MacArthur and Simons Foundations, and the AFOSR, ARO, and NSF. I am a member of the National Academy of Sciences, the National Academy of Engineering, the American Academy of Arts and Sciences, and the American Philosophical Society.
The development of generative AI models has understandably caused considerable excitement among machine learning professionals. Few have paid attention to the potential copyright implications of using massive amounts of data publicly available on the Internet to train these models. Commercial developers in the U.S. have expressed confidence that the copyright doctrine of fair use would shield them from liability. In the EU, recently adopted text and data mining exceptions seemed to legalize generative AI training. Israel and Japan have similar rules. But with more than forty copyright-related lawsuits pending against the largest generative AI developers in the U.S. and now a few in Canada, and with the EU and UK aiming to require compliance with their laws, copyright is looming large in the future for generative AI developers. While it is seemingly impossible to create a global licensing regime that would cover all uses of all in-copyright works as training data, proposals to establish collective licensing regimes are under discussion in the EU, UK, and U.S. The machine learning community needs to understand enough about these copyright debates to participate meaningfully in shaping legal environments that will foster innovation in this field, support scientific research, create socially valuable tools, and treat works and their authors with respect.

Pamela Samuelson
Pamela Samuelson is the Richard M. Sherman Distinguished Professor of Law and Information at the University of California, Berkeley. She is recognized as a pioneer in digital copyright law, intellectual property, cyberlaw, and information policy. Since 1996, she has held a joint appointment at Berkeley Law School and UC Berkeley’s School of Information. Samuelson is a director of the internationally renowned Berkeley Center for Law & Technology. She is co-founder and chair of the board of Authors Alliance, a nonprofit organization that promotes the public interest in access to knowledge. She also serves on the board of directors of the Electronic Frontier Foundation, as well as on the advisory boards of the Electronic Privacy Information Center, the Center for Democracy & Technology, and Public Knowledge.
As artificial intelligence systems become deeply embedded in our institutions, economies, and personal lives, the challenge of alignment—ensuring AI acts in accordance with human values and societal norms—has become both urgent and complex.
But what exactly should these systems be aligned to—and how do we know we're getting it right? To address this, we turn to a long-standing body of work: how societies have historically measured public preferences and moral norms—and what often goes wrong in the process.
The talk will introduce underutilized datasets—from decades of survey archives to international value studies—that could serve as empirical benchmarks for aligning AI systems with lived human norms. In addition to highlighting valuable data sources, we will examine how lessons from social science can inform the design of human feedback loops in AI. These insights help avoid common pitfalls in capturing human intentions and preferences—such as measurement error, framing effects, and unrepresentative sampling—that have plagued opinion research for decades.
We'll close by addressing the fluid and evolving nature of societal norms, emphasizing the need for alignment strategies that are adaptive to cultural and temporal change. Achieving this kind of adaptability requires not just better data, but durable collaborations between social scientists and machine learning researchers—so that updates to human values can be continuously reflected in system design. The goal is to provoke a deeper, interdisciplinary conversation about what it truly means to align AI with human values—and how to do so responsibly, reliably, and at scale.

Frauke Kreuter
Professor Frauke Kreuter is Co-Director of the Social Data Science Center and faculty member in the Joint Program in Survey Methodology at the University of Maryland, USA; and Professor of Statistics and Data Science at the Ludwig-Maximilians-University of Munich. She is an elected fellow of the American Statistical Association and the 2020 recipient of the Warren Mitofsky Innovators Award of the American Association for Public Opinion Research. In addition to her academic work, Dr. Kreuter is the Founder of the International Program for Survey and Data Science, developed in response to the increasing demand from researchers and practitioners for the appropriate methods and right tools to face a changing data environment; Co-Founder of the Coleridge Initiative (coleridgeinitiative.org), whose goal is to accelerate data-driven research and policy around human beings and their interactions for program management, policy development, and scholarly purposes by enabling efficient, effective, and secure access to sensitive data about society and the economy; and Co-Founder of the German-language podcast Dig Deep.
How to move losses down, and rewards and metrics up: from a robot’s arm motion during my PhD, to the policy of a virtual assistant or a self-driving car in my Berkeley lab and later at Waymo, to the Gemini model today at Google DeepMind, that’s been the name of the game. But throughout it all, what I cared about more was what those losses/rewards/metrics ought to be in the first place. What started as an intuition in grad school – that what to optimize was a deeper and harder question than how to optimize – became a central pursuit when I became faculty, as my lab and I sought to understand the ins and outs of how agents can accomplish what we want without unintended side effects. Now, at the heart of frontier AI development, that experience is coming in handy as we work to make Gemini a useful and safe collaborator for humanity.

Anca Dragan
Anca Dragan co-leads post-training for Gemini and heads AI safety and alignment at Google DeepMind. She is on leave from UC Berkeley, where she is an associate professor in Electrical Engineering and Computer Science and runs the InterACT lab. Anca obtained her PhD from the Robotics Institute at Carnegie Mellon in 2015. She has been honored with several career awards and spotlights, including the Presidential Early Career Award for Scientists and Engineers and a Sloan Fellowship.
How can we accelerate scientific discovery when experiments are costly and uncertainty is high? From protein engineering to robotics, data efficiency is critical—but advances in lab automation and the rise of foundation models are creating rich new opportunities for intelligent exploration. In this talk, I’ll share recent work toward closing the loop between learning and experimentation, drawing on active learning, Bayesian optimization, and reinforcement learning. I’ll show how we can guide exploration in complex, high-dimensional spaces; how meta-learned generative priors enable rapid adaptation from simulation to reality; and how even foundation models can be adaptively steered at test time to reduce their epistemic uncertainty. I’ll conclude by highlighting key challenges and exciting opportunities for machine learning to drive optimization and discovery across science and engineering.

Andreas Krause
Andreas Krause is a Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group, serves as Academic Co-Director of the Swiss Data Science Center and Chair of the ETH AI Center, and co-founded the ETH spin-off LatticeFlow AI. He is a Fellow at the Max Planck Institute for Intelligent Systems, an ACM Fellow, an IEEE Fellow, an ELLIS Fellow, and a Microsoft Research Faculty Fellow. He received the Rössler Prize, ERC Starting Investigator and Consolidator grants, the German Pattern Recognition Award, an NSF CAREER Award, Test of Time Awards at KDD 2019 and ICML 2020, as well as the ETH Golden Owl teaching award. Andreas Krause served as Program Co-Chair for ICML 2018 and General Chair for ICML 2023, and serves as Action Editor for the Journal of Machine Learning Research. From 2023 to 2024, he served on the United Nations’ High-level Advisory Body on AI.