Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence-based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
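The paper bounds the cumulative regret R_T of GP-UCB (up to constants and logarithmic factors) by sqrt(T * beta_T * gamma_T), where gamma_T is the maximal information gain after T rounds and beta_T is the confidence-width schedule prescribed by the analysis. For intuition, here is a minimal, illustrative Python sketch of the selection rule x_t = argmax_x mu_{t-1}(x) + sqrt(beta_t) * sigma_{t-1}(x); it is not the authors' reference implementation. It assumes a finite candidate set, a unit-variance RBF kernel with known hyperparameters, a known noise level, and a fixed exploration weight `beta` standing in for the theoretical schedule beta_t.

```python
# Minimal GP-UCB sketch (illustrative only, not the paper's reference code).
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2)); prior variance 1."""
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xstar, noise=0.1):
    """Posterior mean and std at Xstar given noisy observations (X, y)."""
    K = rbf_kernel(X, X) + noise**2 * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    L = np.linalg.cholesky(K)
    mu = Ks.T @ np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - (v**2).sum(0), 1e-12, None)  # k(x, x) = 1 for this kernel
    return mu, np.sqrt(var)

def gp_ucb(f, candidates, T=30, beta=2.0, noise=0.1, seed=0):
    """At each round pick x_t = argmax_x mu(x) + sqrt(beta) * sigma(x)."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for t in range(T):
        if not X:  # no data yet: start with a uniformly random candidate
            idx = rng.integers(len(candidates))
        else:
            mu, sigma = gp_posterior(np.array(X), np.array(y), candidates, noise)
            idx = int(np.argmax(mu + np.sqrt(beta) * sigma))
        x = candidates[idx]
        X.append(x)
        y.append(f(x) + noise * rng.standard_normal())  # noisy payoff
    return np.array(X), np.array(y)

# Usage: maximize a 1-D test function over a grid of 200 candidates.
cands = np.linspace(0.0, 1.0, 200)[:, None]
X, y = gp_ucb(lambda x: np.sin(6.0 * x[0]), cands, T=30)
print("best observed payoff:", y.max())
```

The Cholesky-based posterior is only there to keep the sketch self-contained; in practice any GP library, and the paper's choice of beta_t, would slot in for `gp_posterior` and `beta`.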
Author Information
Niranjan Srinivas (10x Genomics)
Niranjan Srinivas is a Senior Computational Biologist at 10x Genomics, where he leads the development of new technologies that enable high-throughput biological measurements and analysis. Prior to this, he was a Damon Runyon Postdoctoral Fellow at the University of California, Berkeley. He earned his Ph.D. in Computation and Neural Systems from Caltech (2015), where his doctoral dissertation won the Demetriades-Tsafka-Kokkalis Prize for the best thesis, publication, or discovery in nanotechnology and related fields.
Andreas Krause (ETH Zurich)
Andreas Krause is a Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group. He also serves as Academic Co-Director of the Swiss Data Science Center and Chair of the ETH AI Center, and co-founded the ETH spin-off LatticeFlow. Before that, he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Max Planck Fellow at the Max Planck Institute for Intelligent Systems, an ELLIS Fellow, a Microsoft Research Faculty Fellow, and a Kavli Frontiers Fellow of the US National Academy of Sciences. He received the Rössler Prize, ERC Starting and Consolidator grants, the German Pattern Recognition Award, an NSF CAREER award, and the ETH Golden Owl teaching award. His research has received awards at several premier conferences and journals, including the ACM SIGKDD Test of Time Award 2019 and the ICML Test of Time Award 2020. Andreas Krause served as Program Co-Chair for ICML 2018, and currently serves as General Chair for ICML 2023 and as Action Editor for the Journal of Machine Learning Research.
Sham Kakade (University of Washington)
Sham Kakade is a Gordon McKay Professor of Computer Science and Statistics at Harvard University and a co-director of the recently announced Kempner Institute. He works on the mathematical foundations of machine learning and AI. Sham's thesis helped lay the statistical foundations of reinforcement learning. With his collaborators, his additional contributions include: Conservative Policy Iteration, one of the first provably efficient policy search methods for reinforcement learning; the mathematical foundations for the widely used linear bandit and Gaussian process bandit models; tensor and spectral methodologies for provable estimation of latent variable models; and the first sharp analysis of the perturbed gradient descent algorithm, along with the design and analysis of numerous other convex and non-convex algorithms. He is the recipient of the ICML Test of Time Award (2020), the IBM Pat Goldberg Best Paper Award (2007), and the INFORMS Revenue Management and Pricing Prize (2014). He was program chair for COLT 2011. Sham was an undergraduate at Caltech, where he studied physics and worked under the guidance of John Preskill in quantum computing. He then completed his Ph.D. in computational neuroscience at the Gatsby Unit at University College London, under the supervision of Peter Dayan. He was a postdoc at the Department of Computer Science, University of Pennsylvania, where he broadened his studies to include computational game theory and economics under the guidance of Michael Kearns. Sham has been a Principal Research Scientist at Microsoft Research, New England, an associate professor at the Department of Statistics, Wharton, University of Pennsylvania, and an assistant professor at the Toyota Technological Institute at Chicago.
Matthias Seeger (Amazon Research)
Matthias W. Seeger received a Ph.D. from the School of Informatics, University of Edinburgh, UK, in 2003 (advisor: Christopher Williams). He was a research fellow with Michael Jordan and Peter Bartlett at the University of California, Berkeley, from 2003, and with Bernhard Schölkopf at the Max Planck Institute for Intelligent Systems, Tübingen, Germany, from 2005. He led a research group at the University of Saarbrücken, Germany, from 2008, and was an assistant professor at the École Polytechnique Fédérale de Lausanne from fall 2010. He joined Amazon as a machine learning scientist in 2014.
More from the Same Authors
- 2021 : A Short Note on the Relationship of Information Gain and Eluder Dimension »
  Kaixuan Huang · Sham Kakade · Jason Lee · Qi Lei
- 2021 : Sparsity in the Partially Controllable LQR »
  Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang
- 2022 : Recovering Stochastic Dynamics via Gaussian Schrödinger Bridges »
  Ya-Ping Hsieh · Charlotte Bunne · Marco Cuturi · Andreas Krause
- 2022 : Recovering Stochastic Dynamics via Gaussian Schrödinger Bridges »
  Charlotte Bunne · Ya-Ping Hsieh · Marco Cuturi · Andreas Krause
- 2023 : Anytime Model Selection in Linear Bandits »
  Parnian Kassraie · Aldo Pacchiano · Nicolas Emmenegger · Andreas Krause
- 2023 : Unbalanced Diffusion Schrödinger Bridge »
  Matteo Pariset · Ya-Ping Hsieh · Charlotte Bunne · Andreas Krause · Valentin De Bortoli
- 2023 : Aligned Diffusion Schrödinger Bridges »
  Vignesh Ram Somnath · Matteo Pariset · Ya-Ping Hsieh · Maria Rodriguez Martinez · Andreas Krause · Charlotte Bunne
- 2023 : Graph Neural Network Powered Bayesian Optimization for Large Molecular Spaces »
  Miles Wang-Henderson · Bartu Soyuer · Parnian Kassraie · Andreas Krause · Ilija Bogunovic
- 2023 Poster: Optimizing Hyperparameters with Conformal Quantile Regression »
  David Salinas · Jacek Golebiowski · Aaron Klein · Matthias Seeger · Cedric Archambeau
- 2023 Panel: ICML Education Outreach Panel »
  Andreas Krause · Barbara Engelhardt · Emma Brunskill · Kyunghyun Cho
- 2022 Workshop: Adaptive Experimental Design and Active Learning in the Real World »
  Mojmir Mutny · Willie Neiswanger · Ilija Bogunovic · Stefano Ermon · Yisong Yue · Andreas Krause
- 2022 Poster: Learning to Cut by Looking Ahead: Cutting Plane Selection via Imitation Learning »
  Max Paulus · Giulia Zarpellon · Andreas Krause · Laurent Charlin · Chris Maddison
- 2022 Spotlight: Learning to Cut by Looking Ahead: Cutting Plane Selection via Imitation Learning »
  Max Paulus · Giulia Zarpellon · Andreas Krause · Laurent Charlin · Chris Maddison
- 2022 Poster: Interactively Learning Preference Constraints in Linear Bandits »
  David Lindner · Sebastian Tschiatschek · Katja Hofmann · Andreas Krause
- 2022 Spotlight: Interactively Learning Preference Constraints in Linear Bandits »
  David Lindner · Sebastian Tschiatschek · Katja Hofmann · Andreas Krause
- 2022 Poster: Adaptive Gaussian Process Change Point Detection »
  Edoardo Caldarelli · Philippe Wenk · Stefan Bauer · Andreas Krause
- 2022 Poster: Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation »
  Pier Giuseppe Sessa · Maryam Kamgarpour · Andreas Krause
- 2022 Poster: Meta-Learning Hypothesis Spaces for Sequential Decision-making »
  Parnian Kassraie · Jonas Rothfuss · Andreas Krause
- 2022 Spotlight: Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation »
  Pier Giuseppe Sessa · Maryam Kamgarpour · Andreas Krause
- 2022 Spotlight: Meta-Learning Hypothesis Spaces for Sequential Decision-making »
  Parnian Kassraie · Jonas Rothfuss · Andreas Krause
- 2022 Spotlight: Adaptive Gaussian Process Change Point Detection »
  Edoardo Caldarelli · Philippe Wenk · Stefan Bauer · Andreas Krause
- 2021 : Data Summarization via Bilevel Coresets »
  Andreas Krause
- 2021 Poster: PopSkipJump: Decision-Based Attack for Probabilistic Classifiers »
  Carl-Johann Simon-Gabriel · Noman Ahmed Sheikh · Andreas Krause
- 2021 Spotlight: PopSkipJump: Decision-Based Attack for Probabilistic Classifiers »
  Carl-Johann Simon-Gabriel · Noman Ahmed Sheikh · Andreas Krause
- 2021 Poster: PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees »
  Jonas Rothfuss · Vincent Fortuin · Martin Josifoski · Andreas Krause
- 2021 Spotlight: PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees »
  Jonas Rothfuss · Vincent Fortuin · Martin Josifoski · Andreas Krause
- 2021 Poster: Online Submodular Resource Allocation with Applications to Rebalancing Shared Mobility Systems »
  Pier Giuseppe Sessa · Ilija Bogunovic · Andreas Krause · Maryam Kamgarpour
- 2021 Poster: How Important is the Train-Validation Split in Meta-Learning? »
  Yu Bai · Minshuo Chen · Pan Zhou · Tuo Zhao · Jason Lee · Sham Kakade · Huan Wang · Caiming Xiong
- 2021 Spotlight: Online Submodular Resource Allocation with Applications to Rebalancing Shared Mobility Systems »
  Pier Giuseppe Sessa · Ilija Bogunovic · Andreas Krause · Maryam Kamgarpour
- 2021 Spotlight: How Important is the Train-Validation Split in Meta-Learning? »
  Yu Bai · Minshuo Chen · Pan Zhou · Tuo Zhao · Jason Lee · Sham Kakade · Huan Wang · Caiming Xiong
- 2021 Poster: Bilinear Classes: A Structural Framework for Provable Generalization in RL »
  Simon Du · Sham Kakade · Jason Lee · Shachar Lovett · Gaurav Mahajan · Wen Sun · Ruosong Wang
- 2021 Poster: Instabilities of Offline RL with Pre-Trained Neural Representation »
  Ruosong Wang · Yifan Wu · Ruslan Salakhutdinov · Sham Kakade
- 2021 Poster: No-regret Algorithms for Capturing Events in Poisson Point Processes »
  Mojmir Mutny · Andreas Krause
- 2021 Poster: Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning »
  Sebastian Curi · Ilija Bogunovic · Andreas Krause
- 2021 Spotlight: No-regret Algorithms for Capturing Events in Poisson Point Processes »
  Mojmir Mutny · Andreas Krause
- 2021 Spotlight: Instabilities of Offline RL with Pre-Trained Neural Representation »
  Ruosong Wang · Yifan Wu · Ruslan Salakhutdinov · Sham Kakade
- 2021 Spotlight: Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning »
  Sebastian Curi · Ilija Bogunovic · Andreas Krause
- 2021 Oral: Bilinear Classes: A Structural Framework for Provable Generalization in RL »
  Simon Du · Sham Kakade · Jason Lee · Shachar Lovett · Gaurav Mahajan · Wen Sun · Ruosong Wang
- 2021 Poster: Bias-Robust Bayesian Optimization via Dueling Bandits »
  Johannes Kirschner · Andreas Krause
- 2021 Poster: Fast Projection Onto Convex Smooth Constraints »
  Ilnura Usmanova · Maryam Kamgarpour · Andreas Krause · Kfir Levy
- 2021 Spotlight: Fast Projection Onto Convex Smooth Constraints »
  Ilnura Usmanova · Maryam Kamgarpour · Andreas Krause · Kfir Levy
- 2021 Spotlight: Bias-Robust Bayesian Optimization via Dueling Bandits »
  Johannes Kirschner · Andreas Krause
- 2020 : QA for invited talk 8 Kakade »
  Sham Kakade
- 2020 : Invited talk 8 Kakade »
  Sham Kakade
- 2020 : Constrained Maximization of Lattice Submodular Functions »
  Aytunc Sahin · Joachim Buhmann · Andreas Krause
- 2020 : Speaker Panel »
  Csaba Szepesvari · Martha White · Sham Kakade · Gergely Neu · Shipra Agrawal · Akshay Krishnamurthy
- 2020 : Exploration, Policy Gradient Methods, and the Deadly Triad - Sham Kakade »
  Sham Kakade
- 2020 Poster: From Sets to Multisets: Provable Variational Inference for Probabilistic Integer Submodular Models »
  Aytunc Sahin · Yatao Bian · Joachim Buhmann · Andreas Krause
- 2020 Poster: Soft Threshold Weight Reparameterization for Learnable Sparsity »
  Aditya Kusupati · Vivek Ramanujan · Raghav Somani · Mitchell Wortsman · Prateek Jain · Sham Kakade · Ali Farhadi
- 2020 Poster: Calibration, Entropy Rates, and Memory in Language Models »
  Mark Braverman · Xinyi Chen · Sham Kakade · Karthik Narasimhan · Cyril Zhang · Yi Zhang
- 2020 Poster: The Implicit and Explicit Regularization Effects of Dropout »
  Colin Wei · Sham Kakade · Tengyu Ma
- 2020 Poster: Provable Representation Learning for Imitation Learning via Bi-level Optimization »
  Sanjeev Arora · Simon Du · Sham Kakade · Yuping Luo · Nikunj Umesh Saunshi
- 2020 Poster: Meta-learning for Mixed Linear Regression »
  Weihao Kong · Raghav Somani · Zhao Song · Sham Kakade · Sewoong Oh
- 2019 : Keynote by Sham Kakade: Prediction, Learning, and Memory »
  Sham Kakade
- 2019 Poster: Online Variance Reduction with Mixtures »
  Zalán Borsos · Sebastian Curi · Yehuda Levy · Andreas Krause
- 2019 Poster: Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces »
  Johannes Kirschner · Mojmir Mutny · Nicole Hiller · Rasmus Ischebeck · Andreas Krause
- 2019 Poster: Online Control with Adversarial Disturbances »
  Naman Agarwal · Brian Bullins · Elad Hazan · Sham Kakade · Karan Singh
- 2019 Oral: Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces »
  Johannes Kirschner · Mojmir Mutny · Nicole Hiller · Rasmus Ischebeck · Andreas Krause
- 2019 Oral: Online Variance Reduction with Mixtures »
  Zalán Borsos · Sebastian Curi · Yehuda Levy · Andreas Krause
- 2019 Oral: Online Control with Adversarial Disturbances »
  Naman Agarwal · Brian Bullins · Elad Hazan · Sham Kakade · Karan Singh
- 2019 Poster: Learning Generative Models across Incomparable Spaces »
  Charlotte Bunne · David Alvarez-Melis · Andreas Krause · Stefanie Jegelka
- 2019 Poster: AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs »
  Gabriele Abbati · Philippe Wenk · Michael A Osborne · Andreas Krause · Bernhard Schölkopf · Stefan Bauer
- 2019 Poster: Provably Efficient Maximum Entropy Exploration »
  Elad Hazan · Sham Kakade · Karan Singh · Abby Van Soest
- 2019 Oral: Provably Efficient Maximum Entropy Exploration »
  Elad Hazan · Sham Kakade · Karan Singh · Abby Van Soest
- 2019 Oral: Learning Generative Models across Incomparable Spaces »
  Charlotte Bunne · David Alvarez-Melis · Andreas Krause · Stefanie Jegelka
- 2019 Oral: AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs »
  Gabriele Abbati · Philippe Wenk · Michael A Osborne · Andreas Krause · Bernhard Schölkopf · Stefan Bauer
- 2019 Poster: Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference »
  Yatao Bian · Joachim Buhmann · Andreas Krause
- 2019 Poster: Online Meta-Learning »
  Chelsea Finn · Aravind Rajeswaran · Sham Kakade · Sergey Levine
- 2019 Poster: Maximum Likelihood Estimation for Learning Populations of Parameters »
  Ramya Korlakai Vinayak · Weihao Kong · Gregory Valiant · Sham Kakade
- 2019 Oral: Maximum Likelihood Estimation for Learning Populations of Parameters »
  Ramya Korlakai Vinayak · Weihao Kong · Gregory Valiant · Sham Kakade
- 2019 Oral: Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference »
  Yatao Bian · Joachim Buhmann · Andreas Krause
- 2019 Oral: Online Meta-Learning »
  Chelsea Finn · Aravind Rajeswaran · Sham Kakade · Sergey Levine
- 2018 Poster: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator »
  Maryam Fazel · Rong Ge · Sham Kakade · Mehran Mesbahi
- 2018 Oral: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator »
  Maryam Fazel · Rong Ge · Sham Kakade · Mehran Mesbahi
- 2017 Workshop: Principled Approaches to Deep Learning »
  Andrzej Pronobis · Robert Gens · Sham Kakade · Pedro Domingos
- 2017 Poster: Guarantees for Greedy Maximization of Non-submodular Functions with Applications »
  Yatao Bian · Joachim Buhmann · Andreas Krause · Sebastian Tschiatschek
- 2017 Poster: Differentially Private Submodular Maximization: Data Summarization in Disguise »
  Marko Mitrovic · Mark Bun · Andreas Krause · Amin Karbasi
- 2017 Poster: Deletion-Robust Submodular Maximization: Data Summarization with "the Right to be Forgotten" »
  Baharan Mirzasoleiman · Amin Karbasi · Andreas Krause
- 2017 Poster: Probabilistic Submodular Maximization in Sub-Linear Time »
  Serban A Stan · Morteza Zadimoghaddam · Andreas Krause · Amin Karbasi
- 2017 Talk: Deletion-Robust Submodular Maximization: Data Summarization with "the Right to be Forgotten" »
  Baharan Mirzasoleiman · Amin Karbasi · Andreas Krause
- 2017 Talk: Probabilistic Submodular Maximization in Sub-Linear Time »
  Serban A Stan · Morteza Zadimoghaddam · Andreas Krause · Amin Karbasi
- 2017 Talk: Guarantees for Greedy Maximization of Non-submodular Functions with Applications »
  Yatao Bian · Joachim Buhmann · Andreas Krause · Sebastian Tschiatschek
- 2017 Talk: Differentially Private Submodular Maximization: Data Summarization in Disguise »
  Marko Mitrovic · Mark Bun · Andreas Krause · Amin Karbasi
- 2017 Poster: Distributed and Provably Good Seedings for k-Means in Constant Rounds »
  Olivier Bachem · Mario Lucic · Andreas Krause
- 2017 Poster: Uniform Deviation Bounds for k-Means Clustering »
  Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2017 Poster: How to Escape Saddle Points Efficiently »
  Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan
- 2017 Talk: How to Escape Saddle Points Efficiently »
  Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan
- 2017 Talk: Uniform Deviation Bounds for k-Means Clustering »
  Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2017 Talk: Distributed and Provably Good Seedings for k-Means in Constant Rounds »
  Olivier Bachem · Mario Lucic · Andreas Krause