Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence-based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
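To make the selection rule concrete, below is a minimal sketch of GP-UCB on a finite candidate set: at each round the algorithm maximizes the posterior mean plus an exploration bonus, mu(x) + sqrt(beta_t) * sigma(x). The sketch assumes an RBF kernel with a fixed lengthscale, known Gaussian observation noise, and the beta_t schedule for finite decision sets; the toy objective, hyperparameters, and helper names are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 l^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def gp_posterior(X_obs, y_obs, X_query, noise_var=1e-2):
    """GP posterior mean and standard deviation at X_query given noisy observations."""
    K = rbf_kernel(X_obs, X_obs) + noise_var * np.eye(len(X_obs))
    K_s = rbf_kernel(X_obs, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    # Prior variance of the RBF kernel at any point is 1.
    var = np.clip(1.0 - (v * v).sum(0), 1e-12, None)
    return mu, np.sqrt(var)

def gp_ucb(f, candidates, T=50, noise_std=0.1, delta=0.1, rng=np.random.default_rng(0)):
    """Run GP-UCB: each round, pick the candidate maximizing mu + sqrt(beta_t) * sigma."""
    X_obs = candidates[rng.integers(len(candidates))][None, :]  # arbitrary first query
    y_obs = np.array([f(X_obs[0]) + noise_std * rng.standard_normal()])
    for t in range(2, T + 1):
        # beta_t for a finite decision set D, holding with probability >= 1 - delta.
        beta_t = 2 * np.log(len(candidates) * t ** 2 * np.pi ** 2 / (6 * delta))
        mu, sigma = gp_posterior(X_obs, y_obs, candidates, noise_var=noise_std ** 2)
        x_next = candidates[np.argmax(mu + np.sqrt(beta_t) * sigma)]
        y_next = f(x_next) + noise_std * rng.standard_normal()
        X_obs = np.vstack([X_obs, x_next])
        y_obs = np.append(y_obs, y_next)
    return X_obs, y_obs

if __name__ == "__main__":
    # Hypothetical 1-D objective, used only to exercise the sketch.
    f = lambda x: float(np.sin(3 * x[0]) + x[0])
    D = np.linspace(0.0, 2.0, 200)[:, None]
    X, y = gp_ucb(f, D)
    print("best observed value:", y.max(), "at x =", X[np.argmax(y)])
```

The exploration bonus sqrt(beta_t) * sigma is what ties the cumulative regret to the maximal information gain: points with high posterior variance are exactly those whose observation is most informative about f.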
Author Information
Niranjan Srinivas (10x Genomics)
Niranjan Srinivas is a Senior Computational Biologist at 10x Genomics, where he leads the development of new technologies that enable high-throughput biological measurements and analysis. Prior to this, he was a Damon Runyon Postdoctoral Fellow at the University of California, Berkeley. He earned his Ph.D. in Computation and Neural Systems from Caltech (2015), where his doctoral dissertation won the Demetriades-Tsafka-Kokkalis prize for the best thesis, publication, or discovery in nanotechnology and related fields.
Andreas Krause (ETH Zurich)
Andreas Krause is a Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group. He also serves as Academic Co-Director of the Swiss Data Science Center. Before that, he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Microsoft Research Faculty Fellow and a Kavli Frontiers Fellow of the US National Academy of Sciences. He received ERC Starting Investigator and ERC Consolidator grants, the Deutscher Mustererkennungspreis, an NSF CAREER award, the Okawa Foundation Research Grant recognizing top young researchers in telecommunications, as well as the ETH Golden Owl teaching award. His research on machine learning and adaptive systems has received awards at several premier conferences and journals, including the ACM SIGKDD Test of Time award 2019 and the ICML Test of Time award 2020. Andreas Krause served as Program Co-Chair for ICML 2018, regularly serves as Area Chair or Senior Program Committee member for ICML, NeurIPS, AAAI and IJCAI, and is an Action Editor for the Journal of Machine Learning Research.
Sham Kakade (University of Washington)
Sham Kakade is a Washington Research Foundation Data Science Chair, with a joint appointment in the Department of Computer Science and the Department of Statistics at the University of Washington, and is a co-director of the Algorithmic Foundations of Data Science Institute. He works on the mathematical foundations of machine learning and AI. Sham's thesis helped lay the foundations of the PAC-MDP framework for reinforcement learning. With his collaborators, his additional contributions include: one of the first provably efficient policy search methods for reinforcement learning, Conservative Policy Iteration; the mathematical foundations for the widely used linear bandit and Gaussian process bandit models; tensor and spectral methodologies for provable estimation of latent variable models (applicable to mixtures of Gaussians, HMMs, and LDA); and the first sharp analysis of the perturbed gradient descent algorithm, along with the design and analysis of numerous other convex and non-convex algorithms. He is the recipient of the IBM Goldberg best paper award (2007) for contributions to fast nearest neighbor search and of the INFORMS Revenue Management and Pricing Section best paper prize (2014). He was program chair for COLT 2011. Sham was an undergraduate at Caltech, where he studied physics and worked under the guidance of John Preskill in quantum computing. He then completed his Ph.D. in computational neuroscience at the Gatsby Unit at University College London, under the supervision of Peter Dayan. He was a postdoc in the Department of Computer Science at the University of Pennsylvania, where he broadened his studies to include computational game theory and economics under the guidance of Michael Kearns. Sham has been a Principal Research Scientist at Microsoft Research, New England, an associate professor in the Department of Statistics at Wharton, UPenn, and an assistant professor at the Toyota Technological Institute at Chicago.
Matthias W Seeger (Amazon Research)
Matthias W. Seeger received a Ph.D. from the School of Informatics, University of Edinburgh, UK, in 2003 (advisor Christopher Williams). He was a research fellow with Michael Jordan and Peter Bartlett at the University of California, Berkeley, from 2003, and with Bernhard Schoelkopf at the Max Planck Institute for Intelligent Systems, Tuebingen, Germany, from 2005. He led a research group at the University of Saarbruecken, Germany, from 2008, and was an assistant professor at the Ecole Polytechnique Federale de Lausanne from fall 2010. He joined Amazon as a machine learning scientist in 2014.
More from the Same Authors
- 2020 Poster: From Sets to Multisets: Provable Variational Inference for Probabilistic Integer Submodular Models »
  Aytunc Sahin · Yatao Bian · Joachim Buhmann · Andreas Krause
- 2020 Poster: Soft Threshold Weight Reparameterization for Learnable Sparsity »
  Aditya Kusupati · Vivek Ramanujan · Raghav Somani · Mitchell Wortsman · Prateek Jain · Sham Kakade · Ali Farhadi
- 2020 Poster: Calibration, Entropy Rates, and Memory in Language Models »
  Mark Braverman · Xinyi Chen · Sham Kakade · Karthik Narasimhan · Cyril Zhang · Yi Zhang
- 2020 Poster: The Implicit and Explicit Regularization Effects of Dropout »
  Colin Wei · Sham Kakade · Tengyu Ma
- 2020 Poster: Provable Representation Learning for Imitation Learning via Bi-level Optimization »
  Sanjeev Arora · Simon Du · Sham Kakade · Yuping Luo · Nikunj Umesh Saunshi
- 2020 Poster: Meta-learning for Mixed Linear Regression »
  Weihao Kong · Raghav Somani · Zhao Song · Sham Kakade · Sewoong Oh
- 2019 Poster: Online Variance Reduction with Mixtures »
  Zalán Borsos · Sebastian Curi · Yehuda Levy · Andreas Krause
- 2019 Poster: Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces »
  Johannes Kirschner · Mojmir Mutny · Nicole Hiller · Rasmus Ischebeck · Andreas Krause
- 2019 Poster: Online Control with Adversarial Disturbances »
  Naman Agarwal · Brian Bullins · Elad Hazan · Sham Kakade · Karan Singh
- 2019 Oral: Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces »
  Johannes Kirschner · Mojmir Mutny · Nicole Hiller · Rasmus Ischebeck · Andreas Krause
- 2019 Oral: Online Variance Reduction with Mixtures »
  Zalán Borsos · Sebastian Curi · Yehuda Levy · Andreas Krause
- 2019 Oral: Online Control with Adversarial Disturbances »
  Naman Agarwal · Brian Bullins · Elad Hazan · Sham Kakade · Karan Singh
- 2019 Poster: Learning Generative Models across Incomparable Spaces »
  Charlotte Bunne · David Alvarez-Melis · Andreas Krause · Stefanie Jegelka
- 2019 Poster: AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs »
  Gabriele Abbati · Philippe Wenk · Michael A Osborne · Andreas Krause · Bernhard Schölkopf · Stefan Bauer
- 2019 Poster: Provably Efficient Maximum Entropy Exploration »
  Elad Hazan · Sham Kakade · Karan Singh · Abby Van Soest
- 2019 Oral: Provably Efficient Maximum Entropy Exploration »
  Elad Hazan · Sham Kakade · Karan Singh · Abby Van Soest
- 2019 Oral: Learning Generative Models across Incomparable Spaces »
  Charlotte Bunne · David Alvarez-Melis · Andreas Krause · Stefanie Jegelka
- 2019 Oral: AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs »
  Gabriele Abbati · Philippe Wenk · Michael A Osborne · Andreas Krause · Bernhard Schölkopf · Stefan Bauer
- 2019 Poster: Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference »
  Yatao Bian · Joachim Buhmann · Andreas Krause
- 2019 Poster: Online Meta-Learning »
  Chelsea Finn · Aravind Rajeswaran · Sham Kakade · Sergey Levine
- 2019 Poster: Maximum Likelihood Estimation for Learning Populations of Parameters »
  Ramya Korlakai Vinayak · Weihao Kong · Gregory Valiant · Sham Kakade
- 2019 Oral: Maximum Likelihood Estimation for Learning Populations of Parameters »
  Ramya Korlakai Vinayak · Weihao Kong · Gregory Valiant · Sham Kakade
- 2019 Oral: Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference »
  Yatao Bian · Joachim Buhmann · Andreas Krause
- 2019 Oral: Online Meta-Learning »
  Chelsea Finn · Aravind Rajeswaran · Sham Kakade · Sergey Levine
- 2018 Poster: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator »
  Maryam Fazel · Rong Ge · Sham Kakade · Mehran Mesbahi
- 2018 Oral: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator »
  Maryam Fazel · Rong Ge · Sham Kakade · Mehran Mesbahi
- 2017 Workshop: Principled Approaches to Deep Learning »
  Andrzej Pronobis · Robert Gens · Sham Kakade · Pedro Domingos
- 2017 Poster: Guarantees for Greedy Maximization of Non-submodular Functions with Applications »
  Yatao Bian · Joachim Buhmann · Andreas Krause · Sebastian Tschiatschek
- 2017 Poster: Differentially Private Submodular Maximization: Data Summarization in Disguise »
  Marko Mitrovic · Mark Bun · Andreas Krause · Amin Karbasi
- 2017 Poster: Deletion-Robust Submodular Maximization: Data Summarization with "the Right to be Forgotten" »
  Baharan Mirzasoleiman · Amin Karbasi · Andreas Krause
- 2017 Poster: Probabilistic Submodular Maximization in Sub-Linear Time »
  Serban A Stan · Morteza Zadimoghaddam · Andreas Krause · Amin Karbasi
- 2017 Talk: Deletion-Robust Submodular Maximization: Data Summarization with "the Right to be Forgotten" »
  Baharan Mirzasoleiman · Amin Karbasi · Andreas Krause
- 2017 Talk: Probabilistic Submodular Maximization in Sub-Linear Time »
  Serban A Stan · Morteza Zadimoghaddam · Andreas Krause · Amin Karbasi
- 2017 Talk: Guarantees for Greedy Maximization of Non-submodular Functions with Applications »
  Yatao Bian · Joachim Buhmann · Andreas Krause · Sebastian Tschiatschek
- 2017 Talk: Differentially Private Submodular Maximization: Data Summarization in Disguise »
  Marko Mitrovic · Mark Bun · Andreas Krause · Amin Karbasi
- 2017 Poster: Distributed and Provably Good Seedings for k-Means in Constant Rounds »
  Olivier Bachem · Mario Lucic · Andreas Krause
- 2017 Poster: Uniform Deviation Bounds for k-Means Clustering »
  Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2017 Poster: How to Escape Saddle Points Efficiently »
  Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan
- 2017 Talk: How to Escape Saddle Points Efficiently »
  Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan
- 2017 Talk: Uniform Deviation Bounds for k-Means Clustering »
  Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2017 Talk: Distributed and Provably Good Seedings for k-Means in Constant Rounds »
  Olivier Bachem · Mario Lucic · Andreas Krause