

ICML 2020 Awards

To recognize excellent work conducted by members of the ICML community, we have two types of awards. The Test of Time award is given to a paper from ICML ten years ago that has had significant impact, and Outstanding Paper awards are given to papers in the current ICML that are exemplary.

Test of Time Award

The Test of Time Award is given to a paper from ICML ten years ago that has had substantial impact on the field of machine learning, in both research and practice. The recipient was selected by the ICML 2020 General Chair (David Blei) and Program Co-Chairs (Aarti Singh and Hal Daumé III) in consultation with a Test of Time Award committee, whose members are listed below and include the General and Program Chairs from ICML 2010.

The recipients of the Test of Time award will give a plenary talk at the following times on Monday July 13:

  • Live talk & Q/A @ 11:00-12:00 UTC (4:00 PDT, 7:00 EDT, 13:00 CEST, 16:30 IST, 19:00 CST)
  • Recorded talk & Live Q/A @ 21:00-22:00 UTC (14:00 PDT, 17:00 EDT, 23:00 CEST; Tuesday July 14, 2:30 IST, 5:00 CST)

Once registered, you can access the talks at https://icml.cc/virtual/2020/awards.html

We are thrilled to announce that the 2020 Test of Time Award goes to....

Gaussian Process Optimization in the Bandit Setting:
No Regret and Experimental Design

    by: Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger

Abstract: Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.
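To make the abstract concrete, here is a minimal sketch of the GP-UCB idea on a 1-D toy problem: maintain a Gaussian process posterior over the unknown function and, at each round, query the point maximizing the upper confidence bound mean + sqrt(beta) * stddev. The kernel, lengthscale, beta schedule, and toy objective below are illustrative choices, not the paper's exact experimental setup.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential covariance k(x, x') = exp(-(x - x')^2 / (2 l^2)).
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X_obs, y_obs, X_query, noise=1e-3):
    # Standard GP regression: posterior mean and variance at the query points.
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf_kernel(X_query, X_obs)
    mu = Ks @ np.linalg.solve(K, y_obs)
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1)  # prior variance k(x, x) = 1
    return mu, np.maximum(var, 1e-12)

def gp_ucb(f, domain, T=30, noise=1e-3):
    # GP-UCB: each round, pick the point maximizing mu + sqrt(beta * var).
    X_obs = [domain[len(domain) // 2]]   # arbitrary first query (midpoint)
    y_obs = [f(X_obs[0])]
    for t in range(1, T):
        mu, var = gp_posterior(np.array(X_obs), np.array(y_obs), domain, noise)
        # Illustrative confidence schedule, loosely following beta_t ~ log(|D| t^2).
        beta = 2.0 * np.log(len(domain) * (t + 1) ** 2)
        x_next = domain[np.argmax(mu + np.sqrt(beta * var))]
        X_obs.append(x_next)
        y_obs.append(f(x_next))
    return np.array(X_obs), np.array(y_obs)

# Toy objective: a single smooth peak at x = 0.6, plus small observation noise.
rng = np.random.default_rng(0)
f = lambda x: np.exp(-(x - 0.6) ** 2 / 0.05) + 1e-3 * rng.standard_normal()
domain = np.linspace(0.0, 1.0, 201)
X, y = gp_ucb(f, domain, T=30)
print("best x found:", X[np.argmax(y)])
```

Early rounds are dominated by the exploration bonus (posterior variance is large away from observed points), so the algorithm covers the domain; as the variance shrinks, queries concentrate where the posterior mean is highest.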

This paper brought together the fields of Bayesian optimization, bandits, and experimental design by analyzing Gaussian process bandit optimization, giving a novel approach to deriving finite-sample regret bounds in terms of a mutual-information-gain quantity. Over the past ten years the paper has had profound impact through the method itself, the proof techniques it introduced, and its practical results; these have enriched our community by sparking creativity in myriad subsequent works, ranging from theory to practice. When nominating papers for this award, members of the award committee had the following to say about this paper:
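For reference, the central quantities from the paper's main results can be stated (informally, omitting constants and the precise high-probability conditions) as follows:

```latex
% Cumulative regret after T rounds, where x^* maximizes the unknown f:
R_T = \sum_{t=1}^{T} \big( f(x^*) - f(x_t) \big)

% GP-UCB guarantee: with high probability,
R_T = O^{*}\!\left( \sqrt{T\,\beta_T\,\gamma_T} \right)

% where \gamma_T is the maximal information gain after T rounds:
\gamma_T = \max_{A \subseteq D,\; |A| = T} I\big( y_A ; f_A \big)

% e.g., for the squared-exponential kernel in d dimensions,
\gamma_T = O\big( (\log T)^{d+1} \big),
% which yields sublinear regret with only mild dependence on d.
```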

"The results of this paper have influenced current methods of hyperparameter search in modern deep learning systems, [and the algorithm] continues to be a popular acquisition algorithm even after a decade. The theorems and lemmas of this paper are quoted in recent papers, and their proof technique is borrowed in follow-up papers. On the duo of technical depth and impact this paper clearly stands out."

"Bayesian optimization has become a powerful tool in many machine learning problems in the last 10 years.... the mathematical technique used in the proof influenced subsequent research significantly."

"This is a nice, original analysis that has influenced many subsequent papers on GP bandit optimization and Bayesian optimization, and underlies the convergence analysis in very many Bayesian Optimization papers. Hence, it has become a well-known paper, which has had influence both in theory and practice, due to the ongoing practical impact of Bayesian Optimization for experimental design, hyperparameter tuning and more."

"Bayesian optimization using Gaussian processes is an important technique for black-box hyperparameter tuning and autoML. However, there was no explicit finite sample convergence theory for this method. This paper treats Bayesian optimization as a bandit problem, and obtained a cumulative regret bound for GP-UCB in terms of the information gain. This leads to a first finite sample theoretical analysis for Bayesian optimization.... it will have significant long term impact both theoretically and practically."

Thank you to the Test of Time Award Committee Members:

  • Anima Anandkumar
  • Johannes Fürnkranz (ICML 2010 Program Co-Chair)
  • Stefanie Jegelka
  • Thorsten Joachims (ICML 2010 Program Co-Chair)
  • John Langford
  • Hugo Larochelle
  • Claire Monteleoni
  • Sunita Sarawagi
  • Masashi Sugiyama
  • Stefan Wrobel (ICML 2010 General Chair)
  • Tong Zhang


Outstanding Paper Awards

The Outstanding Paper awards are given to papers from the current conference that are strong representatives of solid theoretical and empirical work in our field. The following awards were announced at the presentation of the Test of Time award during the conference.

Outstanding Paper Awards:

On Learning Sets of Symmetric Elements

Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya

Tuning-free Plug-and-Play Proximal Algorithm for Inverse Imaging Problems 

Kaixuan Wei, Angelica I Aviles-Rivero, Jingwei Liang, Ying Fu, Carola-Bibiane Schönlieb, Hua Huang

Outstanding Paper (Honorable mentions):

Efficiently sampling functions from Gaussian process posteriors 

James Wilson, Slava Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Deisenroth

Generative Pretraining From Pixels 

Mark Chen, Alec Radford, Rewon Child, Jeffrey K Wu, Heewoo Jun, David Luan, Ilya Sutskever

The award recipients were selected from a pool of papers (nominated by Meta Reviewers) by the Paper Awards committee:

  • Alina Beygelzimer (co-chair)
  • Jennifer Dy (co-chair)
  • Nicolas Le Roux (co-chair)
  • Sébastien Bubeck
  • Ali Eslami
  • Mario Figueiredo
  • Ankur Moitra
  • Sayan Mukherjee
  • Sunita Sarawagi
  • Matthias Seeger
  • Jun Zhu