Author Information
Ramya Korlakai Vinayak (University of Washington)
Weihao Kong (Stanford University)
Gregory Valiant (Stanford University)
Sham Kakade (University of Washington)
Sham Kakade is a Gordon McKay Professor of Computer Science and Statistics at Harvard University and a co-director of the recently announced Kempner Institute. He works on the mathematical foundations of machine learning and AI. His thesis helped lay the statistical foundations of reinforcement learning. With his collaborators, his additional contributions include: one of the first provably efficient policy search methods for reinforcement learning, Conservative Policy Iteration; the mathematical foundations for the widely used linear bandit and Gaussian process bandit models; tensor and spectral methodologies for provable estimation of latent variable models; and the first sharp analysis of the perturbed gradient descent algorithm, along with the design and analysis of numerous other convex and nonconvex algorithms. He is the recipient of the ICML Test of Time Award (2020), the IBM Pat Goldberg Best Paper Award (2007), and the INFORMS Revenue Management and Pricing Prize (2014). He served as program chair for COLT 2011. Sham was an undergraduate at Caltech, where he studied physics and worked on quantum computing under the guidance of John Preskill. He then completed his Ph.D. in computational neuroscience at the Gatsby Unit, University College London, under the supervision of Peter Dayan. He was a postdoc in the Department of Computer Science at the University of Pennsylvania, where he broadened his studies to include computational game theory and economics under the guidance of Michael Kearns. Sham has been a Principal Research Scientist at Microsoft Research, New England; an associate professor in the Department of Statistics at Wharton, UPenn; and an assistant professor at the Toyota Technological Institute at Chicago.
Related Events (a corresponding poster, oral, or spotlight)

2019 Poster: Maximum Likelihood Estimation for Learning Populations of Parameters »
Wed. Jun 12th 01:30 – 04:00 AM, Pacific Ballroom #189
More from the Same Authors

2021 : A Short Note on the Relationship of Information Gain and Eluder Dimension »
Kaixuan Huang · Sham Kakade · Jason Lee · Qi Lei 
2021 : Sparsity in the Partially Controllable LQR »
Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang 
2023 : Lexinvariant Language Models »
Qian Huang · Eric Zelikman · Sarah Chen · Yuhuai Wu · Greg Valiant · Percy Liang 
2023 Poster: One-sided Matrix Completion from Two Observations Per Row »
Steven Cao · Percy Liang · Greg Valiant 
2021 Poster: Sinkhorn Label Allocation: Semi-Supervised Classification via Annealed Self-Training »
Kai Sheng Tai · Peter Bailis · Gregory Valiant 
2021 Spotlight: Sinkhorn Label Allocation: Semi-Supervised Classification via Annealed Self-Training »
Kai Sheng Tai · Peter Bailis · Gregory Valiant 
2021 Poster: How Important is the Train-Validation Split in Meta-Learning? »
Yu Bai · Minshuo Chen · Pan Zhou · Tuo Zhao · Jason Lee · Sham Kakade · Huan Wang · Caiming Xiong 
2021 Spotlight: How Important is the Train-Validation Split in Meta-Learning? »
Yu Bai · Minshuo Chen · Pan Zhou · Tuo Zhao · Jason Lee · Sham Kakade · Huan Wang · Caiming Xiong 
2021 Poster: Bilinear Classes: A Structural Framework for Provable Generalization in RL »
Simon Du · Sham Kakade · Jason Lee · Shachar Lovett · Gaurav Mahajan · Wen Sun · Ruosong Wang 
2021 Poster: Instabilities of Offline RL with Pre-Trained Neural Representation »
Ruosong Wang · Yifan Wu · Ruslan Salakhutdinov · Sham Kakade 
2021 Spotlight: Instabilities of Offline RL with Pre-Trained Neural Representation »
Ruosong Wang · Yifan Wu · Ruslan Salakhutdinov · Sham Kakade 
2021 Oral: Bilinear Classes: A Structural Framework for Provable Generalization in RL »
Simon Du · Sham Kakade · Jason Lee · Shachar Lovett · Gaurav Mahajan · Wen Sun · Ruosong Wang 
2020 : Q&A for Invited Talk 8 (Kakade) »
Sham Kakade 
2020 : Invited Talk 8 (Kakade) »
Sham Kakade 
2020 : Speaker Panel »
Csaba Szepesvari · Martha White · Sham Kakade · Gergely Neu · Shipra Agrawal · Akshay Krishnamurthy 
2020 : Exploration, Policy Gradient Methods, and the Deadly Triad - Sham Kakade »
Sham Kakade 
2020 Poster: Soft Threshold Weight Reparameterization for Learnable Sparsity »
Aditya Kusupati · Vivek Ramanujan · Raghav Somani · Mitchell Wortsman · Prateek Jain · Sham Kakade · Ali Farhadi 
2020 Poster: Calibration, Entropy Rates, and Memory in Language Models »
Mark Braverman · Xinyi Chen · Sham Kakade · Karthik Narasimhan · Cyril Zhang · Yi Zhang 
2020 Poster: The Implicit and Explicit Regularization Effects of Dropout »
Colin Wei · Sham Kakade · Tengyu Ma 
2020 Poster: Provable Representation Learning for Imitation Learning via Bilevel Optimization »
Sanjeev Arora · Simon Du · Sham Kakade · Yuping Luo · Nikunj Umesh Saunshi 
2020 Poster: Sample Amplification: Increasing Dataset Size even when Learning is Impossible »
Brian Axelrod · Shivam Garg · Vatsal Sharan · Gregory Valiant 
2020 Poster: Meta-learning for Mixed Linear Regression »
Weihao Kong · Raghav Somani · Zhao Song · Sham Kakade · Sewoong Oh 
2020 Test Of Time: Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design »
Niranjan Srinivas · Andreas Krause · Sham Kakade · Matthias Seeger 
2019 : Keynote by Sham Kakade: Prediction, Learning, and Memory »
Sham Kakade 
2019 Poster: Online Control with Adversarial Disturbances »
Naman Agarwal · Brian Bullins · Elad Hazan · Sham Kakade · Karan Singh 
2019 Oral: Online Control with Adversarial Disturbances »
Naman Agarwal · Brian Bullins · Elad Hazan · Sham Kakade · Karan Singh 
2019 Poster: Compressed Factorization: Fast and Accurate Low-Rank Factorization of Compressively-Sensed Data »
Vatsal Sharan · Kai Sheng Tai · Peter Bailis · Gregory Valiant 
2019 Poster: Equivariant Transformer Networks »
Kai Sheng Tai · Peter Bailis · Gregory Valiant 
2019 Poster: Provably Efficient Maximum Entropy Exploration »
Elad Hazan · Sham Kakade · Karan Singh · Abby Van Soest 
2019 Oral: Provably Efficient Maximum Entropy Exploration »
Elad Hazan · Sham Kakade · Karan Singh · Abby Van Soest 
2019 Oral: Equivariant Transformer Networks »
Kai Sheng Tai · Peter Bailis · Gregory Valiant 
2019 Oral: Compressed Factorization: Fast and Accurate Low-Rank Factorization of Compressively-Sensed Data »
Vatsal Sharan · Kai Sheng Tai · Peter Bailis · Gregory Valiant 
2019 Poster: Online Meta-Learning »
Chelsea Finn · Aravind Rajeswaran · Sham Kakade · Sergey Levine 
2019 Oral: Online Meta-Learning »
Chelsea Finn · Aravind Rajeswaran · Sham Kakade · Sergey Levine 
2018 Poster: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator »
Maryam Fazel · Rong Ge · Sham Kakade · Mehran Mesbahi 
2018 Oral: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator »
Maryam Fazel · Rong Ge · Sham Kakade · Mehran Mesbahi 
2017 Workshop: Principled Approaches to Deep Learning »
Andrzej Pronobis · Robert Gens · Sham Kakade · Pedro Domingos 
2017 Poster: Estimating the unseen from multiple populations »
Aditi Raghunathan · Greg Valiant · James Zou 
2017 Poster: Orthogonalized ALS: A Theoretically Principled Tensor Decomposition Algorithm for Practical Use »
Vatsal Sharan · Gregory Valiant 
2017 Poster: How to Escape Saddle Points Efficiently »
Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan 
2017 Talk: How to Escape Saddle Points Efficiently »
Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan 
2017 Talk: Estimating the unseen from multiple populations »
Aditi Raghunathan · Greg Valiant · James Zou 
2017 Talk: Orthogonalized ALS: A Theoretically Principled Tensor Decomposition Algorithm for Practical Use »
Vatsal Sharan · Gregory Valiant