Tutorial
Random Matrix Theory and ML (RMT+ML)
Fabian Pedregosa · Courtney Paquette · Thomas Trogdon · Jeffrey Pennington
Mon Jul 19 12:00 PM -- 03:00 PM (PDT) @ Virtual
Event URL: https://random-matrix-learning.github.io/
In recent years, random matrix theory (RMT) has come to the forefront of learning theory as a tool for understanding some of its most important challenges. From the generalization of deep learning models to the precise analysis of optimization algorithms, RMT provides analytically tractable models.
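As a flavor of the "analytically tractable models" mentioned above (this example is not part of the tutorial materials, just a minimal sketch): one of the classical RMT results relevant to ML is the Marchenko-Pastur law, which predicts the limiting eigenvalue spectrum of a sample covariance matrix of i.i.d. data. A few lines of NumPy suffice to compare the empirical spectrum against the theoretical support edges:

```python
# Minimal illustration (assumed example, not from the tutorial): the eigenvalue
# spectrum of the sample covariance of i.i.d. unit-variance Gaussian data
# concentrates on the Marchenko-Pastur support [(1-sqrt(q))^2, (1+sqrt(q))^2],
# where q = d/n is the feature-to-sample aspect ratio.
import numpy as np

rng = np.random.default_rng(0)
n, d = 4000, 1000                       # samples, features
q = d / n                               # aspect ratio
X = rng.standard_normal((n, d))
eigs = np.linalg.eigvalsh(X.T @ X / n)  # spectrum of the sample covariance

# Marchenko-Pastur support edges for unit-variance entries
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
print(f"empirical range: [{eigs.min():.3f}, {eigs.max():.3f}]")
print(f"MP prediction:   [{lam_minus:.3f}, {lam_plus:.3f}]")
```

The empirical extreme eigenvalues land close to the predicted edges (0.25 and 2.25 for q = 1/4), with finite-size fluctuations of order n^(-2/3); this kind of deterministic limiting behavior is what makes RMT-based analyses of generalization and optimization tractable.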
Schedule
Mon 12:00 p.m. - 12:05 p.m. | Live Intro (Introduction by moderator)
Mon 12:05 p.m. - 1:06 p.m. | Introduction (Tutorial) | Fabian Pedregosa · Courtney Paquette
Mon 1:06 p.m. - 1:30 p.m. | Q&A (Live Q&A)
Mon 1:30 p.m. - 2:00 p.m. | Analysis of numerical algorithms (Tutorial) | Thomas Trogdon
Mon 2:00 p.m. - 2:15 p.m. | Q&A (Live Q&A)
Mon 2:15 p.m. - 2:45 p.m. | The Mystery of Generalization: Why Does Deep Learning Work? (Tutorial) | Jeffrey Pennington
Mon 2:45 p.m. - 3:00 p.m. | Q&A (Live Q&A)
Author Information
Fabian Pedregosa (Google)
Courtney Paquette (McGill University)
Thomas Trogdon (University of Washington)
Jeffrey Pennington (Google Brain)
More from the Same Authors
- 2023 Workshop: HiLD: High-dimensional Learning Dynamics Workshop
  Courtney Paquette · Zhenyu Liao · Mihai Nica · Elliot Paquette · Andrew Saxe · Rene Vidal
- 2023 Poster: Second-order regression models exhibit progressive sharpening to the edge of stability
  Atish Agarwala · Fabian Pedregosa · Jeffrey Pennington
- 2022 Poster: Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm
  Lechao Xiao · Jeffrey Pennington
- 2022 Poster: Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling
  Jiri Hron · Roman Novak · Jeffrey Pennington · Jascha Sohl-Dickstein
- 2022 Spotlight: Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm
  Lechao Xiao · Jeffrey Pennington
- 2022 Spotlight: Wide Bayesian neural networks have a simple weight posterior: theory and accelerated sampling
  Jiri Hron · Roman Novak · Jeffrey Pennington · Jascha Sohl-Dickstein
- 2022 Poster: Only tails matter: Average-Case Universality and Robustness in the Convex Regime
  Leonardo Cunha · Gauthier Gidel · Fabian Pedregosa · Damien Scieur · Courtney Paquette
- 2022 Poster: On Implicit Bias in Overparameterized Bilevel Optimization
  Paul Vicol · Jonathan Lorraine · Fabian Pedregosa · David Duvenaud · Roger Grosse
- 2022 Spotlight: On Implicit Bias in Overparameterized Bilevel Optimization
  Paul Vicol · Jonathan Lorraine · Fabian Pedregosa · David Duvenaud · Roger Grosse
- 2022 Spotlight: Only tails matter: Average-Case Universality and Robustness in the Convex Regime
  Leonardo Cunha · Gauthier Gidel · Fabian Pedregosa · Damien Scieur · Courtney Paquette
- 2021: SGD in the Large: Average-case Analysis, Asymptotics, and Stepsize Criticality
  Courtney Paquette
- 2021 Poster: Analysis of stochastic Lanczos quadrature for spectrum approximation
  Tyler Chen · Thomas Trogdon · Shashanka Ubaru
- 2021 Oral: Analysis of stochastic Lanczos quadrature for spectrum approximation
  Tyler Chen · Thomas Trogdon · Shashanka Ubaru
- 2021: The Mystery of Generalization: Why Does Deep Learning Work?
  Jeffrey Pennington
- 2021: Analysis of numerical algorithms
  Thomas Trogdon
- 2021: Introduction
  Fabian Pedregosa · Courtney Paquette
- 2020 Poster: Acceleration through spectral density estimation
  Fabian Pedregosa · Damien Scieur
- 2020 Poster: Universal Asymptotic Optimality of Polyak Momentum
  Damien Scieur · Fabian Pedregosa
- 2020 Poster: The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization
  Ben Adlam · Jeffrey Pennington
- 2020 Poster: Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization
  Geoffrey Negiar · Gideon Dresdner · Alicia Yi-Ting Tsai · Laurent El Ghaoui · Francesco Locatello · Robert Freund · Fabian Pedregosa
- 2020 Poster: Disentangling Trainability and Generalization in Deep Neural Networks
  Lechao Xiao · Jeffrey Pennington · Samuel Schoenholz
- 2019: Poster discussion
  Roman Novak · Maxime Gabella · Frederic Dreyer · Siavash Golkar · Anh Tong · Irina Higgins · Mirco Milletari · Joe Antognini · Sebastian Goldt · Adín Ramírez Rivera · Roberto Bondesan · Ryo Karakida · Remi Tachet des Combes · Michael Mahoney · Nicholas Walker · Stanislav Fort · Samuel Smith · Rohan Ghosh · Aristide Baratin · Diego Granziol · Stephen Roberts · Dmitry Vetrov · Andrew Wilson · César Laurent · Valentin Thomas · Simon Lacoste-Julien · Dar Gilboa · Daniel Soudry · Anupam Gupta · Anirudh Goyal · Yoshua Bengio · Erich Elsen · Soham De · Stanislaw Jastrzebski · Charles H Martin · Samira Shabanian · Aaron Courville · Shorato Akaho · Lenka Zdeborova · Ethan Dyer · Maurice Weiler · Pim de Haan · Taco Cohen · Max Welling · Ping Luo · zhanglin peng · Nasim Rahaman · Loic Matthey · Danilo J. Rezende · Jaesik Choi · Kyle Cranmer · Lechao Xiao · Jaehoon Lee · Yasaman Bahri · Jeffrey Pennington · Greg Yang · Jiri Hron · Jascha Sohl-Dickstein · Guy Gur-Ari
- 2019 Workshop: Theoretical Physics for Deep Learning
  Jaehoon Lee · Jeffrey Pennington · Yasaman Bahri · Max Welling · Surya Ganguli · Joan Bruna
- 2019: Opening Remarks
  Jaehoon Lee · Jeffrey Pennington · Yasaman Bahri · Max Welling · Surya Ganguli · Joan Bruna
- 2018 Poster: Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks
  Minmin Chen · Jeffrey Pennington · Samuel Schoenholz
- 2018 Oral: Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks
  Minmin Chen · Jeffrey Pennington · Samuel Schoenholz
- 2018 Poster: Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
  Lechao Xiao · Yasaman Bahri · Jascha Sohl-Dickstein · Samuel Schoenholz · Jeffrey Pennington
- 2018 Oral: Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
  Lechao Xiao · Yasaman Bahri · Jascha Sohl-Dickstein · Samuel Schoenholz · Jeffrey Pennington
- 2017 Poster: Geometry of Neural Network Loss Surfaces via Random Matrix Theory
  Jeffrey Pennington · Yasaman Bahri
- 2017 Talk: Geometry of Neural Network Loss Surfaces via Random Matrix Theory
  Jeffrey Pennington · Yasaman Bahri