Recent work has highlighted the role of initialization scale in determining the structure of the solutions that gradient methods converge to. In particular, it was shown that large initialization leads to the neural tangent kernel (NTK) regime solution, whereas small initialization leads to so-called "rich regimes". However, the initialization structure is richer than the overall scale alone: it also involves the relative magnitudes of different weights and layers in the network. Here we show that these relative scales, which we refer to as initialization shape, play an important role in determining the learned model. We develop a novel technique for deriving the inductive bias of gradient flow and use it to obtain closed-form implicit regularizers for multiple cases of interest.
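The scale effect described in the abstract can be seen in the standard toy model of a diagonal linear network, where w = u ⊙ u − v ⊙ v is trained by gradient descent on underdetermined data. The sketch below (an illustration of the phenomenon, not code from the paper; all hyperparameters are illustrative) shows that small initialization drives gradient descent toward a sparse, small-ℓ1 interpolator, while large initialization yields the dense kernel-regime interpolator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined regression: more features than samples, sparse teacher.
n, d = 5, 20
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:2] = 1.0          # 2-sparse ground truth
y = X @ w_star

def train_diagonal_net(alpha, lr, steps):
    """Gradient descent on w = u*u - v*v, initialized at u = v = alpha
    (so w starts at exactly 0 for every scale alpha)."""
    u = np.full(d, alpha)
    v = np.full(d, alpha)
    for _ in range(steps):
        w = u * u - v * v
        g = X.T @ (X @ w - y) / n   # gradient of the squared loss w.r.t. w
        u, v = u - lr * 2 * u * g, v + lr * 2 * v * g
    return u * u - v * v

# Small initialization ("rich" regime) vs. large initialization (kernel regime).
w_small = train_diagonal_net(alpha=0.01, lr=5e-3, steps=200_000)
w_large = train_diagonal_net(alpha=5.0, lr=5e-4, steps=200_000)

# Both runs interpolate the training data, but the implicit bias differs:
# the small-init solution has a markedly smaller l1 norm (it is near-sparse).
print("l1 small init:", np.abs(w_small).sum())
print("l1 large init:", np.abs(w_large).sum())
```

Only the overall initialization scale is varied here; the paper's point is that the relative scales of u and v (the initialization shape) further change the implicit regularizer, which the same simulation can probe by initializing u and v at different magnitudes.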
Author Information
Shahar Azulay (TAU)
Edward Moroshko (Technion)
Mor Shpigel Nacson (Technion)
Blake Woodworth (Toyota Technological Institute at Chicago)
Nati Srebro (Toyota Technological Institute at Chicago)
Amir Globerson (Tel Aviv University, Google)
Daniel Soudry (Technion)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent »
  Wed. Jul 21st 04:00 -- 06:00 PM
More from the Same Authors
- 2023 Poster: Federated Online and Bandit Convex Optimization »
  Kumar Kshitij Patel · Lingxiao Wang · Aadirupa Saha · Nati Srebro
- 2023 Poster: Continual Learning in Linear Classification on Separable Data »
  Itay Evron · Edward Moroshko · gon buzaglo · Maroun Khriesh · Badea Marjieh · Nati Srebro · Daniel Soudry
- 2023 Poster: Gradient Descent Monotonically Decreases the Sharpness of Gradient Flow Solutions in Scalar Networks and Beyond »
  Itai Kreisler · Mor Shpigel Nacson · Daniel Soudry · Yair Carmon
- 2022 Poster: Efficient Learning of CNNs using Patch Based Features »
  Alon Brutzkus · Amir Globerson · Eran Malach · Alon Regev Netser · Shai Shalev-Shwartz
- 2022 Poster: Implicit Bias of the Step Size in Linear Diagonal Neural Networks »
  Mor Shpigel Nacson · Kavya Ravichandran · Nati Srebro · Daniel Soudry
- 2022 Spotlight: Efficient Learning of CNNs using Patch Based Features »
  Alon Brutzkus · Amir Globerson · Eran Malach · Alon Regev Netser · Shai Shalev-Shwartz
- 2022 Spotlight: Implicit Bias of the Step Size in Linear Diagonal Neural Networks »
  Mor Shpigel Nacson · Kavya Ravichandran · Nati Srebro · Daniel Soudry
- 2021 Poster: Fast margin maximization via dual acceleration »
  Ziwei Ji · Nati Srebro · Matus Telgarsky
- 2021 Spotlight: Fast margin maximization via dual acceleration »
  Ziwei Ji · Nati Srebro · Matus Telgarsky
- 2021 Poster: Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels »
  Eran Malach · Pritish Kamath · Emmanuel Abbe · Nati Srebro
- 2021 Spotlight: Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels »
  Eran Malach · Pritish Kamath · Emmanuel Abbe · Nati Srebro
- 2021 Poster: Dropout: Explicit Forms and Capacity Control »
  Raman Arora · Peter Bartlett · Poorya Mianjy · Nati Srebro
- 2021 Spotlight: Dropout: Explicit Forms and Capacity Control »
  Raman Arora · Peter Bartlett · Poorya Mianjy · Nati Srebro
- 2021 Poster: Accurate Post Training Quantization With Small Calibration Sets »
  Itay Hubara · Yury Nahshan · Yair Hanani · Ron Banner · Daniel Soudry
- 2021 Poster: Compositional Video Synthesis with Action Graphs »
  Amir Bar · Roi Herzig · Xiaolong Wang · Anna Rohrbach · Gal Chechik · Trevor Darrell · Amir Globerson
- 2021 Spotlight: Compositional Video Synthesis with Action Graphs »
  Amir Bar · Roi Herzig · Xiaolong Wang · Anna Rohrbach · Gal Chechik · Trevor Darrell · Amir Globerson
- 2021 Spotlight: Accurate Post Training Quantization With Small Calibration Sets »
  Itay Hubara · Yury Nahshan · Yair Hanani · Ron Banner · Daniel Soudry
- 2021 Poster: Towards Understanding Learning in Neural Networks with Linear Teachers »
  Roei Sarussi · Alon Brutzkus · Amir Globerson
- 2021 Spotlight: Towards Understanding Learning in Neural Networks with Linear Teachers »
  Roei Sarussi · Alon Brutzkus · Amir Globerson
- 2020 Poster: Efficiently Learning Adversarially Robust Halfspaces with Noise »
  Omar Montasser · Surbhi Goel · Ilias Diakonikolas · Nati Srebro
- 2020 Poster: Beyond Signal Propagation: Is Feature Diversity Necessary in Deep Neural Network Initialization? »
  Yaniv Blumenfeld · Dar Gilboa · Daniel Soudry
- 2020 Poster: Is Local SGD Better than Minibatch SGD? »
  Blake Woodworth · Kumar Kshitij Patel · Sebastian Stich · Zhen Dai · Brian Bullins · Brendan McMahan · Ohad Shamir · Nati Srebro
- 2020 Poster: Fair Learning with Private Demographic Data »
  Hussein Mozannar · Mesrob Ohannessian · Nati Srebro
- 2019: Nati Srebro: Optimization’s Untold Gift to Learning: Implicit Regularization »
  Nati Srebro
- 2019: Poster discussion »
  Roman Novak · Maxime Gabella · Frederic Dreyer · Siavash Golkar · Anh Tong · Irina Higgins · Mirco Milletari · Joe Antognini · Sebastian Goldt · Adín Ramírez Rivera · Roberto Bondesan · Ryo Karakida · Remi Tachet des Combes · Michael Mahoney · Nicholas Walker · Stanislav Fort · Samuel Smith · Rohan Ghosh · Aristide Baratin · Diego Granziol · Stephen Roberts · Dmitry Vetrov · Andrew Wilson · César Laurent · Valentin Thomas · Simon Lacoste-Julien · Dar Gilboa · Daniel Soudry · Anupam Gupta · Anirudh Goyal · Yoshua Bengio · Erich Elsen · Soham De · Stanislaw Jastrzebski · Charles H Martin · Samira Shabanian · Aaron Courville · Shorato Akaho · Lenka Zdeborova · Ethan Dyer · Maurice Weiler · Pim de Haan · Taco Cohen · Max Welling · Ping Luo · zhanglin peng · Nasim Rahaman · Loic Matthey · Danilo J. Rezende · Jaesik Choi · Kyle Cranmer · Lechao Xiao · Jaehoon Lee · Yasaman Bahri · Jeffrey Pennington · Greg Yang · Jiri Hron · Jascha Sohl-Dickstein · Guy Gur-Ari
- 2019: Panel Discussion (Nati Srebro, Dan Roy, Chelsea Finn, Mikhail Belkin, Aleksander Mądry, Jason Lee) »
  Nati Srebro · Daniel Roy · Chelsea Finn · Mikhail Belkin · Aleksander Madry · Jason Lee
- 2019 Workshop: Understanding and Improving Generalization in Deep Learning »
  Dilip Krishnan · Hossein Mobahi · Behnam Neyshabur · Peter Bartlett · Dawn Song · Nati Srebro
- 2019 Poster: Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem »
  Alon Brutzkus · Amir Globerson
- 2019 Poster: Semi-Cyclic Stochastic Gradient Descent »
  Hubert Eichner · Tomer Koren · Brendan McMahan · Nati Srebro · Kunal Talwar
- 2019 Oral: Semi-Cyclic Stochastic Gradient Descent »
  Hubert Eichner · Tomer Koren · Brendan McMahan · Nati Srebro · Kunal Talwar
- 2019 Oral: Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem »
  Alon Brutzkus · Amir Globerson
- 2019 Poster: Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints »
  Andrew Cotter · Maya Gupta · Heinrich Jiang · Nati Srebro · Karthik Sridharan · Serena Wang · Blake Woodworth · Seungil You
- 2019 Poster: Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models »
  Mor Shpigel Nacson · Suriya Gunasekar · Jason Lee · Nati Srebro · Daniel Soudry
- 2019 Oral: Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints »
  Andrew Cotter · Maya Gupta · Heinrich Jiang · Nati Srebro · Karthik Sridharan · Serena Wang · Blake Woodworth · Seungil You
- 2019 Oral: Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models »
  Mor Shpigel Nacson · Suriya Gunasekar · Jason Lee · Nati Srebro · Daniel Soudry
- 2018 Poster: Characterizing Implicit Bias in Terms of Optimization Geometry »
  Suriya Gunasekar · Jason Lee · Daniel Soudry · Nati Srebro
- 2018 Oral: Characterizing Implicit Bias in Terms of Optimization Geometry »
  Suriya Gunasekar · Jason Lee · Daniel Soudry · Nati Srebro
- 2018 Poster: Learning to Optimize Combinatorial Functions »
  Nir Rosenfeld · Eric Balkanski · Amir Globerson · Yaron Singer
- 2018 Poster: Predict and Constrain: Modeling Cardinality in Deep Structured Prediction »
  Nataly Brukhim · Amir Globerson
- 2018 Oral: Learning to Optimize Combinatorial Functions »
  Nir Rosenfeld · Eric Balkanski · Amir Globerson · Yaron Singer
- 2018 Oral: Predict and Constrain: Modeling Cardinality in Deep Structured Prediction »
  Nataly Brukhim · Amir Globerson
- 2017 Poster: Efficient Distributed Learning with Sparsity »
  Jialei Wang · Mladen Kolar · Nati Srebro · Tong Zhang
- 2017 Talk: Efficient Distributed Learning with Sparsity »
  Jialei Wang · Mladen Kolar · Nati Srebro · Tong Zhang
- 2017 Poster: Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis »
  Dan Garber · Ohad Shamir · Nati Srebro
- 2017 Poster: Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs »
  Alon Brutzkus · Amir Globerson
- 2017 Poster: Learning Infinite Layer Networks without the Kernel Trick »
  Roi Livni · Daniel Carmon · Amir Globerson
- 2017 Talk: Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs »
  Alon Brutzkus · Amir Globerson
- 2017 Talk: Learning Infinite Layer Networks without the Kernel Trick »
  Roi Livni · Daniel Carmon · Amir Globerson
- 2017 Talk: Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis »
  Dan Garber · Ohad Shamir · Nati Srebro