Deep learning empirically achieves high performance in many applications, but its training dynamics have not been fully understood theoretically. In this paper, we present a theoretical analysis of training two-layer ReLU neural networks in a teacher-student regression model, in which a student network learns an unknown teacher network through its outputs. We show that with a specific regularization and sufficient over-parameterization, the student network can identify the parameters of the teacher network with high probability via gradient descent with a norm-dependent stepsize, even though the objective function is highly non-convex. The key theoretical tools are the measure representation of the neural networks and a novel application of a dual certificate argument for sparse estimation on a measure space. We analyze the global minima and the global convergence property in the measure space.
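The sketch below illustrates the kind of teacher-student setup the abstract describes: an over-parameterized two-layer ReLU student trained by gradient descent with a ridge-type penalty and a stepsize that shrinks with the parameter norms. All sizes, the penalty, and the stepsize rule are illustrative assumptions, not the paper's exact regularization or stepsize schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m_teacher, m_student, n = 5, 3, 50, 1000   # input dim, widths, sample size

# Fixed (unknown) teacher network: f*(x) = sum_j a*_j ReLU(<w*_j, x>)
W_t = rng.normal(size=(m_teacher, d))
a_t = rng.normal(size=m_teacher)

def forward(W, a, X):
    """Two-layer ReLU network sum_j a_j ReLU(<w_j, x>) evaluated on rows of X."""
    return np.maximum(X @ W.T, 0.0) @ a

# Over-parameterized student (m_student >> m_teacher), small random init.
W = 0.1 * rng.normal(size=(m_student, d))
a = 0.1 * rng.normal(size=m_student)

X = rng.normal(size=(n, d))
y = forward(W_t, a_t, X)                      # noiseless teacher outputs

lam, lr0 = 1e-3, 0.5                          # illustrative penalty and base stepsize
for step in range(2000):
    pre = X @ W.T                             # (n, m_student) pre-activations
    hid = np.maximum(pre, 0.0)
    resid = hid @ a - y                       # prediction error on the sample
    # Gradients of 0.5/n * ||resid||^2 + 0.5 * lam * (||a||^2 + ||W||_F^2)
    grad_a = hid.T @ resid / n + lam * a
    grad_W = ((resid[:, None] * (pre > 0)) * a).T @ X / n + lam * W
    # Crude norm-dependent stepsize: shrink the step as the parameter norms grow
    # (a stand-in for the stepsize rule analyzed in the paper, not the same rule).
    lr = lr0 / (1.0 + np.linalg.norm(W) + np.linalg.norm(a))
    a -= lr * grad_a
    W -= lr * grad_W

print("training loss:", 0.5 * np.mean((forward(W, a, X) - y) ** 2))
```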
Author Information
Shunta Akiyama (The University of Tokyo)
Taiji Suzuki (The University of Tokyo / RIKEN)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting
  Wed. Jul 21st 12:40 -- 12:45 PM
More from the Same Authors
- 2023 : Benign Overfitting of Two-Layer Neural Networks under Inputs with Intrinsic Dimension
  Shunta Akiyama · Kazusato Oko · Taiji Suzuki
- 2023 : Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective
  Wei Huang · Yuan Cao · Haonan Wang · Xin Cao · Taiji Suzuki
- 2023 : Learning in the Presence of Low-dimensional Structure: A Spiked Random Matrix Perspective
  Jimmy Ba · Murat Erdogdu · Taiji Suzuki · Zhichao Wang · Denny Wu
- 2023 : Learning Green's Function Efficiently Using Low-Rank Approximations
  Kishan Wimalawarne · Taiji Suzuki · Sophie Langer
- 2023 Poster: DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning
  Tomoya Murata · Taiji Suzuki
- 2023 Poster: Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems
  Atsushi Nitanda · Kazusato Oko · Denny Wu · Nobuhito Takenouchi · Taiji Suzuki
- 2023 Oral: Diffusion Models are Minimax Optimal Distribution Estimators
  Kazusato Oko · Shunta Akiyama · Taiji Suzuki
- 2023 Poster: Approximation and Estimation Ability of Transformers for Sequence-to-Sequence Functions with Infinite Dimensional Input
  Shokichi Takakura · Taiji Suzuki
- 2023 Poster: Diffusion Models are Minimax Optimal Distribution Estimators
  Kazusato Oko · Shunta Akiyama · Taiji Suzuki
- 2023 Poster: Tight and fast generalization error bound of graph embedding in metric space
  Atsushi Suzuki · Atsushi Nitanda · Taiji Suzuki · Jing Wang · Feng Tian · Kenji Yamanishi
- 2021 Poster: Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding
  Akira Nakagawa · Keizo Kato · Taiji Suzuki
- 2021 Spotlight: Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding
  Akira Nakagawa · Keizo Kato · Taiji Suzuki
- 2021 Poster: Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning
  Tomoya Murata · Taiji Suzuki
- 2021 Spotlight: Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning
  Tomoya Murata · Taiji Suzuki
- 2019 Poster: Approximation and non-parametric estimation of ResNet-type convolutional neural networks
  Kenta Oono · Taiji Suzuki
- 2019 Oral: Approximation and non-parametric estimation of ResNet-type convolutional neural networks
  Kenta Oono · Taiji Suzuki
- 2018 Poster: Functional Gradient Boosting based on Residual Network Perception
  Atsushi Nitanda · Taiji Suzuki
- 2018 Oral: Functional Gradient Boosting based on Residual Network Perception
  Atsushi Nitanda · Taiji Suzuki