Convolutional neural networks (CNNs) have been shown to achieve optimal approximation and estimation error rates (in the minimax sense) in several function classes. However, previously known optimal CNNs are unrealistically wide and difficult to obtain via optimization because of sparsity constraints in important function classes, including the Hölder class. We show that a ResNet-type CNN can attain the minimax optimal error rates in these classes in more plausible situations: it can be dense, and its width, channel size, and filter size are constant with respect to the sample size. The key idea is that the learning ability of fully-connected neural networks (FNNs) can be replicated by tailored CNNs, as long as the FNNs have block-sparse structures. Our theory is general in that any approximation rate achieved by block-sparse FNNs automatically translates into a rate achieved by CNNs. As an application, we derive approximation and estimation error rates for CNNs of the aforementioned type for the Barron and Hölder classes with the same strategy.
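To make the architecture concrete, here is a minimal sketch (in PyTorch) of a ResNet-type 1-D CNN whose channel size and filter size stay fixed while the depth grows. The class names, hyperparameter values, and the fully-connected readout layer are illustrative assumptions for this sketch, not the authors' exact construction.

```python
# A minimal sketch of a ResNet-type 1-D CNN with constant channel size C and
# filter size K; only the depth (num_blocks) grows. Illustrative only.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One residual block: two fixed-size convolutions with a ReLU, plus an identity skip."""
    def __init__(self, channels: int, kernel_size: int):
        super().__init__()
        padding = kernel_size // 2  # keep the sequence length unchanged
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=padding)

    def forward(self, x):
        return x + self.conv2(torch.relu(self.conv1(x)))

class ResNetTypeCNN(nn.Module):
    """Stack of residual convolution blocks followed by a fully-connected readout."""
    def __init__(self, in_dim: int, channels: int = 4, kernel_size: int = 3, num_blocks: int = 8):
        super().__init__()
        self.lift = nn.Conv1d(1, channels, kernel_size, padding=kernel_size // 2)
        self.blocks = nn.Sequential(*[ConvBlock(channels, kernel_size) for _ in range(num_blocks)])
        self.head = nn.Linear(channels * in_dim, 1)

    def forward(self, x):               # x: (batch, in_dim)
        h = self.lift(x.unsqueeze(1))   # add a channel dimension -> (batch, channels, in_dim)
        h = self.blocks(h)
        return self.head(h.flatten(1))  # scalar output per example

# Usage: regress a scalar target from 16-dimensional inputs.
model = ResNetTypeCNN(in_dim=16)
y = model(torch.randn(32, 16))  # shape (32, 1)
```

The point of the sketch is that increasing `num_blocks` deepens the network without changing `channels` or `kernel_size`, mirroring the paper's claim that width, channel size, and filter size can stay constant with respect to the sample size.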
Author Information
Kenta Oono (University of Tokyo / Preferred Networks)
Taiji Suzuki (The University of Tokyo / RIKEN)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Approximation and non-parametric estimation of ResNet-type convolutional neural networks
  Fri. Jun 14th, 01:30 -- 04:00 AM, Room Pacific Ballroom #77
More from the Same Authors
- 2021 Poster: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting
  Shunta Akiyama · Taiji Suzuki
- 2021 Spotlight: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting
  Shunta Akiyama · Taiji Suzuki
- 2021 Poster: Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding
  Akira Nakagawa · Keizo Kato · Taiji Suzuki
- 2021 Spotlight: Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding
  Akira Nakagawa · Keizo Kato · Taiji Suzuki
- 2021 Poster: Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning
  Tomoya Murata · Taiji Suzuki
- 2021 Spotlight: Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning
  Tomoya Murata · Taiji Suzuki
- 2018 Poster: Functional Gradient Boosting based on Residual Network Perception
  Atsushi Nitanda · Taiji Suzuki
- 2018 Oral: Functional Gradient Boosting based on Residual Network Perception
  Atsushi Nitanda · Taiji Suzuki