While successful in many fields, deep neural networks (DNNs) still suffer from open problems such as bad local minima and unsatisfactory generalization performance. In this work, we propose a novel architecture called Maximum-and-Concatenation Networks (MCN) to eliminate bad local minima and improve generalization. Remarkably, we prove that MCN has a very desirable property: every local minimum of an (l+1)-layer MCN is better than, or at least as good as, the global minima of the network consisting of its first l layers. In other words, by increasing its depth, MCN autonomously improves the quality of its local minima. What is more, it is easy to plug MCN into an existing deep model so that the combined model also enjoys this property. Finally, under mild conditions, we show that MCN can approximate certain continuous functions arbitrarily well and with high efficiency; that is, the covering number of MCN is much smaller than that of most existing DNNs such as deep ReLU networks. Based on this, we further provide a tight generalization bound that guarantees the inference ability of MCN on testing samples.
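To make the name concrete, here is a minimal, hypothetical sketch of a "maximum-and-concatenation" layer, assuming it takes an elementwise maximum of two linear branches and concatenates the result with a linear map of the layer's input (so features from the first l layers are carried forward alongside the new ones). The exact formulation in the paper may differ; the weight names `W1`, `W2`, `V` are illustrative only.

```python
import numpy as np

def mcn_layer(x, W1, W2, V):
    """One hypothetical maximum-and-concatenation layer (illustrative sketch).

    Takes the elementwise maximum of two linear transforms of x, then
    concatenates that with a linear map of x, so the output keeps
    information from the previous layer alongside the new features.
    """
    max_part = np.maximum(W1 @ x, W2 @ x)  # elementwise max of two branches
    skip_part = V @ x                      # carried-over features from the input
    return np.concatenate([max_part, skip_part])

# Stacking such layers grows the feature width with depth, since each
# layer concatenates new features onto a transform of the old ones.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(3, 4))
V = rng.normal(size=(2, 4))
y = mcn_layer(x, W1, W2, V)
print(y.shape)  # (5,): 3 max features + 2 concatenated ones
```

Intuitively, the concatenated skip branch is what lets a deeper MCN never do worse than its shallower prefix: the extra layer can always reproduce (and possibly improve on) the features the first l layers already computed.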
Author Information
Xingyu Xie (Peking University)
Hao Kong (Peking University)
Jianlong Wu (Peking University)
Wayne Zhang (SenseTime Research)
Guangcan Liu (Nanjing University of Information Science and Technology)
Zhouchen Lin (Peking University)
More from the Same Authors
- 2021: Demystifying Adversarial Training via A Unified Probabilistic Framework
  Yisen Wang · Jiansheng Yang · Zhouchen Lin · Yifei Wang
- 2022 Poster: PDO-s3DCNNs: Partial Differential Operator Based Steerable 3D CNNs
  Zhengyang Shen · Tao Hong · Qi She · Jinwen Ma · Zhouchen Lin
- 2022 Spotlight: PDO-s3DCNNs: Partial Differential Operator Based Steerable 3D CNNs
  Zhengyang Shen · Tao Hong · Qi She · Jinwen Ma · Zhouchen Lin
- 2022 Poster: Kill a Bird with Two Stones: Closing the Convergence Gaps in Non-Strongly Convex Optimization by Directly Accelerated SVRG with Double Compensation and Snapshots
  Yuanyuan Liu · Fanhua Shang · Weixin An · Hongying Liu · Zhouchen Lin
- 2022 Spotlight: Kill a Bird with Two Stones: Closing the Convergence Gaps in Non-Strongly Convex Optimization by Directly Accelerated SVRG with Double Compensation and Snapshots
  Yuanyuan Liu · Fanhua Shang · Weixin An · Hongying Liu · Zhouchen Lin
- 2022 Poster: Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity
  Huan Li · Zhouchen Lin
- 2022 Poster: CerDEQ: Certifiable Deep Equilibrium Model
  Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Poster: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters
  Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2022 Poster: Optimization-Induced Graph Implicit Nonlinear Diffusion
  Qi Chen · Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Spotlight: Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity
  Huan Li · Zhouchen Lin
- 2022 Spotlight: CerDEQ: Certifiable Deep Equilibrium Model
  Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Spotlight: Optimization-Induced Graph Implicit Nonlinear Diffusion
  Qi Chen · Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Spotlight: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters
  Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2021 Poster: GBHT: Gradient Boosting Histogram Transform for Density Estimation
  Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Poster: Leveraged Weighted Loss for Partial Label Learning
  Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Poster: Group Fisher Pruning for Practical Network Compression
  Liyang Liu · Shilong Zhang · Zhanghui Kuang · Aojun Zhou · Jing-Hao Xue · Xinjiang Wang · Yimin Chen · Wenming Yang · Qingmin Liao · Wayne Zhang
- 2021 Spotlight: GBHT: Gradient Boosting Histogram Transform for Density Estimation
  Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Spotlight: Group Fisher Pruning for Practical Network Compression
  Liyang Liu · Shilong Zhang · Zhanghui Kuang · Aojun Zhou · Jing-Hao Xue · Xinjiang Wang · Yimin Chen · Wenming Yang · Qingmin Liao · Wayne Zhang
- 2021 Oral: Leveraged Weighted Loss for Partial Label Learning
  Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Poster: Uncertainty Principles of Encoding GANs
  Ruili Feng · Zhouchen Lin · Jiapeng Zhu · Deli Zhao · Jingren Zhou · Zheng-Jun Zha
- 2021 Spotlight: Uncertainty Principles of Encoding GANs
  Ruili Feng · Zhouchen Lin · Jiapeng Zhu · Deli Zhao · Jingren Zhou · Zheng-Jun Zha
- 2020 Poster: PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions
  Zhengyang Shen · Lingshen He · Zhouchen Lin · Jinwen Ma
- 2020 Poster: Boosted Histogram Transform for Regression
  Yuchao Cai · Hanyuan Hang · Hanfang Yang · Zhouchen Lin
- 2020 Poster: Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability
  Mingjie Li · Lingshen He · Zhouchen Lin
- 2019 Poster: Differentiable Linearized ADMM
  Xingyu Xie · Jianlong Wu · Guangcan Liu · Zhisheng Zhong · Zhouchen Lin
- 2019 Oral: Differentiable Linearized ADMM
  Xingyu Xie · Jianlong Wu · Guangcan Liu · Zhisheng Zhong · Zhouchen Lin