Convolutional Neural Networks (CNNs) are typically constructed by stacking building blocks, each of which contains a normalization layer, such as batch normalization (BN), and a rectified linear function, such as ReLU. This work shows that the combination of normalization and rectified linear functions leads to inhibited channels: channels with small magnitudes that contribute little to the learned feature representation, impeding the generalization ability of CNNs. Unlike prior work that simply removes inhibited channels, we propose to "wake them up" during training with a novel neural building block, termed the Channel Equilibrium (CE) block, which enables channels at the same layer to contribute equally to the learned representation. We show both empirically and theoretically that CE prevents inhibited channels. CE has several appealing benefits: (1) it can be integrated into many advanced CNN architectures such as ResNet and MobileNet, outperforming the original networks; (2) it has an interesting connection to the Nash equilibrium, a well-known solution concept for non-cooperative games; (3) extensive experiments show that CE achieves state-of-the-art performance on challenging benchmarks such as ImageNet and COCO.
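As a rough illustration of the phenomenon the abstract describes, the following PyTorch sketch (ours, not the paper's implementation) counts "inhibited" channels after a BN + ReLU block, i.e. channels whose mean activation magnitude is near zero. The threshold `eps` and the tensor shapes are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's code): counting "inhibited"
# channels, i.e. channels whose activations after BN + ReLU have
# near-zero magnitude and thus contribute little to the feature map.
import torch
import torch.nn as nn

bn_relu = nn.Sequential(nn.BatchNorm2d(64), nn.ReLU())

x = torch.randn(32, 64, 56, 56)  # dummy feature map: (N, C, H, W)
y = bn_relu(x)

# Mean activation magnitude per channel, averaged over batch and space.
channel_strength = y.abs().mean(dim=(0, 2, 3))  # shape: (C,)

eps = 1e-2  # hypothetical threshold for calling a channel "inhibited"
n_inhibited = (channel_strength < eps).sum().item()
print(f"{n_inhibited} / {channel_strength.numel()} channels inhibited")
```

In a trained network, channels whose BN scale parameters are driven toward zero behave this way; the CE block is designed to prevent such channels from emerging during training.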
Author Information
Wenqi Shao (The Chinese University of Hong Kong)
Shitao Tang (Simon Fraser University)
Xingang Pan (The Chinese University of Hong Kong)
Ping Tan (Simon Fraser University)
Xiaogang Wang (The Chinese University of Hong Kong)
Ping Luo (The University of Hong Kong)
More from the Same Authors
- 2023 Poster: $\pi$-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation
  Chengyue Wu · Teng Wang · Yixiao Ge · Zeyu Lu · Ruisong Zhou · Ying Shan · Ping Luo
- 2023 Poster: AdaptDiffuser: Diffusion Models as Adaptive Self-evolving Planners
  Zhixuan Liang · Yao Mu · Mingyu Ding · Fei Ni · Masayoshi Tomizuka · Ping Luo
- 2023 Poster: ChiPFormer: Transferable Chip Placement via Offline Decision Transformer
  Yao Lai · Jinxin Liu · Zhentao Tang · Bin Wang · Jianye Hao · Ping Luo
- 2023 Oral: AdaptDiffuser: Diffusion Models as Adaptive Self-evolving Planners
  Zhixuan Liang · Yao Mu · Mingyu Ding · Fei Ni · Masayoshi Tomizuka · Ping Luo
- 2022 Poster: Flow-based Recurrent Belief State Learning for POMDPs
  Xiaoyu Chen · Yao Mu · Ping Luo · Shengbo Li · Jianyu Chen
- 2022 Spotlight: Flow-based Recurrent Belief State Learning for POMDPs
  Xiaoyu Chen · Yao Mu · Ping Luo · Shengbo Li · Jianyu Chen
- 2021 Poster: Differentiable Dynamic Quantization with Mixed Precision and Adaptive Resolution
  Zhaoyang Zhang · Wenqi Shao · Jinwei Gu · Xiaogang Wang · Ping Luo
- 2021 Spotlight: Differentiable Dynamic Quantization with Mixed Precision and Adaptive Resolution
  Zhaoyang Zhang · Wenqi Shao · Jinwei Gu · Xiaogang Wang · Ping Luo
- 2021 Poster: What Makes for End-to-End Object Detection?
  Peize Sun · Yi Jiang · Enze Xie · Wenqi Shao · Zehuan Yuan · Changhu Wang · Ping Luo
- 2021 Spotlight: What Makes for End-to-End Object Detection?
  Peize Sun · Yi Jiang · Enze Xie · Wenqi Shao · Zehuan Yuan · Changhu Wang · Ping Luo
- 2019: Poster discussion
  Roman Novak · Maxime Gabella · Frederic Dreyer · Siavash Golkar · Anh Tong · Irina Higgins · Mirco Milletari · Joe Antognini · Sebastian Goldt · Adín Ramírez Rivera · Roberto Bondesan · Ryo Karakida · Remi Tachet des Combes · Michael Mahoney · Nicholas Walker · Stanislav Fort · Samuel Smith · Rohan Ghosh · Aristide Baratin · Diego Granziol · Stephen Roberts · Dmitry Vetrov · Andrew Wilson · César Laurent · Valentin Thomas · Simon Lacoste-Julien · Dar Gilboa · Daniel Soudry · Anupam Gupta · Anirudh Goyal · Yoshua Bengio · Erich Elsen · Soham De · Stanislaw Jastrzebski · Charles H Martin · Samira Shabanian · Aaron Courville · Shotaro Akaho · Lenka Zdeborova · Ethan Dyer · Maurice Weiler · Pim de Haan · Taco Cohen · Max Welling · Ping Luo · Zhanglin Peng · Nasim Rahaman · Loic Matthey · Danilo J. Rezende · Jaesik Choi · Kyle Cranmer · Lechao Xiao · Jaehoon Lee · Yasaman Bahri · Jeffrey Pennington · Greg Yang · Jiri Hron · Jascha Sohl-Dickstein · Guy Gur-Ari
- 2019: Poster Session 1 (all papers)
  Matilde Gargiani · Yochai Zur · Chaim Baskin · Evgenii Zheltonozhskii · Liam Li · Ameet Talwalkar · Xuedong Shang · Harkirat Singh Behl · Atilim Gunes Baydin · Ivo Couckuyt · Tom Dhaene · Chieh Lin · Wei Wei · Min Sun · Orchid Majumder · Michele Donini · Yoshihiko Ozaki · Ryan P. Adams · Christian Geißler · Ping Luo · Zhanglin Peng · Ruimao Zhang · John Langford · Rich Caruana · Debadeepta Dey · Charles Weill · Xavi Gonzalvo · Scott Yang · Scott Yak · Eugen Hotaj · Vladimir Macko · Mehryar Mohri · Corinna Cortes · Stefan Webb · Jonathan Chen · Martin Jankowiak · Noah Goodman · Aaron Klein · Frank Hutter · Mojan Javaheripi · Mohammad Samragh · Sungbin Lim · Taesup Kim · Sungwoong Kim · Michael Volpp · Iddo Drori · Yamuna Krishnamurthy · Kyunghyun Cho · Stanislaw Jastrzebski · Quentin de Laroussilhe · Mingxing Tan · Xiao Ma · Neil Houlsby · Andrea Gesmundo · Zalán Borsos · Krzysztof Maziarz · Felipe Petroski Such · Joel Lehman · Kenneth Stanley · Jeff Clune · Pieter Gijsbers · Joaquin Vanschoren · Felix Mohr · Eyke Hüllermeier · Zheng Xiong · Wenpeng Zhang · Wenwu Zhu · Weijia Shao · Aleksandra Faust · Michal Valko · Michael Y Li · Hugo Jair Escalante · Marcel Wever · Andrey Khorlin · Tara Javidi · Anthony Francis · Saurajit Mukherjee · Jungtaek Kim · Michael McCourt · Saehoon Kim · Tackgeun You · Seungjin Choi · Nicolas Knudde · Alexander Tornede · Ghassen Jerfel
- 2019 Poster: Differentiable Dynamic Normalization for Learning Deep Representation
  Ping Luo · Zhanglin Peng · Wenqi Shao · Ruimao Zhang · Jiamin Ren · Lingyun Wu
- 2019 Oral: Differentiable Dynamic Normalization for Learning Deep Representation
  Ping Luo · Zhanglin Peng · Wenqi Shao · Ruimao Zhang · Jiamin Ren · Lingyun Wu
- 2017 Poster: Learning Deep Architectures via Generalized Whitened Neural Networks
  Ping Luo
- 2017 Talk: Learning Deep Architectures via Generalized Whitened Neural Networks
  Ping Luo