We discuss three novel insights about dropout for DNNs with ReLUs: 1) dropout encourages each local linear piece of a DNN to be trained on data points from nearby regions; 2) the same dropout rate results in different effective deactivation rates for layers with different proportions of ReLU-deactivated neurons; and 3) the rescaling factor of dropout causes a normalization inconsistency between training and test when used together with batch normalization. These insights lead to three simple but nontrivial modifications, which together form our method "jumpout." Jumpout samples the dropout rate from a monotone decreasing distribution (e.g., the right half of a Gaussian), so each local linear piece is trained, with high probability, to work better for data points from nearby regions than from more distant ones. Jumpout moreover adaptively normalizes the dropout rate for each layer and every training batch, so the effective deactivation rate on the activated neurons is kept the same. Furthermore, it rescales the outputs for a better trade-off that keeps both the mean and variance of neurons more consistent between the training and test phases, thereby mitigating the incompatibility between dropout and batch normalization. Jumpout significantly improves the performance of different neural nets on CIFAR10, CIFAR100, Fashion-MNIST, STL10, SVHN, ImageNet-1k, etc., while introducing negligible additional memory and computation costs.
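To make the three modifications concrete, here is a minimal sketch of a jumpout-style dropout applied to post-ReLU activations, written in PyTorch. The function name, the hyperparameters (sigma, p_max), and the final inverted-dropout rescaling are illustrative assumptions for exposition, not the authors' released implementation.

    import torch

    def jumpout(x, sigma=0.05, p_max=0.6, training=True):
        # Hedged sketch only: jumpout-style dropout on the output x of a ReLU
        # layer. Hyperparameters and the rescaling rule are assumptions.
        if not training:
            return x  # identity at test time, as with standard dropout
        # 1) Sample the dropout rate from the right half of a zero-mean
        #    Gaussian, so small rates are likely and large rates are rare.
        p = min(abs(float(torch.randn(1))) * sigma, p_max)
        # 2) Normalize the rate by the fraction of ReLU-activated units in
        #    this layer and batch, so the effective deactivation rate on the
        #    active units stays roughly constant across layers and batches.
        active = (x > 0).float()
        frac_active = float(active.mean().clamp(min=1e-6))
        p_eff = min(p / frac_active, p_max)
        # 3) Drop units and rescale. Standard inverted-dropout scaling is used
        #    here as a placeholder for the paper's rescaling, which trades off
        #    mean/variance consistency with batch normalization.
        mask = torch.bernoulli(torch.full_like(x, 1.0 - p_eff))
        return x * mask / (1.0 - p_eff)

Masking all units rather than only the active ones is equivalent for post-ReLU activations, since inactive units are already zero; the normalization in step 2 compensates for the active fraction so that the deactivation effectively targets the active units.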
Author Information
Shengjie Wang (University of Washington, Seattle)
Tianyi Zhou (University of Washington)

Tianyi Zhou is a tenure-track assistant professor of Computer Science and UMIACS at the University of Maryland, College Park. He received his Ph.D. from the University of Washington, Seattle. His research interests are machine learning, optimization, and natural language processing. His recent work focuses on curriculum learning, hybrid human-artificial intelligence, trustworthy and robust AI, the plasticity-stability trade-off in ML, large language and multi-modality models, reinforcement learning, federated learning, and meta-learning. He has published ~90 papers at NeurIPS, ICML, ICLR, AISTATS, ACL, EMNLP, NAACL, COLING, CVPR, KDD, ICDM, AAAI, IJCAI, ISIT, Machine Learning (Springer), IEEE TIP/TNNLS/TKDE, etc. He is the recipient of the Best Student Paper Award at ICDM 2013 and the 2020 IEEE TCSC Most Influential Paper Award. He has served as an SPC member or area chair for AAAI, IJCAI, KDD, WACV, etc. Tianyi was a visiting research scientist at Google and a research intern at Microsoft Research Redmond and Yahoo! Labs.
Jeff Bilmes (University of Washington)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Jumpout: Improved Dropout for Deep Neural Networks with ReLUs
  Wed. Jun 12th, 12:10 -- 12:15 AM, Room: Hall A
More from the Same Authors
- 2021: Tighter m-DPP Coreset Sample Complexity Bounds
  Gantavya Bhatt · Jeff Bilmes
- 2022: Vote for Nearest Neighbors Meta-Pruning of Self-Supervised Networks
  Haiyan Zhao · Tianyi Zhou · Guodong Long · Jing Jiang · Chengqi Zhang
- 2022: Federated Learning from Pre-Trained Models: A Contrastive Learning Approach
  Yue Tan · Guodong Long · Jie Ma · LU LIU · Tianyi Zhou · Jing Jiang
- 2023: Accelerating Batch Active Learning Using Continual Learning Techniques
  Gantavya Bhatt · Arnav M Das · · Rui Yang · Vianne Gao · Jeff Bilmes
- 2023: Taming Small-sample Bias in Low-budget Active Learning
  Linxin Song · Jieyu Zhang · Xiaotian Lu · Tianyi Zhou
- 2023 Poster: Structured Cooperative Learning with Graphical Model Priors
  Shuangtong Li · Tianyi Zhou · Xinmei Tian · Dacheng Tao
- 2023 Poster: Does Continual Learning Equally Forget All Parameters?
  Haiyan Zhao · Tianyi Zhou · Guodong Long · Jing Jiang · Chengqi Zhang
- 2023 Poster: Continual Task Allocation in Meta-Policy Network via Sparse Prompting
  Yijun Yang · Tianyi Zhou · Jing Jiang · Guodong Long · Yuhui Shi
- 2022: Does Continual Learning Equally Forget All Parameters?
  Haiyan Zhao · Tianyi Zhou · Guodong Long · Jing Jiang · Chengqi Zhang
- 2021: Tighter m-DPP Coreset Sample Complexity Bounds
  Jeff Bilmes · Gantavya Bhatt
- 2021: More Information, Less Data
  Jeff Bilmes
- 2021: Introduction by the Organizers
  Abir De · Rishabh Iyer · Ganesh Ramakrishnan · Jeff Bilmes
- 2021 Workshop: Subset Selection in Machine Learning: From Theory to Applications
  Rishabh Iyer · Abir De · Ganesh Ramakrishnan · Jeff Bilmes
- 2020 Poster: Coresets for Data-efficient Training of Machine Learning Models
  Baharan Mirzasoleiman · Jeff Bilmes · Jure Leskovec
- 2020 Poster: Time-Consistent Self-Supervision for Semi-Supervised Learning
  Tianyi Zhou · Shengjie Wang · Jeff Bilmes
- 2019: Jeff Bilmes: Deep Submodular Synergies
  Jeff Bilmes
- 2019 Poster: Bias Also Matters: Bias Attribution for Deep Neural Network Explanation
  Shengjie Wang · Tianyi Zhou · Jeff Bilmes
- 2019 Oral: Bias Also Matters: Bias Attribution for Deep Neural Network Explanation
  Shengjie Wang · Tianyi Zhou · Jeff Bilmes
- 2019 Poster: Combating Label Noise in Deep Learning using Abstention
  Sunil Thulasidasan · Tanmoy Bhattacharya · Jeff Bilmes · Gopinath Chennupati · Jamal Mohd-Yusof
- 2019 Oral: Combating Label Noise in Deep Learning using Abstention
  Sunil Thulasidasan · Tanmoy Bhattacharya · Jeff Bilmes · Gopinath Chennupati · Jamal Mohd-Yusof
- 2018 Poster: Constrained Interacting Submodular Groupings
  Andrew Cotter · Mahdi Milani Fard · Seungil You · Maya Gupta · Jeff Bilmes
- 2018 Poster: Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions
  Wenruo Bai · Jeff Bilmes
- 2018 Oral: Constrained Interacting Submodular Groupings
  Andrew Cotter · Mahdi Milani Fard · Seungil You · Maya Gupta · Jeff Bilmes
- 2018 Oral: Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions
  Wenruo Bai · Jeff Bilmes