Recent work on mini-batch consistency (MBC) for set functions has brought attention to the need for sequentially processing and aggregating chunks of a partitioned set while guaranteeing the same output for all partitions. However, existing constraints on MBC architectures lead to models with limited expressive power. Additionally, prior work has not addressed how to deal with large sets during training when the full set gradient is required. To address these issues, we propose a Universally MBC (UMBC) class of set functions which can be used in conjunction with arbitrary non-MBC components while still satisfying MBC, enabling a wider range of function classes to be used in MBC settings. Furthermore, we propose an efficient MBC training algorithm which gives an unbiased approximation of the full set gradient and has a constant memory overhead for any set size at both train and test time. We conduct extensive experiments, including image completion, text classification, unsupervised clustering, and cancer detection on high-resolution images, to verify the efficiency and efficacy of our scalable set encoding framework. Our code is available at github.com/jeffwillette/umbc
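To illustrate the mini-batch consistency property the abstract refers to, here is a minimal sketch (not the paper's UMBC architecture): a simple sum-pooling set encoder is MBC because aggregating chunk encodings over any partition of the set yields exactly the same output as encoding the full set at once. The encoder and weight shapes below are illustrative assumptions.

```python
import numpy as np

def encode_chunk(x, W):
    # Per-element feature map (ReLU of a linear map) followed by sum
    # pooling over the set dimension; sum pooling is permutation-invariant
    # and additive over disjoint chunks, which is what makes this MBC.
    return np.maximum(x @ W, 0.0).sum(axis=0)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # feature weights (illustrative)
full_set = rng.normal(size=(100, 4)) # a set of 100 elements in R^4

# Encode the full set in one pass.
z_full = encode_chunk(full_set, W)

# Encode in streaming fashion over an arbitrary partition into chunks,
# aggregating partial results; memory use depends only on the chunk size.
z_stream = sum(encode_chunk(chunk, W) for chunk in np.array_split(full_set, 7))

# MBC guarantee: identical output for any partition of the set.
assert np.allclose(z_full, z_stream)
```

The paper's contribution is precisely that such consistency need not be limited to simple aggregators like the sum above: UMBC wraps arbitrary non-MBC components so the composite still satisfies this equality.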
Author Information
Jeffrey Willette (KAIST)
Seanie Lee (KAIST)
Bruno Andreis (KAIST)
Kenji Kawaguchi (NUS)
Juho Lee (KAIST, AITRICS)
Sung Ju Hwang (KAIST)
More from the Same Authors
- 2021: Entropy Weighted Adversarial Training
  Minseon Kim · Jihoon Tack · Jinwoo Shin · Sung Ju Hwang
- 2021: Consistency Regularization for Adversarial Robustness
  Jihoon Tack · Sihyun Yu · Jongheon Jeong · Minseon Kim · Sung Ju Hwang · Jinwoo Shin
- 2023: Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations
  Hyeonjeong Ha · Minseon Kim · Sung Ju Hwang
- 2023: Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks
  Yuzhen Mao · Zhun Deng · Huaxiu Yao · Ting Ye · Kenji Kawaguchi · James Zou
- 2023: Function Space Bayesian Pseudocoreset for Bayesian Neural Networks
  Balhae Kim · Hyungi Lee · Juho Lee
- 2023: Early Exiting for Accelerated Inference in Diffusion Models
  Taehong Moon · Moonseok Choi · EungGu Yun · Jongmin Yoon · Gayoung Lee · Juho Lee
- 2023: Towards Safe Self-Distillation of Internet-Scale Text-to-Image Diffusion Models
  Sanghyun Kim · Seohyeon Jung · Balhae Kim · Moonseok Choi · Jinwoo Shin · Juho Lee
- 2023 Poster: GFlowOut: Dropout with Generative Flow Networks
  Dianbo Liu · Moksh Jain · Bonaventure F. P. Dossou · Qianli Shen · Salem Lahlou · Anirudh Goyal · Nikolay Malkin · Chris Emezue · Dinghuai Zhang · Nadhir Hassen · Xu Ji · Kenji Kawaguchi · Yoshua Bengio
- 2023 Poster: Personalized Subgraph Federated Learning
  Jinheon Baek · Wonyong Jeong · Jiongdao Jin · Jaehong Yoon · Sung Ju Hwang
- 2023 Poster: Probabilistic Imputation for Time-series Classification with Missing Data
  SeungHyun Kim · Hyunsu Kim · EungGu Yun · Hwangrae Lee · Jaehun Lee · Juho Lee
- 2023 Poster: Discrete Key-Value Bottleneck
  Frederik Träuble · Anirudh Goyal · Nasim Rahaman · Michael Mozer · Kenji Kawaguchi · Yoshua Bengio · Bernhard Schölkopf
- 2023 Poster: Exploring Chemical Space with Score-based Out-of-distribution Generation
  Seul Lee · Jaehyeong Jo · Sung Ju Hwang
- 2023 Poster: Regularizing Towards Soft Equivariance Under Mixed Symmetries
  Hyunsu Kim · Hyungi Lee · Hongseok Yang · Juho Lee
- 2023 Poster: Continual Learners are Incremental Model Generalizers
  Jaehong Yoon · Sung Ju Hwang · Yue Cao
- 2023 Poster: How Does Information Bottleneck Help Deep Learning?
  Kenji Kawaguchi · Zhun Deng · Xu Ji · Jiaoyang Huang
- 2023 Poster: Traversing Between Modes in Function Space for Fast Ensembling
  EungGu Yun · Hyungi Lee · Giung Nam · Juho Lee
- 2023 Poster: Auxiliary Learning as an Asymmetric Bargaining Game
  Aviv Shamsian · Aviv Navon · Neta Glazer · Kenji Kawaguchi · Gal Chechik · Ethan Fetaya
- 2023 Poster: Margin-based Neural Network Watermarking
  Byungjoo Kim · Suyoung Lee · Seanie Lee · Son · Sung Ju Hwang
- 2022 Poster & Spotlight: When and How Mixup Improves Calibration
  Linjun Zhang · Zhun Deng · Kenji Kawaguchi · James Zou
- 2022 Poster & Oral: Robustness Implies Generalization via Data-Dependent Generalization Bounds
  Kenji Kawaguchi · Zhun Deng · Kyle Luh · Jiaoyang Huang
- 2022 Poster & Spotlight: Multi-Task Learning as a Bargaining Game
  Aviv Navon · Aviv Shamsian · Idan Achituve · Haggai Maron · Kenji Kawaguchi · Gal Chechik · Ethan Fetaya
- 2022 Poster & Spotlight: Improving Ensemble Distillation With Weight Averaging and Diversifying Perturbation
  Giung Nam · Hyungi Lee · Byeongho Heo · Juho Lee
- 2022 Poster & Spotlight: Set Based Stochastic Subsampling
  Bruno Andreis · Seanie Lee · A. Tuan Nguyen · Juho Lee · Eunho Yang · Sung Ju Hwang
- 2021 Poster & Spotlight: Adversarial Purification with Score-based Generative Models
  Jongmin Yoon · Sung Ju Hwang · Juho Lee
- 2021 Poster & Spotlight: Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth
  Keyulu Xu · Mozhi Zhang · Stefanie Jegelka · Kenji Kawaguchi