Deep models are designed to operate on huge volumes of high-dimensional data such as images. To reduce the volume of data these models must process, we propose a set-based, two-stage, end-to-end neural subsampling model that is jointly optimized with an \textit{arbitrary} downstream task network (e.g., a classifier). In the first stage, we efficiently subsample \textit{candidate elements} using conditionally independent Bernoulli random variables, capturing coarse-grained global information with set encoding functions. In the second stage, we perform conditionally dependent autoregressive subsampling of the candidate elements using Categorical random variables, modeling pairwise interactions with set attention networks. We apply our method to feature and instance selection and show that it outperforms the relevant baselines at low subsampling rates on a variety of tasks, including image classification, image reconstruction, function reconstruction, and few-shot classification. Additionally, for nonparametric models such as Neural Processes that must leverage the whole training set at inference time, we show that our method enhances scalability.
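The two-stage scheme in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' model: the learned set encoder is replaced by mean pooling, and the pairwise set attention by a simple dot-product similarity penalty; the weights `w_elem` and `w_ctx` are random stand-ins for trained parameters. Stage 1 draws independent Bernoulli keep/drop decisions per element given a global set summary; stage 2 selects a final subset one element at a time with Categorical draws that condition on what has already been chosen.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stage1_bernoulli(x, w_elem, w_ctx):
    """Stage 1: conditionally independent Bernoulli candidate selection.
    Each element is scored against a permutation-invariant set summary
    (mean pooling stands in for a learned set encoder)."""
    context = x.mean(axis=0)                  # coarse-grained global summary
    logits = x @ w_elem + context @ w_ctx     # per-element keep score
    mask = rng.random(x.shape[0]) < sigmoid(logits)  # independent Bernoulli draws
    return np.flatnonzero(mask)               # indices of candidate elements

def stage2_autoregressive(x, cand_idx, k):
    """Stage 2: conditionally dependent autoregressive subsampling.
    Pick up to k candidates via Categorical draws whose logits are
    penalized by similarity to already-selected elements (a stand-in
    for learned pairwise set attention)."""
    selected, remaining = [], list(cand_idx)
    for _ in range(min(k, len(remaining))):
        logits = np.array([x[i].sum() for i in remaining])
        if selected:                           # pairwise interaction term
            sel = x[selected]
            sim = np.array([(sel @ x[i]).max() for i in remaining])
            logits = logits - sim              # discourage redundant picks
        p = np.exp(logits - logits.max())
        p /= p.sum()
        choice = rng.choice(len(remaining), p=p)  # Categorical draw
        selected.append(remaining.pop(choice))
    return selected

# Toy set of 50 four-dimensional elements.
x = rng.normal(size=(50, 4))
w_elem, w_ctx = rng.normal(size=4), rng.normal(size=4)
cands = stage1_bernoulli(x, w_elem, w_ctx)
subset = stage2_autoregressive(x, cands, k=5)
```

In the paper both stages are differentiable and trained jointly with the downstream task network (e.g., via relaxed discrete sampling); the sketch above only mirrors the sampling structure, not the training procedure.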
Author Information
Bruno Andreis (KAIST)
Seanie Lee (KAIST)
A. Tuan Nguyen (University of Oxford)
Juho Lee (KAIST, AITRICS)
Eunho Yang (KAIST)
Sung Ju Hwang (KAIST, AITRICS)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Set Based Stochastic Subsampling
  Tue. Jul 19th, 05:55 -- 06:00 PM, Ballroom 1 & 2
More from the Same Authors
- 2023 Poster: Probabilistic Imputation for Time-series Classification with Missing Data
  SeungHyun Kim · Hyunsu Kim · Eunggu Yun · Hwangrae Lee · Jaehun Lee · Juho Lee
- 2023 Poster: Traversing Between Modes in Function Space for Fast Ensembling
  Eunggu Yun · Hyungi Lee · Giung Nam · Juho Lee
- 2023 Poster: Regularizing Towards Soft Equivariance Under Mixed Symmetries
  Hyunsu Kim · Hyungi Lee · Hongseok Yang · Juho Lee
- 2023 Poster: Scalable Set Encoding with Universal Mini-Batch Consistency and Unbiased Full Set Gradient Approximation
  Jeffrey Willette · Seanie Lee · Bruno Andreis · Kenji Kawaguchi · Juho Lee · Sung Ju Hwang
- 2023 Poster: Margin-based Neural Network Watermarking
  Byungjoo Kim · Suyoung Lee · Seanie Lee · Son · Sung Ju Hwang
- 2022 Poster: Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations
  Jaehyeong Jo · Seul Lee · Sung Ju Hwang
- 2022 Spotlight: Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations
  Jaehyeong Jo · Seul Lee · Sung Ju Hwang
- 2022 Poster: Improving Ensemble Distillation With Weight Averaging and Diversifying Perturbation
  Giung Nam · Hyungi Lee · Byeongho Heo · Juho Lee
- 2022 Poster: Forget-free Continual Learning with Winning Subnetworks
  Haeyong Kang · Rusty Mina · Sultan Rizky Hikmawan Madjid · Jaehong Yoon · Mark Hasegawa-Johnson · Sung Ju Hwang · Chang Yoo
- 2022 Poster: Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization
  Jaehong Yoon · Geon Park · Wonyong Jeong · Sung Ju Hwang
- 2022 Spotlight: Forget-free Continual Learning with Winning Subnetworks
  Haeyong Kang · Rusty Mina · Sultan Rizky Hikmawan Madjid · Jaehong Yoon · Mark Hasegawa-Johnson · Sung Ju Hwang · Chang Yoo
- 2022 Spotlight: Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization
  Jaehong Yoon · Geon Park · Wonyong Jeong · Sung Ju Hwang
- 2022 Spotlight: Improving Ensemble Distillation With Weight Averaging and Diversifying Perturbation
  Giung Nam · Hyungi Lee · Byeongho Heo · Juho Lee
- 2021 Poster: Large-Scale Meta-Learning with Continual Trajectory Shifting
  JaeWoong Shin · Hae Beom Lee · Boqing Gong · Sung Ju Hwang
- 2021 Spotlight: Large-Scale Meta-Learning with Continual Trajectory Shifting
  JaeWoong Shin · Hae Beom Lee · Boqing Gong · Sung Ju Hwang
- 2021 Poster: Learning to Generate Noise for Multi-Attack Robustness
  Divyam Madaan · Jinwoo Shin · Sung Ju Hwang
- 2021 Poster: Adversarial Purification with Score-based Generative Models
  Jongmin Yoon · Sung Ju Hwang · Juho Lee
- 2021 Spotlight: Adversarial Purification with Score-based Generative Models
  Jongmin Yoon · Sung Ju Hwang · Juho Lee
- 2021 Spotlight: Learning to Generate Noise for Multi-Attack Robustness
  Divyam Madaan · Jinwoo Shin · Sung Ju Hwang
- 2021 Poster: Meta-StyleSpeech: Multi-Speaker Adaptive Text-to-Speech Generation
  Dongchan Min · Dong Bok Lee · Eunho Yang · Sung Ju Hwang
- 2021 Spotlight: Meta-StyleSpeech: Multi-Speaker Adaptive Text-to-Speech Generation
  Dongchan Min · Dong Bok Lee · Eunho Yang · Sung Ju Hwang
- 2021 Poster: Federated Continual Learning with Weighted Inter-client Transfer
  Jaehong Yoon · Wonyong Jeong · GiWoong Lee · Eunho Yang · Sung Ju Hwang
- 2021 Spotlight: Federated Continual Learning with Weighted Inter-client Transfer
  Jaehong Yoon · Wonyong Jeong · GiWoong Lee · Eunho Yang · Sung Ju Hwang
- 2020 Poster: Cost-Effective Interactive Attention Learning with Neural Attention Processes
  Jay Heo · Junhyeon Park · Hyewon Jeong · Kwang Joon Kim · Juho Lee · Eunho Yang · Sung Ju Hwang
- 2020 Poster: Meta Variance Transfer: Learning to Augment from the Others
  Seong-Jin Park · Seungju Han · Ji-won Baek · Insoo Kim · Juhwan Song · Hae Beom Lee · Jae-Joon Han · Sung Ju Hwang
- 2020 Poster: Self-supervised Label Augmentation via Input Transformations
  Hankook Lee · Sung Ju Hwang · Jinwoo Shin
- 2020 Poster: Adversarial Neural Pruning with Latent Vulnerability Suppression
  Divyam Madaan · Jinwoo Shin · Sung Ju Hwang
- 2019 Poster: Learning What and Where to Transfer
  Yunhun Jang · Hankook Lee · Sung Ju Hwang · Jinwoo Shin
- 2019 Oral: Learning What and Where to Transfer
  Yunhun Jang · Hankook Lee · Sung Ju Hwang · Jinwoo Shin
- 2018 Poster: Deep Asymmetric Multi-task Feature Learning
  Hae Beom Lee · Eunho Yang · Sung Ju Hwang
- 2018 Oral: Deep Asymmetric Multi-task Feature Learning
  Hae Beom Lee · Eunho Yang · Sung Ju Hwang