We propose a novel interactive learning framework, which we refer to as Interactive Attention Learning (IAL), in which human supervisors interactively manipulate the allocated attentions to correct the model's behaviour by updating the attention-generating network. However, such a model is prone to overfitting due to the scarcity of human annotations, and requires costly retraining. Moreover, it is almost infeasible for human annotators to examine attentions on a large number of instances and features. We tackle these challenges by proposing a sample-efficient attention mechanism and a cost-effective reranking algorithm for instances and features. First, we propose Neural Attention Processes (NAP), an attention generator that can update its behaviour by incorporating new attention-level supervision without any retraining. Second, we propose an algorithm that prioritizes the instances and features by their negative impact, such that the model can yield large improvements with minimal human feedback. We validate IAL on various time-series datasets from multiple domains (healthcare, real estate, and computer vision), on which it significantly outperforms baselines with conventional attention mechanisms or without cost-effective reranking, with substantially less retraining and human-model interaction cost.
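The two ideas in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the kernel-weighted blending in `nap_attention` and the loss-based ordering in `rank_by_negative_impact` are illustrative assumptions. The sketch only shows the interface the abstract describes: attention that absorbs human corrections through conditioning (a forward pass) rather than gradient retraining, in the spirit of Neural Processes, and a reranking step that surfaces the highest-impact instances for annotators first.

```python
# Hedged sketch of the abstract's two mechanisms; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Toy feature encoder, a stand-in for the paper's encoder network.
    return np.tanh(x)

def nap_attention(x, context):
    """Generate attention for input x, conditioned on human corrections.

    context: list of (x_i, corrected_attention_i) pairs supplied
    interactively. An empty context falls back to the prior attention;
    a non-empty one changes the output with NO parameter update.
    """
    h = encode(x)
    prior = np.exp(h) / np.exp(h).sum()  # softmax prior attention
    if not context:
        return prior
    # Kernel-weighted average of corrected attentions (an assumption):
    # inputs similar to corrected ones inherit their attention maps.
    weights = np.array([np.exp(-np.sum((x - xc) ** 2)) for xc, _ in context])
    weights /= weights.sum()
    corrected = sum(w * a for w, (_, a) in zip(weights, context))
    return 0.5 * prior + 0.5 * corrected  # blend prior with feedback

def rank_by_negative_impact(instances, loss_fn):
    """Order instances so that those whose current attention hurts the
    loss most come first, minimizing the annotator effort needed
    for a given improvement (illustrative criterion)."""
    impacts = [loss_fn(x) for x in instances]
    order = np.argsort(impacts)[::-1]  # largest negative impact first
    return [instances[i] for i in order]

x = rng.normal(size=4)
# One human correction on a nearby input: feature 0 should dominate.
feedback = [(x + 0.01, np.array([0.7, 0.1, 0.1, 0.1]))]
att = nap_attention(x, feedback)
assert np.isclose(att.sum(), 1.0)  # still a valid attention distribution
```

The point of the sketch is the update path: incorporating `feedback` changes `att` immediately because the correction enters as conditioning input, which is what lets IAL avoid the costly retraining loop the abstract describes.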
Author Information
Jay Heo (KAIST)
Junhyeon Park (KAIST)
Hyewon Jeong (KAIST)
Kwang Joon Kim (Yonsei University College of Medicine)
Juho Lee (AITRICS)
Eunho Yang (KAIST, AITRICS)
Sung Ju Hwang (KAIST, AITRICS)
More from the Same Authors
- 2023 Poster: RGE: A Repulsive Graph Rectification for Node Classification via Influence (Jaeyun Song · Sungyub Kim · Eunho Yang)
- 2023 Poster: Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation (Yeonsung Jung · Hajin Shim · June Yong Yang · Eunho Yang)
- 2022 Poster: Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations (Jaehyeong Jo · Seul Lee · Sung Ju Hwang)
- 2022 Spotlight: Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations (Jaehyeong Jo · Seul Lee · Sung Ju Hwang)
- 2022 Poster: Forget-free Continual Learning with Winning Subnetworks (Haeyong Kang · Rusty Mina · Sultan Rizky Hikmawan Madjid · Jaehong Yoon · Mark Hasegawa-Johnson · Sung Ju Hwang · Chang Yoo)
- 2022 Poster: Set Based Stochastic Subsampling (Bruno Andreis · Seanie Lee · A. Tuan Nguyen · Juho Lee · Eunho Yang · Sung Ju Hwang)
- 2022 Poster: TAM: Topology-Aware Margin Loss for Class-Imbalanced Node Classification (Jaeyun Song · Joonhyung Park · Eunho Yang)
- 2022 Poster: Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization (Jaehong Yoon · Geon Park · Wonyong Jeong · Sung Ju Hwang)
- 2022 Spotlight: Forget-free Continual Learning with Winning Subnetworks (Haeyong Kang · Rusty Mina · Sultan Rizky Hikmawan Madjid · Jaehong Yoon · Mark Hasegawa-Johnson · Sung Ju Hwang · Chang Yoo)
- 2022 Spotlight: Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization (Jaehong Yoon · Geon Park · Wonyong Jeong · Sung Ju Hwang)
- 2022 Spotlight: TAM: Topology-Aware Margin Loss for Class-Imbalanced Node Classification (Jaeyun Song · Joonhyung Park · Eunho Yang)
- 2022 Spotlight: Set Based Stochastic Subsampling (Bruno Andreis · Seanie Lee · A. Tuan Nguyen · Juho Lee · Eunho Yang · Sung Ju Hwang)
- 2021 Poster: Large-Scale Meta-Learning with Continual Trajectory Shifting (JaeWoong Shin · Hae Beom Lee · Boqing Gong · Sung Ju Hwang)
- 2021 Spotlight: Large-Scale Meta-Learning with Continual Trajectory Shifting (JaeWoong Shin · Hae Beom Lee · Boqing Gong · Sung Ju Hwang)
- 2021 Poster: Learning to Generate Noise for Multi-Attack Robustness (Divyam Madaan · Jinwoo Shin · Sung Ju Hwang)
- 2021 Poster: Adversarial Purification with Score-based Generative Models (Jongmin Yoon · Sung Ju Hwang · Juho Lee)
- 2021 Spotlight: Adversarial Purification with Score-based Generative Models (Jongmin Yoon · Sung Ju Hwang · Juho Lee)
- 2021 Spotlight: Learning to Generate Noise for Multi-Attack Robustness (Divyam Madaan · Jinwoo Shin · Sung Ju Hwang)
- 2021 Poster: Meta-StyleSpeech: Multi-Speaker Adaptive Text-to-Speech Generation (Dongchan Min · Dong Bok Lee · Eunho Yang · Sung Ju Hwang)
- 2021 Spotlight: Meta-StyleSpeech: Multi-Speaker Adaptive Text-to-Speech Generation (Dongchan Min · Dong Bok Lee · Eunho Yang · Sung Ju Hwang)
- 2021 Poster: Federated Continual Learning with Weighted Inter-client Transfer (Jaehong Yoon · Wonyong Jeong · GiWoong Lee · Eunho Yang · Sung Ju Hwang)
- 2021 Spotlight: Federated Continual Learning with Weighted Inter-client Transfer (Jaehong Yoon · Wonyong Jeong · GiWoong Lee · Eunho Yang · Sung Ju Hwang)
- 2020 Poster: Meta Variance Transfer: Learning to Augment from the Others (Seong-Jin Park · Seungju Han · Ji-won Baek · Insoo Kim · Juhwan Song · Hae Beom Lee · Jae-Joon Han · Sung Ju Hwang)
- 2020 Poster: Self-supervised Label Augmentation via Input Transformations (Hankook Lee · Sung Ju Hwang · Jinwoo Shin)
- 2020 Poster: Adversarial Neural Pruning with Latent Vulnerability Suppression (Divyam Madaan · Jinwoo Shin · Sung Ju Hwang)
- 2019 Poster: Spectral Approximate Inference (Sejun Park · Eunho Yang · Se-Young Yun · Jinwoo Shin)
- 2019 Poster: Learning What and Where to Transfer (Yunhun Jang · Hankook Lee · Sung Ju Hwang · Jinwoo Shin)
- 2019 Oral: Spectral Approximate Inference (Sejun Park · Eunho Yang · Se-Young Yun · Jinwoo Shin)
- 2019 Oral: Learning What and Where to Transfer (Yunhun Jang · Hankook Lee · Sung Ju Hwang · Jinwoo Shin)
- 2019 Poster: Beyond the Chinese Restaurant and Pitman-Yor processes: Statistical Models with double power-law behavior (Fadhel Ayed · Juho Lee · Francois Caron)
- 2019 Poster: Trimming the $\ell_1$ Regularizer: Statistical Analysis, Optimization, and Applications to Deep Learning (Jihun Yun · Peng Zheng · Eunho Yang · Aurelie Lozano · Aleksandr Aravkin)
- 2019 Oral: Beyond the Chinese Restaurant and Pitman-Yor processes: Statistical Models with double power-law behavior (Fadhel Ayed · Juho Lee · Francois Caron)
- 2019 Oral: Trimming the $\ell_1$ Regularizer: Statistical Analysis, Optimization, and Applications to Deep Learning (Jihun Yun · Peng Zheng · Eunho Yang · Aurelie Lozano · Aleksandr Aravkin)
- 2019 Poster: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks (Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh)
- 2019 Oral: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks (Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh)
- 2018 Poster: Deep Asymmetric Multi-task Feature Learning (Hae Beom Lee · Eunho Yang · Sung Ju Hwang)
- 2018 Oral: Deep Asymmetric Multi-task Feature Learning (Hae Beom Lee · Eunho Yang · Sung Ju Hwang)
- 2017 Poster: Sparse + Group-Sparse Dirty Models: Statistical Guarantees without Unreasonable Conditions and a Case for Non-Convexity (Eunho Yang · Aurelie Lozano)
- 2017 Talk: Sparse + Group-Sparse Dirty Models: Statistical Guarantees without Unreasonable Conditions and a Case for Non-Convexity (Eunho Yang · Aurelie Lozano)
- 2017 Poster: Ordinal Graphical Models: A Tale of Two Approaches (Arun Sai Suggala · Eunho Yang · Pradeep Ravikumar)
- 2017 Talk: Ordinal Graphical Models: A Tale of Two Approaches (Arun Sai Suggala · Eunho Yang · Pradeep Ravikumar)