Real-world data often exhibit imbalanced distributions, where certain target values have significantly fewer observations. Existing techniques for dealing with imbalanced data focus on targets with categorical indices, i.e., different classes. However, many tasks involve continuous targets, where hard boundaries between classes do not exist. We define Deep Imbalanced Regression (DIR) as learning from such imbalanced data with continuous targets, dealing with potential missing data for certain target values, and generalizing to the entire target range. Motivated by the intrinsic difference between categorical and continuous label spaces, we propose distribution smoothing for both labels and features, which explicitly acknowledges the effects of nearby targets, and calibrates both label and learned feature distributions. We curate and benchmark large-scale DIR datasets from common real-world tasks in computer vision, natural language processing, and healthcare domains. Extensive experiments verify the superior performance of our strategies. Our work fills the gap in benchmarks and techniques for practical imbalanced regression problems. Code and data are available at: https://github.com/YyzHarry/imbalanced-regression.
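To make the label-smoothing idea concrete, below is a minimal sketch of the kind of label distribution smoothing described in the abstract: the empirical label histogram is convolved with a symmetric (here Gaussian) kernel, and the smoothed density is used to re-weight samples in a regression loss. The function name, bin count, kernel width, and the inverse re-weighting scheme are illustrative assumptions, not the exact settings of the released code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def lds_weights(labels, num_bins=100, sigma=2.0):
    """Return one weight per sample, inversely proportional to the
    kernel-smoothed density of its label bin (hypothetical helper)."""
    labels = np.asarray(labels, dtype=float)
    # Empirical label histogram over equal-width bins of the target range.
    hist, bin_edges = np.histogram(labels, bins=num_bins)
    # Convolve the histogram with a Gaussian kernel -> "effective" label density.
    smoothed = gaussian_filter1d(hist.astype(float), sigma=sigma)
    smoothed = np.clip(smoothed, 1e-6, None)  # avoid division by zero
    # Map each label back to its bin and take the inverse smoothed density.
    bin_idx = np.clip(np.digitize(labels, bin_edges[1:-1]), 0, num_bins - 1)
    weights = 1.0 / smoothed[bin_idx]
    # Normalize so the weights average to 1 (keeps the overall loss scale).
    return weights * len(weights) / weights.sum()

# Example: a skewed continuous target (e.g., age estimation).
ages = np.concatenate([np.random.normal(30, 5, 900), np.random.normal(70, 5, 100)])
w = lds_weights(ages)
# Rare (old-age) samples receive larger weights in a weighted regression loss.
```

The smoothing step reflects the abstract's point that nearby continuous targets share information, so the effective density at a target value should account for its neighbors rather than its exact bin count alone; the paper additionally smooths learned feature statistics, which is not shown here.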
Author Information
Yuzhe Yang (MIT)
Kaiwen Zha (MIT)
Yingcong Chen (MIT)
Hao Wang (Rutgers University)
Dr. Hao Wang is currently an assistant professor in the Department of Computer Science at Rutgers University. Previously he was a Postdoctoral Associate at MIT's Computer Science & Artificial Intelligence Laboratory (CSAIL), working with Dina Katabi and Tommi Jaakkola. He received his PhD degree from the Hong Kong University of Science and Technology as the sole recipient of the School of Engineering PhD Research Excellence Award in 2017. He has been a visiting researcher in the Machine Learning Department of Carnegie Mellon University. His research focuses on statistical machine learning, deep learning, and data mining, with broad applications in recommender systems, healthcare, user profiling, social network analysis, text mining, etc. His work on Bayesian deep learning for recommender systems and personalized modeling has inspired hundreds of follow-up works published at top conferences such as AAAI, ICML, IJCAI, KDD, NIPS, SIGIR, and WWW. It has received over 1000 citations, becoming the most cited paper at KDD 2015. In 2015, he was awarded the Microsoft Fellowship in Asia and the Baidu Research Fellowship for his innovation in Bayesian deep learning and its applications to data mining and social network analysis.
Dina Katabi (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Delving into Deep Imbalanced Regression »
  Thu. Jul 22nd, 04:00 -- 06:00 PM, Room: Virtual
More from the Same Authors
- 2023 Poster: Self-Interpretable Time Series Prediction with Counterfactual Explanations »
  Jingquan Yan · Hao Wang
- 2023 Poster: Change is Hard: A Closer Look at Subpopulation Shift »
  Yuzhe Yang · Haoran Zhang · Dina Katabi · Marzyeh Ghassemi
- 2023 Poster: Taxonomy-Structured Domain Adaptation »
  Tianyi Liu · Zihao Xu · Hao He · Guang-Yuan Hao · Guang-He Lee · Hao Wang
- 2023 Poster: Robust Perception through Equivariance »
  Chengzhi Mao · Lingyu Zhang · Abhishek Joshi · Junfeng Yang · Hao Wang · Carl Vondrick
- 2023 Oral: Self-Interpretable Time Series Prediction with Counterfactual Explanations »
  Jingquan Yan · Hao Wang
- 2023 Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH) »
  Weina Jin · Ramin Zabih · S. Kevin Zhou · Yuyin Zhou · Xiaoxiao Li · Yifan Peng · Zongwei Zhou · Yucheng Tang · Yuzhe Yang · Agni Kumar
- 2022 Poster: Domain Adaptation for Time Series Forecasting via Attention Sharing »
  Xiaoyong Jin · Youngsuk Park · Danielle Robinson · Hao Wang · Yuyang Wang
- 2022 Spotlight: Domain Adaptation for Time Series Forecasting via Attention Sharing »
  Xiaoyong Jin · Youngsuk Park · Danielle Robinson · Hao Wang · Yuyang Wang
- 2021 Poster: STRODE: Stochastic Boundary Ordinary Differential Equation »
  Huang Hengguan · Hongfu Liu · Hao Wang · Chang Xiao · Ye Wang
- 2021 Poster: Correcting Exposure Bias for Link Recommendation »
  Shantanu Gupta · Hao Wang · Zachary Lipton · Yuyang Wang
- 2021 Spotlight: Correcting Exposure Bias for Link Recommendation »
  Shantanu Gupta · Hao Wang · Zachary Lipton · Yuyang Wang
- 2021 Spotlight: STRODE: Stochastic Boundary Ordinary Differential Equation »
  Huang Hengguan · Hongfu Liu · Hao Wang · Chang Xiao · Ye Wang
- 2020 Poster: Deep Graph Random Process for Relational-Thinking-Based Speech Recognition »
  Huang Hengguan · Fuzhao Xue · Hao Wang · Ye Wang
- 2020 Poster: Continuously Indexed Domain Adaptation »
  Hao Wang · Hao He · Dina Katabi
- 2019 Poster: ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation »
  Yuzhe Yang · Guo Zhang · Zhi Xu · Dina Katabi
- 2019 Oral: ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation »
  Yuzhe Yang · Guo Zhang · Zhi Xu · Dina Katabi
- 2019 Poster: Circuit-GNN: Graph Neural Networks for Distributed Circuit Design »
  Guo Zhang · Hao He · Dina Katabi
- 2019 Oral: Circuit-GNN: Graph Neural Networks for Distributed Circuit Design »
  Guo Zhang · Hao He · Dina Katabi
- 2017 Poster: Learning Sleep Stages from Radio Signals: A Conditional Adversarial Architecture »
  Mingmin Zhao · Shichao Yue · Dina Katabi · Tommi Jaakkola · Matt Bianchi
- 2017 Talk: Learning Sleep Stages from Radio Signals: A Conditional Adversarial Architecture »
  Mingmin Zhao · Shichao Yue · Dina Katabi · Tommi Jaakkola · Matt Bianchi