Deep neural networks have recently gained popularity for time series forecasting, largely because they can effectively capture complex temporal dynamics across multiple related time series. However, the advantages of these deep forecasters only emerge when sufficient data are available, which poses a challenge for typical forecasting problems in practice, where the number of time series, the number of observations per series, or both are often limited. To cope with this data scarcity issue, we propose a novel domain adaptation framework, the Domain Adaptation Forecaster (DAF). DAF leverages the statistical strength of a relevant domain with abundant data samples (the source) to improve performance on the domain of interest with limited data (the target). In particular, we use an attention-based module shared across domains, trained with a domain discriminator, alongside private modules for the individual domains. We induce domain-invariant latent features (queries and keys) while retaining domain-specific features (values), enabling joint training of forecasters on the source and target domains. A key insight is that this alignment of keys allows the target domain to leverage source time series even when their characteristics differ. Extensive experiments across various domains demonstrate that our proposed method outperforms state-of-the-art baselines on synthetic and real-world datasets, and ablation studies verify the effectiveness of our design choices.
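To make the mechanism in the abstract concrete, below is a minimal, illustrative PyTorch-style sketch (not the authors' released code) of attention sharing for domain adaptation. It assumes a single attention head and hypothetical module names (`SharedAttention`, `GradReverse`): the query/key projections are shared across domains and the keys are pushed toward domain invariance by a discriminator through gradient reversal, while the value projections are private to each domain.

```python
# Minimal sketch of DAF-style attention sharing, assuming PyTorch.
# Hypothetical names and shapes; not the authors' implementation.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass,
    so the shared encoder learns to fool the domain discriminator."""

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()


class SharedAttention(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)   # shared across domains
        self.k_proj = nn.Linear(d_model, d_model)   # shared across domains
        self.v_proj = nn.ModuleDict({                # private, per-domain values
            "source": nn.Linear(d_model, d_model),
            "target": nn.Linear(d_model, d_model),
        })
        self.discriminator = nn.Sequential(          # predicts domain from keys
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, h, domain):
        # h: (batch, time, d_model); domain: "source" or "target"
        q, k = self.q_proj(h), self.k_proj(h)
        v = self.v_proj[domain](h)
        attn = torch.softmax(
            q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
        out = attn @ v
        # Gradient reversal makes the keys domain-invariant during training.
        domain_logit = self.discriminator(GradReverse.apply(k))
        return out, domain_logit


# Example usage: a batch of 32 series of length 24 from the source domain.
layer = SharedAttention(d_model=64)
h = torch.randn(32, 24, 64)
out_src, logit_src = layer(h, "source")
```

In joint training, one would presumably combine the forecasting losses on both domains with an adversarial domain-classification loss on `domain_logit`, so that the shared queries and keys become domain-invariant while the private values remain domain-specific.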
Author Information
Xiaoyong Jin (UCSB)
Youngsuk Park (Amazon Research)
Danielle Robinson (Amazon Web Services)
Hao Wang (Rutgers University)
Dr. Hao Wang is an assistant professor in the Department of Computer Science at Rutgers University. Previously, he was a postdoctoral associate at MIT's Computer Science & Artificial Intelligence Laboratory (CSAIL), working with Dina Katabi and Tommi Jaakkola. He received his PhD from the Hong Kong University of Science and Technology as the sole recipient of the School of Engineering PhD Research Excellence Award in 2017, and has been a visiting researcher in the Machine Learning Department at Carnegie Mellon University. His research focuses on statistical machine learning, deep learning, and data mining, with broad applications to recommender systems, healthcare, user profiling, social network analysis, and text mining. His work on Bayesian deep learning for recommender systems and personalized modeling has inspired hundreds of follow-up works published at top conferences such as AAAI, ICML, IJCAI, KDD, NIPS, SIGIR, and WWW; it has received over 1000 citations, making it the most cited paper of KDD 2015. In 2015, he was awarded the Microsoft Fellowship in Asia and the Baidu Research Fellowship for his innovations in Bayesian deep learning and its applications to data mining and social network analysis.
Yuyang Wang (AWS AI Labs)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Domain Adaptation for Time Series Forecasting via Attention Sharing
  Thu. Jul 21st 05:35 -- 05:40 PM, Room: Hall G
More from the Same Authors
- 2023: Towards Effective Data Poisoning for Imbalanced Classification
  Snigdha Sushil Mishra · Hao He · Hao Wang
- 2023 Oral: Self-Interpretable Time Series Prediction with Counterfactual Explanations
  Jingquan Yan · Hao Wang
- 2023 Poster: Taxonomy-Structured Domain Adaptation
  Tianyi Liu · Zihao Xu · Hao He · Guangyuan Hao · Guang-He Lee · Hao Wang
- 2023 Poster: Robust Perception through Equivariance
  Chengzhi Mao · Lingyu Zhang · Abhishek Joshi · Junfeng Yang · Hao Wang · Carl Vondrick
- 2023 Poster: Learning Physical Models that Can Respect Conservation Laws
  Derek Hansen · Danielle Robinson · Shima Alizadeh · Gaurav Gupta · Michael Mahoney
- 2023 Poster: Self-Interpretable Time Series Prediction with Counterfactual Explanations
  Jingquan Yan · Hao Wang
- 2023 Poster: Theoretical Guarantees of Learning Ensembling Strategies with Applications to Time Series Forecasting
  Hilaf Hasson · Danielle Robinson · Yuyang Wang · Gaurav Gupta · Youngsuk Park
- 2021 Workshop: Time Series Workshop
  Yian Ma · Ehi Nosakhare · Yuyang Wang · Scott Yang · Rose Yu
- 2021 Poster: STRODE: Stochastic Boundary Ordinary Differential Equation
  Huang Hengguan · Hongfu Liu · Hao Wang · Chang Xiao · Ye Wang
- 2021 Poster: Correcting Exposure Bias for Link Recommendation
  Shantanu Gupta · Hao Wang · Zachary Lipton · Yuyang Wang
- 2021 Spotlight: Correcting Exposure Bias for Link Recommendation
  Shantanu Gupta · Hao Wang · Zachary Lipton · Yuyang Wang
- 2021 Spotlight: STRODE: Stochastic Boundary Ordinary Differential Equation
  Huang Hengguan · Hongfu Liu · Hao Wang · Chang Xiao · Ye Wang
- 2021 Poster: Delving into Deep Imbalanced Regression
  Yuzhe Yang · Kaiwen Zha · Yingcong Chen · Hao Wang · Dina Katabi
- 2021 Oral: Delving into Deep Imbalanced Regression
  Yuzhe Yang · Kaiwen Zha · Yingcong Chen · Hao Wang · Dina Katabi
- 2021 Poster: Variance Reduced Training with Stratified Sampling for Forecasting Models
  Yucheng Lu · Youngsuk Park · Lifan Chen · Yuyang Wang · Christopher De Sa · Dean Foster
- 2021 Spotlight: Variance Reduced Training with Stratified Sampling for Forecasting Models
  Yucheng Lu · Youngsuk Park · Lifan Chen · Yuyang Wang · Christopher De Sa · Dean Foster
- 2020 Poster: Deep Graph Random Process for Relational-Thinking-Based Speech Recognition
  Huang Hengguan · Fuzhao Xue · Hao Wang · Ye Wang
- 2019 Workshop: ICML 2019 Time Series Workshop
  Vitaly Kuznetsov · Scott Yang · Rose Yu · Cheng Tang · Yuyang Wang
- 2019 Poster: Deep Factors for Forecasting
  Yuyang Wang · Alex Smola · Danielle Robinson · Jan Gasthaus · Dean Foster · Tim Januschowski
- 2019 Oral: Deep Factors for Forecasting
  Yuyang Wang · Alex Smola · Danielle Robinson · Jan Gasthaus · Dean Foster · Tim Januschowski