Most time series forecasting techniques assume that the training data is clean and free of anomalies. This assumption is unrealistic, since time series collected in practice can be contaminated. A forecasting model trained directly on time series with anomalies will be inferior. In this paper, we aim to develop methods that automatically learn a robust forecasting model from a data-centric perspective. Specifically, we first statistically define three types of anomalies in time series data, then theoretically and experimentally analyze the \emph{loss robustness} and \emph{sample robustness} when these anomalies exist. Based on our analyses, we propose a simple and efficient algorithm to learn a robust forecasting model which outperforms all existing approaches.
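The paper defines its three anomaly types statistically; as a rough illustration only (the anomaly names, positions, and magnitudes below are our assumptions, not the paper's definitions), a clean series can be contaminated with a point spike, a level shift, and a noise burst:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_series(n=200):
    """Clean synthetic series: a daily seasonal signal plus small noise."""
    t = np.arange(n)
    return np.sin(2 * np.pi * t / 24) + 0.05 * rng.normal(size=n)

def inject_anomalies(y, spike_idx=50, shift_idx=120, noise_slice=slice(160, 180)):
    """Contaminate a copy of y with three illustrative anomaly types."""
    y = y.copy()
    y[spike_idx] += 5.0           # point spike: one isolated outlier
    y[shift_idx:] += 2.0          # level shift: mean changes from shift_idx on
    y[noise_slice] += rng.normal( # noise burst: a window of inflated variance
        scale=1.0, size=noise_slice.stop - noise_slice.start)
    return y

clean = make_series()
dirty = inject_anomalies(clean)
```

Training a forecaster on `dirty` instead of `clean` is the contamination setting the abstract refers to; a robust learning procedure would down-weight or correct such points rather than fit them.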
Author Information
Hao Cheng (University of California, Santa Cruz)
Qingsong Wen (Alibaba Group (U.S.) Inc.)
I work at the Alibaba DAMO Academy Decision Intelligence Lab as a Staff Engineer / Researcher / Manager in the Greater Seattle Area, WA, USA, working on Intelligent Time Series and Decision Making (AI for Time Series, AIOps) for the Cloud Computing, E-Commerce, and Energy industries.
Yang Liu (UC Santa Cruz/ByteDance Research)
Liang Sun (Alibaba Group)
More from the Same Authors
-
2020 : Contributed Talk: Incentives for Federated Learning: a Hypothesis Elicitation Approach »
Yang Liu · Jiaheng Wei -
2020 : Contributed Talk: Linear Models are Robust Optimal Under Strategic Behavior »
Wei Tang · Chien-Ju Ho · Yang Liu -
2021 : Linear Classifiers that Encourage Constructive Adaptation »
Yatong Chen · Jialu Wang · Yang Liu -
2021 : When Optimizing f-divergence is Robust with Label Noise »
Jiaheng Wei · Yang Liu -
2022 : Adaptive Data Debiasing Through Bounded Exploration »
Yifan Yang · Yang Liu · Parinaz Naghizadeh -
2023 : To Aggregate or Not? Learning with Separate Noisy Labels »
Jiaheng Wei · Zhaowei Zhu · Tianyi Luo · Ehsan Amid · Abhishek Kumar · Yang Liu -
2023 : Understanding Unfairness via Training Concept Influence »
Yuanshun Yao · Yang Liu -
2023 : Enhancing Time Series Forecasting Models under Concept Drift by Data-centric Online Ensembling »
Yi-Fan Zhang · Qingsong Wen · Xue Wang · Weiqi Chen · Liang Sun · Zhang Zhang · Liang Wang · Rong Jin · Tieniu Tan -
2023 Workshop: DMLR Workshop: Data-centric Machine Learning Research »
Ce Zhang · Praveen Paritosh · Newsha Ardalani · Nezihe Merve Gürel · William Gaviria Rojas · Yang Liu · Rotem Dror · Manil Maskey · Lilith Bat-Leah · Tzu-Sheng Kuo · Luis Oala · Max Bartolo · Ludwig Schmidt · Alicia Parrish · Daniel Kondermann · Najoung Kim -
2023 Poster: Identifiability of Label Noise Transition Matrix »
Yang Liu · Hao Cheng · Kun Zhang -
2023 Poster: Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes »
Zhaowei Zhu · Yuanshun Yao · Jiankai Sun · Hang Li · Yang Liu -
2023 Poster: Model Transferability with Responsive Decision Subjects »
Yatong Chen · Zeyu Tang · Kun Zhang · Yang Liu -
2022 : Model Transferability With Responsive Decision Subjects »
Yang Liu · Yatong Chen · Zeyu Tang · Kun Zhang -
2022 Poster: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network »
Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu -
2022 Poster: Detecting Corrupted Labels Without Training a Model to Predict »
Zhaowei Zhu · Zihao Dong · Yang Liu -
2022 Poster: Understanding Instance-Level Impact of Fairness Constraints »
Jialu Wang · Xin Eric Wang · Yang Liu -
2022 Spotlight: Understanding Instance-Level Impact of Fairness Constraints »
Jialu Wang · Xin Eric Wang · Yang Liu -
2022 Spotlight: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network »
Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu -
2022 Poster: Metric-Fair Classifier Derandomization »
Jimmy Wu · Yatong Chen · Yang Liu -
2022 Poster: Beyond Images: Label Noise Transition Matrix Estimation for Tasks with Lower-Quality Features »
Zhaowei Zhu · Jialu Wang · Yang Liu -
2022 Spotlight: Detecting Corrupted Labels Without Training a Model to Predict »
Zhaowei Zhu · Zihao Dong · Yang Liu -
2022 Spotlight: Metric-Fair Classifier Derandomization »
Jimmy Wu · Yatong Chen · Yang Liu -
2022 Spotlight: Beyond Images: Label Noise Transition Matrix Estimation for Tasks with Lower-Quality Features »
Zhaowei Zhu · Jialu Wang · Yang Liu -
2022 Poster: To Smooth or Not? When Label Smoothing Meets Noisy Labels »
Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu -
2022 Poster: FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting »
Tian Zhou · Ziqing Ma · Qingsong Wen · Xue Wang · Liang Sun · Rong Jin -
2022 Spotlight: FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting »
Tian Zhou · Ziqing Ma · Qingsong Wen · Xue Wang · Liang Sun · Rong Jin -
2022 Oral: To Smooth or Not? When Label Smoothing Meets Noisy Labels »
Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu -
2021 Poster: Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels »
Zhaowei Zhu · Yiwen Song · Yang Liu -
2021 Spotlight: Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels »
Zhaowei Zhu · Yiwen Song · Yang Liu -
2021 Poster: Understanding Instance-Level Label Noise: Disparate Impacts and Treatments »
Yang Liu -
2021 Oral: Understanding Instance-Level Label Noise: Disparate Impacts and Treatments »
Yang Liu -
2020 Workshop: Incentives in Machine Learning »
Boi Faltings · Yang Liu · David Parkes · Goran Radanovic · Dawn Song -
2020 Poster: Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates »
Yang Liu · Hongyi Guo -
2019 Poster: Fairness without Harm: Decoupled Classifiers with Preference Guarantees »
Berk Ustun · Yang Liu · David Parkes -
2019 Oral: Fairness without Harm: Decoupled Classifiers with Preference Guarantees »
Berk Ustun · Yang Liu · David Parkes