Self-Interpretable Time Series Prediction with Counterfactual Explanations
Jingquan Yan · Hao Wang

Tue Jul 25 02:00 PM -- 04:30 PM (PDT) @ Exhibit Hall 1 #320

Interpretable time series prediction is crucial for safety-critical areas such as healthcare and autonomous driving. Most existing methods interpret predictions by assigning importance scores to segments of the time series. In this paper, we take a different and more challenging route and aim at developing a self-interpretable model, dubbed Counterfactual Time Series (CounTS), which generates counterfactual and actionable explanations for time series predictions. Specifically, we formalize the problem of time series counterfactual explanations, establish associated evaluation protocols, and propose a variational Bayesian deep learning model equipped with counterfactual inference capability via time series abduction, action, and prediction. Compared with state-of-the-art baselines, our self-interpretable model generates better counterfactual explanations while maintaining comparable prediction accuracy.
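The abduction–action–prediction recipe the abstract refers to is Pearl's three-step counterfactual procedure. The sketch below is not the CounTS model itself; it illustrates the three steps on a deliberately simple structural model, y_t = a·x_t + u_t, where the coefficient `A` and the intervention are assumptions made for the example.

```python
import numpy as np

# Toy structural model: y_t = A * x_t + u_t, with exogenous noise u_t.
# Illustrates abduction / action / prediction; NOT the paper's CounTS model.
A = 2.0  # assumed structural coefficient for this sketch

def abduction(x, y):
    """Step 1: infer the exogenous noise consistent with the observation."""
    return y - A * x

def action(x, intervention):
    """Step 2: intervene on the input series (e.g., perturb one segment)."""
    return x + intervention

def prediction(x_cf, u):
    """Step 3: re-run the model on the intervened input, noise held fixed."""
    return A * x_cf + u

# Observed input series and outcome.
x = np.array([1.0, 2.0, 3.0])
u_true = np.array([0.1, -0.2, 0.3])
y = A * x + u_true

u_hat = abduction(x, y)                       # recovers u_true here
x_cf = action(x, np.array([0.0, 1.0, 0.0]))   # "what if x_1 were larger?"
y_cf = prediction(x_cf, u_hat)                # counterfactual outcome
```

Because the inferred noise is held fixed, the counterfactual outcome changes only through the intervened input; CounTS performs the analogous abduction step with a variational Bayesian encoder over latent variables rather than in closed form.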

Author Information

Jingquan Yan (Rutgers University)
Hao Wang (Rutgers University)
Dr. Hao Wang is currently an assistant professor in the Department of Computer Science at Rutgers University. Previously he was a Postdoctoral Associate at the Computer Science & Artificial Intelligence Lab (CSAIL) of MIT, working with Dina Katabi and Tommi Jaakkola. He received his PhD degree from the Hong Kong University of Science and Technology, as the sole recipient of the School of Engineering PhD Research Excellence Award in 2017. He has been a visiting researcher in the Machine Learning Department of Carnegie Mellon University. His research focuses on statistical machine learning, deep learning, and data mining, with broad applications in recommender systems, healthcare, user profiling, social network analysis, text mining, etc. His work on Bayesian deep learning for recommender systems and personalized modeling has inspired hundreds of follow-up works published at top conferences such as AAAI, ICML, IJCAI, KDD, NIPS, SIGIR, and WWW. It has received over 1000 citations, becoming the most cited paper at KDD 2015. In 2015, he was awarded the Microsoft Fellowship in Asia and the Baidu Research Fellowship for his innovation in Bayesian deep learning and its applications to data mining and social network analysis.
