For recurrent neural networks trained on time series with target and exogenous variables, it is desirable to provide not only accurate predictions but also interpretable insights into the data. In this paper, we explore the structure of LSTM recurrent neural networks to learn variable-wise hidden states, with the aim of capturing the different dynamics in multi-variable time series and distinguishing the contribution of each variable to the prediction. Building on these variable-wise hidden states, we propose a mixture attention mechanism to model the generative process of the target. We then develop associated training methods to jointly learn the network parameters as well as the variable and temporal importance w.r.t. the prediction of the target variable. Extensive experiments on real datasets demonstrate enhanced prediction performance from capturing the dynamics of different variables. Meanwhile, we evaluate the interpretation results both qualitatively and quantitatively, exhibiting the prospect of the proposed approach as an end-to-end framework for both forecasting and knowledge extraction over multi-variable data.
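To make the idea of variable-wise hidden states and mixture attention concrete, here is a minimal PyTorch sketch, not the paper's implementation. It assumes, for simplicity, one independent small LSTM per input variable (a hypothetical stand-in for the paper's tensorized parameterization), a scalar prediction head, and hypothetical names (`VariableWiseAttention`, `score`, `head`). The softmax over per-variable scores yields attention weights that can be read as variable importance:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariableWiseAttention(nn.Module):
    """Toy sketch: per-variable hidden states + mixture attention.

    Each of the n_vars input variables is encoded by its own LSTM,
    producing variable-wise hidden states. A softmax over variable
    scores mixes per-variable forecasts into the final prediction,
    so the mixture weights double as variable-importance estimates.
    """
    def __init__(self, n_vars, hidden_size):
        super().__init__()
        # One independent encoder per variable (simplifying assumption).
        self.encoders = nn.ModuleList(
            [nn.LSTM(1, hidden_size, batch_first=True) for _ in range(n_vars)]
        )
        self.score = nn.Linear(hidden_size, 1)  # attention score per variable
        self.head = nn.Linear(hidden_size, 1)   # per-variable forecast

    def forward(self, x):
        # x: (batch, time, n_vars) multi-variable series
        h_vars = []
        for i, enc in enumerate(self.encoders):
            _, (h, _) = enc(x[:, :, i:i + 1])  # encode variable i alone
            h_vars.append(h[-1])               # last hidden state: (batch, hidden)
        H = torch.stack(h_vars, dim=1)         # (batch, n_vars, hidden)
        alpha = F.softmax(self.score(H), dim=1)  # (batch, n_vars, 1) mixture weights
        y_var = self.head(H)                     # per-variable forecasts
        y = (alpha * y_var).sum(dim=1)           # mixture prediction: (batch, 1)
        return y, alpha.squeeze(-1)              # forecast + variable importance

# Usage: 32 sequences, 24 time steps, 3 variables (target + 2 exogenous).
model = VariableWiseAttention(n_vars=3, hidden_size=16)
y_hat, importance = model(torch.randn(32, 24, 3))
print(y_hat.shape, importance.shape)  # torch.Size([32, 1]) torch.Size([32, 3])
```

Because each hidden state is computed from a single variable's inputs, averaging `importance` over a dataset gives the kind of variable-level interpretation the abstract describes; the paper additionally learns temporal importance, which this sketch omits.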
Tian Guo (ETH Zurich)
Tao Lin (EPFL)
Nino Antulov-Fantulin (ETH Zurich)
Related Events
2019 Oral: Exploring interpretable LSTM neural networks over multi-variable data
Thu Jun 13th, 09:35 -- 09:40 AM, Room: Grand Ballroom