Exploring interpretable LSTM neural networks over multi-variable data
Tian Guo · Tao Lin · Nino Antulov-Fantulin

Thu Jun 13 09:35 AM -- 09:40 AM (PDT) @ Grand Ballroom

For a recurrent neural network trained on time series with target and exogenous variables, in addition to accurate prediction, it is also desirable to provide interpretable insights into the data. In this paper, we explore the structure of LSTM recurrent neural networks to learn variable-wise hidden states, with the aim of capturing different dynamics in multi-variable time series and distinguishing the contribution of each variable to the prediction. With these variable-wise hidden states, a mixture attention mechanism is proposed to model the generative process of the target. We then develop the associated training method to learn the network parameters as well as variable and temporal importance w.r.t. the prediction of the target variable. Extensive experiments on real datasets demonstrate that modeling the dynamics of individual variables enhances prediction performance. Meanwhile, we evaluate the interpretation results both qualitatively and quantitatively. The results exhibit the prospect of the developed method as an end-to-end framework for both forecasting and knowledge extraction over multi-variable data.
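The core idea in the abstract, variable-wise hidden states combined with a mixture attention over variables, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the scoring vectors `w_temp`, `w_var`, `w_out` are hypothetical stand-ins for learned parameters, and the hidden states are random placeholders for what a variable-wise LSTM would produce. It shows only the forward pass: temporal attention within each variable's state sequence, then softmax mixture weights over variables that double as variable-importance scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# N input variables, T time steps, d hidden units per variable.
N, T, d = 3, 5, 4
H = rng.standard_normal((N, T, d))    # variable-wise hidden states (placeholder)

# Hypothetical learned parameters, randomly initialized for the sketch.
w_temp = rng.standard_normal(d)       # temporal attention scoring vector
w_var = rng.standard_normal(d)        # variable-level scoring vector
w_out = rng.standard_normal(d)        # per-variable output weights

# 1) Temporal attention within each variable's own state sequence.
alpha = softmax(H @ w_temp, axis=1)            # (N, T) temporal importance
context = (alpha[..., None] * H).sum(axis=1)   # (N, d) per-variable context

# 2) Mixture weights over variables (interpretable variable importance).
beta = softmax(context @ w_var)                # (N,) sums to 1

# 3) Mixture prediction: importance-weighted sum of per-variable predictions.
y_var = context @ w_out                        # (N,) per-variable predictions
y_hat = float(beta @ y_var)

print("variable importance:", np.round(beta, 3))
print("prediction:", round(y_hat, 3))
```

In training, all three parameter sets would be learned end to end; the attention weights `alpha` and `beta` are then read off after training as the temporal and variable importances the abstract refers to.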

Author Information

Tian Guo (ETH Zurich)
Tao Lin (EPFL)
Nino Antulov-Fantulin (ETHZ)
