DRL-STAF: A DRL Framework for State-aware Forecasting of Complex Multivariate Hidden Markov Processes
Abstract
Forecasting multivariate hidden Markov processes is challenging due to nonlinear and nonstationary observations, latent state transitions, and cross-sequence dependencies. While deep learning methods achieve strong predictive accuracy, they typically lack explicit state modeling, whereas Hidden Markov Models (HMMs) provide interpretable latent states but struggle with complex nonlinear emissions and scalability. To address these limitations, we propose DRL-STAF, a Deep Reinforcement Learning-based STate-Aware Forecasting framework that jointly predicts next-step observations and estimates the corresponding hidden states for complex multivariate hidden Markov processes. Specifically, DRL-STAF models complex nonlinear emissions using deep neural networks and estimates hidden state transitions via reinforcement learning, avoiding predefined transition structures and enabling flexible adaptation to diverse and high-order dynamics. In particular, DRL-STAF remains effective where typical HMM-based methods suffer from state-space explosion. Extensive experiments demonstrate that DRL-STAF consistently outperforms HMM variants, standalone deep learning models, and existing DL–HMM hybrids in both forecasting accuracy and hidden state estimation.
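To make the core idea concrete, the following is a minimal toy sketch of RL-driven state-aware forecasting, not the paper's actual method: a tabular Q-learner (standing in for the deep RL component) picks the predicted next hidden state of a synthetic two-state hidden Markov process, a per-state emission mean (standing in for the deep emission network) produces the forecast, and the reward is the negative one-step forecast error. All names, parameters, and the environment below are our illustrative assumptions.

```python
import random

random.seed(0)

# Toy 2-state hidden Markov process (illustrative assumptions, not the
# paper's model): each latent state has its own emission mean, and the
# state persists with high probability.
STATE_MEANS = [0.0, 5.0]   # per-state emission means
PERSIST = 0.9              # probability the hidden state stays put

def step_env(state):
    """Advance the hidden state and emit a noisy observation."""
    if random.random() > PERSIST:
        state = 1 - state
    return state, STATE_MEANS[state] + random.gauss(0, 0.3)

def bucket(obs):
    """Discretize the last observation into the agent's state index."""
    return 0 if obs < 2.5 else 1

# Tabular Q-learning stands in for the deep RL component: the action is
# the predicted next hidden state; the reward is the negative forecast
# error of the matching emission model, so no transition structure is
# specified in advance.
N = 2
Q = [[0.0] * N for _ in range(N)]
alpha, gamma, eps = 0.2, 0.9, 0.1

true_state, obs = 0, 0.0
correct, T = 0, 2000
for t in range(T):
    s = bucket(obs)
    if random.random() < eps:            # epsilon-greedy exploration
        a = random.randrange(N)
    else:
        a = max(range(N), key=lambda i: Q[s][i])
    forecast = STATE_MEANS[a]            # emission prediction for chosen state
    true_state, obs = step_env(true_state)
    reward = -abs(forecast - obs)        # negative one-step forecast error
    s2 = bucket(obs)
    Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
    if t >= T // 2:                      # score after a burn-in period
        correct += int(a == true_state)

accuracy = correct / (T // 2)
print(f"hidden-state accuracy over second half: {accuracy:.2f}")
```

Because the forecast-error reward is higher when the chosen state matches the true one, the agent's state estimates improve without ever observing the latent state directly; the full framework replaces the table and per-state means with deep networks to handle nonlinear, high-dimensional emissions.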