FIPN: Forward Self-Organizing Interpretable Polynomial Networks for Time Series Forecasting
Abstract
Most existing time series forecasting models are trained with backpropagation, which often incurs high computational cost and limited transparency, making it difficult to understand why a model produces a given prediction. This paper presents FIPN, a forward self-organizing interpretable polynomial network for time series forecasting. FIPN grows its architecture layer by layer and avoids backpropagation entirely. Each neuron couples a fuzzy-rule antecedent with a Fourier-enhanced polynomial consequent: fuzzy clustering softly partitions the input space and produces interpretable rule weights for local regimes, while the consequent operates directly on the original features and uses Fourier basis functions to capture periodic and frequency-related structure. Because forward growth can introduce redundancy, collinearity, and overfitting as depth increases, FIPN adds regularized node scoring, node-level dropout, and persistent access to the raw inputs at every layer to stabilize closed-form parameter estimation and improve generalization. Experiments on long-horizon forecasting benchmarks show that FIPN achieves competitive accuracy with a compact model size, and that the learned fuzzy rules provide consistent, structure-based explanations. These results suggest that forward self-organizing polynomial networks offer a practical balance among accuracy, efficiency, and interpretability for long-term time series forecasting.
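The core mechanism the abstract describes — a fuzzy-rule antecedent gating a polynomial-plus-Fourier consequent, fit in closed form without backpropagation — can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the cluster centres, feature set, frequencies, and ridge constant are all hypothetical choices for a toy lag-window forecasting task.

```python
import numpy as np

# Hypothetical sketch of one FIPN-style neuron: Gaussian fuzzy antecedents
# yield normalized rule weights, and each rule's consequent is a linear model
# over polynomial and Fourier features of the raw inputs, estimated in closed
# form (weighted ridge least squares), with no gradient-based training.

rng = np.random.default_rng(0)

# Toy series: noisy sine; predict the next value from a window of lags
t = np.arange(400)
series = np.sin(2 * np.pi * t / 25) + 0.1 * rng.standard_normal(t.size)

lags = 4
X = np.stack([series[i:i + lags] for i in range(series.size - lags)])
y = series[lags:]

def features(X, freqs=(1.0, 2.0)):
    """Degree-2 polynomial terms plus Fourier terms on the raw inputs."""
    cols = [np.ones(len(X)), *X.T, *(X.T ** 2)]
    for w in freqs:                     # Fourier basis for periodic structure
        cols += [np.sin(w * X[:, -1]), np.cos(w * X[:, -1])]
    return np.column_stack(cols)

# Fuzzy antecedents: Gaussian memberships around two (illustrative) centres,
# standing in for the soft partition produced by fuzzy clustering
centres = np.array([X.mean(0) - 0.5, X.mean(0) + 0.5])
def rule_weights(X, sigma=1.0):
    d = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    m = np.exp(-d / (2 * sigma ** 2))
    return m / m.sum(1, keepdims=True)  # normalized rule firing strengths

Phi, W = features(X), rule_weights(X)

# Closed-form per-rule consequents: weighted ridge least squares
coefs = []
for r in range(W.shape[1]):
    Pw = Phi * W[:, [r]]                        # rows weighted by rule firing
    A = Pw.T @ Phi + 1e-3 * np.eye(Phi.shape[1])
    coefs.append(np.linalg.solve(A, Pw.T @ y))

# Neuron output: firing-strength-weighted mix of rule consequents
pred = sum(W[:, r] * (Phi @ coefs[r]) for r in range(W.shape[1]))
mse = float(np.mean((pred - y) ** 2))
print(round(mse, 4))
```

In a full forward-growing network, such neurons would be scored, pruned, and stacked layer by layer, with the raw inputs re-exposed to every layer as the abstract outlines; the sketch covers only a single neuron's antecedent/consequent estimation.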