Measuring Memorization and Generalization in Forecasting Models via Structured Perturbations of Chaotic Systems
Max Kanwal · Caryn Tran
Abstract
We introduce a benchmarking method for evaluating generalization and memorization in time series forecasting models of chaotic dynamical systems. By generating two complementary types of test sets, obtained by perturbing training trajectories so that they minimally or maximally diverge over a fixed time horizon, we quantify each model's sensitivity to distribution shift. Our results reveal consistent trade-offs between training accuracy and out-of-distribution (OOD) generalization across neural architectures, offering a lightweight diagnostic tool for model evaluation in the small-data regime.
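The perturbation-construction step described above can be illustrated with a minimal sketch. This is not the authors' code: it assumes a Lorenz-63 system integrated with explicit Euler steps, and approximates the minimally/maximally divergent perturbations by random search over unit directions; the names `structured_perturbations`, `eps`, and `horizon` are illustrative.

```python
import numpy as np

def lorenz_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit Euler step of the Lorenz-63 system (a standard chaotic testbed).
    dx = np.array([
        sigma * (x[1] - x[0]),
        x[0] * (rho - x[2]) - x[1],
        x[0] * x[1] - beta * x[2],
    ])
    return x + dt * dx

def rollout(x0, n_steps):
    # Integrate the system forward and return the final state.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = lorenz_step(x)
    return x

def structured_perturbations(x0, eps=1e-3, horizon=500, n_candidates=64, seed=0):
    """Return (min-divergence, max-divergence) perturbed initial states.

    Candidate perturbations of radius `eps` are sampled uniformly on the
    sphere; divergence is measured as the distance to the unperturbed
    trajectory after `horizon` integration steps.
    """
    rng = np.random.default_rng(seed)
    ref_end = rollout(x0, horizon)
    dirs = rng.normal(size=(n_candidates, len(x0)))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    divs = np.array([
        np.linalg.norm(rollout(x0 + eps * d, horizon) - ref_end) for d in dirs
    ])
    x_min = x0 + eps * dirs[np.argmin(divs)]  # near-in-distribution test point
    x_max = x0 + eps * dirs[np.argmax(divs)]  # strongly OOD test point
    return x_min, x_max

x0 = np.array([1.0, 1.0, 1.0])
x_min, x_max = structured_perturbations(x0)
```

In this reading, forecasts launched from `x_min`-style states probe memorization of the training trajectory, while `x_max`-style states probe generalization under the distribution shift induced by chaotic divergence.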