Spectral Flow Matching: Stabilizing Stochastic GFlowNets via Frequency-Domain Regularization
Abstract
Generative Flow Networks (GFNs) offer a powerful paradigm for diverse sampling, yet they often exhibit instability and poor convergence in stochastic or sparse-reward environments. To mitigate the high variance inherent in these settings, we propose a fundamental reframing of the GFN training objective in the frequency domain. We present \textbf{Spectral Time-Dependent GFlowNets (ST-GFNs)}, a framework that leverages Fourier analysis to enforce smoothness and stability in learned policies. We prove that the proposed spectral loss is equivalent to regularized value iteration, acting as a principled low-pass filter that separates the reward signal from stochastic noise. Furthermore, we address the challenge of exploration in sparse-reward landscapes by introducing a novel autocorrelated intrinsic reward derived from the Wiener-Khinchin theorem. In extensive experiments spanning adversarial games, noisy sequence generation, and high-dimensional single-cell perturbation modelling, ST-GFNs significantly outperform existing baselines in robustness, sample efficiency, and mode discovery.