Scalable and Stable Estimation of Amari $\alpha$-Divergence using Random Fourier Features
Jiaolong Wang ⋅ Fode Zhang ⋅ Lingrui Wang
Abstract
Reliable estimation of Amari $\alpha$-divergences underpins variational inference, yet unconstrained neural critics are notoriously prone to instability. We propose a scalable estimator by constraining the critic to a Reproducing Kernel Hilbert Space (RKHS) ball and approximating the kernel via band-limited Random Fourier Features (RFF). This formulation yields a linear-time objective amenable to mini-batch stochastic optimization while avoiding the cubic complexity of Gram-matrix methods. We present a unified analysis based on a four-term error decomposition, comprising RKHS approximation, feature discretization, statistical deviation, and optimization residual. Under a spectral source condition, we derive non-asymptotic bounds establishing that the RKHS approximation bias scales as $\mathcal{O}(R^{-\gamma})$, the RFF discretization error as $\mathcal{O}(R D^{-1/2})$, and the statistical error as $\mathcal{O}(R n^{-1/2})$. We further show that statistical non-degeneracy induces intrinsic local curvature, enabling our proposed Armijo-SGD to achieve local linear convergence. Empirical evaluations demonstrate that the RFF-RKHS estimator outperforms variational-representation baselines in stability, and applying this spectral regularization to GAN critics significantly enhances the capture of high-frequency data components.
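The abstract's core construction can be illustrated with a minimal sketch. The following Python/NumPy code is an assumption-laden illustration, not the paper's implementation: it uses plain (not band-limited) Gaussian-kernel RFF, a linear critic $f(x) = \theta^\top \varphi(x)$, and enforces the RKHS-ball constraint $\|f\|_{\mathcal{H}} \le R$ by projecting $\|\theta\| \le R$. All function and variable names (`rff_features`, `W`, `b`, `theta`, `R`) are illustrative.

```python
import numpy as np

def rff_features(X, W, b):
    """Random Fourier features phi(x) = sqrt(2/D) * cos(W x + b).

    With W drawn from the kernel's spectral density and b ~ U[0, 2*pi],
    phi(x)^T phi(y) approximates a shift-invariant kernel k(x - y)
    with error O(D^{-1/2}) in the number of features D.
    """
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, D, sigma = 3, 2000, 1.0           # input dim, feature count, kernel bandwidth

# Gaussian kernel: spectral density is N(0, sigma^{-2} I).
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

X = rng.normal(size=(5, d))
Phi = rff_features(X, W, b)           # (5, D) feature matrix

# Sanity check: phi(x)^T phi(y) ~ exp(-||x - y||^2 / (2 sigma^2)).
K_approx = Phi @ Phi.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq_dists / (2.0 * sigma**2))

# Linear critic in feature space, projected onto the RKHS ball of radius R:
theta = rng.normal(size=D)
R = 1.0
norm = np.linalg.norm(theta)
if norm > R:
    theta *= R / norm                 # projection enforces ||f||_H <= R
f_vals = Phi @ theta                  # critic values, computed in O(n D) time
```

Because the critic is linear in `Phi`, each mini-batch gradient step costs $\mathcal{O}(nD)$ rather than the $\mathcal{O}(n^3)$ of Gram-matrix methods, which is the scalability point the abstract makes.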